Seymour Cray and the Development of Supercomputers (linuxvoice.com) 54
An anonymous reader writes: Linux Voice has a nice retrospective on the development of the Cray supercomputer. Quoting: "Firstly, within the CPU, there were multiple functional units (execution units forming discrete parts of the CPU) which could operate in parallel; so it could begin the next instruction while still computing the current one, as long as the current one wasn't required by the next. It also had an instruction cache of sorts to reduce the time the CPU spent waiting for the next instruction fetch result. Secondly, the CPU itself contained 10 parallel functional units (parallel processors, or PPs), so it could operate on ten different instructions simultaneously. This was unique for the time." They also discuss modern efforts to emulate the old Crays: "...what Chris wanted was real Cray-1 software: specifically, COS. Turns out, no one has it. He managed to track down a couple of disk packs (vast 10lb ones), but then had to get something to read them. In the end he used an impressive home-brew robot solution to map the information, but that still left deciphering it. A Norwegian coder, Yngve Ådlandsvik, managed to play with the data set enough to figure out the data format and other bits and pieces, and wrote a data recovery script."
Re: (Score:2)
No, the elves have done everything, it is common knowledge...
https://en.wikipedia.org/wiki/... [wikipedia.org]
From the pic in TFA (Score:3)
Seymour Cray in that suit would make for a good Dr Who
Dumbing Down (Score:1)
Doesn't this go without saying?
Re: (Score:2)
Unless a third pedant already spoke up.
Re: (Score:2)
And we like it like that.
Re: (Score:3)
"it could begin the next instruction while still computing the current one, as long as the current one wasn't required by the next "
Doesn't this go without saying?
No. These days we just go forth and compute, and if we predicted the flow incorrectly, we throw away the result and compute again.
Re:Dumbing Down (Score:5, Interesting)
No; CPUs didn't /have/ to do that. MIPS toyed with both models for a while - initially MIPS was like "we don't interlock pipeline stages, so programmers need to be smart." Then the R4000 came out and attempted to implement that, and it was... complicated. So it got reverted.
Not all CPUs are like Intel CPUs (which aren't all like earlier Intel CPUs, which aren't all like 8080s, etc.)
Re: (Score:2)
Intel CPU designs go to great lengths to look very much like earlier Intel CPUs - even if the internals are very different, they are still backwards-compatible with earlier code dating right back to the 80386.
I do wonder what they could achieve if they were to abandon backwards compatibility and just ask people to recompile their old code. Probably lots of ARM sales.
Itanium (Score:1)
Itanium [wikipedia.org], I suppose.
Re:Dumbing Down (Score:5, Informative)
"it could begin the next instruction while still computing the current one, as long as the current one wasn't required by the next " Doesn't this go without saying?
Back in the day, pipelining - issuing, say, a new multiply instruction every clock, even though several earlier multiplies were still working their way thru the pipeline - was too expensive for most architectures. An instruction might take multiple clock cycles to execute, but in most architectures the multi-clock instruction would tie up the functional unit until the computation was done - you might be able to issue a new multiply every 10 clocks or something. Pipelining takes more gates and more design because you don't want one slow stage to determine the clock rate of the whole design.
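To put rough numbers on that, here is a tiny sketch (the 10-cycle multiply latency is just an illustrative figure, not any particular machine's): without pipelining the functional unit is tied up for the full latency of each multiply, while a pipelined unit can accept a new one every clock.

```c
#include <stdio.h>

/* Illustrative only: assume a multiply takes LATENCY cycles to complete.
 * Unpipelined: the unit is busy the whole time, so a new multiply can only
 * be issued every LATENCY cycles.
 * Pipelined: a new multiply can be issued every cycle; the last one finishes
 * LATENCY cycles after it is issued. */
#define LATENCY 10

int main(void)
{
    int n = 100;                              /* independent multiplies */
    long unpipelined = (long)n * LATENCY;     /* 1000 cycles */
    long pipelined   = (n - 1) + LATENCY;     /* 109 cycles  */
    printf("unpipelined: %ld cycles, pipelined: %ld cycles\n",
           unpipelined, pipelined);
    return 0;
}
```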
Which leads us to the early RISC computers: I can recall an early Sun SPARC architecture that lacked a hardware integer multiply instruction. The idea at the time was every instruction should take one clock, and any instruction that demanded too long a clock should be unrolled in software. So this version of SPARC used shifts and adds to do a multiply. One of the key insights in RISC, still useful today, is to separate main memory access from other computations.
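To illustrate that software multiply, a generic shift-and-add routine (the standard technique, not the actual SPARC library code) looks like this:

```c
#include <stdint.h>

/* Generic shift-and-add multiply: the kind of routine a compiler or runtime
 * library would fall back on when the CPU has no hardware integer multiply.
 * It examines one bit of b per iteration and adds the (shifted) value of a
 * into the result whenever that bit is set. */
uint32_t soft_mul(uint32_t a, uint32_t b)
{
    uint32_t result = 0;
    while (b != 0) {
        if (b & 1)
            result += a;
        a <<= 1;   /* next partial product is worth twice as much */
        b >>= 1;   /* move on to the next bit of the multiplier   */
    }
    return result;
}
```

Worst case that loop runs once per bit of the multiplier, which is a big part of why later architecture revisions put integer multiply back in hardware.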
The CPU design course I took in the late 80's said Seymour Cray invented that idea of separating loads and stores from computation, because even then, even with static RAM as main memory, accessing main memory was slower than accessing registers. So by separating loading from RAM into registers and storing from registers into RAM, the compiler could pre-schedule loads and stores such that they would not stall functional units. Cray also invented pipelining, another key feature in most modern CPUs (I'm not sure when ARM adopted pipelining, but I'm pretty sure some ARM architectures have it now). Of course Cray had vector registers and the consequent vector SIMD hardware.
I don't think Cray invented out of order execution, but I don't think he needed it; in Cray architectures, it would be the compiler's job to order instructions to prevent stalls. In CISC architectures, OOO is mostly a trick for preventing stalls without the compiler needing to worry about it (also, with many models and versions of the Intel instruction architecture out there, it would be painful to have to compile differently for each and every model). So, for example, the load part of an instruction could be scheduled early enough that the data would be in a register by the time the rest of the instruction needed it.
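A rough sketch of that scheduling idea, in hypothetical C (in reality the compiler or the out-of-order hardware does this reordering, not the programmer): issue the load early, overlap independent work with it, and the value is already in a register by the time it's needed.

```c
/* Naive order: the final add uses the loaded value immediately, so on a
 * statically scheduled machine the functional unit stalls while the load
 * is in flight. */
int naive(const int *p, int x)
{
    int y = x * 3 + 7;   /* independent work */
    int v = *p;          /* load from memory */
    return v + y;        /* used right after the load -> likely stall */
}

/* Scheduled order: start the load first, do the independent work while it
 * completes, then use the loaded value.  On a Cray / early-RISC style
 * machine the compiler must arrange this; an out-of-order CPU effectively
 * performs the same reordering in hardware. */
int scheduled(const int *p, int x)
{
    int v = *p;          /* load issued early      */
    int y = x * 3 + 7;   /* overlaps with the load */
    return v + y;
}
```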
Anyway, the upshot is modern CPU designs have a bigger debt to Cray than to any other single design.
Re:Dumbing Down (Score:4, Informative)
Cray also invented pipelining
That honour goes to Konrad Zuse some 25 years earlier. His purely mechanical Z1 calculator machine (not Turing complete) had a short pipeline.
Re: (Score:2)
It MIGHT have been a system that would unconditionally execute the next instruction and produce garbage if there was a dependency, leaving the compiler to order them correctly and insert NOPs where needed, but in fact it kept track of that in hardware and inserted the NOPs itself.
Geez, read a book (Score:2)
"Secondly, the CPU itself contained 10 parallel functional units (parallel processors, or PPs), so it could operate on ten different instructions simultaneously. This was unique for the time."
Oh god, this isn't even remotely correct.
For one, a similar design was used by Cray's earlier machine, the CDC 6600, which also had 10 PPs. And by the 7600, and the 8600. For another, there were dozens of machines with similar designs that predate this, including PEPE and the ILLIAC IV, both of which had hundreds of units.
Re: (Score:3)
And now I see the error is in the quote above, because the original article doesn't screw it up.
Re: (Score:2)
Yeah, the context of the quote is severely butchered. That's something an *editor* would normally fix.
Re: (Score:3)
Yeah, the context of the quote is severely butchered. That's something an *editor* would normally fix.
Editors are too expensive...so we get crap insted of reel content, but that's probably how Dice gets their articals through so offten
**end purposeful bad writing editors would normally catch**
Re: (Score:3)
Geez, read the article. ;) They're talking about the 6600 there.
Re: (Score:1)
Sure, but the submitter's text makes it seem like that quote is talking about the first Cray supercomputer rather than about his work while still at CDC.
Re: (Score:1)
And aren't these the same?
Re:Geez, read a book (Score:5, Interesting)
I think the first is what we would call the pipeline [wikipedia.org] today and the second means parallel execution units. Using the word "functional units" for both is a bit confusing. Early RISC pipelines had 5 stages that are described in that link (and that brings back some memories - I remember studying it in college).
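For anyone who doesn't want to click through, here's a minimal sketch (assuming the textbook five stages and no stalls or hazards) that prints which stage each instruction occupies on each clock, showing how the classic RISC pipeline overlaps them:

```c
#include <stdio.h>

/* Classic 5-stage RISC pipeline: Instruction Fetch, Instruction Decode,
 * Execute, Memory access, Write Back.  With no stalls, instruction i is in
 * stage s during clock i + s, so once the pipeline fills, one instruction
 * completes every clock. */
static const char *stages[] = { "IF", "ID", "EX", "MEM", "WB" };

int main(void)
{
    int n_instr = 4, n_stages = 5;
    for (int clock = 0; clock < n_instr + n_stages - 1; clock++) {
        printf("cycle %d:", clock);
        for (int i = 0; i < n_instr; i++) {
            int stage = clock - i;
            if (stage >= 0 && stage < n_stages)
                printf("  I%d:%s", i, stages[stage]);
        }
        printf("\n");
    }
    return 0;
}
```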
Funny thing is, I actually read the article to learn about what my first girlfriend's dad did - he was an engineer that worked on that thing (and yeah, she was a total nerd girl). I'm still Facebook friends with her, should point her to the article.
Re: (Score:3)
Yes, the PPs are _not_ "parallel processors", they are "peripheral processors". The 175 had 12 "peripheral processors" with quite limited capabilities, running at 12 MHz instead of the main computer's 40, and exclusively responsible for handling I/O.
Re: (Score:3, Informative)
Nowhere near that. From Cray's (admittedly only distantly related to the original Cray...) web site, http://www.cray.com/company/history
The first Cray®-1 system was installed at Los Alamos National Laboratory in 1976
and cost $8.8 million. It boasted a world-record speed of 160 million floating-point
operations per second (160 megaflops) and an 8 MB (1 million word) main memory.
If I remember right
Re: (Score:3, Informative)
The largest X-MP had 4 CPUs, each with a floating-point adder and multiplier and a clock speed of ~105 MHz. So, the peak performance of these machines was 840 MFlops. Achieving and sustaining that, though, was tricky and was only possible in large vector operations.
The impressive part of the architecture was its memory: at peak, the memory subsystem could complete 16 memory references per clock cycle (each delivering 64 bits of data), so the peak memory bandwidth was 13 GBps, or roughly what a 64-bit DDR3 solution delivers.
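For what it's worth, the arithmetic behind those figures (using the numbers quoted above: 4 CPUs, one add plus one multiply per clock each, ~105 MHz, 16 64-bit references per clock) works out like this:

```c
#include <stdio.h>

int main(void)
{
    double clock_hz = 105e6;          /* ~105 MHz clock                 */
    int cpus = 4;                     /* largest X-MP configuration     */
    int flops_per_clock = 2;          /* one add + one multiply per CPU */
    int mem_refs_per_clock = 16;      /* across the whole machine       */
    int bytes_per_ref = 8;            /* 64-bit words                   */

    double peak_flops = cpus * flops_per_clock * clock_hz;             /* ~840 MFlops */
    double peak_bw    = mem_refs_per_clock * bytes_per_ref * clock_hz; /* ~13.4 GB/s  */

    printf("peak: %.0f MFlops, %.1f GB/s\n", peak_flops / 1e6, peak_bw / 1e9);
    return 0;
}
```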
Clarify... (Score:1)
Am past my prime, and maybe getting Old Timers disease...
So can someone remind me, what's the difference between Gene Amdahl and Seymour Cray?
Did they ever meet and have cocktails?
Re: (Score:1)
Born three years apart
Amdahl served in Navy. Cray served in Army
Amdahl worked at IBM, Cray worked at CDC
Amdahl lived to 90, Cray died (in a wrecked Jeep) at 70
Amdahl called Cray "the most outstanding high-performance scientific computer designer in the world."
http://mbbnet.umn.edu/hoff/hoff_sc.html
In 1995 AMDAHL AND CRAY RESEARCH ANNOUNCE RESELLER AGREEMENT FOR BRANDED CRAY SUPERSERVER 6400 SYSTEMS
ftp://ftp.cray.com/announcements/company/OLD/CRAY_AMDAHL_AGMT.950228.txt
It would be hard to imagine that they had never met.
Re: (Score:2)
FWIW, according to the book Portraits in Silicon [google.com], Amdahl and Cray never actually met in person.
Re: (Score:2)
For the most part Amdahl seems like a put-your-pants-on-one-leg-at-a-time guy, while Cray would figure out how to simply have pants in the same place that he needed to be.
My dad worked at CDC. Cray was a notorious eccentric, although at that time (early 1960s) taking your jacket and tie off as you crawled under mainframes was considered evidence of communism.
Re: (Score:2)
Amdahl worked mostly on IBM mainframe clones, and focused on business applications. More emphasis on reliability, and processing currency, integers (counts), and business logic. Example: payroll for a big corporation.
Cray's machines were mostly used for scientific, engineering, research, and military applications. More emphasis on floating point number processing. Example: climate simulations.
"Peripheral Processors", not "Parallel Processors" (Score:3)
This is an error from the original article, not from the summary. If the author didn't even bother to look up what "PP" actually stood for, I don't have a lot of confidence in the rest of the article's scholarship. Heck, ONE CLICK TO WIKIPEDIA would have given her the proper definition.
Re: (Score:3)
I came here to say this. In the early 80's I worked on Control Data Cyber 174C mainframes (we had two). Liquid cooled, maybe 20 feet long, with hinged chassis that swung out like doors (maybe 40" by 6' and about 10" thick). One chassis was a CPU, two were memory I think, and one was for 10+ Peripheral Processor Units (PPUs) which did 100% of the I/O. A whopping 40 MHz, and a 208-bit memory bus with SECDED.
Re: (Score:3)
I interned (sort of) at Babcock and Wilcox's computing center around 1980. We had several CDC systems, including a 76 ("7600"), which was built in a horseshoe arrangement much like the Cray-1. (The field engineers used its interior as a storage closet.) Me, I was just hauling tapes, card decks and printouts, but I did get to learn a bit about the machines, and a lot more once I got into comp architecture classes in college. It was a great place for a geek.
Re: "Peripheral Processors", not "Parallel Process (Score:1)
I worked with the CDC 6400 at the University of Colorado. It had an Extended Core Storage unit, a separate cabinet full of magnetic core memory (they came up to 2 M words; I don't know how large CU's was). We gave machine room tours, including opening the door on the ECS, until somebody noticed that the machine sometimes crashed when the door was shut. Don't want to jiggle those cores, I guess. Just found this: http://www.corememoryshield.com/report.html
Re: (Score:3)
Right on. I worked at Control Data, as a CPU logic designer. The PP's were peripheral processors. The article is full of so much egregiously incorrect tripe I won't even bother to type up a correction. My advice to everyone is to completely ignore the article unless you want your head stuffed full of misinformation.
Two things: CDC 6000, I'm flabbergasted (Score:1)
The multiple functional units idea wasn't new with Cray's supercomputer. He was doing much the same thing in the computers he designed for Control Data Corporation (CDC).
Also, it seems astonishing that there would be no copies of that Cray software around, anywhere, other than on an old disk pack. There are still copies of the software for those CDC machines. Maybe that's because there were so many of them -- relative to the Crays.
Oops times two: mine and the author's (Score:1)
Just started to RTFA, and spotted two goofs.
My goof: The quoted text _was_ about the CDC 6600. That certainly explains the similarity.
The article's goof: Those were peripheral processors, not "parallel processors". They did I/O and occasional odd jobs for the operating system that the CPU wasn't suited for, or too busy to do.
Now to finish it off, and see what else I or the author have to be embarrassed about.