Supercomputing | Hardware | Technology

Seymour Cray and the Development of Supercomputers (linuxvoice.com)

An anonymous reader writes: Linux Voice has a nice retrospective on the development of the Cray supercomputer. Quoting: "Firstly, within the CPU, there were multiple functional units (execution units forming discrete parts of the CPU) which could operate in parallel; so it could begin the next instruction while still computing the current one, as long as the current one wasn't required by the next. It also had an instruction cache of sorts to reduce the time the CPU spent waiting for the next instruction fetch result. Secondly, the CPU itself contained 10 parallel functional units (parallel processors, or PPs), so it could operate on ten different instructions simultaneously. This was unique for the time." They also discuss modern efforts to emulate the old Crays: "...what Chris wanted was real Cray-1 software: specifically, COS. Turns out, no one has it. He managed to track down a couple of disk packs (vast 10lb ones), but then had to get something to read them. In the end he used an impressive home-brew robot solution to map the information, but that still left deciphering it. A Norwegian coder, Yngve Ådlandsvik, managed to play with the data set enough to figure out the data format and other bits and pieces, and wrote a data recovery script."


Comments Filter:
  • by OzPeter ( 195038 ) on Friday December 11, 2015 @04:08PM (#51102361)

    Seymour Cray in that suit would make for a good Dr Who

    • by Tablizer ( 95088 )

      Seymour Cray in that suit [TFA pic] would make for a good Dr Who

      ...standing next to a trendy TARDIS 2.0

  • "it could begin the next instruction while still computing the current one, as long as the current one wasn't required by the next "

    Doesn't this go without saying?
    • "it could begin the next instruction while still computing the current one, as long as the current one wasn't required by the next "
      Doesn't this go without saying?

      No. These days we just go forth and compute, and if we predicted the flow incorrectly, we throw away the result and compute again.
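
      As a rough software analogy (just an illustrative C sketch with made-up values, not how the hardware actually speculates), "compute and throw away" looks something like this:

        /* Toy illustration: evaluate both sides of a "branch" up front,
         * then keep one result and discard the other -- loosely the same
         * idea as speculating down a predicted path and throwing away the
         * work on a misprediction. */
        #include <stdio.h>

        int main(void) {
            int x = 7;
            int if_taken     = x * 3;    /* result if the branch is taken  */
            int if_not_taken = x + 100;  /* result if it is not taken      */
            int taken        = (x > 5);  /* the "prediction" resolves here */
            int result = taken ? if_taken : if_not_taken;  /* loser is discarded */
            printf("%d\n", result);
            return 0;
        }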

    • Re:Dumbing Down (Score:5, Interesting)

      by adri ( 173121 ) on Friday December 11, 2015 @06:29PM (#51103071) Homepage Journal

      No; CPUs didn't /have/ to do that. MIPS toyed with both models for a while - initially MIPS was like "we don't interlock pipeline stages, so programmers need to be smart." Then the R4000 came out and attempted to implement that, and it was... complicated. So it got reverted.

      Not all CPUs are like Intel CPUs (which aren't all like earlier Intel CPUs, which aren't all like 8080s, etc.).

      • Intel CPU designs go to great lengths to look very much like earlier Intel CPUs - even if the internals are very different, they are still backwards-compatible with earlier code dating right back to the 80386.

        I do wonder what they could achieve if they were to abandon backwards compatibility and just ask people to recompile their old code. Probably lots of ARM sales.

        • by vrt3 ( 62368 )

          I do wonder what they could achieve if they were to abandon backwards compatibility and just ask people recompile their old code.

          Itanium [wikipedia.org], I suppose.

    • Re:Dumbing Down (Score:5, Informative)

      by elwinc ( 663074 ) on Friday December 11, 2015 @10:55PM (#51103779)

      "it could begin the next instruction while still computing the current one, as long as the current one wasn't required by the next " Doesn't this go without saying?

      Back in the day, pipelining - issuing, say, a new multiply instruction every clock, even though several earlier multiplies were still working their way through the pipeline - was too expensive for most architectures. An instruction might take multiple clock cycles to execute, but in most architectures the multi-clock instruction would tie up the functional unit until the computation was done - you might be able to issue a new multiply every 10 clocks or something. Pipelining takes more gates and more design because you don't want one slow stage to determine the clock rate of the whole design.

      Which leads us to the early RISC computers. I can recall an early Sun SPARC architecture that lacked a hardware integer multiply instruction. The idea at the time was every instruction should take one clock, and any instruction that demanded too long a clock should be unrolled in software. So this version of SPARC used shifts and adds to do a multiply. At the time, that was a pure RISC design. One of the key insights in RISC, still useful today, is to separate main memory access from other computations.
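
      (For illustration, that shift-and-add multiply can be sketched in a few lines of C; this is a minimal version of the general technique, not the actual SPARC library routine.)

        /* Multiply a by b using only shifts and adds -- the kind of routine
         * a compiler or runtime library supplies when there is no hardware
         * multiplier. Minimal sketch, unsigned operands only. */
        unsigned int shift_add_mul(unsigned int a, unsigned int b) {
            unsigned int result = 0;
            while (b != 0) {
                if (b & 1u)        /* low bit of b set...              */
                    result += a;   /* ...so add the current shifted a  */
                a <<= 1;           /* next power-of-two multiple of a  */
                b >>= 1;           /* move on to the next bit of b     */
            }
            return result;
        }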

      The CPU design course I took in the late '80s said Seymour Cray invented that idea of separating loads and stores from computation, because even then, even with static RAM as main memory, accessing main memory was slower than accessing registers. So by separating loading from RAM into registers and storing from registers into RAM, the compiler could pre-schedule loads and stores such that they would not stall functional units. Cray also invented pipelining, another key feature in most modern CPUs (I'm not sure when ARM adopted pipelining, but I'm pretty sure some ARM architectures have it now). Of course Cray had vector registers and the consequent vector SIMD hardware.
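
      (A rough C-level picture of that load/store separation -- hypothetical code, only meant to make the idea visible; a real compiler does this scheduling itself:)

        /* Load/store style: pull operands out of memory into locals (the
         * "registers") first, compute only on the locals, then store the
         * result back. Keeping memory access separate from computation is
         * what lets the compiler schedule loads early enough to hide their
         * latency instead of stalling a functional unit. */
        void axpy(float *y, const float *x, float a, int n) {
            for (int i = 0; i < n; i++) {
                float xi = x[i];         /* load                        */
                float yi = y[i];         /* load                        */
                float r  = a * xi + yi;  /* compute on "registers" only */
                y[i] = r;                /* store                       */
            }
        }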

      I don't think Cray invented out of order execution, but I don't think he needed it; in Cray architectures, it would be the compiler's job to order instructions to prevent stalls. In CISC architectures, OOO is mostly a trick for preventing stalls without the compiler needing to worry about it (also, with many models and versions of the Intel instruction architecture out there, it would be painful to have to compile differently for each and every model). So, for example, the load part of an instruction could be scheduled early enough that the data would be in a register by the time the rest of the instruction needed it.

      Anyway, the upshot is modern CPU designs have a bigger debt to Cray than to any other single design.

    • by sjames ( 1099 )

      It MIGHT have been a system that would unconditionally execute the next instruction and produce garbage if there was a dependency, leaving the compiler to order them correctly and insert NOPs where needed, but in fact it kept track of that in hardware and inserted the NOPs itself.

  • "Secondly, the CPU itself contained 10 parallel functional units (parallel processors, or PPs), so it could operate on ten different instructions simultaneously. This was unique for the time."

    Oh god, this isn't even remotely correct.

    For one, a similar design was used by Cray's earlier machine, the CDC 6600, which also had 10 PPs. And by the 7600, and the 8600. For another, there were dozens of machines with similar designs that predate this, including PEPE and the ILLIAC IV, both of which had hundreds of un

    • And now I see the error is in the quote above, because the original article doesn't screw it up.

      • by Desler ( 1608317 )

        Yeah, the context of the quote is severely butchered. That's something an *editor* would normally fix.

        • Yeah, the context of the quote is severely butchered. That's something an *editor* would normally fix.

          Editors are too expensive...so we get crap insted of reel content, but that's probably how Dice gets their articals through so offten

          **end purposeful bad writing editors would normally catch**

    • Geez, read the article. ;) They're talking about the 6600 there.

      • by Desler ( 1608317 )

        Sure, but the submitter's text makes it seem like that quote is talking about the first Cray supercomputer rather than about his work while still at CDC.

    • And aren't these the same?

      Firstly, within the CPU, there were multiple functional units ... which could operate in parallel; so it could begin the next instruction while still computing the current one ....

      Secondly, the CPU itself contained 10 parallel functional units ... so it could operate on ten different instructions simultaneously.

      • Re:Geez, read a book (Score:5, Interesting)

        by Creepy ( 93888 ) on Friday December 11, 2015 @06:38PM (#51103085) Journal

        I think the first is what we would call the pipeline [wikipedia.org] today and the second means parallel execution units. Using the word "functional units" for both is a bit confusing. Early RISC pipelines had 5 stages that are described in that link (and that brings back some memories - I remember studying it in college).
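
        (Purely to illustrate those five textbook stages, here is a tiny C sketch that marches instructions through IF/ID/EX/MEM/WB, one stage per clock -- a toy with no hazards, stalls, or forwarding modeled:)

          /* Classic 5-stage RISC pipeline as a shift register: each "clock"
           * every in-flight instruction advances one stage and a new one is
           * fetched. Toy model only. */
          #include <stdio.h>

          enum { IF, ID, EX, MEM, WB, NSTAGES };
          static const char *stage_name[NSTAGES] = { "IF", "ID", "EX", "MEM", "WB" };

          int main(void) {
              int pipe[NSTAGES] = { -1, -1, -1, -1, -1 };  /* -1 = empty stage */
              int next_instr = 0;

              for (int cycle = 0; cycle < 8; cycle++) {
                  for (int s = NSTAGES - 1; s > 0; s--)    /* everything moves one stage */
                      pipe[s] = pipe[s - 1];
                  pipe[IF] = next_instr++;                 /* fetch a new instruction */

                  printf("cycle %d:", cycle);
                  for (int s = 0; s < NSTAGES; s++) {
                      if (pipe[s] < 0) printf("  %s=--",  stage_name[s]);
                      else             printf("  %s=i%d", stage_name[s], pipe[s]);
                  }
                  printf("\n");
              }
              return 0;
          }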

        Funny thing is, I actually read the article to learn about what my first girlfriend's dad did - he was an engineer who worked on that thing (and yeah, she was a total nerd girl). I'm still Facebook friends with her; I should point her to the article.

    • What's a bit confusing is that the article is about Seymour Cray, the creator of supercomputers, but not about Cray computers. What is described is actually the Control Data computers, starting with the 6600 (I had the joy of learning on a 175).

      Yes, the PPs are _not_ "parallel processors", they are "peripheral processors". The 175 had 12 "peripheral processors" with quite limited capabilities, running at 12 MHz instead of the main computer's 40, and exclusively responsible for handling I/O.

      The
  • Am past my prime, and maybe getting Old Timers disease...

    So can someone remind me, what's the difference between Gene Amdahl and Seymour Cray?
    Did they ever meet and have cocktails?

    • by Anonymous Coward

      Born three years apart
      Amdahl served in Navy. Cray served in Army
      Amdahl worked at IBM, Cray worked at CDC
      Amdahl lived to 92, Cray died (in a wrecked Jeep) at 70

      Amdahl called Cray "the most outstanding high-performance scientific computer designer in the world."
      http://mbbnet.umn.edu/hoff/hoff_sc.html

      In 1995 AMDAHL AND CRAY RESEARCH ANNOUNCE RESELLER AGREEMENT FOR BRANDED CRAY SUPERSERVER 6400 SYSTEMS
      ftp://ftp.cray.com/announcements/company/OLD/CRAY_AMDAHL_AGMT.950228.txt

      It would be hard to imagine that they h

      • by slew ( 2918 )

        FWIW, according to the book Portraits in Silicon [google.com], Amdahl and Cray never actually met in person.

      • For the most part Amdahl seems like a put-your-pants-on-one-leg-at-a-time guy, while Cray would figure out how to simply have pants in the same place that he needed to be.

        My dad worked at CDC. Cray was a notorious eccentric, although at that time (early 1960s) taking your jacket and tie off as you crawled under mainframes was considered evidence of communism.

    • by Tablizer ( 95088 )

      difference between Gene Amdahl and Seymour Cray?

      Amdahl worked mostly on IBM mainframe clones, and focused on business applications. More emphasis on reliability, and on processing currency, integers (counts), and business logic. Example: payroll for a big corporation.

      Cray's machines were mostly used for scientific, engineering, research, and military applications. More emphasis on floating point number processing. Example: climate simulations.

  • This is an error from the original article, not from the summary. If the author didn't even bother to look up what "PP" actually stood for, I don't have a lot of confidence in the rest of the article's scholarship. Heck, ONE CLICK TO WIKIPEDIA would have given her the proper definition.

    • by superid ( 46543 )

      I came here to say this. In the early '80s I worked on Control Data Cyber 174C mainframes (we had two). Liquid cooled, maybe 20 feet long, with hinged chassis that swung out like doors (maybe 40" by 6' and about 10" thick). One chassis was the CPU, two were memory I think, and one held 10+ Peripheral Processor Units (PPUs), which did 100% of the I/O. A whopping 40 MHz! And a 208-bit memory bus with SECDED.

      • I interned (sort of) at Babcock and Wilcox's computing center around 1980. We had several CDC systems, including a 76 ("7600"), which was built in a horseshoe arrangement much like the Cray-1. (The field engineers used its interior as a storage closet.) Me, I was just hauling tapes, card decks and printouts, but I did get to learn a bit about the machines, and a lot more once I got into comp architecture classes in college. It was a great place for a geek.

        • I worked with the CDC 6400 at the University of Colorado. It had an Extended Core Storage unit, a separate cabinet full of magnetic core memory (they came up to 2 M words; I don't know how large CU's was). We gave machine room tours, including opening the door on the ECS, until somebody noticed that the machine sometimes crashed when the door was shut. Don't want to jiggle those cores, I guess. Just found this: http://www.corememoryshield.com/report.html

    • by dbc ( 135354 )

      Right on. I worked at Control Data, as a CPU logic designer. The PPs were peripheral processors. The article is full of so much egregiously incorrect tripe I won't even bother to type up a correction. My advice to everyone is to completely ignore the article unless you want your head stuffed full of misinformation.

  • The multiple functional units idea wasn't new with Cray's supercomputer. He was doing much the same thing in the computers he designed for Control Data Corporation (CDC).

    Also, it seems astonishing that there would be no copies of that Cray software around, anywhere, other than on an old disk pack. There are still copies of the software for those CDC machines. Maybe that's because there were so many of them -- relative to the Crays.

  • Just started to RTFA, and spotted two goofs.

    My goof: The quoted text _was_ about the CDC 6600. That certainly explains the similarity.

    The article's goof: Those were peripheral processors, not "parallel processors". They did I/O and occasional odd jobs for the operating system that the CPU wasn't suited for, or too busy to do.

    Now to finish it off, and see what else I or the author have to be embarrassed about.
