Cray SV1 Named Best Supercomputer for 2001 171

zoombat writes "The BBC reported that the Cray SV1 product line won the Readers' Choice Award for Best Supercomputer for 2001 by the readers of Scientific Computing & Instrumentation magazine. These beasts have some pretty remarkable stats, including a 300 MHz CPU clock, up to 192 4.8 GFLOPS CPUs or 1229 1.2 GFLOPS CPUs, and up to a terabyte of memory. And they sure know how to paint 'em real nice. Of course, we all know how "scientific" the Readers' Choice Awards are..."
  • Nostalgia Alert (Score:1, Interesting)

    by Anonymous Coward
    Anyone remember how in the game Populous (I forget which platform, maybe SNES) there was a computer tileset, and instead of castles, the highest level of structure was a Cray? Does anyone happen to know the model (if you could tell from the graphic)?
  • Imagine (Score:1, Redundant)

    by blang ( 450736 )
    a Beowulf cluster of those...
  • by JBowz15 ( 451573 ) on Sunday August 12, 2001 @03:43AM (#2113585)
    What I want to know is what supercomputer wins the award for congeniality?
    • And don't forget the swimsuit competition! I can't wait for this month's Scientific Computing and Instrumentation centerfold! Hoo-ah! Did you see the CM5 from last month? Still sexy after all these years....
  • Hmm, their choice for best programming language is interesting:
    Programming Language 2001 Readers' Choice Award Winner: Visual Basic
    I've actually tried doing data analysis in VB with ADO. It works, but slowly.
  • we all know the only reason the SV1 actually won 'best supercomputer' is because it's watercooled. I mean come on... if it ain't watercooled, it ain't a super computer.
  • Can someone tell me what the difference between a cluster and a network is? Speed? Proximity? Who defines it?
    • A network is a physical construct; a "cluster" is a logical definition.

      Networks exist irrespective of the data that flows through them; a cluster is defined by that very data.


    • Cluster: A computer consisting of many smaller computers. A cluster acts like ONE computer.

      Network: A group of computers connected together for data communication, not necessarily acting like one machine.

      Go to Beowulf.org [beowulf.org] for more information.

  • Since Cray has just as much claim to being a Seattle company as Microsoft does, why not just dump your Win box and get a Cray?

    Now that will stick it to Bill G and help the local economy at the same time ...

  • by green pizza ( 159161 ) on Sunday August 12, 2001 @08:35AM (#2117565) Homepage
    Most sexy belongs to the Thinking Machines CM-5 "Blinking Machines":

    (Nice big CM5)
    http://archive.ncsa.uiuc.edu/Cyberia/MetaComp/Images/CM5_lg.jpg [uiuc.edu]

    Makes the SGI Origins (see below) look like freakshows:

    (128 CPU Origin 2000)
    http://gepard.cyf-kr.edu.pl/GRIZZLY/or2.jpg [cyf-kr.edu.pl]

    (A cluster of [many] 128 CPU O2K's)
    http://www.ccic.gov/pubs/blue00/local_images/blue_mountain.jpg [ccic.gov]

    (A 256 CPU O3K, a 16 CPU O2K, and some RAIDs)
    http://www.cines.fr/images/IRISetMINERVE2.jpg [cines.fr]
  • A few years ago, I took the tour of NCAR's computing center, a true nerd mecca if ever there were one. After I got done bowing to all the raw power, I noticed something about the Crays that disappointed me, the same thing that disappoints me about the Cray SV1-32.

    They pretty much looked like all the other big iron in the room. Gone was that distinctive C-shaped tower. So was the need to hire a plumber to help install the water- or freon-based cooling system.

    Granted, these big guys are impressive, but they've lost that certain "joie de vivre" that once distinguished them from the other iron in the room.

    • Know why?

      Seymour Cray is dead. Dr. Cray was one of those genius-nutcase types; he wanted to build a private tunnel from his home outside Eau Claire, Wisconsin to his cottage on Lake Superior, for one thing. I know for certain that he insisted on at least two things. He believed that if you pay a million dollars or more for something, you should 1) be able to sit on it and 2) have your choice of any color. For that reason, you can get your Cray supercomputer in any color you like, and all the older "C-shaped" models that you refer to had padded seats somewhere on the case.
      • If you want flames painted on the side, you can have it. If you want a white SV1, you can have it. There are the standard colors, and then there are the custom orders. If you want special detailing, I believe they still send them to a local (Chippewa Falls, WI) auto-detailing shop and have them done up. It seems kind of crude, but there have been some pretty unique designs done that way.

        The Cray 1/2 were rounded because that was the optimal (distance-wise) way to route all of the wires. Shorter wires mean faster clock speeds, and those machines *came* overclocked. The seats were just pads covering the cooling units. The C-shaped Cray 2 was supposed to be a complete circle (again, wire lengths), but they couldn't find techs small enough to crawl down in there to route wires and fix stuff. So they made an opening, and the resulting C shape was *purely* coincidence. Ok, so I don't believe that either, but that's the official story.

        I heard this from one of the mechanical guys... one reason the shapes are more "boxy" is because of shipping concerns, doorways, etc. Not many people wait until the machine arrives to construct the server room anymore. :P
  • ...but has Linux been ported over to this yet?
  • "Milky Way Galaxy named Best Galaxy of 2001"
  • Because their 500 MHz and gigahertz machines from the early and mid '90s didn't sell.
  • by Nastard ( 124180 ) on Sunday August 12, 2001 @06:39AM (#2130577)
    just one of these?
  • by green pizza ( 159161 ) on Sunday August 12, 2001 @07:23AM (#2132584) Homepage
    If your app requires lots of vector crunching, the SV1 [cray.com] is one hell of a machine that'll keep you more than happy. The specs (mentioned above) are staggering... up to 1 TB of RAM, up to 1229 CPUs, air- and/or water-cooled.

    However, it's not alone. There are some other pretty mighty machines out there. The NEC SX-5 [cray.com] has faster RAM and more powerful vector CPUs than the SV1, but does not scale as large. The SGI Origin 3000 [sgi.com] series is not vector, but rather of a (somewhat) traditional CPU design. It's available with up to 512 CPUs and 1 TB of RAM. Unlike both the SV1 and SX-5, the Origin can be ordered with graphics (which turns it into an Onyx).

    Then there's the upcoming Cray SV2 [cray.com], which will be a combination of massively parallel & vector processing. Up to several thousand CPUs and a staggering RAM throughput of 250 GB/sec per bank!! (The Origin 3000 mentioned above has a total system bandwidth of 716 GB/sec... but that's the entire machine. The SV2 will have more than that with just three banks of RAM alone.)

    Some of these machines are single-image systems (in the case of the Origin 3000, SX-5 and >33 CPU SV1)... meaning they are one single machine, not a cluster. Most run very specific OSes made just for their hardware, with the possible exception of the Origin. SGI's big Origin and Onyx 3000 machines run IRIX 6.5, the same OS that runs on a $150 eBay-special SGI Indy workstation. Kinda cool. The compilers and math libraries are also heavily tuned and generally come with lots of example code and performance tips. When my university purchased a 96 CPU Origin 2000 a few years ago, SGI included a *box* of binders and CDs from some past performance-computing seminars they had held. Our university still holds a support contract for the Origin, and thus we're still getting significant compiler and library updates.

    Sort of belittles dual-bank PC2600 DDR-SDRAM (2x 2.6 Gigabyte/sec = 5.2 Gigabyte/sec) and Myrinet (1 Gigabit/sec = 125 Megabyte/sec interconnect), doesn't it?

    Of course... a 16 node x86 cluster doesn't cost $500K - $50M either...
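The unit arithmetic in that comparison can be checked in a few lines; a quick sketch (figures taken from the comment above, not from vendor spec sheets):

```python
# Bandwidth figures quoted in the comment above (not verified against vendor specs).

# Dual-bank DDR SDRAM: two banks at 2.6 GB/s each.
ddr_bank_gb_s = 2.6
ddr_total_gb_s = 2 * ddr_bank_gb_s

# Myrinet interconnect: 1 gigabit/s expressed in megabytes/s.
myrinet_gbit_s = 1
myrinet_mb_s = myrinet_gbit_s * 1000 / 8  # 8 bits per byte

# SV2 claim: 250 GB/s per memory bank, so three banks alone
# exceed the 716 GB/s total quoted for the Origin 3000.
sv2_three_banks_gb_s = 3 * 250

print(ddr_total_gb_s)        # 5.2
print(myrinet_mb_s)          # 125.0
print(sv2_three_banks_gb_s)  # 750
```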
    • If your app requires lots of vector crunching, the SV1 is one hell of a machine...

      ...SGI Origin 3000 series is not vector, but rather of a (somewhat) traditional CPU design. It's available with up to 512 CPUs...

      It's worth mentioning that SGI used to make the SV1 and all the other Cray vector computers. They mismanaged this product, of course, as they did so many other things. But they probably would have held on to it if they had seen any future in vector-based supercomputers. In that respect, they were probably right. You will note that most of the systems on top500.org's list are massively-parallel microprocessor systems, like the Origin.

      Some nitpicks: 512 processors is the "off-the-shelf" limit for the Origin series, but I know of special installations with as many as 2048. And there are probably some differences in the Irix kernel for the workstations and for supercomputers. I don't know the specifics, but possibly the two configurations of Irix are "the same" in much the same sense that Linux and Hurd are.

      Speaking of Linux, we will soon see Origin systems with Itanium chips in place of MIPS. (They may not be called "Origin", but most of the architecture will be the same.) Since it makes no sense to port the Irix kernel to the Itanium, these boxes will run Linux. Which is why SGI is open-sourcing XFS and other products associated with IRIX.

      • Actually, you'd be amazed at how similar the kernels are between an Indy and an Origin 3000. Obviously, there are a lot of platform-specific changes in things like error handling, interrupt handling/routing, and some tweaks to the memory allocators to deal with the NUMA architecture. But things like the scheduler, the filesystems, and the general architecture are (pretty much) the same. Well, I suppose the Origin version is 64-bit and the Indy is 32-bit.

        As for system size, the 512p limit is real. With only one exception so far (NASA Ames), the largest O3000 you can get is 512p. There's a special mode that you can run in where you sacrifice half the memory capability per node to get twice as many nodes and hence a 1024p system, which is what NASA has. There is a press release on that someplace at NASA Ames and SGI but I forget where. The "special" 2048 is actually a pseudo shared memory cluster, probably using an interconnect similar to (but a lot faster than) Myrinet or using something like HIPPI. This is actually what Blue Mountain is.

        As for the Linux boxes, I worked with some prototype hardware based on the Origin 3000 series "chipset" with Itaniums. It was pretty cool stuff (I was working on porting the system partitioning software from Irix to Linux). We have also run an Origin 2000 version of Linux/MIPS on a 128p system.

        • OK, you obviously know Origin architecture better than me. But when I was documenting SGI technology (1999) the 3000 series didn't exist yet. Perhaps I overlooked it, but I don't recall any memory issues on Origin 2000 machines with more than 512 processors.

          Now that I think about it, my assertions about differences between IRIX on an Indy and IRIX on massively-parallel systems were pretty bogus. There used to be different versions of IRIX for different platforms, but nowadays SGI emphasizes "modular upgradeability". And IRIX is still basically a 32-bit system. There is a 64-bit IRIX initiative, but the deadline for that is usually given as 2038 [slashdot.org] ;)

          • When you were working on Origin, there *were* no Origins past 512 processors. The first one came online less than a month ago. Anything you saw was a cluster.

            As for Irix being 32 bit, that is 100% false. Irix for Indy/R4000 Indigo and O2 is 32 bit. Everything else has a full 64 bit kernel and can run 64 bit binaries. Heck, we couldn't even address all the memory on all the nodes in an Origin without 64 bits. Just because the time is stored in a 32 bit value does *not* mean it's a 32 bit OS.

  • Oh yeah? (Score:2, Funny)

    by Anonymous Coward
    I'm sorry to break it to all of you, but the Cray SV1 is just another example of the "Performance Myth." My G4, at only 450 MHz, can outperform any Cray model at Photoshop. Allow me to demonstrate by rendering this 200MB graphic image. The G4 renders it in only 20 seconds, while the Cray fails entirely!

    Sorry, Cray. I'm not buying.
    • Yes, but will yer Mac continue to outperform the Cray at running the Gimp?

      Darn it, now I'm going to have to go try this out and see...
  • Excuse me, but haven't they considered Beowulf clusters [beowulf.org]? I think they are better in both scalability and price. Some clusters have even managed to rank among the 100 fastest computers.

    • Not knowing a lot about Beowulf, I looked at the site pointed to by the link in the base article and would like to point out that Digital (DEC) had this capability in the OpenVMS cluster as early as 1984. By 1994, when Beowulf was started, it was possible to build a cluster of 32 nodes with 8 processors per node, all clustered. Current Alpha technology allows a 32-way cluster-in-a-box, times 32 systems. Now, while it may not be "open" in the Linux sense of the word, it is faster and the clustering technology is proven. As a point of history, DEC also had an MPP box with 64 processors which died in advanced development in the early '80s.
    • Re:Beowulf? (Score:3, Funny)

      by Detritus ( 11846 )
      Real computers aren't named after some Danish nob with a sword.

      Real computers are designed in Chippewa Falls, Wisconsin. Real computers have high-speed interleaved main memory, and lots of it. Cache is for losers who can't afford a real memory system.

      • Shhh... don't convince upper management that cache is for losers. I don't need to be looking for another job. :/ BTW, Real Beer is made here too. Leinenkugel's Honey Weiss and Red. Good stuff.
        • Amen!

          Trying to start my own branch of the Leinie Lodge here in Kansas City.


        • Err, Summit [summitbrewing.com] from Minnesota is significantly better than the watery lagers of Chippewa Falls (not to mention half of it comes from Milwaukee), although it wouldn't be the first time that cheeseheads have been in denial of the fact that Minnesotans Do It Better©. =)
      • Re:Beowulf? (Score:2, Interesting)

        by ScumBiker ( 64143 )
        There's a 50-ish lady who works at Cray, named Dorothy. I met her at this year's Rockfest in Cadott, WI, which is about 20 mi north of Chippewa Falls. She was wearing a Cray t-shirt, which of course caught my eye right away. I ended up making friends with her and getting a phone number and contact person at Cray to get my very own Cray t-shirt. We talked about how SGI is sucking the life out of everything around it, and I found out Cray is back out on its own. So it appears that Cray is going to survive SGI after all, and will still be building those insanely fast machines they're known for.
      • Hate to break it to you...but the SV1 *is* the first Cray to have a cache. Specially designed, of course. :)
    • No. (Score:2, Informative)

      As fun as it is to try to tie everything to Beowulf clusters, it's not applicable and not necessary to bring up with every post. FWIW, not all tasks lend themselves well to being done in a distributed environment. Of course, that's been mentioned a few thousand times here before, so I won't waste my breath.
      • Re:No. (Score:2, Informative)

        by the gnat ( 153162 )
        I won't waste my breath.

        I will. Crays are vector supercomputers, which is something entirely different from your garden-variety Intel or RISC chip. There are several different types of computer you need to consider in the sort of comparison you're making:

        - Vector supercomputers. This includes Cray, and some by Fujitsu and Hitachi (perhaps NEC as well, but I think those are MIPS-based).
        - Massively parallel shared-memory supercomputers. The IBM SP2 and SGI Origin 2000/3000 come to mind. You take two of these, plug them into each other, and get one computer twice the size with (I think) virtually no loss of bandwidth. I'm pretty sure these can also be connected just for high-bandwidth communications, but the real advantage is in shared memory. Cray makes these too, and SGI's MPPs are largely based on Cray technology (hence the "CrayLink" on Origins).
        - Distributed computers. Beowulf is just a set of patches (primarily to Linux) to make distributed-memory programming easier (e.g. utilizing multiple ethernet cards for higher bandwidth). You still have to write programs specially to take advantage of the machine.

        The difference lies primarily in programming techniques. You can not run a simple multithreaded program that would saturate an SP2 or Origin on a Beowulf cluster. You'd have to re-write it with PVM or something. PVM is not difficult, but it's not transparent. Some Fortran 90 compilers will do automatic parallelization, but not for a distributed-memory system.

        Basically, there's a hell of a lot more difference between a Cray and a Beowulf cluster than between the Beowulf cluster and the SETI@home network.

        ( disclaimer- I am not a supercomputer programmer, but a lot of the people I work with are. I do know something about parallel code, however. )
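For contrast with shared-memory multithreading, here's a minimal sketch of the distributed-memory style described above: each worker owns its own copy of a data chunk, and the only sharing is explicit message passing. Python's multiprocessing stands in for PVM purely as an illustration; the function names here are invented for this sketch and are not part of any real message-passing library.

```python
# Distributed-memory style: workers get private copies of their chunk and
# communicate results by message passing, as with PVM/MPI. No shared memory.
# This uses Python's multiprocessing purely as an illustration; it is not PVM.
from multiprocessing import Process, Queue

def worker(chunk, out):
    # Each worker owns its chunk; the only communication is the final send.
    out.put(sum(x * x for x in chunk))

def parallel_sum_of_squares(data, nworkers=4):
    out = Queue()
    size = (len(data) + nworkers - 1) // nworkers
    procs = [Process(target=worker, args=(data[i * size:(i + 1) * size], out))
             for i in range(nworkers)]
    for p in procs:
        p.start()
    # Collect one partial result per worker, then reap the processes.
    total = sum(out.get() for _ in procs)
    for p in procs:
        p.join()
    return total

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1000))))  # 332833500
```

A real PVM or MPI program would look quite different, but the structure is the same: partition, compute privately, exchange messages. A multithreaded shared-memory program could skip the partitioning and messaging entirely, which is exactly the rewrite burden the comment describes.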
        • Cray also makes distributed clusters, BTW. But anyway, just to clarify: when we say that real Crays are vector machines, people keep pointing out AltiVec. I would just like to comment that while AltiVec has 128-bit vectors (in other words, it can hold four 32-bit numbers in one vector register, or eight 16-bit numbers), Cray vector registers hold 64 elements of 64 bits each. Of course, it takes many clock cycles to add or multiply vectors, but it is still faster than the AltiVec way.
      • But isn't that comment very relevant? The article is talking about supercomputer-class computers, and a Beowulf cluster is considered just that. Claiming its superiority over other things may be a bit zealous, though; it's not the best, but it is affordable for you and me. I hope to someday say that I run a supercomputer in my basement: "Sure, I'll let you see it... can yours do this?"
        • Re:No. (Score:1, Interesting)

          by Anonymous Coward
          Beowulf clustering is a decent solution for calculations that involve easily parallelizable tasks (i.e. sub-tasks that do not need to communicate with each other).

          If, however, the sub-tasks have to communicate with each other the bandwidth becomes critical and clustering over a network won't scale anymore.

          Cray represents another approach to the problem. It has an absolutely amazing bandwidth and can deal with the hard problems that can't be parallelized over a network.

          So, clustering Crays wouldn't help you at all.
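The scaling argument above can be put in toy numbers; a sketch with purely illustrative costs (the work and communication figures are made up for this example, they are not benchmarks):

```python
# Toy scaling model for the point above: with no inter-node communication,
# speedup is nearly linear; when sub-tasks must exchange data, network
# overhead eventually swamps the gain. All numbers are illustrative only.

def speedup(nodes, work=1000.0, comm_per_node=0.0):
    # Time on one node vs. time with work split across `nodes`,
    # plus a per-node communication cost paid over the network.
    t1 = work
    tn = work / nodes + comm_per_node * nodes
    return t1 / tn

# Embarrassingly parallel: scales almost perfectly.
print(round(speedup(64), 1))                      # 64.0
# Communication-heavy: adding nodes stops helping, then actively hurts.
print(round(speedup(64, comm_per_node=2.0), 1))   # 7.0
print(round(speedup(256, comm_per_node=2.0), 1))  # 1.9
```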

    • What's the biggest Beowulf you've heard of?

      What does scalability mean?

      IIRC, the MasPar MPPs were 16384 Motorola 68k's.

      That's scalable - if you mean "lots of CPUs".

      Or what about some of the ASCI computers? 8192 CPUs, 6144 CPUs, etc. No Beowulf that big, eh?

      What is it that you really mean by Beowulf? Or is it just "the buzzword" that everyone loves, and this time (for the first time in 234092384234 Slashdot articles) it happens to be slightly relevant?

      The idea of shared-nothing commodity clusters isn't new, and Linux isn't the only place it's done, much less Beowulf. In fact, Cornell ditched some SP/2 boxes to build a cluster (but they used Win2k), and apparently they love it. You can buy such a compute cluster from Dell just like theirs if you want it.

      No, I don't think the issue here was "we've never heard of Beowulf" or "well, we are against Beowulf because we're snobs". Maybe, just maybe, they had criteria other than "must sound like 'eowulf'" when they made a decision?

      • Maybe, just maybe, they had criteria other than "must sound like 'eowulf'" when they made a decision?

        Actually, in a lot of supercomputing fields, the decision is heavily based on "it must run Cray Fortran compiler in optimal fashion". There are simply huge amounts of Fortran code, much of which was written and optimized 20 years ago by brilliant graduate students who have taken maybe a single CS course, that would have to be rewritten moving to any other platform.

        Rewriting all this code for a different system would make the Y2K update of all "that Cobol code where the source listing had been obsoleted because they'd modified the binaries because compilation took too long" seem like a walk in the park :-). Especially given that the new authors would likely be brilliant physics grad students who've taken (maybe) a single CS course.

        (Cray may supply F90, but I'd bet Crays spend most of their time running amazingly optimized F55 code :-))
  • 300 MHz? That seems somewhat unimpressive... would someone mind educating me? :o
    • ... a 300 MHz CPU clock, up to 192 4.8 GFLOPS CPUs or 1229 1.2 GFLOPS CPUs, and up to a terabyte of memory...

      That's a lot of GFLOPS :-), and a LOT of RAM.
      I'm not an expert in CPUs, but I've picked up a few things that may help you.

      There are several ways of making a CPU fast. You can (the very popular way) increase the clock frequency, thus doing more operations per second. Roughly, one clock cycle equals one CPU instruction (sometimes instructions take more than one, depending on what kind they are). This is the popular way to make a CPU sellable; inexperienced PC buyers sometimes simply focus on "How many MHz does this hard drive have?" :-)
      The second way is closely connected to this: simply execute more than one instruction per clock cycle. This is working in parallel, a more complicated solution that helps in some types of operations, but not others. Some problems are not good for parallelizing.
      A CPU also has something called a pipeline [some have more than one, i.e. parallel processing]; you can compare it to an assembly line in a modern factory. More pipes = parallel computing. For some reason, a short pipe [fewer stages until done] gives faster execution but lower clock frequencies, maybe because of heat or something. Could anyone fill me in here? Anyhow, a CPU like the G4 [Motorola/Apple] has a rather short pipe, 4 or 5 stages. The P4 [Intel] has a rather long one, 20 or so. This is why a G4 doesn't reach the same MHz as the P4, but can still compete in raw computing power.

      You can also increase performance in a CPU by adding special instruction sets the programmer can call, and then optimizing those instruction sets. The Pentium++, for example, is a rather simple processor wrapped in a huge amount of add-on instruction sets, like MMX, SSE, SSE2 (and many many more). The wrapper hardware translates these advanced CPU calls into the basic instructions the core CPU can actually understand.

      Hope I clarified some things, and if I missed something or got something wrong, please correct me :-)

      • Ye olde 8086 is much like the canonical 1 cycle = 1 instruction CPU that you described. Since the minimum number of transistors needed to execute an instruction is pretty much fixed (but occasionally somebody somewhere figures out a way to reduce the number by a few), and the amount of time it takes for the signals to pass through a sequence of transistors is basically fixed (although better materials and smaller transistors can improve this), a 1 cycle = 1 instruction design really just isn't capable of running at a high clock speed (MHz).

        There are several ways to improve speed. The direction Intel went with their chips (and many other vendors as well) is pipelining. Pipelining is when you take that fixed number of transistors and break it into groups based on when they do their work. A 2-stage pipeline is one where the instruction logic is separated into two steps. A 3-stage pipeline is three steps, and so on. A sequence of four instructions in a 3-stage pipeline executes like this:

        1) The instruction is loaded and the first stage is executed in one clock cycle

        2) The next instruction is loaded and executed in the first stage while the first instruction is executed in the second stage (one clock cycle)

        3) The third instruction executes in the first stage, the second instruction executes in the second stage, and the first instruction executes in the third stage (one clock cycle)

        4) The fourth instruction executes in the first stage, the third instruction executes in the second stage, and the second instruction executes in the third stage (one clock cycle)

        5) The fourth instruction executes in the second stage and the third instruction executes in the third stage (one clock cycle)

        6) The fourth instruction executes in the third stage (one clock cycle)

        So, as you can see, once the pipeline is filled, one instruction completes every clock cycle, but each instruction takes three cycles to complete. Neat trick, eh? There are a lot of hairy details to take care of between stages, and pipelined processors can get very complicated very fast, particularly if you're trying to implement an instruction set that wasn't designed for a pipelined architecture (e.g. the x86 instruction set).
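The walkthrough above generalizes neatly: once filled, an s-stage pipeline retires one instruction per cycle, so n instructions take n + s - 1 cycles in total. A quick sketch of that arithmetic (a toy model, not a simulation of any real chip):

```python
def pipelined_cycles(n_instructions, n_stages):
    # Once the pipeline fills, one instruction completes per cycle;
    # the first instruction still needs n_stages cycles to drain through.
    return n_instructions + n_stages - 1

# The example above: four instructions through a 3-stage pipeline.
print(pipelined_cycles(4, 3))   # 6 cycles, matching steps 1-6
# Unpipelined, the same work would take 4 * 3 = 12 cycles.
# A single instruction still takes a full pipeline's worth of cycles:
print(pipelined_cycles(1, 20))  # 20
```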

        Cray went a different way. A Cray processor uses vector instructions to process a lot of data in one instruction. Compare this to the pipeline, where multiple instructions are in progress during any single clock cycle. A vector processor, on the other hand, has large sets of registers which are referenced as a vector, and has instructions that can fill an entire vector from a particular chunk of memory, add two vectors and store the results in a third, multiply, divide, negate, whatever, a vector at a time. And then of course there is an instruction to store the contents of a vector into a particular chunk of memory.

        Pipelining has the marketing advantage that if you make your pipeline long enough (the Pentium 4 is a 20-stage pipeline) then the stages take less time to execute and you can bump up the clock speed.

        Vector architecture does not have this marketing advantage, but it is historically superior for certain applications and data sets (like modeling meteorological data).
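The vector style described above can be sketched in plain Python. The 64-element register length below matches the classic Cray design, but the function names are invented for this illustration; they are not Cray instructions:

```python
# Sketch of vector-register semantics: one "instruction" touches a whole
# 64-element register, i.e. fill a vector from memory, operate on it,
# and store it back. Illustrative only; not real Cray assembly.
VLEN = 64  # elements per vector register (classic Cray register length)

def vload(memory, addr):
    return memory[addr:addr + VLEN]          # one vector load

def vadd(va, vb):
    return [a + b for a, b in zip(va, vb)]   # one vector add

def vstore(memory, addr, vr):
    memory[addr:addr + VLEN] = vr            # one vector store

mem = list(range(128))
v1 = vload(mem, 0)        # elements 0..63
v2 = vload(mem, 64)       # elements 64..127
vstore(mem, 0, vadd(v1, v2))
print(mem[0], mem[63])    # 64 190
```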
        • Ye olde 8086 is much like the canonical 1 cycle = 1 instruction CPU that you described. Since the minimum number of transistors needed to execute an instruction is pretty much fixed (but occasionally somebody somewhere figures out a way to reduce the number by a few), and the amount of time it takes for the signals to pass through a sequence of transistors is basically fixed (although better materials and smaller transistors can improve this), a 1 cycle = 1 instruction design really just isn't capable of running at a high clock speed (MHz).

          The 8086 is *far* from being a one-cycle-per-instruction design. The fastest instructions in it take 3 cycles (like NOP or register-to-register ADD). Instructions with complex effective-address calculations take even longer. For example, MOV (MOV = the load/store instruction in the x86 'architecture') of an immediate (immediate = the data is supplied in the instruction) to memory with base + index + displacement addressing takes a massive *22* clock cycles. For comparison, in more modern architectures (anything since the 486), it often takes just 1 or 2 effective clock cycles under ideal conditions.

      • "For some reason, a short pipe [fewer operations until done] gives faster execution but lower clock frequencys, maybe because of heat or something. Could anyone fill me in here ?"

        Each stage in the pipeline lets the hardware work on the instruction a bit, to set up register access and whatnot. Quite a few of the steps in modern x86 processors are 'unwrapping' the CISC instruction and turning it into RISC. (This is a bit simplified.) The more steps there are, the shorter (less time) each step can be, letting the clock rate go up. Fewer steps means (generally) that each step needs more time, therefore limiting clock speed.

        Long pipelines have one drawback, though. Assume there's one instruction currently being executed. The next one, in memory, will be in the stage that's one back. The next instruction after that will be in the stage before THAT, and so on. This works most of the time, where you have many sequential steps in a row. However, if there's a branch, the pipeline has to be flushed; it'll take at least as many clock cycles as there are stages in the pipeline before any instructions start getting actually executed; there's a lag time while the instructions make their way from the start to the end of the pipeline. There may/will be overhead on top of that which can make the stall time greater than if there was no pipeline at all.

        So, back to yer original question: a high-MHz deep-pipelined chip can be slower than a lower-MHz shallow-pipelined chip IF there are a lot of branches in the program, because each branch will require a pipeline flush, which takes a lot of time to recover from. Speculative branching helps out a lot here, but it's not 100 percent accurate, and also requires a lot of silicon to deal with.

        All the extra real estate on the chip dedicated to the logic for deep pipelines could be, instead, dedicated to speeding up operations or extra cache or whatever. But x86 chips need fargin' deep pipelines these days to get high MHZ numbers, or else each complicated CISC instruction would take a year or so to decode.
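A back-of-the-envelope model of that trade-off (all figures below are illustrative, not measurements of any real chip):

```python
# Toy model of the cost described above: a deep pipeline pays a big flush
# penalty on every mispredicted branch. All figures are illustrative only.

def effective_cpi(stages, branch_frac, mispredict_rate, base_cpi=1.0):
    # A mispredicted branch stalls for roughly a pipeline's worth of cycles.
    return base_cpi + branch_frac * mispredict_rate * stages

# Assume 20% of instructions are branches and 10% of those mispredict:
deep = effective_cpi(stages=20, branch_frac=0.2, mispredict_rate=0.1)
shallow = effective_cpi(stages=5, branch_frac=0.2, mispredict_rate=0.1)
print(round(deep, 2))     # 1.4 cycles per instruction
print(round(shallow, 2))  # 1.1 cycles per instruction
# The deep pipeline needs a ~27% clock-speed edge just to break even here.
print(round(deep / shallow, 2))  # 1.27
```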

        • if there's a branch, the pipeline has to be flushed

          Your reply implies the following towards the end, but it wasn't clear: pipelines aren't automatically flushed, as you first imply. A CPU has to decide which fork to take when it loads instructions after the branch is read into the pipeline. Only if the code takes the fork that's not already in the pipeline does the CPU discard the pipeline's contents.

    • by Anonymous Coward
      The fact is that even in ordinary PCs the processor speed is no longer the problem. The real bottleneck is the I/O of both the memory and the mass storage.

      This has been common knowledge in the world of supercomputing for decades. In a multiprocessor architecture the speed of an individual processor is not that important. What's important is that the processors can efficiently access the memory, mass storage and can rapidly communicate with the other processors.

      If I were buying a new computer now I'd opt for a dual processor setup (possibly two 650 MHz P-III CPUs or something else in the same MHz range) over a single, blazingly fast CPU that chokes on the sluggish memory bus.

    • Well, it's kind of like this...
      In an ordinary PC, you can use one CPU clocked really fast, but you're limited by the speed of the I/O bus and memory bus. This is where cache comes in, as small amounts of data and code can be held in extremely fast memory "close" to the CPU.
      In a supercomputer like this, you use lots of slower processors, which aren't necessarily limited by bandwidth, but can individually get enough work done.

      Imagine, if you will, 35 people in Edinburgh, who need to get to Glasgow, some 50 miles away.
      Would it be quicker to transport them in a 160mph Porsche Boxster, one at a time, or take them in 5 Volvo estates?
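The analogy can even be given toy numbers; the speeds and seat counts below are guesses for illustration, not claims about actual cars:

```python
# Toy numbers for the Edinburgh-to-Glasgow analogy above. Speeds and seat
# counts are illustrative guesses only.
DISTANCE = 50.0  # miles

def total_hours(people, cars, seats_per_car, mph):
    # Each car shuttles back and forth; every wave delivers one carload each.
    trips_per_car = -(-people // (cars * seats_per_car))  # ceiling division
    # The last trip is one-way; earlier trips are there-and-back.
    return (2 * trips_per_car - 1) * DISTANCE / mph

# One fast Porsche, one passenger seat, 160 mph:
print(round(total_hours(35, cars=1, seats_per_car=1, mph=160), 1))  # 21.6
# Five slower Volvo estates, four passenger seats each, 70 mph:
print(round(total_hours(35, cars=5, seats_per_car=4, mph=70), 1))   # 2.1
```

Latency per passenger favours the Porsche; aggregate throughput favours the Volvos, which is the whole point of many-slower-CPU designs.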

    • If its anything like the older Crays (SV1 stands for "scalable vector", iirc its sort of a mix of vector and traditional CPUs).. then it gets its speed from the vectorized nature of the cpu and more importantly, the problem at hand.

      i was told in a CS course that the arch of the cray vector units is basically the same as the cray 1... the speeds have changed, the process has changed, the external pieces have gotten much faster.. but at the core, the cray vector machines are very fast at the following type of thing:

      given a vector of a given length

      do foo to every element in that vector

      _very_ efficiently

      to see how this operates a bit better, consider how a normal cpu might do the following

      for i = 1 to 64
          blah[i] = blah[i] + 1

      that would end up getting compiled perhaps into something like this on a traditional cpu:

      loop: load blah[i]
            increment blah[i]
            save blah[i]
            increment i
            if i <= 64, goto loop

      what we're seeing is that for 1 element, we do a load, an ALU op, a store, an ALU op, and a conditional branch.

      conditional branches fuck cpus. badly. having loads and stores inside inner loops fucks cpus badly.

      to see why, you need to understand pipelining, but basically i'll make it short and easy: the fetch unit of a cpu is always stuffing the pipeline with its "guess" of what the next instructions should be... and it's not until several of those 1.4ghz clock cycles later that you even know if you guessed the right instruction... if you did, great.. if you didn't, you're fucked and you flush the pipeline and start over.

      conditional branches fuck this all to hell because, without optimization, you've got a 50% chance of filling your pipeline with the wrong instructions.. so on a p4 with a 20+ stage pipeline you're talking about throwing away some sizable portion of those instructions... and then refilling them... now, branch prediction really helps this a lot, but conditional branches are just one problem... the load/store units of cpus also typically introduce huge pipeline delays... i.e. you need to load blah[i] but that takes 2 or 3 cycles (even from cache!! don't even think about it if you need to go to main memory), so any instructions which use blah[i] must be scheduled at least 2-3 clock cycles afterwards...

      so without keen optimization and ideal software loads, suddenly your 1.4ghz chip is stalling 2-3 cycles all the time.. and it's only running like a 400mhz proc :)

      so, to make traditional cpus fast, pipelining and multiple EUs have been added. these have drawbacks (and i've listed some of pipelining's above).

      the "vector" approach is totally different. you actually have "vector" registers, and "vector instructions". the machine actually sets up "virtual" pipelines for you. so on a vector machine, the scenario above would be more like:


      xv = xv + 1

      (assuming xv is the vector register with your 64 elements in it)

      what the cray hardware does is hook up the pieces of its cpu in a virtual pipeline that does something like this:

      foreach element of xv
          load, increment, save

      notice that the foreach construct looks like a loop, but it's not really; it's pipelined, so what actually gets sent through looks like this:

      load i
      inc i, load i+1
      save i, inc i+1, load i+2
      save i+1, inc i+2, load i+3
      save i+2, inc i+3, load i+4
      save i+3, inc i+4, load i+5
      etc etc etc

      except for fill and drain, the load, inc, and save hardware units are always perfectly utilized. there is no branching or conditional logic involved.

      the example i've chosen is very trivial, and may be subject to huge factual or conceptual mistakes :) the cray's amazing speed only works in situations where the problem can be expressed in vector instructions, i.e. do the same thing to a fuckload of data in such a way that the cray's hardware can pipeline it efficiently..

      there are lots of interesting problems that the cray did _not_ handle well.. but for what it's worth, the vector processors in the cray 1 aren't significantly different in operation and instruction set from the SV1 of today.. by many measures, cray "got it right" originally. the SV1 of today might use normal BGA packaging on a CMOS-based process (the cray 1 used discrete ECL logic and point-to-point wiring - all strung together by little old minnesotan women).

      also, the original cray 1 ran at either 100 or 80mhz and could take 32mb of ram... i.e. a 1970s machine that was faster than any desktop workstation until the mid 90s...

      note that the top500 list crays are usually the T3Es.. which are a totally different beast than the vector processor.. a T3E is just a bunch of alpha CPUs on a very fast interconnect.. sort of like a "custom cluster in a box".

      • Great explanation! Much more clear than the one I was going to give :)

        Vector units are extraordinarily fast at certain tasks. I work with a custom DSP that uses a vector processor to do FIR filtering, and the amount of processing it does is mind blowing. We clock it at somewhere between 80-120 MHz (depending on application), and at the top end of that range it gets nearly a billion ops per second.

        Now, this does come with some drawbacks. First of all, it requires a tremendous amount of silicon to do properly, making development extremely expensive. Not to mention that, with all that logic running simultaneously, power consumption can become an issue as well. Secondly, it is a royal pain in the ass to program (or write a compiler for). When you have 8 operations per instruction word, making efficient use of that processing power involves writing some ugly, ugly code.

      • Hey, thanks for the info! That's really cool! :)
    • In altivec, multiple instructions can be executed at once, and each instruction works on 4 to 16 numbers at once. A cray, on the other hand, also executes multiple instructions at once, but instead of only operating on 4 to 16 numbers per instruction, an instruction can affect up to 64k numbers. This obviously does not happen in one clock cycle, but it does happen fast enough that one 300mhz processor is faster than several gigahertz processors, especially once you weigh in memory access times.
  • <sarcasm>Only 300 MHz? And so what if it can get 2.4 GFLOPS per processor... What are GFLOPS? Why aren't these machines as fast as Pentium Pro chips? I saw it has 192 processors... it better. So this machine has the processing power of 27 or 28 of the new Pentium processors that run at 2.1 GHz... Hardly seems worth it. I bet this Cray system probably ships with 5400 RPM disk drives too. Probably all about 800 MB. I don't think I will be buying one of these any time soon. And the darn thing is round? Probably stole some of the designers from Apple.</sarcasm>
  • Quote:
    The Cray 1 was installed at Los Alamos National Laboratory in 1976. It boasted a record speed of 160 MFLOPS (million floating-point operations per second) and an 8MB memory.
    Well I guess back then I could have competed with the superior 0.5 GB of RAM I have now...

    But I find it frustrating to see these overclock'd circuits unleashed just for science. It may make a decent and nice Quake server though :)
    • just a stupid question:

      was it 8MB or 8 Mword? I seem to recall crays using some non-standard wordsize.

      While I'm at it, here's another:

      How fast were those 160 MFlops? I suspect that sustained throughput would play a big part in it. Is that about as fast -- in real-world speed, not peak tight-loop speed -- as today's desktops, or have we finally caught up to that?
      • Crays (except possibly the CS6400, the machine the Sun E10k is based on) sort of use a 64-bit word. I say sort of because it holds a 64-bit floating point or integer number, but stores it in up to 80 bits, depending on the machine. It uses the extra space for error correction.

        So, an 8-megaword cray is equivalent to a 64-megabyte PC (memory-wise, that is), except it really has 80 megs.
  • I'll betcha one of these suckers could crash windows in a couple o' microseconds.

    I have to wait almost all day for it.

  • Check out SARA [www.sara.nl]: 'TERAS' is a 1024-CPU system consisting of two 512-CPU SGI Origin 3800 systems. This machine has a peak performance of 1 TFlops (10^12 floating point operations per second). The machine will be fitted with 500MHz R14000 CPUs organized in 256 4-CPU nodes and will possess 1 TByte of memory in total. 10 TByte of on-line storage and 100 TByte near-line StorageTek storage will be available. 'TERAS' will consist of 44 racks: 32 racks containing CPUs and routers, 8 I/O racks and 4 racks containing disks.
    (And nopes it's not listed in top500 yet :)

    For more closeup pictures see: http://unfix.org/news/sara/ [unfix.org]

    Ain't it sweeeeeeeeeeet?
  • by robbyjo ( 315601 ) on Sunday August 12, 2001 @03:59AM (#2143219) Homepage

    Visit here [top500.org] to view 500 fastest computers in the world as of June 2001. Cray is actually number 11. IBM ASCI White SP Power 3 is the king.

    It's interesting to note that a beowulf cluster is also there (#42)

    • The 6th fastest (reported) supercomputer on that list is ASCI Blue Mountain (a cluster of 48 SGI Origin 2000's). It's pretty interesting to note the installation date of that machine... 1998!

      A lot has happened since then (just think, in 1998 the fastest x86 CPU was the Pentium II at 450 MHz). If you look further down the list, the next-oldest machine is a Cray at number 35. Very cool that Blue Mountain is still a pretty impressive performer over three years later (an eternity in computer terms).
    • Actually the Cray at #11 is the T3E, which is not as fast as the SV1. This seems to be not so much a list of the fastest per se as a list of machines working in production environments. Since I don't see the SV1 on the list, I don't know whether this is a new rollout or one that is in production in uncredited environments......
    • Unless the situation has changed since I heard this, Cray is the only company where you can buy supercomputers commercially - that is, "off the shelf".

      Customer: I want the big red one on page 42

      Cray Salesperson: Cool choice! We'll start delivering it next week at noon...

      Other machines may be faster, but they're as rare as hen's teeth.
    • A related site, which I find a bit more interesting, is the clusters database [top500.org]. Particularly noteworthy are three PC clusters that cross the teraflops line (peak performance, mind you, but still impressive).

  • by Giant Hairy Spider ( 467310 ) on Sunday August 12, 2001 @05:47AM (#2144287)
    • Does it sound familiar when we fill in this blank: "____ Supercomputer," with the company name?
    • Bigness of numbers.
    • Number of words that we don't understand. (ji... ga... flop?)
    • Cool paint job.
    • Number of clever supercomputer jokes accumulated around the brand. (Apple used a Cray to design their chips, Cray used an Apple to design...)
    • How easily could we imagine the case of this computer as concealing a hostile intelligence?
  • Hmmm, and here I was thinking that the SV1 was basically a cluster of J90s (admittedly with souped-up processors ... lost track of whether they call them the S+ or SE now) and some rather beefy I/O. If you're looking at raw vector grunt, then the NEC SX series is rather impressive, though supplies may not have resumed after that anti-dumping action was lifted. Cray has not really produced a top-end vector machine since their T90s, and with the Japanese hell-bent on their Earth Simulator project (40 Tflops), I don't really see the US catching up anytime soon. And no, a beowulf of Itaniums doesn't count unless the problem is embarrassingly parallel and your compiler cooperates.

    Anyway, now that Cray has been purchased by Tera (the guys who developed that highly threaded CPU) it will be interesting to see their technical direction. In terms of processor development, theirs is the only vaguely interesting CPU that has reached the semi-commercialisation stage.

    • The SV1 was built as an upgrade path for J90 users, so they were somewhat compatible as far as board swapping, etc. goes.

      Now keep in mind that the J90/SV1 is Cray's "budget" line... The SV2 (due out next year) is supposed to be a successor to both the T90 AND the T3E (it's both vector and massively parallel).

      I'm curious to see what happens with the Tera multithreading systems as well. For the first few years I imagine they will just be bought as computing-research machines (so that people can see what they do).

Never worry about theory as long as the machinery does what it's supposed to do. -- R. A. Heinlein