Hardware

Red Storm Rising: Cray Wins Sandia Contract

anzha writes "It seems Cray is alive and kicking, at least. They might even be making a comeback after a very rough time as part of SGI. The big news? Cray seems to have won the Red Storm contract, Sandia National Labs' newest supercomputer procurement. Check out the press release here. I'd say that this is probably an SV2, but the press release is a bit scant on details."

  • bad link to cray (Score:1, Informative)

    by Anonymous Coward
    The link in the story should be http://www.cray.com, not ww.
  • Wasn't Red Storm a project put forth by Compaq [thestandard.com] to build a 100 teraflop system?
    • I believe this was a premature announcement, at least as regards this project name. They didn't get the Red Storm contract, although they were a competitor, and the architecture was far from a done deal.
  • by korpiq ( 8532 ) <`-.' `at' `korpiq.iki.fi'> on Wednesday June 19, 2002 @03:48AM (#3727659) Homepage

    I know something about the problems of multiprocessing, but I would like to know how come monumental systems can still sell in the days of commodity hardware and - oh gosh, not again - Beowulf :). Ok, maybe spread-out computing is not applicable to all kinds of computationally heavy problems, but I'd really like to see some stats on where a monolith like Cray is more applicable and where a multilith like what they use in movie rendering.

    Just pondering while waiting for net-enabled market of processing power (remember processtree?) and storage space (freenet, the new one) to make millionaires of all excessive-hardware owners, through paypal. Well, maybe not :)
    • It all comes down to the inter-CPU bandwidth needs of the particular piece of code you need to run. A render farm has pretty much no need for interprocessor bandwidth, whereas Crays have it in the hundreds of GB/s, because the kind of numerical physics simulation usually run on these beasties needs all the bandwidth it can get, and a little Beowulf cluster of x86 toys just ain't gonna cut it.
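
      A minimal sketch of the render-farm end of that spectrum: each frame is rendered independently, so the workers never exchange data and interconnect bandwidth is irrelevant (the frame count, worker count, and render_frame body below are made up for illustration):

          from multiprocessing import Pool

          def render_frame(frame_number):
              # Stand-in for a real renderer: each frame depends only on its own
              # inputs, so nothing is exchanged between workers while rendering.
              return sum((frame_number * i) % 255 for i in range(100_000))

          if __name__ == "__main__":
              with Pool(processes=4) as pool:
                  # The only communication is handing out frame numbers and
                  # collecting results: the classic embarrassingly parallel case.
                  results = pool.map(render_frame, range(240))
              print("rendered", len(results), "frames")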
    • by bentini ( 161979 ) on Wednesday June 19, 2002 @04:15AM (#3727725)
      A lot of problems just don't scale well across different systems. Say, for instance, you need to simulate particle interaction...

      Distributed processing is only really good when the subproblems are separate enough that they can be calculated separately.

      Also, supercomputers are a lot better for vector code. Intel and AMD might say that their current offerings are vector processors, but they really aren't. When you need to exploit DLP (data-level parallelism), supercomputers are the way to go.

      Also, research and funding like this will uncover the techniques that we can expect to be exploited in desktop processors in 5-20 years, so it helps us eventually.
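
      To see why something like particle interaction is awkward to split up, consider a naive all-pairs force calculation: every particle's update reads every other particle's position, so carving the particles up across machines means shipping the whole position set around every step. A toy sketch (the physics here is just a placeholder, with NumPy standing in for real numeric code):

          import numpy as np

          n = 200
          pos = np.random.rand(n, 3)      # particle positions
          forces = np.zeros((n, 3))

          # All-pairs interaction: particle i needs the position of every particle j.
          # Put half the particles on another box and each step still needs the other
          # half's positions, which is exactly the traffic a cheap cluster chokes on.
          for i in range(n):
              diff = pos - pos[i]                        # vectors to every other particle
              dist2 = (diff ** 2).sum(axis=1) + 1e-9     # softened squared distances
              forces[i] = (diff / dist2[:, None] ** 1.5).sum(axis=0)

          print("net force on particle 0:", forces[0])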


      • Say, for instance, you need to simulate particle interaction...

        Actually most molecular dynamics codes parallelize quite well. Of course you need better bandwidth and latency than seti@home, but nothing a cluster couldn't handle (especially if you have one of them nifty low-latency interconnects).
      • Distributed processing is only really good when the subproblems are separate enough that they can be calculated separately.

        Um... that's not really true. Certainly, distributed processing works like gangbusters when the problem is what's called "embarrassingly parallel". Things like SETI, or distributed ray tracing of a static scene, scale perfectly on a distributed system.

        But there are lots of groups (the DOE national labs among them) that do distributed work on problems that are not embarrassingly parallel. The trick is making sure that you have a fast interconnect between the boxes, and that you can make use of that interconnect efficiently.

        In general, a simulation (including particle simulations) will break up a region of space into pieces, and each piece is calculated for a "time step" on a processor. Then they each determine if they need information from another processor, or if their information could be used by another. A collective communication then takes place to exchange what's needed. Rinse and repeat.

        If you do your work right, you get a scalable system, one where you can add processors and get a proportional increase in performance (or maximum problem size). If you don't do your work right, the system will not scale well.

        Distributed computing is being used for more and more things these days.
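
        A toy version of that decompose/exchange/step loop: a 1-D grid split across pretend "processors", with boundary (ghost) values exchanged by hand each step. A real code would do the exchange with MPI over the interconnect; the grid size, chunk count, and smoothing rule here are made up:

            import numpy as np

            NPROCS, STEPS = 4, 10
            grid = np.random.rand(4 * 32)             # the whole problem domain
            chunks = np.array_split(grid, NPROCS)     # each "processor" owns one piece

            for step in range(STEPS):
                # Communication phase: each piece grabs its neighbours' edge values.
                # On a cluster this is the traffic that has to cross the interconnect.
                left  = [chunks[i - 1][-1] if i > 0 else chunks[i][0] for i in range(NPROCS)]
                right = [chunks[i + 1][0] if i < NPROCS - 1 else chunks[i][-1] for i in range(NPROCS)]

                # Local time step: each piece updates from its own data plus the ghosts.
                for i in range(NPROCS):
                    padded = np.concatenate(([left[i]], chunks[i], [right[i]]))
                    chunks[i] = (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0

            print("final mean:", np.concatenate(chunks).mean())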
    • Ok, I'm only halfway through the video about SV2 architecture at http://www.cray.com/company/video/ [cray.com] and I already find my question laughable :D

      This stuff is plain cool.
      • It's even better when you're strolling through the data center of a Fortune 500 company's headquarters, come across two different Cray machines, and go "WHOA! This is the coolest thing I've ever seen!" I forget the models of the ones I saw, but no custom x86 rig drops my jaw nearly as much.
    • I'm not sure what this new Red Storm machine has in the way of individual nodes, but Sandia has some history in parallel computing, dating back to the paper

      Gustafson, J.L., G.R. Montry and R.E. Benner. "Development of Parallel Methods for a 1024-processor Hypercube."
      SIAM Journal on Scientific and Statistical Computing Vol. 9, No. 4, July 1988.
      as well as the ASCI Red machine [sandia.gov], which, IIRC, was the first machine to break the 1 teraFLOPS barrier.

      That machine, BTW, was built by Intel out of the fastest Pentium chips of the day. I think a later upgrade to Pentium IIs [sandia.gov] increased its speed to about 3 teraFLOPS.

      As far as MP machines are concerned, it could be argued on the basis of the ASCI Red machine that they have a fairly "economical" strategy [I know, I know, it's hard to argue that anything costing $9e7 is "economical", but you are talking about buying one of the fastest few computers in the world; rack-mounted Athlon MPs could do great until you get up to O(100) processors, but doing the interconnects for O(10000) processors gets to be tricky].
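
      A quick sense of why the interconnect is the hard part at O(10000) processors (the topologies and counts below are only illustrative, not what ASCI Red or Red Storm actually use):

          # Link counts for two extreme ways of wiring up n nodes.
          def links(n):
              all_to_all = n * (n - 1) // 2   # every node wired directly to every other
              torus_3d = 3 * n                # 3-D torus: 6 neighbours per node, each link shared
              return all_to_all, torus_3d

          for n in (100, 10_000):
              full, torus = links(n)
              print(f"{n:>6} nodes: all-to-all {full:>12,} links, 3-D torus {torus:>7,} links")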

      Also there is CPlant [sandia.gov], their own (everybody's gotta have one) pet project to build a B----- cluster out of Alpha based machines running a modified Linux.

    • but I'd really like to see some stats on where a monolith like Cray is more applicable and where a multilith like what they use in movie rendering.

      In the case of the T3, fecalith might be more appropriate.

  • Being a long-time Cray fan, and standing in awe of how massive the undertakings currently being driven by supercomputers are, I would normally be impressed. But I just finished reading Seth Lloyd's article at the Edge [edge.org]. The MIT professor of Mechanical Engineering came up with "The amount of information that can be stored by the ultimate laptop, 10 to the 31st bits, is much higher than the 10 to the 10th bits stored on current laptops". I know /. dealt with this recently, but reading the prof's thought processes in depth is a fun intellectual high.

    O yah I gotta get me a Beowulf cluster 'o these, baby.



  • Was I the only one who saw "Red Storm Rising" and thought it was about yet another movie adaptation of a Tom Clancy novel? (Might be because I read a Sum of All Fears review recently)...

    Hmmm, on second thoughts, maybe it IS just me..

    • I thought the same thing. Even if it were a movie adaptation, I wouldn't get excited. Tom Clancy movies have a bad habit of being utterly massacred and turned into crapola compared to the books (the one exception being The Hunt for Red October, which I felt was pretty decent). Now I know it's hard to impossible to make a movie that follows a book perfectly, but after the foul taste of Clear and Present Danger, which was so far different from the book as to barely deserve the same title, I don't have much hope for future Clancy movies. I haven't seen The Sum of All Fears yet; I'm going to wait for it to hit rental status first.
  • Does it run Linux? If so, how long would it take to compile the kernel?

    • Due to the exponential development speed of feature creep, gcc complexity, and coffee production, you can still have a coffee break. Just brew the coffee and fill an injector syringe with it before hitting Enter. Put the needle between your finger and the Enter key and push, thus having a coffee-needle break as you compile.
    • generally (Score:3, Informative)

      by martissimo ( 515886 )
      Crays run a variant of Unix called UNICOS

      too lazy to dig you up a link, and no need to karma whore anyways, just google it or go to cray.com and read about it ;)
    • Does it run Linux? If so, how long would it take to compile the kernel?

      Blah, we don't need no OS running a Cray. Legend has it that Seymour Cray (who designed the Cray-1 supercomputer and most of Control Data's computers) actually toggled the first operating system for the CDC 7600 in on the front panel, from memory, when it was first powered on. Rumor has it that the SV2 bootstraps the brain of the admin in front of it and starts cloning...
    • It probably would take quite a while, actually. Cray machines' strength lies in vector processing. In other words, your bash shell will run at Pentium 166 speeds while your matrix eigenvalue computation flies through at 100x the speed.

      I guess you could say it's almost like using your NVIDIA GPU to do your regular computational tasks; it can do them, but it really shines when doing 3D graphics.
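
      To put a rough number on that contrast, here is a sketch timing the same arithmetic done one element at a time versus as a single whole-array operation (NumPy standing in for vector hardware; the array size is arbitrary):

          import time
          import numpy as np

          a = np.random.rand(2_000_000)
          b = np.random.rand(2_000_000)

          # Scalar path: one element at a time, like ordinary shell or integer work.
          t0 = time.perf_counter()
          out = [a[i] * b[i] + 1.0 for i in range(len(a))]
          scalar_time = time.perf_counter() - t0

          # Vector path: the identical arithmetic expressed over whole arrays at once.
          t0 = time.perf_counter()
          out_vec = a * b + 1.0
          vector_time = time.perf_counter() - t0

          print(f"scalar: {scalar_time:.3f}s  vector: {vector_time:.4f}s  "
                f"speedup ~{scalar_time / vector_time:.0f}x")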

  • I don't want a Cray; I want something like this [cnn.com]

    Seriously though, I wonder how much research Cray is doing into quantum computing? Couldn't find anything about it on the Cray website...

    • As if they would tell? :))

      At the same time, there are documented supercomputers and undocumented ones... The undocumented ones make me curious :))
  • by jukal ( 523582 ) on Wednesday June 19, 2002 @04:38AM (#3727763) Journal
    from Building a Better Bio-Supercomputer [thestandard.com], this year-old news piece might provide some info on what the system will be:

    <clip> Competitor Compaq is taking a different path. In January, the company announced plans to develop a 100-teraflop bio-supercomputer dubbed Red Storm in partnership with Celera Genomics, the Rockville, Md., company that mapped the human genome, and Sandia National Laboratories in Albuquerque, N.M. Although Blue Gene will be 10 times faster than Red Storm, a Celera executive stresses that the company's machine could eventually match IBM's speed. Unlike Blue Gene, though, Red Storm is being designed for a broader array of life-science experiments and may be used to conduct nuclear research. The supercomputer, set to begin operating in 2004, will cost an estimated $125 million to $150 million to build. </clip>

    This seems to be somewhat in line with the cost estimate stated in the press release [cray.com], $90 million. Or am I completely off in my effort to understand what this press release is about?
  • What I really want is a computer that exists in multiple parallel dimensions, so a single piece of hardware has massive (even infinite) parallel processing power.

    Addressing might be a problem tho.

    I wonder if anyone is researching this?...

  • Yeah, I believe Cray is gonna sell a lot more High-end systems this year... with Final Fantasy XI being released and all.

  • Do Crays still use their esoteric emitter-coupled logic gates?

    That's some weird, funky logic with negative power rails, etc...
    • The answer to the question is no: Cray doesn't use ECL for the main beasts any more. That was one of the things that drove them into the ground in the '90s. The Japanese switched to CMOS and drove the prices way down. Cray eventually followed suit, with their former low end (the Y-MP EL, which was CMOS-based from the get-go) spawning the SV1.
  • SGI at least tempts us with stuff that will fit on a desk; Cray needs to as well, because I want one, goddammit!
  • As a child I grew up with rumors of these magical Cray computers, machines so powerful they could finish an entire computation or game in the time it takes our computers to add two numbers.

    Obviously those rumors overstated the power, but Crays can do far more advanced computations than we can on our own machines. I felt a little sad when I heard they were closing, but now I may one day get to see one.

    Hopefully Cray will not keep to the high road only, and will make affordable high-end PCs/mainframes instead of insanely powerful ones that cost about half a company's assets. Obviously I don't mean single computers, but their equivalent.
  • More Info and doh! (Score:4, Interesting)

    by anzha ( 138288 ) on Wednesday June 19, 2002 @08:19AM (#3728495) Homepage Journal
    Since I submitted this story, Sandia National Labs [sandia.gov] has released their own press release here [sandia.gov]. Note that they say it's an MPP (massively parallel processor), but details are to come.

    What's interesting is that Cray has two machines that might be called MPPs:

    1. The T3E [cray.com], with its single system image, Unicos/mk, and Alpha processors.
    2. The Linux Cluster [cray.com].

    The SV2 [cray.com] might be called a massively parallel vector machine, with potentially thousands of vector processors; however, they likely would have said 'vector' in the initial press release. On top of that, Cray would probably have trumpeted quite loudly that they'd sold $90 million worth of SV2s, because it helps sell more systems. That makes me doubt whether or not it's an SV2.

    The MTA [cray.com] doesn't count here either, being called a multithreaded architecture rather than a parallel one (semantic hair-splitting, yes, but an important distinction).

    Furthermore, Cray is in the process of discontinuing the T3E because of its age.

    To make it even more delicious, Red Storm comes up a lot in searches at Sandia in conjunction with Cplant [sandia.gov]. Cplant uses Linux...

    So a little bit of thought suggests which Cray would be used here, doesn't it?

    Saying 'imagine a Beowulf cluster of those' might be a bit more accurate than the joke normally goes. ;) BTW, sorry, I can't believe I missed the w. Is Bush holding it hostage in his name? ;)
    • Dude, you're getting a Dell!

      "A world-class Linux cluster solution combining cost-effective hardware from Dell and Cray's world-class services and software for High Performance Computing applications and workloads."
    • by Anonymous Coward

      A cluster isn't an MPP.

      The Cray SV2 is an MPP. It is composed of both SSMP and vector technologies and is basically a "next generation" T3E, whereas the original T3E, an SSI MPP using the Alpha microprocessor, was SSMP only.

      As to why anyone would buy a large system such as an SV2: beyond the performance, there is an aspect of manageability, patching, and testing that really large clusters have a problem with. Add to that the fact that you have to have huge computer rooms with power and cooling; certain agencies are spending up to ~$100 million on the physical facilities alone. Then, once you have your large physical plant and giant cluster, there is the little issue of latency. The speed of light is constant; if I have to fetch data a few million times from the other end of the room, it adds up. All of this together makes spending, say, $30 million on an SV2 look like a real bargain.
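
      A back-of-the-envelope version of that latency point (the room size, signal speed, and memory figure below are assumed, not measured):

          C = 3.0e8              # speed of light in vacuum, m/s
          ROOM = 30.0            # metres to the far end of a big machine room (assumed)
          SIGNAL_FRACTION = 0.6  # signals in copper/fibre travel slower than c (rough)

          one_way_ns = ROOM / (C * SIGNAL_FRACTION) * 1e9
          round_trip_ns = 2 * one_way_ns
          fetches = 5e6          # "a few million" remote fetches

          print(f"round trip across the room: ~{round_trip_ns:.0f} ns")
          print(f"{fetches:.0f} fetches of pure flight time: ~{fetches * round_trip_ns / 1e9:.1f} s")
          print("local DRAM access, by comparison, is on the order of 100 ns")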

      It has little to do with the OS and more to do with having the fundamental tools (read: architectures) to do the job. Face it, commercial off-the-shelf technology and huge clusters aren't *great* for every problem, just as it would be wasteful for most home users to have an 8-way SMP to run Quake or a word processor on.
      • Better not tell IBM a cluster isn't an MPP. ;)

        In all seriousness though, I agree with pretty much everything above, except that the Cray guys I know have said that they may evolve a true MPP out of their cluster technology and experiences. With the T3E line mostly dead due to the fact that SGI holds so many of the patents, this makes a lot of sense. Doubly so, since it would also give them time to work on refining Linux for their applications: almost universally they praise Linux, but say it's just not quite there yet for the supercomputing field (other than clusters).

        As for the SV2 being the follow-on to the T3E, um, it looks like it's more an SV1 follow-on with parts borrowed from the T3E (topology, etc.), rather than a scalar-processor MPP.

        Any which way, Cray's experience in tools and the HPC world would and will be very useful for the clustering world.

        Just imnsho. ;)
  • by peter303 ( 12292 ) on Wednesday June 19, 2002 @08:55AM (#3728709)
    Generally, the only way large supercomputers get built in the USA is through government contracts. Industry is unwilling to pay more than $10 million for a large computer. The Department of Energy has been commissioning top-end computers ($10M to $100M) for weapons research, and NOAA for weather forecasting.

    I am ambivalent about this. On one hand, I want to see a petaflop computer by 2010. (Two 100-teraflop computers have been contracted for the 2007 timeframe, so this is possible.) On the other hand, I am suspicious that computer companies won't build these on their own, and I don't like the government propping up weak computer companies.

"Virtual" means never knowing where your next byte is coming from.

Working...