Red Storm Rising: Cray Wins Sandia Contract
anzha writes "It seems Cray is alive and kicking, at least. They might even be making a comeback after their very rough time as part of SGI. The big news? Cray seems to have won the Red Storm contract - Sandia's newest supercomputer procurement - from Sandia National Labs. Check out the press release here. I'd say that this is probably an SV2, but the press release is a bit scant on details."
Re:Huh? (Score:1)
Moreover, there's little doubt that the existence of supercomputers with this kind of performance leads to the development of new application concepts that can later influence the desktop computer industry.
Re:Huh? (Score:1, Informative)
Re:Huh? (Score:1)
Re:Huh? (Score:1)
Used to have one down the hall from my desk. (Score:1)
Re:Huh? (Score:3, Insightful)
For another example, clustering technology (which I'm sure is going to get posted about in this thread) was an attempt to duplicate, and borrow ideas from, massively parallel machines like the Cray T3E, the SGI Origin, and the old Thinking Machines boxes.
Re:Huh? (Score:2)
God those were sexy. I'd take one to bed with me now.
bad link to cray (Score:1, Informative)
err.. I'm confused? (Score:2, Informative)
Re:err.. I'm confused? (Score:1)
Care to shed some light? (Score:3, Funny)
I know something about the problems of multiprocessing, but I'd like to know how such monumental systems can still sell in the days of commodity hardware and - oh gosh, not again - Beowulf clusters.
Just pondering while waiting for a net-enabled market in processing power (remember processtree?) and storage space (freenet, the new one) to make millionaires of everyone with excess hardware, through paypal. Well, maybe not.
Re:Care to shed some light? (Score:3, Insightful)
Re:Care to shed some light? (Score:5, Informative)
Distributed processing is only really good when the subproblems are separate enough that they can be calculated separately.
Also, supercomputers are a lot better for vector code. Intel and AMD might claim that their current offerings are vector processors, but they really aren't. When you need to exploit data-level parallelism (DLP), supercomputers are the way to go (rough sketch at the end of this post).
Also, research and funding like this will uncover the techniques that we can expect to be exploited in desktop processors in 5-20 years, so it helps us eventually.
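To make the DLP point above concrete, here's a minimal sketch in plain C of the kind of loop a vector machine streams straight through its vector pipes, while a scalar chip grinds along one element (or one short SSE-width chunk) at a time. The array size and the SAXPY-style kernel are my own illustrative choices, not anything from the story:

/* saxpy.c - a minimal illustration of data-level parallelism (DLP).
 * Every iteration of the kernel loop is independent, which is exactly
 * what vector hardware exploits. N is an arbitrary illustrative size. */
#include <stdio.h>

#define N 1000000

static float x[N], y[N];

int main(void)
{
    const float a = 2.5f;

    /* fill the arrays with some data */
    for (int i = 0; i < N; i++) {
        x[i] = (float)i;
        y[i] = 1.0f;
    }

    /* the DLP kernel: y = a*x + y, no dependences between iterations */
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("y[N-1] = %f\n", y[N - 1]);
    return 0;
}

Compare that with pointer-chasing code, where each step depends on the previous one; that's the stuff no vector unit can save you from.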
Re:Care to shed some light? (Score:2)
Say, for instance, you need to simulate particle interaction...
Actually most molecular dynamics codes parallelize quite well. Of course you need better bandwidth and latency than seti@home, but nothing a cluster couldn't handle (especially if you have one of them nifty low-latency interconnects).
Distributed processing (Score:2)
Um... that's not really true. Certainly, distributed processing works like gangbusters when the problem is what's called "embarrassingly parallel". Things like SETI, or distributed ray tracing of a static scene, scale perfectly on a distributed system.
But there are lots of groups (the DOE national labs being some) that do distributed work on problems that are not embarrassingly parallel. The trick is making sure that you have a fast interconnect between the boxes, and can make use of that interconnect efficiently.
In general, a simulation (including particle simulations) will break up a region of space into pieces, and each piece is calculated for a "time step" on a processor. Then they each determine if they need information from another processor, or if their information could be used by another. A collective communication then takes place to exchange what's needed. Rinse and repeat.
If you do your work right, you get a scalable system, one where you can add processors and get a proportional increase in performance (or maximum problem size). If you don't do your work right, the system will not scale well.
Distributed computing is being used for more and more things these days.
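For a rough illustration of the break-up/step/exchange loop described a couple of paragraphs up, here's a minimal 1-D domain decomposition sketch in C using MPI (my choice of library; the post doesn't name one). The slab size, step count, and the toy averaging update are made up for illustration, and the exchange here is plain point-to-point ghost-cell swaps standing in for the communication phase:

/* halo.c - minimal sketch of 1-D domain decomposition with halo exchange.
 * Each rank owns a slab of the global array plus one ghost cell on each
 * side; every time step it swaps boundary values with its neighbors,
 * then updates its interior. All sizes and the "physics" are illustrative. */
#include <mpi.h>
#include <stdio.h>

#define LOCAL_N 1000   /* interior cells per rank (made-up size) */
#define STEPS   100

int main(int argc, char **argv)
{
    int rank, size;
    double u[LOCAL_N + 2];          /* [0] and [LOCAL_N+1] are ghost cells */
    double unew[LOCAL_N + 2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
    int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

    for (int i = 0; i <= LOCAL_N + 1; i++)
        u[i] = rank;                /* arbitrary initial condition */

    for (int step = 0; step < STEPS; step++) {
        /* exchange ghost cells with the neighboring ranks */
        MPI_Sendrecv(&u[1],           1, MPI_DOUBLE, left,  0,
                     &u[LOCAL_N + 1], 1, MPI_DOUBLE, right, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&u[LOCAL_N],     1, MPI_DOUBLE, right, 1,
                     &u[0],           1, MPI_DOUBLE, left,  1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* compute one "time step" on the local piece (toy averaging;
         * a real simulation would do its actual physics here) */
        for (int i = 1; i <= LOCAL_N; i++)
            unew[i] = 0.5 * (u[i - 1] + u[i + 1]);
        for (int i = 1; i <= LOCAL_N; i++)
            u[i] = unew[i];
    }

    if (rank == 0)
        printf("done: %d ranks, %d steps\n", size, STEPS);

    MPI_Finalize();
    return 0;
}

The ghost-cell swaps are the only communication per step; how fast and how often you can do them is exactly where the interconnect quality mentioned above comes in.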
Re:Care to shed some light? (Score:3, Interesting)
This stuff is plain cool.
Re:Care to shed some light? (Score:1)
Re:Care to shed some light? (Score:3, Interesting)
I'm not sure what this new Red Storm machine has in the way of individual nodes, but Sandia has some history in parallel computing, dating back to the paper
as well as the ASCI Red machine [sandia.gov], which, IIRC, was the first machine to break the 1 teraFLOPS barrier. That machine, BTW, was built by Intel out of the fastest Pentium chips of the day. I think a later upgrade to Pentium IIs [sandia.gov] increased its speed to about 3 teraFLOPS.
As far as MP machines are concerned, it could be argued on the basis of the ASCI Red machine that they have a fairly "economical" strategy [I know, I know, it's hard to call anything costing $9e7 "economical" - but you are talking about buying one of the fastest few computers in the world - rack-mounted Athlon MPs do great until you get up to O(100) processors, but doing the interconnects for O(10000) processors gets to be tricky].
Also there is CPlant [sandia.gov], their own (everybody's gotta have one) pet project to build a B----- cluster out of Alpha-based machines running a modified Linux.
Re:Care to shed some light? (Score:2)
In the case of the T3, fecalith might be more appropriate.
Supercomputer Dwarfed by Laptop (Score:2, Informative)
Being a long-time Cray fan, standing in awe of the massive undertakings currently being driven by supercomputers, I would normally be impressed. But I just finished reading Seth Lloyd's article at the Edge [edge.org]. The MIT professor of Mechanical Engineering came up with this: "The amount of information that can be stored by the ultimate laptop, 10 to the 31st bits, is much higher than the 10 to the 10th bits stored on current laptops". I know /. dealt with this recently, but reading the prof's thought processes in depth is a fun intellectual high.
O yah I gotta get me a Beowulf cluster 'o these, baby.
Another Tom Clancy adaptation ?? (Score:1, Funny)
Was I the only one who saw "Red Storm Rising" and thought it was about yet another movie adaptation of a Tom Clancy novel? (Might be because I read a Sum of All Fears review recently)...
Hmmm, on second thoughts, maybe it IS just me..
OT: Another Tom Clancy adaptation ?? (Score:2)
Obligatory comment (Score:2, Funny)
It's still about the time of a coffee break ;) (Score:1, Funny)
Due to the exponential development speed of feature creep, gcc complexity, and coffee production, you can still have a coffee break. Just brew the coffee and fill an injector syringe with it before hitting enter. Put the needle between your finger and the enter key and push, thus having a coffee needle break as you compile.
generally (Score:3, Informative)
Too lazy to dig up a link for you, and no need to karma-whore anyway; just Google it or go to cray.com and read about it.
Re:Obligatory comment (Score:2)
Blah, we don't need no OS running a Cray. Legend has it that Seymour Cray (who designed the Cray-1 supercomputer, and most of Control Data's computers) actually toggled the first operating system for the CDC-7600 in on the front panel, from memory, when it was first powered on. Rumor has it that the SV2 will bootstrap the brain of the admin in front of it and start cloning...
Re:Obligatory comment (Score:1)
I guess you could say it's almost like using your nVidia GPU to do your regular computational tasks; it can do it, but it really shines when doing 3D graphics.
Cray? Nah... (Score:1)
Seriously though - I wonder how much research Cray is doing into the realms of quantum computing? Couldn't find anything about it on the Cray website...
Re:Cray? Nah... (Score:1)
At the same time, there are documented supercomputers and undocumented ones... the undocumented ones make me curious.
Associated clip from the industry standard (Score:3, Interesting)
<clip> Competitor Compaq is taking a different path. In January, the company announced plans to develop a 100-teraflop bio-supercomputer dubbed Red Storm in partnership with Celera Genomics, the Rockville, Md., company that mapped the human genome, and Sandia National Laboratories in Albuquerque, N.M. Although Blue Gene will be 10 times faster than Red Storm, a Celera executive stresses that the company's machine could eventually match IBM's speed. Unlike Blue Gene, though, Red Storm is being designed for a broader array of life-science experiments and may be used to conduct nuclear research. The supercomputer, set to begin operating in 2004, will cost an estimated $125 million to $150 million to build. </clip>
This seems to be somewhat in line with the approximate cost stated in the press release [cray.com], $90 million. Or am I completely off in my effort to understand what this press release is about?
Parallel Processing (Score:1)
Addressing might be a problem, though.
I wonder if anyone is researching this?...
Re:Parallel Processing (Score:1)
Re:Parallel Processing (Score:1)
for finding the correct answer
42
We all know it's 42... Logos... suffer the light
More sales.. (Score:2, Funny)
I wonder... (Score:1)
Do the Crays still use their esoteric emitter-coupled logic gates?
That's some weird, funky logic with negative power rails, etc...
Re:I wonder..., NO, Crays (SV1, 1+ and 2) use CMOS (Score:3, Interesting)
god damn it... (Score:2)
Re:Super Computer SMP Why (Score:1)
But what does a home box using SMP (2 CPUs) have to do with a supercomputer? You just can't compare them, can you?
thank god, (Score:1)
Obviously these aren't the last word in computing power, but they can do far more advanced computations than we're able to do on our own computers. I felt a little sad when I heard they were closing, but now I may one day be able to see one.
Hopefully Cray will not keep to the high road only, and will make affordable high-end PCs/mainframes instead of insanely powerful ones that cost about half a company's assets. Obviously I don't mean single computers, but their equivalent.
More Info and doh! (Score:4, Interesting)
What's interesting is that Cray has two machines that might be called MPPs:
1. The T3E [cray.com] with its single system image, Unicos/mk, and Alpha processors.
2. The Linux Cluster [cray.com].
The SV2 [cray.com] might be called a massively parallel vector machine with potentially thousands of vector processors; however, they likely would have said 'vector' in the initial press release. On top of that, Cray would probably have trumpeted quite loudly that they'd sold $90 million worth of SV2s, because it helps sell more systems. That makes me doubt whether or not it's an SV2.
The MTA [cray.com] doesn't count here either, being called a multithreaded architecture rather than a parallel one (semantic hair-splitting, yes, but an important distinction).
Furthermore, Cray is in the process of discontinuing the T3E because of its age.
To make it even more delicious, Red Storm is mentioned a lot in searches at Sandia in conjunction with Cplant [sandia.gov]. Cplant uses Linux...
So, with a little bit of thought, which Cray would that imply is being used here?
Saying 'imagine a Beowulf cluster of those' might be a bit more accurate than the joke usually is.
Re:More Info and doh! (Score:2)
"A world-class Linux cluster solution combining cost-effective hardware from Dell and Cray's world-class services and software for High Performance Computing applications and workloads."
Re:More Info and doh! (Score:1, Informative)
A cluster isn't an MPP.
The Cray SV2 is an MPP. It is composed of both SSMP and Vector technologies and is basically a "next generation" T3E. Where the original T3E was an SSI MPP using the Alpha microprocessor, it was SSMP only.
As to why anyone would buy a large system such as an SV2, beyond the raw performance, there is the matter of manageability, patching, and testing, which really large clusters have a problem with. Combine that with the fact that you have to have huge computer rooms with power and cooling; certain agencies are spending up to ~$100 million on the physical facilities alone. Then, once you have your large physical plant and giant cluster, there is the little issue of latency. The speed of light is constant; if I have to fetch data a few million times from the other end of the room, it adds up (see the rough numbers at the end of this comment). All of this added together makes spending, say, $30 million on an SV2 look like a real bargain.
It has little to do with the O.S. and more to do with having the fundamental tools (read: architectures) to do the job. Face it, commercial off-the-shelf technology and huge clusters aren't *great* for every problem, just like it would be wasteful for most home users to have 8-way SMPs just to run Quake or a word processor on.
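To put rough numbers on the speed-of-light point (my numbers, not the parent's): light covers roughly 0.3 m per nanosecond, so just crossing a big machine room and coming back eats a few hundred nanoseconds before any switching or protocol overhead, which is already several times a local DRAM access. A back-of-the-envelope sketch in C, with the room span and the DRAM figure assumed for illustration:

/* latency.c - back-of-the-envelope numbers for the "other end of the
 * room" argument. The room span and the DRAM latency are assumptions
 * for illustration, not measurements of any particular machine. */
#include <stdio.h>

int main(void)
{
    const double ns_per_m = 1.0 / 0.2998; /* ~3.34 ns per meter at light speed */
    const double room_m   = 50.0;         /* assumed machine-room span */
    const double dram_ns  = 100.0;        /* assumed local DRAM access */
    const double fetches  = 1e6;          /* "a few million times", roughly */

    double round_trip_ns = 2.0 * room_m * ns_per_m; /* wire time only */

    printf("round trip across the room: %.0f ns (before any switching)\n",
           round_trip_ns);
    printf("%.0e remote fetches: %.3f s  vs  %.0e local fetches: %.3f s\n",
           fetches, fetches * round_trip_ns * 1e-9,
           fetches, fetches * dram_ns * 1e-9);
    return 0;
}

Keeping everything inside one tightly packaged machine is largely about keeping that wire time down.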
Re:More Info and doh! (Score:2)
In all seriousness though, I agree with pretty much everything above, except that the Cray guys I know have said that they may evolve a true MPP out of their cluster technology and experiences. With the T3E line mostly dead due to the fact that SGI holds so many of the patents, this makes a lot of sense. Doubly so, since it would also give them time to work on refining Linux for their applications: almost universally they praise Linux, but say it's just not quite there yet for the supercomputing field (other than clusters).
As for the SV2 being the follow-on to the T3E, um, it looks like it's more an SV1 follow-on with parts borrowed from the T3E (topology, etc.), rather than a scalar-processor MPP.
Any which way, Cray's experience in tools and the HPC world will be very useful for the clustering world.
Just imnsho.
supercomputer "corporate welfare" (Score:3, Insightful)
I am ambivalent about this. On one hand, I want to see a petaflop computer by 2010. (Two 100-teraflop computers have been contracted for the 2007 timeframe, so this is possible.) On the other hand, I am suspicious that computer companies won't build these on their own, and I don't like the government propping up weak computer companies.