World's Fastest Supercomputer To Be Built At ORNL 230
Homey R writes "As I'll be joining the staff there in a few months, I'm very excited to see that Oak Ridge National Lab has won a competition within the DOE's Office of Science to build the world's fastest supercomputer in Oak Ridge, Tennessee. It will be based on the promising Cray X1 vector architecture. Unlike many of the other DOE machines that have at some point occupied #1 on the Top 500 supercomputer list, this machine will be dedicated exclusively to non-classified scientific research (i.e., not bombs)."
Cowards Anonymous adds that the system "will be funded over two years by federal grants totaling $50 million. The project involves private companies like Cray, IBM, and SGI, and when complete it will be capable of sustaining 50 trillion calculations per second."
good stuff (Score:4, Interesting)
Personally I'm happy to see Cray still making impressive machines. Not every problem can be solved by "divide and conquer" clusters.
50 trillion (Score:2, Interesting)
Wow, that's darn fast.
I wonder if that processing power could be used for rendering, as Weta did, and how its performance would compare to their renderfarm.
Talking out my ass here, but (Score:1, Interesting)
Sure, I'd love to have one of those things in my house, but as long as the government is spending my money, I think I'd rather see them go for a more cost-effective solution, rather than another one-ton monster that'll be obsolete in two years.
Re:good stuff (Score:0, Interesting)
In reality there is no difference between a multiprocessor system that uses a motherboard bus and one that uses Ethernet as a "bus" between processors. It's an artificial distinction when you're talking about things this big.
They better hurry ... (Score:5, Interesting)
Still a whole year until they have a full machine, but the 512-way prototype reached 1.4 TFlops (LinPack). The complete machine will have 128 times the nodes and a 50% higher clock frequency. So even with pessimistic scalability assumptions, this will be more than twice as fast as the Earth Simulator.
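The back-of-envelope math checks out; here's a quick sketch using the figures above (the 20% efficiency number is my own pessimistic assumption, not from the post):

```python
# Rough scaling estimate for the full machine, starting from the
# 512-way prototype's 1.4 TFlops LinPack result quoted above.
proto_tflops = 1.4
node_factor = 128      # full machine has 128x the nodes
freq_factor = 1.5      # 50% higher clock frequency

ideal = proto_tflops * node_factor * freq_factor
print(f"ideal linear scaling: {ideal:.1f} TFlops")   # 268.8 TFlops

# Even at a very pessimistic 20% parallel efficiency (my assumption):
pessimistic = ideal * 0.20
print(f"at 20% efficiency:    {pessimistic:.1f} TFlops")   # 53.8 TFlops
```

Even the pessimistic figure clears twice the Earth Simulator's ~36 TFlops sustained LinPack.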
Re:Yeah... (Score:3, Interesting)
Re:Talking out my ass here, but (Score:5, Interesting)
First, the X1 has greater per-processor performance by a factor of 4. Then you add an interconnect that has half the latency and 50 times the bandwidth of Myrinet or InfiniBand. It also has enough memory and cache bandwidth to actually keep the pipelines full, unlike a Xeon, which can only do a ton of math on whatever fits in its registers. Some problems just don't work really well on clustered PCs; they need this kind of big iron.
Second, some problems cannot tolerate a failure in a compute node. If you cluster together 10,000 PCs, the average failure rate means that one of those nodes will fail about every 4 hours. If your problem takes three days to complete, the cluster is worthless to you. A renderfarm can tolerate this sort of failure rate (just send those frames to another node), but some problems can't handle it.
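The failure-rate arithmetic is easy to check. A quick sketch (the per-node MTBF is my assumption, chosen to match the 4-hour figure above):

```python
# With N independent nodes, the expected time between failures anywhere
# in the cluster is roughly (per-node MTBF) / N.
# A per-node MTBF of ~40,000 hours (about 4.5 years) is my assumption;
# it reproduces the "one failure every 4 hours" figure for 10,000 PCs.
node_mtbf_hours = 40_000
nodes = 10_000

cluster_mtbf = node_mtbf_hours / nodes
print(f"expected time between failures: {cluster_mtbf:.1f} hours")  # 4.0

# A 3-day (72-hour) job would expect ~18 interruptions:
job_hours = 72
print(f"expected failures during the job: {job_hours / cluster_mtbf:.0f}")  # 18
```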
Oak Ridge is very concerned with getting the most bang for the buck.
Re:50 trillion calcs/sec...how fast really? (Score:1, Interesting)
as a former DOE employee (Score:5, Interesting)
The DOE does a lot of basic research in nuclear physics, quantum physics, et cetera. The FEL was used to galvanize power rods for VPCO (now Dominion Power) and made them last three times as long. Some William & Mary people use it for protein research, splicing molecules and the like.
The DOE does a lot of very useful things that need large amounts of computing power, not just simulating nuclear bombs (although Oak Ridge does that sort of stuff, as does Los Alamos). We only had a lame Beowulf cluster at TJNAF. I wish we'd had something like this beast.
I want to know how it stacks up to the Earth Simulator.
NOT the fastest! (Score:5, Interesting)
Re:Talking out my ass here, but (Score:3, Interesting)
Umm, bwah?
It's only going to be sitting there idle if you're not properly scheduling and queueing jobs. Also, you -CAN- do those kinds of simulations (physics, chemistry) on a cluster *points at clusters at Chrysler and Shell*. The caveat is that you need to write out the result for the appropriate job to handle (in practice, job run 1 contains step 1, job run 2 contains step 2, etc.). A cluster is perfectly fine for this.
All that said, a supercomputer like this -IS- generally a better tool for the job if you've got the money. But money, in most places, -IS- an object, so we get the best bang for our buck.
*shrug*
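The parent's "job run 1 contains step 1, job run 2 contains step 2" scheme is just checkpoint/restart. A minimal sketch of the pattern (the state file name and the step function are placeholders of mine, not anything from a real scheduler):

```python
# Minimal checkpoint/restart pattern for running a long stepwise
# simulation as a series of short cluster jobs. Each invocation loads
# the last saved state, advances one step, and writes the result back,
# so a node failure costs at most one step's worth of work.
import json
import os

STATE_FILE = "sim_state.json"  # placeholder name

def advance(state):
    # Stand-in for one real simulation step.
    state["step"] += 1
    state["value"] = state["value"] * 2
    return state

def run_one_job():
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            state = json.load(f)          # resume from last checkpoint
    else:
        state = {"step": 0, "value": 1}   # initial conditions
    state = advance(state)
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)               # checkpoint for the next job
    return state

# Submitting this N times (run 1 = step 1, run 2 = step 2, ...) mirrors
# the scheme described above.
for _ in range(3):
    state = run_one_job()
print(state)  # {'step': 3, 'value': 8}
```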
Re:good stuff (Score:5, Interesting)
Un-classified research uses (Score:4, Interesting)
In the long run one would like to get such simulations from the 10,000-atom level up to the billion-to-trillion (or more) atom level, so you could simulate significant fractions of the volume of cells. Between now and then, molecular biologists, geneticists, bioinformaticians, etc. would be happy if we could just reach the level of accurate folding (Folding@Home [stanford.edu] is working on this from a distributed standpoint), and eventually be able to model protein-protein interactions. Then we could figure out how things like DNA repair, which involves 130+ proteins cooperating in very complex ways, operate, and better understand the causes of cancer and aging.