
World's Fastest Supercomputer To Be Built At ORNL

Homey R writes "As I'll be joining the staff there in a few months, I'm very excited to see that Oak Ridge National Lab, in Oak Ridge, Tennessee, has won a competition within the DOE's Office of Science to build the world's fastest supercomputer. It will be based on the promising Cray X1 vector architecture. Unlike many of the other DOE machines that have at some point occupied #1 on the Top 500 supercomputer list, this machine will be dedicated exclusively to non-classified scientific research (i.e., not bombs)." Cowards Anonymous adds that the system "will be funded over two years by federal grants totaling $50 million. The project involves private companies like Cray, IBM, and SGI, and when complete it will be capable of sustaining 50 trillion calculations per second."

  • good stuff (Score:4, Interesting)

    by Anonymous Coward on Wednesday May 12, 2004 @09:31AM (#9125841)

    Personally I'm happy to see Cray still making impressive machines. Not every problem can be solved by "divide and conquer" clusters.
  • 50 trillion (Score:2, Interesting)

    by Killjoy_NL ( 719667 ) <slashdot@@@remco...palli...nl> on Wednesday May 12, 2004 @09:32AM (#9125852)
    50 trillion calculations per second.
    Wow, that's darn fast.

    I wonder if that processing power could be used for rendering the way Weta did it, and how its performance would compare to their renderfarm.
  • by SatanicPuppy ( 611928 ) <SatanicpuppyNO@SPAMgmail.com> on Wednesday May 12, 2004 @09:34AM (#9125864) Journal
    I thought the age of the over-priced supercomputer was over, and the age of the cluster had begun?

    Sure, I'd love to have one of those things in my house, but as long as the government is spending my money, I think I'd rather see them go for a more cost effective solution, rather than another 1 ton monster that'll be obsolete in two years.
  • Re:good stuff (Score:0, Interesting)

    by Anonymous Coward on Wednesday May 12, 2004 @09:42AM (#9125931)
    Personally I'm happy to see Cray still making impressive machines. Not every problem can be solved by "divide and conquer" clusters.

    In reality there is no difference between a multiprocessor system that uses a motherboard bus and a system that uses Ethernet as a "bus" between processors. It is an artificial distinction when you are talking about things this big.

  • by realSpiderman ( 672115 ) on Wednesday May 12, 2004 @09:42AM (#9125934)
    ... or this [ibm.com] is going to beat them hard.

    Still a whole year until they have a full machine, but the 512-way prototype reached 1.4 TFlops (Linpack). The complete machine will have 128 times the nodes and 50% higher frequency. So even with pessimistic scaling, it should come out more than twice as fast as the 50 TFlops planned here (rough arithmetic below).
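    A rough back-of-the-envelope check of that claim. The 1.4 TFlops prototype figure, the 128x node count, and the 50% clock bump come from the post above; the scaling-efficiency values are just illustrative guesses:

```python
# Rough scaling estimate for the machine mentioned above.
# Prototype and scale-up figures are from the parent post; the
# efficiency values are illustrative assumptions, not measurements.
prototype_tflops = 1.4   # 512-way prototype, Linpack
node_scale = 128         # full machine has 128x the nodes
clock_scale = 1.5        # 50% higher clock frequency

ideal = prototype_tflops * node_scale * clock_scale   # ~268.8 TFlops
for efficiency in (1.0, 0.5, 0.25):                   # increasingly pessimistic scaling
    print(f"{efficiency:>4.0%} efficiency: {ideal * efficiency:6.1f} TFlops")

# Even at 25% efficiency that is ~67 TFlops, still above the 50 TFlops
# sustained figure quoted for the ORNL machine.
```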

  • Re:Yeah... (Score:3, Interesting)

    by word munger ( 550251 ) <dsmunger@[ ]il.com ['gma' in gap]> on Wednesday May 12, 2004 @09:48AM (#9125982) Homepage Journal
    Unfortunately we haven't heard much from them lately [vt.edu] (notice the "last updated" date). I suspect they're still waiting on their G5 Xserves.
  • by flaming-opus ( 8186 ) on Wednesday May 12, 2004 @10:01AM (#9126096)
    If you care to, read the PDF on their early impressions of the X1. The Army High Performance Computing Research Center (www.arc.umn.edu) did an analysis of their application and found that the X1 was actually MORE cost-effective than a commodity cluster.

    Firstly, the X1 has greater per-processor performance by a factor of 4. Then you add an interconnect that has half the latency, and 50 times the bandwidth, of Myrinet or InfiniBand. It also has enough memory and cache bandwidth to actually fill the pipelines, unlike a Xeon, which can do a ton of math only on whatever will fit in the registers. Some problems just don't work well on clustered PCs; they need this kind of big iron.

    Secondly, some problems cannot tolerate a failure in a compute node. If you cluster together 10,000 PCs, the average failure rate means that one of those nodes will fail about every 4 hours (rough numbers below). If your problem takes three days to complete, the cluster is worthless to you. A renderfarm can tolerate this sort of failure rate -- just send those frames to another node. Some problems can't handle it.

    Oak Ridge is very concerned with getting the most bang for the buck.
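    A quick sanity check on that failure-rate argument. The 10,000-node count and the one-failure-every-4-hours rate come from the comment above; the exponential failure model and the implied per-node MTBF are illustrative assumptions:

```python
import math

# Node-failure odds for a long job on a big commodity cluster.
# Figures from the parent comment: ~10,000 nodes, roughly one node
# failure somewhere in the cluster every 4 hours. The exponential
# failure model is an illustrative assumption.
nodes = 10_000
cluster_mtbf_hours = 4     # one failure across the cluster every ~4 hours
job_hours = 72             # a three-day run

per_node_mtbf = cluster_mtbf_hours * nodes           # ~40,000 h per node
expected_failures = job_hours / cluster_mtbf_hours   # ~18 failures per run
p_clean_run = math.exp(-expected_failures)           # chance of zero failures

print(f"per-node MTBF:        {per_node_mtbf:,.0f} hours")
print(f"expected failures:    {expected_failures:.0f} during the job")
print(f"P(no failure at all): {p_clean_run:.2e}")
# Without checkpoint/restart, a three-day run essentially never finishes clean.
```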
  • by Anonymous Coward on Wednesday May 12, 2004 @10:10AM (#9126182)
    Fractal iteration is also a very good use for this machine.
  • by bsDaemon ( 87307 ) on Wednesday May 12, 2004 @10:18AM (#9126243)
    I worked in Instrumentation and Control for the Free Electron Laser project at the Thomas Jefferson National Accelerator Facility. We also host CEBAF (the Continuous Electron Beam Accelerator Facility), which is a huge-ass particle accelerator.
    The DOE does a lot of basic research in nuclear physics, quantum physics, et cetera. The FEL was used to galvanize power rods for VPCO (now Dominion Power) and made them last 3 times as long. Some William & Mary people use it for protein research, splicing molecules and stuff.
    The DOE does a lot of very useful things that need huge amounts of computing power, not just simulating nuclear bombs (although Oak Ridge does that sort of stuff, as does Los Alamos). We only had a lame Beowulf cluster at TJNAF. I wish we'd had something like this beast.
    I want to know how it stacks up to the Earth Simulator.
  • NOT the fastest! (Score:5, Interesting)

    by VernonNemitz ( 581327 ) on Wednesday May 12, 2004 @10:30AM (#9126373) Journal
    It seems to me that as long as multiprocessor machines qualify as supercomputers, the Google cluster [tnl.net] counts as the fastest right now, and will still count as the fastest long after this new DOE computer is built.
  • by paitre ( 32242 ) on Wednesday May 12, 2004 @10:35AM (#9126414) Journal
    Certain operations, though, are highly dependent upon each previous result. Physics and chemical simulations are a good example. When you have situations like this, clusters don't do you a lot of good, since only one iteration can be worked on at a time -- leaving most of your cluster sitting there idle.

    Umm, bwah?
    It's only going to be sitting there idle if you're not properly scheduling and queueing jobs. Also, you -CAN- do those kinds of simulations (physics, chemistry) on a cluster *points at clusters at Chrysler and Shell*. The caveat is that you need to write out each result for the next job to pick up (in practice, job run 1 does step 1, job run 2 does step 2, etc.; a sketch of that pattern is below). And a cluster is perfectly fine for this.

    That all said - a supercomputer like this -IS- generally a better tool for the job if you've got the money. Money, in most places, -IS- an object, so we get the best bang for our buck.
    *shrug*
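    A minimal sketch of that step-chaining pattern, assuming a generic batch queue where each job reads the previous step's checkpoint and writes its own. The file names and the advance() function are hypothetical placeholders, not any particular site's setup:

```python
import json
import pathlib

# Minimal sketch of running a sequentially dependent simulation as a
# chain of batch jobs: each job loads the previous step's checkpoint,
# advances the simulation one step, and writes a new checkpoint for
# the next queued job to pick up.

def advance(state: dict) -> dict:
    """One simulation step; stands in for the real physics/chemistry kernel."""
    return {"step": state["step"] + 1, "value": state["value"] * 1.01}

def run_one_job(step: int, workdir: pathlib.Path) -> None:
    prev = workdir / f"checkpoint_{step - 1}.json"
    state = json.loads(prev.read_text()) if prev.exists() else {"step": 0, "value": 1.0}
    (workdir / f"checkpoint_{step}.json").write_text(json.dumps(advance(state)))

if __name__ == "__main__":
    workdir = pathlib.Path("run")
    workdir.mkdir(exist_ok=True)
    for step in range(1, 4):   # in practice each iteration is a separate queued job
        run_one_job(step, workdir)
```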
  • Re:good stuff (Score:5, Interesting)

    by Jeremy Erwin ( 2054 ) on Wednesday May 12, 2004 @10:47AM (#9126524) Journal
    But Virginia Tech's cluster doesn't use Ethernet as its primary network. It uses Infiniband. As for the cost not scaling linearly, ask yourself whether Big Mac's performance scales linearly.
  • by bradbury ( 33372 ) <`moc.liamg' `ta' `yrubdarB.treboR'> on Wednesday May 12, 2004 @11:25AM (#9126926) Homepage
    One of the major unclassified research uses is molecular modeling for the study of nanotechnology. This really consumes a lot of computer time because one is dealing with atomic motion over pico-to-nanosecond time scales. An example is the work [foresight.org] done by Goddard's group at Caltech on simulating rotations of the Drexler/Merkle Neon Pump [imm.org]. If I recall correctly, they found that when you cranked the rotational rate up to about a GHz it flew apart. (For reference, macro-scale parts like turbochargers or jet engines don't even come close...)

    In the long run one would like to be able to get such simulations from the 10,000-atom level up to the billion-to-trillion (or more) atom level, so you could simulate significant fractions of the volume of cells. Between now and then, molecular biologists, geneticists, bioinformaticians, etc. would be happy if we could just get to the level of accurate folding (Folding@Home [standford.edu] is working on this from a distributed standpoint) and eventually to be able to model protein-protein interactions, so we can figure out how things like DNA repair -- which involves 130+ proteins cooperating in very complex ways -- operate, and so better understand the causes of cancer and aging.
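    To get a rough sense of why those simulations are so expensive, the sketch below uses typical textbook figures for classical molecular dynamics (femtosecond timesteps, a few hundred floating-point operations per atom per step) as illustrative assumptions, and ignores parallel efficiency and long-range force costs entirely:

```python
# Back-of-the-envelope cost of classical molecular dynamics at the atom
# counts mentioned above. Timestep size and flops-per-atom-per-step are
# typical rough figures used here as illustrative assumptions; real codes
# and long-range electrostatics can be far more expensive.
timestep_fs = 1.0        # ~1 femtosecond per MD step
flops_per_atom = 500     # rough cost of one step, per atom
machine_flops = 50e12    # 50 TFlops sustained (the ORNL target)

def wall_time_days(atoms: int, simulated_ns: float) -> float:
    steps = simulated_ns * 1e6 / timestep_fs          # 1 ns = 10^6 fs
    total_flops = steps * atoms * flops_per_atom
    return total_flops / machine_flops / 86_400       # seconds -> days

for atoms in (10_000, 1_000_000, 1_000_000_000):
    for ns in (1.0, 1_000_000.0):                     # 1 ns and 1 ms of simulated time
        days = wall_time_days(atoms, ns)
        print(f"{atoms:>13,} atoms, {ns:>11,.0f} ns simulated: {days:12.3e} days")
```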

"May your future be limited only by your dreams." -- Christa McAuliffe

Working...