World's Fastest Supercomputer To Be Built At ORNL

Homey R writes "As I'll be joining the staff there in a few months, I'm very excited to see that Oak Ridge National Lab in Oak Ridge, Tennessee has won a competition within the DOE's Office of Science to build the world's fastest supercomputer. It will be based on the promising Cray X1 vector architecture. Unlike many of the other DOE machines that have at some point occupied #1 on the Top 500 supercomputer list, this machine will be dedicated exclusively to non-classified scientific research (i.e., not bombs)." Cowards Anonymous adds that the system "will be funded over two years by federal grants totaling $50 million. The project involves private companies like Cray, IBM, and SGI, and when complete it will be capable of sustaining 50 trillion calculations per second."

  • good stuff (Score:4, Interesting)

    by Anonymous Coward on Wednesday May 12, 2004 @08:31AM (#9125841)

    Personally I'm happy to see Cray still making impressive machines. Not every problem can be solved by "divide and conquer" clusters.
    • Surely you meant "divide and cluster" right? :-D

  • Wow... (Score:3, Funny)

    by nother_nix_hacker ( 596961 ) on Wednesday May 12, 2004 @08:32AM (#9125845)
    The project involves private companies like Cray, IBM, and SGI, and when complete it will be capable of sustaining 50 trillion calculations per second.
    Outlook with no slowdown!
    • Re:Wow... (Score:5, Funny)

      by FenwayFrank ( 680269 ) on Wednesday May 12, 2004 @08:38AM (#9125899)
      It's so fast, the blue screen shifts to red!
      • Re:Wow... (Score:2, Insightful)

        I can't believe this got modded up to +5 Funny. Any true nerd on Slashdot knows that blue is at a higher frequency than red. So if something blue moves faster (increases in frequency) it is going to shift into ultraviolet and beyond.
        • Ever considered the possibility that it might be traveling towards you and therefore shift the other way?:P
          • Ahem. Moving towards you would result in blueshift. As it is already blue, it would instead result in ultravioletshift, et cetera, et al, yada yada, and so on.

            However, if it was moving AWAY at a high rate of speed, it would indeed result in red shift. And since we ALL want to move away from Outlook as fast as we can, this is indeed a grand achievement!
        • I remember the first time I saw Outlook. I moved away from it so fast that I had to pick up the logo on an ELF radio receiver.

          How's that for an anti-MS joke?
  • Qualifier (Score:5, Insightful)

    by andy666 ( 666062 ) on Wednesday May 12, 2004 @08:32AM (#9125851)
    As usual, there should be a qualifier as to what is meant by fastest. By their definition it is, but not by NEC's, for example.
    • As usual, there should be a qualifier as to what is meant by fastest.

      When complete it will be capable of sustaining 50 trillion calculations per second.

      Screw that. How many fps can it manage in Quake III?
  • 50 trillion (Score:2, Interesting)

    by Killjoy_NL ( 719667 )
    50 trillion calculations per second.
    Wow, that's darn fast.

    I wonder if that processing power could be used for rendering like Weta did, and how the performance would compare to their renderfarm.
    • Build me a real time simulation of Morgan Webb PLEASE!
    • Re:50 trillion (Score:4, Insightful)

      by WindBourne ( 631190 ) on Wednesday May 12, 2004 @08:37AM (#9125887) Journal
      I wonder if that processing power could be used for rendering like Weta did, and how the performance would compare to their renderfarm.
      Sure, but the real question is: why would you? The cost of this on a per-MIPS basis is sure to be much higher than a renderfarm's. In addition, ray tracing lends itself to parallelism; there are many other problems out there that do not, and those can use this kind of box.
  • Hmm (Score:5, Funny)

    by LaserLyte ( 725803 ) * on Wednesday May 12, 2004 @08:34AM (#9125865)
    > ...capable of sustaining 50 trillion calculations per second.

    Hmm...I wonder if I could borrow it for a few days to give my dnet [distributed.net] stats a boost :D
  • by Anonymous Coward on Wednesday May 12, 2004 @08:35AM (#9125874)
    Wow, 50 trillion calculations per second. That's almost fast enough to finish an infinite loop in under ten hours.
  • And then VT will add more nodes to their G5 cluster. :P
    • Re:Yeah... (Score:3, Interesting)

      by word munger ( 550251 )
      Unfortunately we haven't heard much from them lately [vt.edu] (Notice the "last updated" date). I suspect they're still waiting on their G5 xServes.
      • Which is forcing me to continue waiting for the one I ordered the day the fucking things were announced.
        They've gone from giving me a mid to late April ship date to "Sometime in June".

        Screw that. Apple is screwing the pooch if they're at all serious about getting into enterprise computing. It's one thing to slip one or two months, but now they're at four, and I wouldn't be surprised to see it go to six at this point.

        Fartknockers.
    • Remember, DOE is a tax-payer funded agency. For my money, the G5 solution looks better!

  • Doom III (Score:4, Funny)

    by MrRuslan ( 767128 ) on Wednesday May 12, 2004 @08:39AM (#9125912)
    at an impressive 67 fps on this baby...
  • by realSpiderman ( 672115 ) on Wednesday May 12, 2004 @08:42AM (#9125934)
    ... or this [ibm.com] is going to beat them hard.

    Still a whole year until they have a full machine, but the 512-way prototype reached 1.4 TFlops (Linpack). The complete machine will have 128 times the nodes and a 50% higher frequency, so even with pessimistic scalability it will be more than twice as fast as the ORNL machine; a rough version of that arithmetic is sketched below.
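
    A rough back-of-the-envelope version of that estimate, as a sketch: the 1.4 TFlops, 128x node count, and 50% clock figures come from the comment above, while the 0.5 efficiency factor is an assumption standing in for "pessimistic scalability".

    /* Scaling estimate for the full Blue Gene machine from the prototype's
     * Linpack number. The 0.5 efficiency factor is an illustrative assumption,
     * not a published figure. */
    #include <stdio.h>

    int main(void)
    {
        double prototype_tflops = 1.4;   /* 512-way prototype, Linpack      */
        double node_multiplier  = 128.0; /* full machine has 128x the nodes */
        double clock_multiplier = 1.5;   /* 50% higher frequency            */
        double efficiency       = 0.5;   /* assumed pessimistic scaling     */

        double full = prototype_tflops * node_multiplier
                    * clock_multiplier * efficiency;
        printf("Estimated full-machine Linpack: ~%.0f TFlops\n", full);
        /* Prints ~134 TFlops, comfortably more than twice the 50 TFlops
         * quoted for the ORNL system, even with half the ideal scaling. */
        return 0;
    }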

    • by flaming-opus ( 8186 ) on Wednesday May 12, 2004 @09:08AM (#9126163)
      Two radically different designs will probably solve very different sorts of problems. Linpack is extremely good at giving a computer an impressive number; it's the sort of problem that fills up execution pipelines to their maximum. Blue Gene was originally designed to do protein-folding calculations. While many other tasks will work well on that machine, others will work very poorly.

      It's a mesh of a LOT of microcontroller-class processors, the theory being that these processors give you the best performance per transistor. Thus you can run them at a moderate clock, get decent performance out of them, and cram a whole hell of a lot of them into a cabinet. It's a cool design, and I'm interested to see what it will be able to do once deployed. However, for the problems they have at ORNL, I'm sure the X1 was a better machine; otherwise they would have gone with IBM. They already have a farm of p690s, so they have a working relationship.
  • by Debian Troll's Best ( 678194 ) on Wednesday May 12, 2004 @08:43AM (#9125938) Journal
    I love reading about these kinds of large supercomputer projects...this is really cutting edge stuff, and in a way acts as a kind of 'crystal ball' for the types of high performance technologies that we might expect to see in more common server and workstation class machines in the next 10 years or so.

    The article mentions that the new supercomputer will be used for non-classified projects. Does anyone have more exact details of what these projects may involve? Will it be a specific application, or more of a 'gun for hire' computing facility, with CPU cycles open to all comers for their own projects? It would be interesting to know what types of applications are planned for the supercomputer, as it may be possible to translate a raw measure of speed like the quoted '50 trillion calculations per second' into something more meaningful, like 'DNA base pairs compared per second', or 'weather cells simulated per hour'. Are there any specialists in these kinds of HPC applications who would like to comment? How fast do people think this supercomputer would run apt-get for instance? Would 50 trillion calculations per second equate to 50 trillion package installs per second? How long would it take to install all of Debian on this thing? Could the performance of the system actually be measured in Debian installs per second? I look forward to the community's response!

    • Well, besides weather simulation (which is among the most CPU-intensive work around), they could use this new computer to do computational fluid dynamics analysis--perfect for studying the aerodynamics of airplanes, shaping the aerodynamics of an automobile, and possibly studying how to reduce noise on a maglev train travelling at over 250 mph.
    • Quantum chemistry, or ab initio, calculations tend to be a biggie. I wouldn't be surprised if ab initio alone would account for > 50 % of all supercomputer cpu cycles in the world.

      Other big things are weather prediction, fluid dynamics, classical (i.e. "Newtonian") molecular dynamics with some kind of empirical potentials (e.g. protein folding and stuff can be thought of as MD).
    • "Will it be a specific application, or more of a 'gun for hire' computing facility, with CPU cycles open to all comers for their own projects?"

      This will be what is known as a "user facility" at DOE. CPU time will be doled out on a competitive basis, i.e., if someone has a project they would like to use it for, they will submit a proposal which will then be reviewed against others.
    • Some people here have mentioned nanotechnology simulations. I don't know that the label needs to be thrown around so much, but I have written a few models of the type that will be used. Every once in a while I make a computer model to explain my experimental results (usually for myself, when I don't believe something).

      Generally, these are voltage and force relaxations, with some areas of well defined voltage, some point charges thrown around, and very complex geometry. Basically, that means I set up a
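
      For readers wondering what a "voltage relaxation" looks like in code, here is a toy sketch under much simpler assumptions than the models described above: a small square grid, one edge held at a fixed voltage, and a single point charge, solved by plain Jacobi iteration.

      /* Toy 2D voltage relaxation: Jacobi iteration for the Poisson equation
       * on an N x N grid. One edge is held at a fixed voltage and a single
       * point charge sits in the middle; geometry and values are made up. */
      #include <stdio.h>
      #include <string.h>

      #define N 64
      #define STEPS 2000

      int main(void)
      {
          static double v[N][N], vnew[N][N], rho[N][N];

          rho[N / 2][N / 2] = 1.0;                    /* a point charge       */
          for (int j = 0; j < N; j++) v[j][0] = 1.0;  /* left edge at 1 volt  */

          for (int step = 0; step < STEPS; step++) {
              memcpy(vnew, v, sizeof v);              /* keeps edges fixed    */
              for (int j = 1; j < N - 1; j++)
                  for (int i = 1; i < N - 1; i++)
                      vnew[j][i] = 0.25 * (v[j - 1][i] + v[j + 1][i] +
                                           v[j][i - 1] + v[j][i + 1] + rho[j][i]);
              memcpy(v, vnew, sizeof v);
          }

          printf("potential next to the charge: %f\n", v[N / 2][N / 2 + 1]);
          return 0;
      }
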
  • Can anyone explain what "DOE" is? I'm assuming it's some American government thing like the Department of Energy. Is that correct?
    • That's correct, it's the Department of Energy.

      I don't know why they would need it, but that's just because I don't know anything about the work of the DOE (not being an American and all that).
      • by bsDaemon ( 87307 ) on Wednesday May 12, 2004 @09:18AM (#9126243)
        I worked in Instrumentation and Control for the Free Electron Laser project at the Thomas Jefferson National Accelerator Facility. We also host CEBAF (the Continuous Electron Beam Accelerator Facility), which is a huge-ass particle accelerator.
        The DOE does a lot of basic research in nuclear physics, quantum physics, et cetera. The FEL was used to galvanize power rods for VPCO (now Dominion Power) and made them last three times as long. Some William & Mary people use it for protein research, splicing molecules and stuff.
        The DOE does a lot of very useful things that need large amounts of computing power, not just simulating nuclear bombs (although Oak Ridge does that sort of stuff, as does Los Alamos). We only had a lame Beowulf cluster at TJNAF. I wish we'd had something like this beast.
        I want to know how it stacks up against the Earth Simulator.
    • Re:Maybe it's me. (Score:5, Informative)

      by henryhbk ( 645948 ) on Wednesday May 12, 2004 @08:51AM (#9126020) Homepage
      Yes, DOE is the Federal Government's Department of Energy. Oak Ridge is a large federal govt. lab.
  • ...because a day later Palm users will massively interconnect to form the World Fastest Clustered Computer Environment. The OS? Linux, of course. .}
  • by Anonymous Coward

    or it certainly seems like it (reading the specs of the thing)

  • I couldn't find the source for the "non-classified" bit... These things are often not used for simulating new bombs but for, "evaluating the stability of the nuclear stockpile." Does research into whether the yield of our cold war nukes is down or up a few kilotons qualify as non-classified?
    • "evaluating the stability of the nuclear stockpile."
      Why don't they just print a best-use-before-date on those nukes?
  • They were listed as part of the solution.

    Oak Ridge has done extensive evaluations of recent IBM, SGI and Cray technology. Though I am still looking forward to data on IBM's Power5.

    Cray X1 Eval [ornl.gov]
    SGI Altix Eval [ornl.gov]
    • ORNL already has a 256-processor X1, a large IBM SP made of p690s, and a large SGI Altix. I imagine the 50 TFlops number will be a combined system with upgrades of all three types. They are obviously impressed with both the X1 and the Altix. The IBMs are no slouch either; they are upgrading the interconnect, and IBM is just getting ready to launch a Power5 update.

      It's probably just spin to call the project "A computer", rather than "several computers". Deep in one of those ORNL whitepaper
  • Since it's funded by federal grants, how much time, as a taxpayer, do I get on it?

    And I'm still waiting for my turn to drive one of the Mars rovers.

    • As a direct percentage of total taxpayers, your time would be equal to under one second. However, when calculated as a percentage of your tax contributions in relation to all tax revenues collected, it looks like you still owe us 23 days, 17 hours, and 54 minutes of processing time on your computer. You can drop your computer off at the closest IRS office to you.
      Thank you for your understanding in this matter,

      Your friendly neighbourhood IRS agent.

  • 3D torus topology (Score:5, Informative)

    by elwinc ( 663074 ) on Wednesday May 12, 2004 @09:05AM (#9126124)
    I checked out the topology of the Cray X1; they call it an "enhanced 3D torus." A 3D torus is what you get if you make an NxNxN cube of nodes, connect all adjacent nodes (top, bottom, left, right, front, back), and then connect the processors on each face through to the opposite face. I can't tell what an "enhanced" torus is. (Each X1 node, by the way, has four 12.8 GFlops MSPs, and each MSP has eight 32-stage, 64-bit floating-point pipelines.)

    So each node is directly connected to six adjacent nodes. Contrast this with the Thinking Machines Connection Machine CM2 topology, which had 2^N nodes connected in an N-dimensional hypercube. [base.com] So each node in a 16384-node CM2 was directly connected to 16 other nodes. There's a theorem that you can always embed a lower-dimensional torus in an N-dimensional hypercube, so the CM2 had all the benefits of a torus and more. This topology was criticized because you never needed as much connectivity as you got in the higher node-count machines, so the CM2 was in effect selling you too much wiring.

    Thinking Machines changed the topology to fat trees [ornl.gov] in the CM5. One of the cool things about the fat tree is that it allows you to buy as much connectivity as you need. I'm really surprised that it seems to have died when Thinking Machines collapsed. On the other hand, any kind of 3D mesh is probably pretty good for simulating physics in 3D: you can have each node model a block of atmosphere for a weather simulation, or a little wedge of hydrogen for an H-bomb simulation. But it might be useful to have one more dimension of connection for distributing global results to the nodes. (A quick sketch of the basic 3D torus neighbor scheme follows below.)
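
    As an illustration of the plain (non-"enhanced") 3D torus described above, here is a sketch of how a node's six neighbors are found on an NxNxN grid with wraparound; the grid size and coordinate scheme are made up for the example, and whatever Cray's "enhanced" links add is not modeled.

    /* Six neighbors of node (x,y,z) in an N x N x N 3D torus: the adjacent
     * node in each direction, with coordinates wrapping at the faces. */
    #include <stdio.h>

    static int wrap(int i, int n) { return (i % n + n) % n; } /* handles -1 */

    static void torus_neighbors(int x, int y, int z, int n)
    {
        int d[6][3] = { {1,0,0}, {-1,0,0}, {0,1,0}, {0,-1,0}, {0,0,1}, {0,0,-1} };
        for (int k = 0; k < 6; k++)
            printf("(%d,%d,%d) ", wrap(x + d[k][0], n),
                                  wrap(y + d[k][1], n),
                                  wrap(z + d[k][2], n));
        printf("\n");
    }

    int main(void)
    {
        /* A corner node still has six neighbors thanks to the wraparound. */
        torus_neighbors(0, 0, 0, 8);
        /* Prints: (1,0,0) (7,0,0) (0,1,0) (0,7,0) (0,0,1) (0,0,7) */
        return 0;
    }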

    • So each node is directly connected to six adjacent nodes.

      Excellent. We can finally solve the Optimal Dungeon Theorem on hex tile games.
    • I don't think the fat tree died with Thinking Machines. For example, MCR at LLNL uses a Quadrics fat tree. I imagine many sizeable clusters (way more than 64 nodes) use one. There's one link here [quadrics.com], and the MCR link here [llnl.gov], but you can probably google for quadrics and fat tree to find some more. I'd be surprised if fat trees didn't show up in Myrinet / other interconnects, but you typically need to have a sizeable cluster before there's any point in calling it a fat tree.

      (Oh, and if you meant something else
    • Fat trees are still alive and well. It appears to be Quadrics' topology of choice, as it is applied at LLNL and PNNL, which both use their interconnect. I'm not sure the folks at ORNL would have specified a torus unless they believed that they could make use of it. I know those guys, and they are some very smart people. I don't recall hearing a reason for the topology decision, though.
  • ... to post the usual jokes, I've got to ask: what runs on these kinds of machines? What OS do they use, and what kind of software? Can you buy software for supercomputers, or will the customer/new owner have to write all the software themselves? Anyone out there working on something similar have interesting facts about the software?
    • Supercomputers usually run some flavor of UNIX -- Unicos, IRIX, I think even Linux. In any case, the OS is specially built and tuned for the supercomputer. Supercomputers are used for highly specialized scientific applications, and as such the programs are specially written in Fortran, C, or assembly, and often specially optimized for the architecture.
    • by flaming-opus ( 8186 ) on Wednesday May 12, 2004 @09:49AM (#9126546)
      The SGI Altix runs a hacked-up version of Linux that's part 2.4 with a lot of backported 2.6 stuff as well as the Irix SCSI layer; they are migrating to a pure 2.6 OS soon. The IBM system runs AIX 5.2. The Cray runs Unicos, which is a derivative of Irix 6.5, though they seem to be moving to Linux also. I'm gonna guess that they run TotalView as their debugger. They use DFS as their network filesystem. They have published plans to hook all these systems up to the StorNext filesystem, which does hierarchical storage management. MPI and PVM are likely important libraries for a lot of their apps.

      For these sorts of machines, one can buy utilities for data migration, backup, debugging, etc. However, the production code is written in-house, and that's the way they want it. Weather forecasting, for example, uses software called MM5, which has been evolving since the Cray-2 days at least. A lot of this code is passed around between research facilities. It's not open source exactly, but the DOD plays nice with the DOE, etc.

      The basic algorithms have been around for a long time. In the early 90s, when MPPs and then clusters came onto the scene, a lot of work was done in restructuring the codes to run on a large number of processors. Sometimes this works better than other times. Most of the work isn't in writing the code but in optimizing it; trying to minimize the synchronous communication between nodes is of great importance. (A minimal sketch of that communication pattern follows below.)
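
      Since MPI keeps coming up, here is a minimal sketch of the nearest-neighbor "halo" exchange that this kind of communication tuning revolves around. It is the generic MPI pattern, not code from MM5 or any ORNL application; compile with mpicc and run under mpirun.

      /* Each rank owns NLOCAL cells of a 1D domain plus one ghost cell at
       * each end, and swaps boundary values with its two neighbors. */
      #include <mpi.h>
      #include <stdio.h>

      #define NLOCAL 10

      int main(int argc, char **argv)
      {
          MPI_Init(&argc, &argv);

          int rank, size;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          double u[NLOCAL + 2];
          for (int i = 0; i < NLOCAL + 2; i++) u[i] = (double)rank;

          int left  = (rank - 1 + size) % size;
          int right = (rank + 1) % size;

          /* MPI_Sendrecv pairs the send and receive so neighbors don't deadlock. */
          MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                       &u[NLOCAL + 1], 1, MPI_DOUBLE, right, 0,
                       MPI_COMM_WORLD, MPI_STATUS_IGNORE);
          MPI_Sendrecv(&u[NLOCAL], 1, MPI_DOUBLE, right, 1,
                       &u[0], 1, MPI_DOUBLE, left, 1,
                       MPI_COMM_WORLD, MPI_STATUS_IGNORE);

          /* A real solver now updates interior cells from the ghost values and
           * repeats; how rarely it can afford this exchange is the whole game. */
          printf("rank %d ghosts: %.0f %.0f\n", rank, u[0], u[NLOCAL + 1]);

          MPI_Finalize();
          return 0;
      }
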
    • While much of the software will be custom applications, there are common packages that you'll find for simulating molecular interactions, doing sequence analysis, etc.

      You can check out a list of software available on a CRAY T3E [psc.edu] to get an idea.

  • NOT the fastest! (Score:5, Interesting)

    by VernonNemitz ( 581327 ) on Wednesday May 12, 2004 @09:30AM (#9126373) Journal
    It seems to me that as long as multiprocessor machines qualify as supercomputers, then the Google cluster [tnl.net] counts as the fastest right now, and will still count as the fastest long after this new DOE computer is built.
    • by compupc1 ( 138208 )
      Clusters and supercomputers are totally different things, by definition. They are used for different types of problems, and as such cannot really be compared.
    • Depends what you're doing. Something like Google or SETI or frame rendering scales very well on a cluster, because the amount of internode communication required is very low.

      Something like CFD or FEM is about in the middle, which is to say that clusters and SCs do about as well as each other. This is because, although there is a requirement that nodes communicate, the amount of communication is relatively low compared to the amount of internal computation, i.e. each cell is mostly affected by the cells direc
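
      The surface-to-volume reasoning behind that point, sketched with cube-shaped subdomains; the "one value per face cell, six neighbors" cost model is a simplifying assumption.

      /* A cubic subdomain of side s does O(s^3) cell updates per step but only
       * exchanges O(s^2) face cells, so communication shrinks relative to
       * computation as the per-node problem grows. */
      #include <stdio.h>

      int main(void)
      {
          long sides[] = { 8, 32, 128 };
          for (int k = 0; k < 3; k++) {
              long s = sides[k];
              long compute = s * s * s;   /* cells updated locally            */
              long comm    = 6 * s * s;   /* face cells sent to six neighbors */
              printf("s=%4ld  compute=%9ld  comm=%7ld  ratio=%.3f\n",
                     s, compute, comm, (double)comm / (double)compute);
          }
          /* The ratio falls from 0.750 at s=8 to 0.047 at s=128, which is why
           * big per-node problems tolerate a slower interconnect. */
          return 0;
      }
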
  • by bradbury ( 33372 ) <<moc.liamg> <ta> <yrubdarB.treboR>> on Wednesday May 12, 2004 @10:25AM (#9126926) Homepage
    One of the major unclassified research uses is molecular modeling for the study of nanotechnology. This really consumes a lot of computer time because one is dealing with atomic motion over pico-to-nanosecond time scales. An example is the work [foresight.org] done by Goddard's group at CALTECH on simulating rotations of the Drexler/Merkle Neon Pump [imm.org]. If I recall properly, they found that when you cranked the rotational rate up to about a GHz, it flew apart. (For reference, macro-scale parts like turbochargers or jet engines don't even come close...)

    In the long run one would like to be able to get such simulations from the 10,000-atom level up to the billion-to-trillion (or more) atom level, so you could simulate significant fractions of the volume of cells. Between now and then, molecular biologists, geneticists, bioinformaticians, etc. would be happy if we could just get to the level of accurate folding (Folding@Home [standford.edu] is working on this from a distributed standpoint) and eventually be able to model protein-protein interactions, so we can figure out how things like DNA repair -- which involves 130+ proteins cooperating in very complex ways -- operate, and better understand the causes of cancer and aging. (A back-of-the-envelope sense of the cost is sketched below.)
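
    For a rough sense of why pico-to-nanosecond simulations at large atom counts are so expensive, here is a back-of-the-envelope sketch; the 1 fs timestep is typical for atomic motion, while the per-atom cost per step is an assumed figure for illustration, not one taken from the work cited above.

    /* Cost of a classical MD run at the billion-atom, one-nanosecond scale,
     * measured against the 50 TFlops sustained rate quoted for the machine. */
    #include <stdio.h>

    int main(void)
    {
        double timestep = 1e-15;            /* ~1 femtosecond per step          */
        double sim_time = 1e-9;             /* one nanosecond of simulated time */
        double atoms    = 1e9;              /* billion-atom regime              */
        double flops_per_atom_step = 1e3;   /* assumed short-range force cost   */

        double steps = sim_time / timestep;                  /* 1e6 steps    */
        double total = steps * atoms * flops_per_atom_step;  /* ~1e18 flops  */
        double hours = total / 50e12 / 3600.0;               /* at 50 TFlops */

        printf("%.0e steps, %.0e flops, about %.1f hours at 50 TFlops\n",
               steps, total, hours);
        return 0;
    }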

  • ...will it be able to run Longhorn?

    Tim
  • Warning: abstract thoughts ahead.

    Considering the whole of spacetime as a single unit, with our perception limited to only one piece of it at a time, it occurs to me that perhaps everything in both our future and past exists all at once; we're just sliding down a scale as the next section is revealed to us.

    That said, wouldn't it make sense that the world's fastest computer is among the very last "super" computers built, many years (centuries? millennia?) in our future (if you want to call it that)? No comp
  • At that speed, if it were running Windows XP, the whole internet could be infected with a virus in mere nanoseconds.

  • by ggwood ( 70369 ) on Wednesday May 12, 2004 @02:46PM (#9131021) Homepage Journal
    This project claims many big improvements. First, programmers will be available to help parallelize the code of scientists, who may be experts at, say, weather or protein folding but may not be experts at parallel code. Further, the facility is supposed to be open to all scientists from all countries, funded by any agency. CPU cycles are to be distributed on a merit-only basis, and not kept within DOE for DOE grantees to use, as apparently has happened within various agencies in the past.

    The idea is to make it more like other national labs where - for example in neutron scattering - you don't have to be an expert on neutron scattering to use the facility. They have staff available to help and you may have a grant from NSF or NIH but you can use a facility run by DOE if that's the best one for the job.

    I attended this session [aps.org] at the American Physical Society meeting this March, and I'm assuming this is the project referred to in the talks - I apologize if I'm wrong there, but this is at least what is being discussed by people within DOE. I'm essentially just summarizing what I heard at the meeting, so although it sounds like the obvious list of things to do, apparently it has not been done before.

    The prospect of opening such facilities to all scientists from all nations is refreshing during a time when so many problems have arisen from lack of mobility of scientists. For example, many DOE facilities, such as neutron scattering at Los Alamos (LANL), have historically relied on a fraction of foreign scientists coming to use the facility, and this helps pay to maintain it. Much of this income has been lost and is not being compensated from other sources. Further, many legal immigrants working within the physics community have had very serious visa problems preventing them from leaving the country to attend foreign conferences. The APS meeting was held in Canada this year, and the rate of people who could not show up to attend and speak was perhaps ten times greater than at the APS conferences I attended previously. Although moving it to Canada helped many foreign scientists attend, it prevented a great many foreign scientists living within the US from going. Even with a visa to live and work within the US, they were not allowed to return to the US without additional paperwork, which many people had difficulty getting.

    Obviously, security is heightened after 9/11, as it should be. I'm bringing up the detrimental sides of such policies not to argue that no such policies should have been implemented, but to suggest that the benefits be weighed against the costs - and the obvious costs, such as those to certain facilities, should either be compensated directly, or we should be honest and realize we are (indirectly) cutting funding to facilities which are (partly) used for defense in order to increase security.

    I mention LANL despite its dubious history of retaining secrets because I have heard talks by people working there (after 9/11) on ways to detect various WMD crossing US borders. Even though they personally are (probably) well funded, if the facilities they need to use don't operate any more, this is a huge net loss. My understanding is that all national labs (in the US) have had similar losses from lost foreign use.