World's Fastest Supercomputer To Be Built At ORNL

Homey R writes "As I'll be joining the staff there in a few months, I'm very excited to see that Oak Ridge National Lab has won a competition within the DOE's Office of Science to build the world's fastest supercomputer in Oak Ridge, Tennessee. It will be based on the promising Cray X1 vector architecture. Unlike many of the other DOE machines that have at some point occupied #1 on the Top 500 supercomputer list, this machine will be dedicated exclusively to non-classified scientific research (i.e., not bombs)." Cowards Anonymous adds that the system "will be funded over two years by federal grants totaling $50 million. The project involves private companies like Cray, IBM, and SGI, and when complete it will be capable of sustaining 50 trillion calculations per second."
This discussion has been archived. No new comments can be posted.

  • Qualifier (Score:5, Insightful)

    by andy666 ( 666062 ) on Wednesday May 12, 2004 @09:32AM (#9125851)
    As usual, there should be a qualifier as to what is meant by fastest. According to their definition they are, but not according to NEC's, for example.
  • Re:50 trillion (Score:4, Insightful)

    by WindBourne ( 631190 ) on Wednesday May 12, 2004 @09:37AM (#9125887) Journal
    I wonder if that processing power could be used for rendering like what Weta did, and how its performance would compare to their renderfarm.
    Sure, but the real question is why you would. The cost of this machine on a per-MIP basis is sure to be much higher than a renderfarm's. In addition, ray tracing lends itself to parallelism. There are many other problems out there that do not, and those are the ones that can use this kind of box.
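The point that ray tracing lends itself to parallelism can be sketched in a few lines of Python: each pixel is shaded independently of every other pixel, so the work splits cleanly across workers. The `shade` function and worker count here are invented for illustration, not taken from any real renderer.

```python
from multiprocessing import Pool

def shade(pixel):
    # Hypothetical per-pixel shading: the result depends only on (x, y),
    # never on any other pixel, so pixels can be computed in any order
    # or on any worker.
    x, y = pixel
    return (x * 31 + y * 17) % 256

if __name__ == "__main__":
    pixels = [(x, y) for y in range(4) for x in range(4)]
    serial = [shade(p) for p in pixels]
    with Pool(4) as pool:          # 4 workers, chosen arbitrarily
        parallel = pool.map(shade, pixels)
    # Embarrassingly parallel: splitting the work changes nothing.
    print("identical results:", serial == parallel)
```

This is exactly the property a renderfarm exploits; problems whose steps depend on each other's results do not decompose this way.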
  • by Debian Troll's Best ( 678194 ) on Wednesday May 12, 2004 @09:43AM (#9125938) Journal
    I love reading about these kinds of large supercomputer projects...this is really cutting edge stuff, and in a way acts as a kind of 'crystal ball' for the types of high performance technologies that we might expect to see in more common server and workstation class machines in the next 10 years or so.

    The article mentions that the new supercomputer will be used for non-classified projects. Does anyone have more exact details of what these projects may involve? Will it be a specific application, or more of a 'gun for hire' computing facility, with CPU cycles open to all comers for their own projects? It would be interesting to know what types of applications are planned for the supercomputer, as it may be possible to translate a raw measure of speed like the quoted '50 trillion calculations per second' into something more meaningful, like 'DNA base pairs compared per second', or 'weather cells simulated per hour'. Are there any specialists in these kinds of HPC applications who would like to comment? How fast do people think this supercomputer would run apt-get for instance? Would 50 trillion calculations per second equate to 50 trillion package installs per second? How long would it take to install all of Debian on this thing? Could the performance of the system actually be measured in Debian installs per second? I look forward to the community's response!
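Translating a raw figure like "50 trillion calculations per second" into a domain unit is just arithmetic once you assume a per-item cost. A back-of-the-envelope sketch, where the 10-operations-per-comparison figure is entirely made up for illustration:

```python
# Hedged back-of-the-envelope conversion. The sustained rate comes from
# the article; the cost of one DNA base-pair comparison is an invented
# placeholder, not a measured number.
SUSTAINED_OPS_PER_SEC = 50e12        # 50 trillion calculations per second
OPS_PER_BASEPAIR_COMPARE = 10        # assumed cost of one comparison

basepairs_per_sec = SUSTAINED_OPS_PER_SEC / OPS_PER_BASEPAIR_COMPARE
print(f"~{basepairs_per_sec:.1e} base pairs compared per second")
```

Swap in a real per-item cost for weather cells, protein-folding steps, or (why not) package installs and the same division gives the corresponding throughput.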

  • Re:good stuff (Score:4, Insightful)

    by sotonboy ( 753502 ) on Wednesday May 12, 2004 @09:46AM (#9125972)
    I disagree. There is a huge difference. Bolting a load of boxes together with Ethernet, and all the associated overheads, can never be as efficient as dedicated hardware for connecting and sharing the processing load.

    Obviously there is a lot more that could affect the performance, such as how memory is implemented. In general, though, the system will perform best when each processor is performing calculations rather than looking after Ethernet connections.
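The interconnect argument can be put in a toy cost model: dividing the work across more nodes shrinks the compute share, but every message still pays the interconnect's latency. All figures below (work size, node count, message counts, and the two latencies) are invented for illustration.

```python
def runtime(work_secs, nodes, msgs_per_step, latency_secs, steps):
    """Toy model: compute time shrinks with node count, while the
    per-message latency cost is paid in full regardless."""
    compute = work_secs / nodes
    comm = steps * msgs_per_step * latency_secs
    return compute + comm

# Invented scenario: 1000 s of serial work on 64 nodes, 8 messages per
# timestep over 10,000 timesteps.
ethernet  = runtime(1000, 64, 8, 100e-6, 10_000)  # ~100 us assumed latency
dedicated = runtime(1000, 64, 8, 2e-6, 10_000)    # ~2 us assumed latency
print(f"Ethernet: {ethernet:.2f} s, dedicated interconnect: {dedicated:.2f} s")
```

Under these made-up numbers the Ethernet cluster spends about half its time in communication, while the dedicated interconnect keeps the nodes mostly computing, which is the commenter's point.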
  • by Waffle Iron ( 339739 ) on Wednesday May 12, 2004 @09:48AM (#9125983)
    There are still a few computing problems that can't be efficiently split into a large number of subproblems that can be executed in parallel. For those cases, a cluster of small machines won't help.
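A minimal example of such a problem is any iteration where each step needs the previous step's result: the dependency chain is strictly serial, so extra machines cannot shorten it. The logistic map below is a standard illustration, not something drawn from the article.

```python
def logistic(x0, r, n):
    """Iterate x_{k+1} = r * x_k * (1 - x_k) a total of n times.
    Step k+1 consumes the output of step k, so the n steps form a
    serial dependency chain: adding processors cannot shorten it."""
    x = x0
    for _ in range(n):
        x = r * x * (1 - x)
    return x

print(logistic(0.5, 2.0, 10))  # 0.5 is a fixed point at r = 2.0
```

Contrast this with per-pixel rendering: there, every item is independent and a cluster helps; here, the chain's length is the runtime no matter how many boxes you bolt together.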
  • Being Snide Here (Score:5, Insightful)

    by Seanasy ( 21730 ) on Wednesday May 12, 2004 @10:02AM (#9126099)

    I think ORNL and PSC know a lot more about supercomputing than you (or Internet rag pundits) do. As others have noted, there are real reasons for Big Iron.

    Clusters are great for certain problems, but for heavy computation -- think simulating two galaxies colliding or earthquake modeling -- off-the-shelf clusters don't cut it.

    They're not wasting taxpayer money unless you consider basic research a waste.

  • by compupc1 ( 138208 ) on Wednesday May 12, 2004 @10:32AM (#9126397)
    Clusters and supercomputers are totally different things, by definition. They are used for different types of problems, and as such cannot really be compared.
  • Re:good stuff (Score:3, Insightful)

    by Shinobi ( 19308 ) on Wednesday May 12, 2004 @11:50AM (#9127277)
    The larger the system is, the more it matters.
  • Re:Wow... (Score:2, Insightful)

    by MarvinIsANerd ( 447357 ) on Wednesday May 12, 2004 @03:38PM (#9130914)
    I can't believe this got modded up to +5 Funny. Any true nerd on Slashdot knows that blue is at a higher frequency than red. So if something blue moves faster (increases in frequency) it is going to shift into ultraviolet and beyond.
  • by ggwood ( 70369 ) on Wednesday May 12, 2004 @03:46PM (#9131021) Homepage Journal
    This project claims many big improvements. First, programmers will be available to help parallelize the scientists' code; the scientists may be experts at, say, weather or protein folding, but may not be experts at parallel code. Further, the facility is supposed to be open to all scientists from all countries and funded by any agency. CPU cycles are to be distributed on a merit-only basis, and not kept within DOE for DOE grantees to use, as apparently has happened within various agencies in the past.

    The idea is to make it more like other national labs where - for example in neutron scattering - you don't have to be an expert on neutron scattering to use the facility. They have staff available to help and you may have a grant from NSF or NIH but you can use a facility run by DOE if that's the best one for the job.

    I attended this session at the American Physical Society meeting this March, and I'm assuming this is the project referred to in the talks - I apologize if I'm wrong there, but this is at least what is being discussed by people within DOE. I'm essentially just summarizing what I heard at the meeting, so although it sounds like the obvious list of things to do, apparently it has not been done before.

    The prospect of opening such facilities to all scientists from all nations is refreshing during a time when so many problems have arisen from lack of mobility of scientists. For example, many DOE facilities such as neutron scattering at Los Alamos (LANL) have historically relied on a fraction of foreign scientists coming to use the facility, and this helps pay to maintain it. Much of this income has been lost and is not being compensated from other sources. Further, many legal immigrants working within the physics community have had very serious visa problems preventing them from leaving the country to attend foreign conferences. The APS meeting was held in Canada this year, and the rate of people who could not show up to attend and speak was perhaps ten times greater than at the APS conferences I attended previously. Although moving it to Canada helped many foreign scientists attend, it prevented a great many foreign scientists living within the US from going. Even with a visa to live and work within the US, they were not allowed to return to the US without additional paperwork, which many people had difficulty getting.

    Obviously, security is heightened after 9/11, as it should be. I'm bringing up the detrimental sides of such policies not to argue that no such policies should have been implemented, but to suggest the benefits be weighed against the costs - and the obvious costs, such as those to certain facilities, should either be compensated directly, or we should be honest and realize we are (indirectly) cutting funding to facilities that are (partly) used for defense in order to increase security.

    I mention LANL despite its dubious history of retaining secrets because I have heard talks by people working there (this is after 9/11) on ways to detect various WMD crossing US borders. Even though they personally are (probably) well funded, if the facilities they need to use don't operate any more, this is a huge net loss. My understanding is that all national labs (in the US) have had similar losses from lost foreign use.