Supercomputing Hardware

DOE Asks For 30-Petaflop Supercomputer

Nerval's Lobster writes "The U.S. Department of Science has presented a difficult challenge to vendors: deliver a supercomputer with roughly 10 to 30 petaflops of performance, yet filled with energy-efficient multi-core architecture. The draft copy (.DOC) of the DOE's requirements provides for two systems: 'Trinity,' which will offer computing resources to the Los Alamos National Laboratory (LANL), Sandia National Laboratories (SNL), and Lawrence Livermore National Laboratory (LLNL) during the 2016-2020 timeframe; and NERSC-8, the replacement for the current NERSC-6 'Hopper' supercomputer, first deployed in 2010 for the DOE facilities. Hopper debuted at number five in the list of Top500 supercomputers and can crunch numbers at the petaflop level. The DOE wants a machine with performance between 10 and 30 times Hopper's capabilities, with the ability to support one compute job that could take up over half of the available compute resources at any one time."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    What are they going to use this machine for? Hopefully it's not Skyrim.
    • Re:So . . . (Score:5, Informative)

      by godrik ( 1287354 ) on Friday January 11, 2013 @06:19PM (#42563161)

      These machines are most likely going to be the replacements for the ones we already have. NERSC presents the projects that run on its computing infrastructure on its web site [1]. On the first page you can see the projects that are currently running jobs and what they are doing. For instance, this project [2] is about designing artificial photosynthetic cells. If you are interested, just check the projects they are funding.

      [1] https://www.nersc.gov/ [nersc.gov]
      [2] https://www.nersc.gov/science/energy-science/artificial-photosynthesis-i-design-principles-for-light-harvesting/ [nersc.gov]

      • by neonsignal ( 890658 ) on Friday January 11, 2013 @09:13PM (#42564485)

        Are you really claiming that a computer being run by Los Alamos and called Trinity [wikipedia.org] is primarily going to be used for alternative energy?

        • by AHuxley ( 892839 ) on Friday January 11, 2013 @10:01PM (#42564753) Journal
          Little Red Blogger from the Hood asks:
          "What a deep budget you have," ("The better to educate you with"),
          "Goodness, what big networks you have," ("The better to save you taxes by networking with)
          "And what big transformers you have!" ("The faster to compute for you with"),
          "What a big results you have," ("The better to nuke you with!")
      • by Orp ( 6583 )

        I am an early user on the Blue Waters petaflop machine (http://www.ncsa.illinois.edu/BlueWaters/). Mean time to failure becomes a real issue for such a huge machine when you have about 700,000 cores, who knows how many spinning hard drives, and all that network infrastructure. However, my research collaborators and I have managed to get jobs through that take on the order of 12 hours of wallclock time without a hardware fault, which is amazing IMO. I do wonder whether we can simply continue to expand the scale of these machines.
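        A minimal sketch of the usual defence against faults at this scale, periodic checkpoint/restart, is below; the interval, file name, and stand-in "work" are made-up illustrations, not anything specific to Blue Waters:

          import os
          import pickle

          CHECKPOINT = "state.ckpt"   # hypothetical checkpoint file
          STEPS = 1_000_000
          CKPT_EVERY = 10_000         # trade-off: checkpoint cost vs. work lost on a fault

          def load_state():
              # Resume from the last checkpoint if one exists, else start fresh.
              if os.path.exists(CHECKPOINT):
                  with open(CHECKPOINT, "rb") as f:
                      return pickle.load(f)
              return {"step": 0, "field": 0.0}

          def save_state(state):
              # Write to a temp file and atomically rename, so a crash mid-write
              # cannot leave a corrupt checkpoint behind.
              tmp = CHECKPOINT + ".tmp"
              with open(tmp, "wb") as f:
                  pickle.dump(state, f)
              os.replace(tmp, CHECKPOINT)

          state = load_state()
          for step in range(state["step"], STEPS):
              state["field"] += 1e-6      # stand-in for one timestep of real work
              state["step"] = step + 1
              if state["step"] % CKPT_EVERY == 0:
                  save_state(state)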

    • Re:So . . . (Score:5, Informative)

      by Raul654 ( 453029 ) on Friday January 11, 2013 @06:28PM (#42563243) Homepage

      Back when I worked for the Supercomputing group at Los Alamos, the supercomputers were categorized into 'capacity' machines (the workhorses where they did most of the work, which typically run at near full utilization) and 'capability' machines (the really big, cutting-edge, highly unstable machines that exist to push the edge of what is possible in software and hardware; one example of such an application would be high-energy physics simulation). It sounds like these machines fall into the latter category.

      • I don't know. Personally whenever I see machines with specs like these I get the idea that the only practical application would be advanced AI.

        Yes, I know the NNSA and others use this type of hardware to simulate physical environments and nuclear events, but I just can't help but think there's a pretty good possibility our government is racing toward advanced AI systems. These computer folks are some of the best in the world and they know as well as anyone what an advancement in weapons tech an AI would be.
        • There is a lot of different research that benefits from these kinds of machines. Mind you, the machine will hardly be running a single program at 30 pflops scale, but instead running dozens of smaller jobs at the same time, and economy comes with scale. It's simpler to scale your job from 10,000 processors to 1 million on the same machine than to run the smaller job at one site and then port it to the big one. Besides, give the physics, math, and biology departments 30 pflops and they will ask for 50. :)

      • Even if the names don't get changed, they still get upgraded a lot. The power costs are so significant (several million dollars a year) that running a system that's more than a couple of years old is completely infeasible. For example, I have an account on HECToR, which has gone through three or four upgrades since it was first built in 2008: 11,328 2.8GHz cores to 22,646 2.3GHz cores to 44,544 cores to 90,112 2.3GHz cores (and RAM upgrades along the way, for a total of 90TB now). One of those was a two-part upgrade.
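        As a rough illustration of that power bill (the draw and tariff below are assumed round numbers, not HECToR's actual figures):

          power_mw = 3.0              # assumed average draw for a large system
          price_per_kwh = 0.10        # assumed industrial electricity rate, USD
          hours_per_year = 24 * 365
          cost_usd = power_mw * 1000 * hours_per_year * price_per_kwh
          print(f"~${cost_usd / 1e6:.1f} million per year")   # ~$2.6M at these numbers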
    • C&C for SkyNet!

  • by Janek Kozicki ( 722688 ) on Friday January 11, 2013 @05:51PM (#42562929) Journal
    bitcoins? :)
  • by Shag ( 3737 ) on Friday January 11, 2013 @05:52PM (#42562939) Journal

    Oh, if only science were elevated to Department status, with a cabinet-level secretary!

    I think you mean Department of Energy [energy.gov], Office of Science [energy.gov].

  • Large cluster of Raspberry Pis

    • Re:How about a (Score:5, Informative)

      by VorpalRodent ( 964940 ) on Friday January 11, 2013 @06:22PM (#42563189)
      This is Slashdot - I believe the meme you're looking for is "Beowulf cluster", and such a cluster of these things would probably even meet the recommended specs for Crysis.
    • Mostly because of the network. Although the CPUs (or the whole system, in the case of the Pi) are cheap, the inter-node communication is SLOW, and this gets worse with scale. So what starts out bad (the Pi's network) gets much worse at larger scale.

      It is an excellent test bed for teaching parallel computing, though - exactly because it scales so badly, the bad effects are exaggerated.
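      A toy model of that effect, with entirely made-up constants (nothing here is a measured Raspberry Pi figure), shows how communication swamps the ideal speedup as the node count grows:

        def runtime(nodes, work=1000.0, per_msg=0.01, msgs_per_node=50):
            compute = work / nodes                     # ideal speedup of the work itself
            comms = per_msg * msgs_per_node * nodes    # all-to-all-style traffic grows with nodes
            return compute + comms

        for n in (1, 8, 64, 512):
            t = runtime(n)
            print(f"{n:4d} nodes: {t:8.1f} s  (efficiency {runtime(1) / (n * t):.1%})")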

  • 27 Petaflops at Oak Ridge
    20 Petaflops at Lawrence Livermore

    http://top500.org/lists/2012/11/ [top500.org]

  • Maybe the DOE should bid on that supercomputer being liquidated [slashdot.org] by the US state of New Mexico.

  • What's the problem? They can buy one that fits their need today. There are a variety of designs that will deliver this kind of performance available today from the likes of Cray and IBM.
    • It's not the fact that they already exist, but that they have to spend government money. Since they are spending that money, they have to get the taxpayers their money's worth, so they have to put out a "tender" so suppliers can compete to offer the best deal. To keep the personal preferences of people in power, bribes, and the like from deciding the outcome, tenders are usually rather strict in their requirements and procedures. This is a lot of tax money, so it gets a lot of attention. Your local community probably puts out tenders as well, just on a much smaller scale.
  • and we won't learn about it until James Bamford writes another book. . .

  • Petaflops (Score:1, Redundant)

    by eulernet ( 1132389 )

    10 petaflops is the minimum to run Windows 8 smoothly.

    They ask for 30 petaflops, probably to run at least 3 other processes.

  • Fastest on earth, "yet filled with energy-efficient multi-core architecture." :rolleyes

    These are at cross-purposes. Do they want fastest on Earth, or pretty fast, but efficient, which is already driven by market mechanisms?

    "Hey! Multi-core and multi-cultural both have 'multi' in it! Can we have multi-cultural architecture, too? How much extra is that?"

    • Fastest on earth, "yet filled with energy-efficient multi-core architecture." :rolleyes

      These are at cross-purposes. Do they want fastest on Earth, or pretty fast, but efficient, which is already driven by market mechanisms?

      No, it's not. Today's supercomputers are thousands upon thousands of times faster than those of decades past, but are NOT taking up thousands of times more space or electricity.

      Hopper is 16,000 nodes and two Pflops. Cray can't just make 10 of them, put 'em together, and consider the order filled. Efficiency is a LOT of the challenge in making the world's fastest computers.
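      A back-of-the-envelope version of that point, using the 2 Pflops figure above and an assumed ~3 MW draw for Hopper (treat both as illustrative, not official numbers):

        hopper_pflops = 2.0      # figure from the comment above
        hopper_mw = 3.0          # assumed draw, for illustration only
        target_pflops = 30.0
        naive_mw = hopper_mw * target_pflops / hopper_pflops
        print(f"Naive scaling: ~{naive_mw:.0f} MW")   # ~45 MW, a small power plant
        # Hitting 30 Pflops in a realistic single-digit-MW envelope means the
        # machine's flops per watt has to improve several-fold, not just its size.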

  • by Required Snark ( 1702878 ) on Friday January 11, 2013 @09:15PM (#42564501)
    They have a comparatively large number of existing codes (around 600) that run with no GPU acceleration. They want to keep this code base and not have to modify it very much, so they are not going to use any coprocessors that are not integrated with the CPU. According to the article:

    They could have built such a machine, but it would have required either discrete accelerators (a programming model they would rather skip) or something more proprietary like the Blue Gene platform (an architecture they have avoided). The hope is that by 2015, they will be able to get something on the exascale roadmap, but with a programming model that is reasonably friendly to CPU-based codes.

    That most likely means integrated heterogeneous processors like NVIDIA's "Project Denver" ARM-GPUs, AMD's x86-GPU APUs, or whatever Intel brings to the table with integrated Xeon Phi coprocessing. Although more complex than a pure CPU solution from a software point of view, the integrated designs at least avoid the messy PCIe communication and the completely separate memory space of the accelerator device.

    Note that one of the possible contenders is ARM with an integrated GPU. Slashdot readers are generally hostile to the idea of ARM for servers or HPC, but it is going to happen. Making the Top 100 list in the future will require more and more attention to FLOPS per watt, and ARM has a basic advantage over legacy-oriented x86 architectures. Being dismissive of ARM is just as much of a fanboy attitude as being rabidly for any other architecture.
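    To put a rough number on the quoted point about PCIe and separate memory spaces, here is a small estimate with assumed bandwidth and throughput figures (not vendor specs): for a modest, memory-bound kernel the host-to-accelerator copy can dwarf the compute itself.

      bytes_to_move = 1e9          # 1 GB of input data
      pcie_gb_per_s = 6.0          # assumed effective PCIe bandwidth
      accel_tflops = 1.0           # assumed sustained accelerator rate
      flops_per_byte = 2.0         # arithmetic intensity of a memory-bound kernel

      transfer_s = bytes_to_move / (pcie_gb_per_s * 1e9)
      compute_s = bytes_to_move * flops_per_byte / (accel_tflops * 1e12)
      print(f"transfer {transfer_s * 1e3:.0f} ms vs compute {compute_s * 1e3:.0f} ms")
      # With an integrated design the explicit copy (and the separate address
      # space behind it) simply goes away.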

    • Don't forget that x86 comprises five of the top 10, with the rest being PowerPC-based (BG/Q and POWER7). Other contenders have much more of a chance in this market than in, say, the workstation market.

  • Am I wrong in thinking that this is not dramatically faster than Titan (27 PF peak)?
    http://www.top500.org/system/177975 [top500.org]

    The specifications in the doc are interesting nonetheless!

  • A question I wonder about---I guess "10-30 petaflop" of a standard multi-core architecture would require > 1M computational cores. Suppose you're running a code on 500,000 cores.

    What is the mean time to failure of a core or some other piece of hardware required for that core to work? With 500k cores, I'd expect one to die every few hours or even minutes. Either that, or a random bit-flip from a cosmic ray.

    Given that, how do you finish a computation that takes more than an hour or so? And how do you guard against a stray bit flip silently corrupting the answer?
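    A quick estimate of how often something breaks at that scale, assuming independent failures; the cores-per-node and per-node MTBF below are assumptions picked only to show the shape of the problem:

      cores = 500_000
      cores_per_node = 16            # assumption
      node_mtbf_hours = 50_000.0     # assumption (~5.7 years per node)
      nodes = cores / cores_per_node
      system_mtbf_hours = node_mtbf_hours / nodes
      print(f"{nodes:.0f} nodes -> one failure roughly every "
            f"{system_mtbf_hours * 60:.0f} minutes")
      # Which is why checkpoint/restart (as sketched earlier in the thread)
      # is essentially mandatory at this scale.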

  • Liquid Fluoride Thorium Reactors (LFTRs) could get a leg up for earlier construction approvals, i.e., if DoE puts some supercomputers to the task of modeling them mathematically, e.g., to help bring them on-line sooner.

    Or... we can let India and/or China do all that... and buy the completed technology from them, after they've done that.

    • Another good thing is that by having these more "friendly" reactors, you can power more supercomputers! It's a win-win situation.

  • It's pretty cynical that western governments want to tax harmless carbon dioxide (e.g., in Australia) and limit our energy consumption by constantly jacking up the rates, yet build extremely power-hungry installations to crunch all the data needed to surveil citizens and build profiles of them.

  • If it's less than that of a human hair, they will need more processing power.
