DOE Asks For 30-Petaflop Supercomputer
Nerval's Lobster writes "The U.S. Department of Science has presented a difficult challenge to vendors: deliver a supercomputer with roughly 10 to 30 petaflops of performance, yet built on an energy-efficient multi-core architecture. The draft copy (.DOC) of the DOE's requirements provides for two systems: 'Trinity,' which will offer computing resources to Los Alamos National Laboratory (LANL), Sandia National Laboratories (SNL), and Lawrence Livermore National Laboratory (LLNL) during the 2016-2020 timeframe; and NERSC-8, the replacement for the current NERSC-6 'Hopper' supercomputer, first deployed for DOE facilities in 2010. Hopper debuted at number five on the Top500 list of supercomputers and can crunch numbers at the petaflop level. The DOE wants a machine with between 10 and 30 times Hopper's performance, able to support a single compute job that could take up over half of the available compute resources at any one time."
Department of Science? (Score:4, Informative)
Oh, if only science were elevated to Department status, with a cabinet-level secretary!
I think you mean Department of Energy [energy.gov], Office of Science [energy.gov].
Re:So . . . (Score:5, Informative)
These machines are most likely going to be replacements for the ones we already have. NERSC presents the projects that run on its computing infrastructure on its web site [1]. On the first page you can see the projects that are currently running jobs and what they are doing. For instance, this project [2] is about designing artificial photosynthetic cells. If you are interested, just check the projects they are funding.
[1] https://www.nersc.gov/ [nersc.gov]
[2] https://www.nersc.gov/science/energy-science/artificial-photosynthesis-i-design-principles-for-light-harvesting/ [nersc.gov]
Re:So . . . (Score:5, Informative)
Back when I worked for the Supercomputing group at Los Alamos, the supercomputers were categorized into 'capacity' machines (the workhorses where most of the work gets done, typically running at near full utilization) and 'capability' machines (the really big, cutting-edge, highly unstable machines that exist to push the edge of what is possible in software and hardware; one example of such an application would be high-energy physics simulation). It sounds like these machines fall into the latter category.