DOE Asks For 30-Petaflop Supercomputer
Nerval's Lobster writes "The U.S. Department of Science has presented a difficult challenge to vendors: deliver a supercomputer with roughly 10 to 30 petaflops of performance, yet filled with energy-efficient multi-core architecture. The draft copy (.DOC) of the DOE's requirements provides for two systems: 'Trinity,' which will offer computing resources to the Los Alamos National Laboratory (LANL), Sandia National Laboratories (SNL), and Lawrence Livermore National Laboratory (LLNL) during the 2016-2020 timeframe; and NERSC-8, the replacement for the current NERSC-6 'Hopper' supercomputer, first deployed in 2010 for the DOE facilities. Hopper debuted at number five on the Top500 list of supercomputers and can crunch numbers at the petaflop level. The DOE wants a machine with performance between 10 and 30 times Hopper's capabilities, with the ability to support one compute job that could take up over half of the available compute resources at any one time."
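For a rough sense of scale, here is a back-of-the-envelope sketch of what "10 to 30 times Hopper" implies, assuming Hopper is roughly one petaflop/s and an illustrative sustained rate of 10 gigaflop/s per core (both figures are assumptions for illustration, not taken from the draft):

# Back-of-envelope sizing for the DOE target (all figures approximate/assumed).
HOPPER_PFLOPS = 1.0                 # Hopper is roughly at the petaflop level
ASSUMED_GFLOPS_PER_CORE = 10.0      # hypothetical sustained rate per core

for multiplier in (10, 30):
    target_pflops = HOPPER_PFLOPS * multiplier
    cores = target_pflops * 1e6 / ASSUMED_GFLOPS_PER_CORE   # 1 PF = 1e6 GF
    print(f"{multiplier}x Hopper: {target_pflops:.0f} PF/s, ~{cores:,.0f} cores")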
mmh... (Score:1)
... I've heard rumors that Microsoft will participate in the tender and propose their "HPC" solution based on Windows 8 (well, it's for all computer platforms, they say)
Re: (Score:2)
Re: (Score:2)
Hum, why BSD? He mentioned Linux not because it is the best solution ever (which it might or might not be), but because a lot of petaflop-capable code was written specifically to run on it, and because the big names (IBM, Cray) fully support it. In fact, I don't remember ever using a BSD-based supercomputer. The Top500 only shows one machine, at 0.1 petaflops, running a BSD-based OS. Search for OS here: http://top500.org/statistics/sublist/ [top500.org]
Re: (Score:1)
Forgot to mention I was kidding :-)
Sure, Windows 8 will never run on such big machines. It will never run on my PC either, btw :-)
btw: is the Windows kernel still limited to 64 CPUs? (I'm not talking about clusters and the like, but a "single image" system)
So . . . (Score:1)
Re:So . . . (Score:5, Informative)
These machines are most likely going to be replacements for the ones we already have. NERSC presents the projects that run on its computing infrastructure on its web site [1]. You can see on the first page the projects that are currently running jobs and what they are doing. For instance, this project [2] is about designing artificial photosynthetic cells. If you are interested, just check the projects they are funding.
[1] https://www.nersc.gov/ [nersc.gov]
[2] https://www.nersc.gov/science/energy-science/artificial-photosynthesis-i-design-principles-for-light-harvesting/ [nersc.gov]
Re: artificial photosynthetic cells (Score:4, Insightful)
Are you really claiming that a computer being run by Los Alamos and called Trinity [wikipedia.org] is primarily going to be used for alternative energy?
Re: artificial photosynthetic cells (Score:4, Funny)
"What a deep budget you have," ("The better to educate you with"),
"Goodness, what big networks you have," ("The better to save you taxes by networking with)
"And what big transformers you have!" ("The faster to compute for you with"),
"What a big results you have," ("The better to nuke you with!")
Re: (Score:3)
I am an early user on the Blue Waters petaflop machine (http://www.ncsa.illinois.edu/BlueWaters/). Mean time to failure for such a huge machine becomes a real issue when you have about 700,000 cores and who knows how many spinning hard drives, plus all that network infrastructure. However, my research collaborators and I have managed to get jobs through that take on the order of 12 hours of wallclock time without a hardware fault, which is amazing IMO. I do wonder whether we can simply continue to expand the scale of these machines.
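For a sense of why mean time to failure dominates at this scale, a minimal sketch (the per-node MTBF below is an assumption for illustration, not a Blue Waters figure):

# System-level MTBF shrinks roughly as 1/N for N independent components.
node_mtbf_hours = 5 * 365 * 24      # assume each node fails about once in five years
for nodes in (1_000, 10_000, 25_000, 50_000):
    print(f"{nodes:>6} nodes -> system MTBF ~ {node_mtbf_hours / nodes:.1f} hours")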
Re:So . . . (Score:5, Informative)
Back when I worked for the Supercomputing group at Los Alamos, the supercomputers were categorized into 'capacity' machines (the workhorses where they did most of the work, which typically run at near full utilization) and 'capability' machines (the really big / cutting-edge / highly unstable machines that exist to push the edge of what is possible in software and hardware; one example of such an application would be high energy physics simulation). It sounds like these machines fall into the latter category.
Re: (Score:2)
Yes, I know the NNSA and others use this type of hardware to simulate physical environments and nuclear events, but I just can't help but think there's a pretty good possibility our government is racing toward advanced AI systems. These computer folks are some of the best in the world, and they know as well as anyone what an advancement in weapons tech an AI would be.
Re: (Score:2)
There is a lot of different research that benefits from these kinds of machines. Mind you, the machine will hardly be running a single program at 30 pflops scale; instead it will run dozens of smaller jobs at the same time, and economy comes with scale. It's simpler to scale your job from 10,000 processors to 1 million on the same machine than to run the smaller job at one site and then port it to the big one. Besides, give people in the physics, math and biology departments 30 pflops and they will ask for 50 :)
Re: (Score:2)
Re: (Score:1)
C&C for SkyNet!
They just discovered (Score:5, Funny)
Department of Science? (Score:4, Informative)
Oh, if only science were elevated to Department status, with a cabinet-level secretary!
I think you mean Department of Energy [energy.gov], Office of Science [energy.gov].
Re: (Score:2)
Because clearly energy, which is a part of science, is more important than all of science :)
Re: (Score:1)
Congressmen have already decided on creating the Department of Science. They're just looking for intelligent designers to do the job...
Re: (Score:1)
So how many GPUs for a Peta Flop?
Not to be overly pedantic, but PFLOPS --> 10^15 FLOP/sec, so saying "Peta Flop" on its own (without the "per second") doesn't make sense as a measure of speed.
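Taking the parent's question at face value, a rough conversion, assuming a GPU of that era sustains on the order of one teraflop/s in double precision (an assumed round number, not a quoted spec):

# How many GPUs for one petaflop/s, given an assumed per-GPU sustained rate.
PETAFLOP_PER_S = 1e15               # FLOP/s
assumed_gpu_flop_per_s = 1e12       # ~1 TFLOP/s double precision, hypothetical
print(f"~{PETAFLOP_PER_S / assumed_gpu_flop_per_s:.0f} GPUs per sustained petaflop/s")   # ~1000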
Re: (Score:1)
How about a (Score:2)
Large cluster of Raspberry Pis
Re:How about a (Score:5, Informative)
Re: (Score:2)
but does it run linux?
Re: (Score:2)
Mostly because of the network. Although the CPUs (or the whole system, in the case of the Pi) are cheap, the inter-communication is SLOW, and this gets worse with scale. So a network that starts out bad (as the Pi's does) gets much worse at larger scale.
It is an excellent test bed for teaching parallel computing, though, exactly because it scales so badly: the bad effects are exaggerated.
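A toy strong-scaling model shows why a slow interconnect kills scaling; all numbers below are made up purely for illustration:

# Fixed communication cost per step does not shrink as nodes are added,
# so parallel efficiency collapses on a slow network.
serial_compute_s = 1000.0           # total work on one node (assumed)
comm_cost_s = 2.0                   # fixed network cost per step (assumed, large for a Pi)
for nodes in (1, 10, 100, 1000):
    t = serial_compute_s / nodes + comm_cost_s
    speedup = (serial_compute_s + comm_cost_s) / t
    print(f"{nodes:>4} nodes: speedup {speedup:6.1f}, efficiency {speedup / nodes:5.1%}")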
DOE already has 2 of them... (Score:2)
27 Petaflops at Oak Ridge
20 Petaflops at Lawrence Livermore
http://top500.org/lists/2012/11/ [top500.org]
Make that 3 of them (Score:1)
Isn't New Mexico selling a supercomputer? (Score:2)
Maybe the DOE should bid on that supercomputer being liquidated [slashdot.org] by the US state of New Mexico.
Re: (Score:3)
Yeah -- but that is only spec'd at 172 TFlops, a long way away from 30 PFlops.
Not a problem, buy one today (Score:2)
Re: (Score:2)
this means the NSA already has one (Score:2)
and we won't learn about it until James Bamford writes another book...
Re: (Score:1)
Re: (Score:2)
http://www.datacenterdynamics.com/focus/archive/2012/03/light-shed-nsa’s-massive-supercomputer-project-spying [datacenterdynamics.com]
Petaflops (Score:1, Redundant)
10 petaflops is the minimum to run Windows 8 smoothly.
They ask for 30 petaflops, probably to run at least 3 other processes.
Re: (Score:2)
Puts the core in politically core-rect. (Score:1)
Fastest on earth, "yet filled with energy-efficient multi-core architecture." :rolleyes
These are at cross-purposes. Do they want fastest on Earth, or pretty fast, but efficient, which is already driven by market mechanisms?
"Hey! Multi-core and multi-cultural both have 'multi' in it! Can we have multi-cultural architecture, too? How much extra is that?"
Re: (Score:3)
Fastest on earth, "yet filled with energy-efficient multi-core architecture." :rolleyes
These are at cross-purposes. Do they want fastest on Earth, or pretty fast, but efficient, which is already driven by market mechanisms?
No, it's not. Today's supercomputers are thousands upon thousands of times faster than those of decades past but are NOT taking up thousands of times more space or electricity.
Hopper is 16,000 nodes and two Pflops. Cray can't just make 10 of them, put 'em together, and consider the order filled. Efficiency is a LOT of the challenge in making the world's fastest computers.
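Rough numbers illustrate the point; a minimal sketch assuming the parent's ~2 PF figure for Hopper, a power draw of about 3 MW for a machine of that generation, and a 20 MW facility ceiling (all assumed, not from the draft):

# Why "just build more Hoppers" fails: power scales linearly unless efficiency improves.
hopper_pflops, hopper_mw = 2.0, 3.0          # assumed current-generation figures
target_pflops, budget_mw = 30.0, 20.0        # target and assumed facility power ceiling
print(f"Naive scale-up: {target_pflops / hopper_pflops * hopper_mw:.0f} MW")        # ~45 MW
print(f"Needed: {target_pflops * 1e6 / (budget_mw * 1e6):.1f} GF/W "
      f"vs ~{hopper_pflops * 1e6 / (hopper_mw * 1e6):.2f} GF/W today")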
System Architecture (Score:3)
Note that one of the possible contenders is ARM with an integrated GPU. Slashdot readers are generally hostile to the idea of ARM for servers or HPC, but it is going to happen. Making the Top 100 list in the future will require more and more attention to FLOPS/Watt, and ARM has a basic advantage over legacy-oriented x86 architectures. Being dismissive of ARM is just as much of a fanboy attitude as being rabidly for any other architecture.
Re: (Score:2)
Don't forget that x86 comprises five of the top 10, with the rest being PowerPC-based (BG/Q and POWER7). Other contenders have much more of a chance in this market than in, say, the workstation market.
Titan (Score:2)
Am I wrong in thinking that this is not dramatically faster than Titan (27 PF peak)?
http://www.top500.org/system/177975 [top500.org]
The specifications in the doc are interesting nonetheless!
How do you keep hardware from killing your work? (Score:2)
A question I wonder about---I guess "10-30 petaflop" of a standard multi-core architecture would require > 1M computational cores. Suppose you're running a code on 500,000 cores.
What is the mean time to failure of a core or some other piece of hardware required for that core to work? With 500k cores, I'd expect one to die every few hours or even minutes. Either that, or a random bit-flip from a cosmic ray.
Given that, how do you finish a computation that takes more than an hour or so? And how do you g
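The usual answer is checkpoint/restart, sized with the Young/Daly rule of thumb: checkpoint roughly every sqrt(2 * checkpoint_cost * MTBF). A minimal sketch with assumed numbers:

import math

system_mtbf_hours = 4.0             # assumed full-system mean time between failures
checkpoint_cost_hours = 0.1         # assumed time to write one checkpoint (~6 minutes)
interval = math.sqrt(2 * checkpoint_cost_hours * system_mtbf_hours)
print(f"Checkpoint roughly every {interval:.2f} hours")   # ~0.89 hours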
I hope DoE will -model- LFTRs, to speed approvals (Score:2)
Liquid Fluoride THORIUM Reactors (LFTRs) could get a leg up for -earlier- construction approvals, i.e., if DoE puts some supercomputers to the task of modeling them mathematically, e.g., to help bring them on-line sooner.
Or... we can let India and/or China do all that... and buy the completed technology from them, after they've done that.
Re: (Score:2)
Another good thing is that by having these more "friendly" reactors, you can power more supercomputers! It's a win-win situation
Double standards (Score:1)
It's pretty cynical that western governments want to tax harmless carbon dioxide (e.g. in Australia) and limit our energy consumption by constantly jacking up the rates, yet build extremely power-hungry installations in order to crunch all the data needed to surveil citizens and build profiles of them.
How much of the universe can it simulate? (Score:2)
If it's less than that of a human hair, they will need more processing power.