## Petaflops? DARPA Seeks Quintillion-Flop Computers

coondoggie writes

*"Not known for taking the demure route, researchers at DARPA this week announced a program aimed at building computers that exceed current peta-scale computers to achieve the mind-altering speed of one quintillion (1,000,000,000,000,000,000) calculations per second. Dubbed extreme scale computing, such machines are needed, DARPA says, to 'meet the relentlessly increasing demands for greater performance, higher energy efficiency, ease of programmability, system dependability, and security.'"*
## Make sense, dammit (Score:5, Informative)

From TFA, written by Michael Cooney and propagated by the summary:

It looks like these "extreme scale computing" systems are needed before things like "ease of programmability" can be achieved. I call bullshit.

The actual notice from DARPA is named Omnipresent High Performance Computing (OHPC) [fbo.gov]. From the first paragraph of that page:

That makes a lot more sense.

Now, will someone please go and smack Michael Cooney up the back of the head for writing like that?

## Exaflops (Score:5, Informative)

Quintillion is not an SI prefix. The next step after Peta is Exa.

## Re:Exaflops (Score:3, Informative)

Nope, Quintillion is a quantity, whereas Petaflops, Exaflops etc are rates of calculations per second. Please don't mix your units in your haste to appear smart.
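For concreteness, a quick sketch (mine, not the poster's) of why the two names pick out the same number:

```python
# "Quintillion" names the quantity 10^18; "exa-" is the SI prefix for
# the same multiplier, so one quintillion FLOP per second = 1 exaFLOPS.
quintillion = 10**18   # short-scale quintillion, a pure quantity
exa = 10**18           # SI prefix multiplier for exa-

flops_target = quintillion           # calculations per second
print(flops_target == 1 * exa)       # True: the rate is 1 exaFLOPS
```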

## Re:Exaflops (Score:3, Informative)

For the record, there were a number of talks at IPDPS 2010 ( http://ipdps.org/ipdps2010/2010_advance_program.html [ipdps.org] ) about building exaflop machines, including a keynote.

## Old news (Score:3, Informative)

The top companies competing for the government funds are, not surprisingly, IBM and Cray.

See these two older stories.

jdb2

## Re:Make sense, dammit (Score:5, Informative)

Right. If you actually read the announcement, it's not that they want yet more boondoggle supercomputing centers. What they want is more crunch power in small boxes. Read the actual announcement [fbo.gov] (PDF). See page 17. What they want is 1 petaflop (peak) in one rack, including cooling gear. The rack gets to draw up to 57 kilowatts (!).

## Re:Computing for the next generation (Score:3, Informative)

You've been simulated to die in our ongoing war with Eastasia; please report to the gassing chambers promptly to prevent the simulation from experiencing temporal improbabilities.

## Re:What's the need? (Score:5, Informative)

Simulating exploding hydrogen bombs, weather simulation, brute-force cracking, etc. Basically any distributed project you can think of (see BOINC [berkeley.edu]) can also be done with a supercomputer.

It's a scientific model with a boatload of variables and dependencies. Ask these guys [earthsystemmodeling.org].

## Re:What's the need? (Score:5, Informative)

Well, what is it about weather simulation that requires so much work?

The enormous number of variables, mostly. Weather, nuclear bombs, ocean currents, cryptography, even things as seemingly simple as modeling air flow around an object. If you are looking to develop a model of a process that involves a few thousand variables and you need to know the interaction of those variables several levels deep....you need to make a lot of calculations.

It hasn't been all that long that computers have had the computational power to dominate humans in games as 'simple' as chess.

## Re:I Love DARPA (Score:5, Informative)

Most people don't realize it, but DARPA can best be described as a few dozen scientists and engineers with large checkbooks and a big travel budget. They go around the country and around the world looking for technologies that are beyond what we can do today but might be possible with the right funding in the right places. Most importantly, they're aware that a large percentage of the projects that they fund will end in failure (or rather, will not meet all their goals), but the benefits of the ones that succeed outweigh the costs.

## Re:What's the need? (Score:3, Informative)

Detailed, 3-D simulation of things like nuclear explosions and scramjet engines.

Accuracy. Weather Prediction [wikipedia.org]

## Re:What's the need? (Score:3, Informative)

Imagine a simulation in 3D space. You model the space with a cube of 100x100x100 grid points. That's one million data points. Now say you have to do some calculation on them which scales quadratically in the number of data points, and suppose you manage to finish the calculation in one hour on some computer.

OK, but now you notice that those 100 data points in each direction are too inaccurate. You need 1000 points per direction to be reasonably accurate. So now your data set is not one million, but one billion data points. And your O(N^2) algorithm ensures that this factor of 1000 in the number of grid points ends up as a factor of one million in your computing time. So now the calculation would, on the same computer, need one million hours, or about 114 years. You almost certainly don't want to wait 114 years for your results.
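The arithmetic in that argument can be checked in a few lines (just a sketch of the numbers above, nothing more):

```python
# Refining a 100^3 grid to 1000^3 under an O(N^2) algorithm blows a
# 1-hour run up to roughly 114 years.
coarse = 100**3                      # 1,000,000 data points
fine = 1000**3                       # 1,000,000,000 data points

point_factor = fine // coarse        # 1000x more points
time_factor = point_factor**2        # O(N^2): 1,000,000x more work

hours = 1 * time_factor              # scaling up a 1-hour baseline run
years = hours / (24 * 365)
print(point_factor, time_factor, round(years))  # 1000 1000000 114
```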

## Re:What's the need? (Score:5, Informative)

There are broad classes of algorithms where you can make good use of essentially arbitrary amounts of computing power to get better answers. When doing physical simulations of something like airflow over a jet wing, or the movement of a weather system, or the explosion of a hydrogen bomb, you'll break everything up into tiny units that you treat as monolithic elements whose behavior can be treated relatively simply, and calculate what happens to them over some tiny timescale, call the result the new state of the universe, and repeat. This is called "finite element analysis".

Because you're calculating everything in discrete steps, though, errors creep in and accumulate. The more processing power you have, the more elements you can use and the smaller the time steps you can calculate over, which gets you a more accurate answer in the same amount of wall-clock time. The reason it's unacceptable to do the same calculation but have it go 1,000 or 1,000,000 times slower is that these simulations may already take hours, days, weeks, or even longer. Even the longest DoD contract needs an answer about the behavior of a proposed jet fighter wing in less than 1,000,000 days. :)
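As a toy illustration (my own example, far simpler than real finite element analysis), forward-Euler integration of dy/dt = y shows how smaller discrete steps, i.e. more compute, shrink the accumulated error:

```python
# Integrate dy/dt = y from t=0 to t=1 with forward Euler. The exact
# answer is e ~ 2.71828; more (smaller) steps means less accumulated
# discretization error, at the cost of proportionally more work.
import math

def euler(steps):
    y, dt = 1.0, 1.0 / steps
    for _ in range(steps):
        y += y * dt          # advance the state by one tiny time step
    return y

for steps in (10, 100, 1000):
    print(steps, abs(euler(steps) - math.e))  # error falls ~10x per row
```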

Scientific computing is an area where there will always be a use for more processing power.

There are other areas where it can be important, when you have real time constraints and can't just reduce your accuracy to make it work. I recall a story from advanced algorithms class where a bank was handling so many transactions per day that the time it took to process them all was more than 24 hours. Obviously this was a problem. The solution in that case was to modify the algorithm, but that's not always possible, and you need more computing. This is a little different in that you need the extra power to allow growth, as opposed to science where you could hand them an exaflop computer today and they'd be able to use it to its fullest.

## Re:What's the need? (Score:4, Informative)

In fluid dynamics simulations (which include weather stuff), there are huge computational problems. I work in the field, so bear with me a little.

The best model we have so far for fluids is to use balance equations (look up the Navier Stokes equations). This means that in order to describe the evolution of a fluid in a given domain, we need to split the domain into small cells, and then integrate numerically the balance equations. To put it simply, you have to integrate numerically a system of ordinary differential equations with many many variables (degrees of freedom).

For a simple but "correct" Navier Stokes simulation, the number of degrees of freedom is proportional to Re^(9/4), where Re is the Reynolds number (the memory requirements are proportional to the number of degrees of freedom). This Reynolds number, for typical systems (like the atmosphere) is of the order of at least 10^4-10^6 (you can look up typical values on wikipedia if you're interested). Furthermore, the number of timesteps needed for a "correct" simulation is proportional to Re^(3/4).

But these are not the most complicated simulations that are to be run on such machines. Research into issues like controlled nuclear fusion needs to address much more demanding problems.

Numerical simulations of physical systems are inherently hard, because their cost scales polynomially with the complexity of the system. However, they are generally cheaper than actual experiments, and they give you access to more data.

## Re:Exaflops (Score:3, Informative)

True, that: FLOPS communicates a combination of the SI unit (1/s = Hz) with the identity of the thing being measured (floating point operations). It's like if you had KOM as an abbreviation for kilograms of milk.

## FLOPS, not FLOP (Score:3, Informative)

## Re:Make sense, dammit (Score:2, Informative)

Quick napkin math:

Rack has 42U

SuperMicro servers (TwinX) have 2 "blades" per 1U rail slot.

Each blade has two 6-core Intel Nehalem CPUs, generating approximately 225 GFLOPS per blade, or 450 GFLOPS per U.

That gives 18.9 TFLOPS per rack, consuming a peak of over 78,000 BTU/hr, 600 amps, and 72 kW (breaking the 57 kW budget).

Yep, there's a long way to go. Guessing some sort of customizable GPU massively parallel system. It'll be a bitch to develop for, but probably what's required to reach these numbers.
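Redoing that napkin math explicitly (same figures as quoted above; treat them as rough estimates):

```python
# 42U rack of TwinX-style 1U systems, 2 blades per U, ~225 GFLOPS per
# blade, versus DARPA's 1 PFLOP-per-rack target.
rack_units = 42
blades_per_u = 2
gflops_per_blade = 225

rack_gflops = rack_units * blades_per_u * gflops_per_blade
print(rack_gflops / 1000, "TFLOPS per rack")           # 18.9 TFLOPS
print(1_000_000 / rack_gflops, "x short of 1 PFLOP")   # ~52.9x shortfall
```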

## Re:how sweet and innocent of them! (Score:3, Informative)

Meanwhile, this is perfect paranoia-food for the "ECHELON is reading my e-mails and SMS!" types. Thing is, they're probably right.

## Re:Exaflops (Score:3, Informative)

A metric assload is roughly equivalent to 2.2 Imperial assloads. Hope that helps.