Petaflops? DARPA Seeks Quintillion-Flop Computers

coondoggie writes "Not known for taking the demure route, researchers at DARPA this week announced a program aimed at building computers that exceed current peta-scale computers to achieve the mind-altering speed of one quintillion (1,000,000,000,000,000,000) calculations per second. Dubbed extreme scale computing, such machines are needed, DARPA says, to 'meet the relentlessly increasing demands for greater performance, higher energy efficiency, ease of programmability, system dependability, and security.'"
  • Make sense, dammit (Score:5, Informative)

    by Lord Grey ( 463613 ) * on Wednesday June 23, 2010 @01:44PM (#32667618)

    From TFA, written by Michael Cooney and propagated by the summary:

    Dubbed extreme scale computing, such machines are needed DARPA says to "meet the relentlessly increasing demands for greater performance, higher energy efficiency, ease of programmability, system dependability and security."

    It looks like these "extreme scale computing" systems are needed before things like "ease of programmability" can be achieved. I call bullshit.

    The actual notice from DARPA is named Omnipresent High Performance Computing (OHPC) [fbo.gov]. From the first paragraph of that page:

    ... To meet the relentlessly increasing demands for greater performance, higher energy efficiency, ease of programmability, system dependability, and security, revolutionary new research, development, and design will be essential to enable new generations of advanced DoD computing system capabilities and new classes of computer applications. Current evolutionary approaches to progress in computer designs are inadequate. ...

    That makes a lot more sense.

    Now, will someone please go and smack Michael Cooney up the back of the head for writing like that?

  • Exaflops (Score:5, Informative)

    by maxwell demon ( 590494 ) on Wednesday June 23, 2010 @01:47PM (#32667654) Journal

    Quintillion is not an SI prefix. The next step after Peta is Exa.

  • Re:Exaflops (Score:3, Informative)

    by daveime ( 1253762 ) on Wednesday June 23, 2010 @01:53PM (#32667724)

    Nope. Quintillion is a quantity, whereas Petaflops, Exaflops, etc. are rates of calculations per second. Please don't mix your units in your haste to appear smart.

  • Re:Exaflops (Score:3, Informative)

    by godrik ( 1287354 ) on Wednesday June 23, 2010 @01:58PM (#32667784)

    For the record, there were a bunch of talks at IPDPS 2010 ( http://ipdps.org/ipdps2010/2010_advance_program.html [ipdps.org] ) about building exaflop machines, including a keynote.

  • Old news (Score:3, Informative)

    by jdb2 ( 800046 ) * on Wednesday June 23, 2010 @02:00PM (#32667814) Journal
    The DOE, as well as Oak Ridge, Los Alamos, and Sandia National Laboratories, already has programs in place to develop an "exascale" system by 2018 (the date at which Moore's law predicts such systems will become possible).
    The top companies competing for the government funds are, not surprisingly, IBM and Cray.

    See these two older /. stories here [slashdot.org] and here [slashdot.org].

    jdb2
  • by Animats ( 122034 ) on Wednesday June 23, 2010 @02:02PM (#32667844) Homepage

    Right. If you actually read the announcement, it's not that they want yet more boondoggle supercomputing centers. What they want is more crunch power in small boxes. Read the actual announcement [fbo.gov] (PDF). See page 17. What they want is 1 petaflop (peak) in one rack, including cooling gear. The rack gets to draw up to 57 kilowatts (!).

  • by TheKidWho ( 705796 ) on Wednesday June 23, 2010 @02:15PM (#32668004)

    You've been simulated to die in our ongoing war with Eastasia. Please report to the gas chambers promptly to prevent the simulation from experiencing temporal improbabilities.

  • Re:What's the need? (Score:5, Informative)

    by Yoozer ( 1055188 ) on Wednesday June 23, 2010 @02:21PM (#32668072) Homepage

    What do you need a computer that fast for?

    Simulating exploding hydrogen bombs, weather simulation, brute-force cracking, etc. Basically any distributed project you can think of (see BOINC [berkeley.edu]) can also be done with a supercomputer.

    Well, what is it about weather simulation that requires so much work?

    It's a scientific model with a boatload of variables and dependencies. Ask these guys [earthsystemmodeling.org].

  • Re:What's the need? (Score:5, Informative)

    by Hijacked Public ( 999535 ) on Wednesday June 23, 2010 @02:22PM (#32668086)

    Well, what is it about weather simulation that requires so much work?

    The enormous number of variables, mostly. Weather, nuclear bombs, ocean currents, cryptography, even things as seemingly simple as modeling air flow around an object: if you are looking to develop a model of a process that involves a few thousand variables, and you need to know the interaction of those variables several levels deep, you need to make a lot of calculations.

    It hasn't been all that long since computers gained the computational power to dominate humans in games as 'simple' as chess.

  • Re:I Love DARPA (Score:5, Informative)

    by MozeeToby ( 1163751 ) on Wednesday June 23, 2010 @02:24PM (#32668116)

    Most people don't realize it, but DARPA can best be described as a few dozen scientists and engineers with large checkbooks and a big travel budget. They go around the country and around the world looking for technologies that are beyond what we can do today but might be possible with the right funding in the right places. Most importantly, they're aware that a large percentage of the projects they fund will end in failure (or rather, will not meet all their goals), but the benefits from the ones that succeed outweigh the costs of the ones that don't.

  • Re:What's the need? (Score:3, Informative)

    by John Hasler ( 414242 ) on Wednesday June 23, 2010 @02:29PM (#32668172) Homepage

    I mean, specifically, what can you do on something that fast that you couldn't do on one 1,000 (or 1,000,000) times slower? What kind of tasks need that much processing power?

    Detailed, 3-D simulation of things like nuclear explosions and scramjet engines.

    For example, you normally hear about them being used for things like weather simulation. Well, what is it about weather simulation that requires so much work?

    Accuracy. Weather Prediction [wikipedia.org]

  • Re:What's the need? (Score:3, Informative)

    by maxwell demon ( 590494 ) on Wednesday June 23, 2010 @02:30PM (#32668180) Journal

    Imagine a simulation in 3D space. You model the space with a cube of 100x100x100 grid points. That's one million data points. Now say you have to do some calculation on them which scales quadratically in the number of data points, and say you manage to finish the calculation in one hour on some computer.

    OK, but now you notice that 100 data points in each direction is too inaccurate. You need 1000 points per direction to be reasonably accurate. So now your data set is not one million but one billion data points, and your O(N^2) algorithm makes sure that this factor of 1000 in the number of grid points ends up as a factor of one million in your computing time. So now the calculation would, on the same computer, need one million hours, or about 114 years. You almost certainly don't want to wait 114 years for your results.
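
    A quick sanity check of that arithmetic in Python (the one-hour baseline and the quadratic scaling are the assumptions above):

        # Back-of-the-envelope: what refining the grid does to an O(N^2) algorithm.
        baseline_points = 100 ** 3   # 100x100x100 grid = 1e6 data points
        baseline_hours = 1.0         # assumed runtime at the coarse resolution

        refined_points = 1000 ** 3   # 1000x1000x1000 grid = 1e9 data points
        growth = refined_points / baseline_points       # 1000x more points

        # Quadratic cost: runtime grows with the square of the point count.
        refined_hours = baseline_hours * growth ** 2    # 1e6 hours
        print(f"{refined_hours:,.0f} hours = {refined_hours / (24 * 365):.0f} years")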

  • Re:What's the need? (Score:5, Informative)

    by Chris Burke ( 6130 ) on Wednesday June 23, 2010 @02:36PM (#32668234) Homepage

    There are broad classes of algorithms where you can make good use of essentially arbitrary amounts of computing power to get better answers. When doing physical simulations of something like airflow over a jet wing, or the movement of a weather system, or the explosion of a hydrogen bomb, you break everything up into tiny units treated as monolithic elements whose behavior can be modeled relatively simply, calculate what happens to them over some tiny timescale, call the result the new state of the universe, and repeat. This is called "finite element analysis".

    Because you're calculating everything in discrete steps, though, errors creep in and accumulate. The more processing power you have, the more elements you can use and the smaller the timesteps you can take, getting a more accurate answer in the same amount of wall-clock time. The reason it's unacceptable to do the same calculation but have it go 1,000 or 1,000,000 times slower is that these simulations might already take hours, days, weeks, or even longer. Even the longest DoD contract needs an answer on the behavior of a proposed jet fighter wing in less than 1,000,000 days. :)

    Scientific computing is an area where there will always be a use for more processing power.

    There are other areas where it can be important, when you have real time constraints and can't just reduce your accuracy to make it work. I recall a story from advanced algorithms class where a bank was handling so many transactions per day that the time it took to process them all was more than 24 hours. Obviously this was a problem. The solution in that case was to modify the algorithm, but that's not always possible, and you need more computing. This is a little different in that you need the extra power to allow growth, as opposed to science where you could hand them an exaflop computer today and they'd be able to use it to its fullest.
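
    To make the stepping idea concrete, a minimal sketch; it uses explicit finite differences on a 1-D heat equation rather than a full finite-element code, and the grid size and step count are arbitrary:

        # Minimal illustration of step-by-step simulation: explicit finite
        # differences for 1-D heat diffusion. More cells and smaller timesteps
        # mean less discretization error, at the price of more arithmetic.
        n_cells, n_steps = 100, 10_000        # arbitrary toy resolution
        dx, dt, alpha = 1.0 / n_cells, 1e-6, 1.0
        u = [0.0] * n_cells
        u[n_cells // 2] = 1.0                 # initial heat spike in the middle

        for _ in range(n_steps):
            nxt = u[:]
            for i in range(1, n_cells - 1):
                # update each element from its neighbors over one tiny timestep
                nxt[i] = u[i] + alpha * dt / dx**2 * (u[i-1] - 2*u[i] + u[i+1])
            u = nxt                           # the "new state of the universe"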

  • Re:What's the need? (Score:4, Informative)

    by chichilalescu ( 1647065 ) on Wednesday June 23, 2010 @02:42PM (#32668336) Homepage Journal

    In fluid dynamics simulations (which include weather stuff), there are huge computational problems. I work in the field, so bear with me a little.

    The best model we have so far for fluids is a set of balance equations (look up the Navier-Stokes equations). This means that in order to describe the evolution of a fluid in a given domain, we need to split the domain into small cells and then numerically integrate the balance equations. To put it simply, you have to numerically integrate a system of ordinary differential equations with many, many variables (degrees of freedom).
    For a simple but "correct" Navier-Stokes simulation, the number of degrees of freedom is proportional to Re^(9/4), where Re is the Reynolds number (the memory requirements are proportional to the number of degrees of freedom). This Reynolds number, for typical systems (like the atmosphere), is of the order of at least 10^4-10^6 (you can look up typical values on Wikipedia if you're interested). Furthermore, the number of timesteps needed for a "correct" simulation is proportional to Re^(3/4).

    But these are not the most complicated simulations that are to be run on such machines. Research on problems like controlled nuclear fusion needs to address much more demanding cases.

    Numerical simulations of physical systems are inherently hard, because their cost scales polynomially with the system's complexity. However, they are generally cheaper than actual experiments, and you get access to more data.
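
    Plugging those scalings into Python gives a feel for the cost; the Reynolds number and the per-point work figure here are illustrative assumptions, not a real simulation budget:

        # Cost of a "correct" Navier-Stokes run from the scalings above:
        # degrees of freedom ~ Re^(9/4), timesteps ~ Re^(3/4).
        Re = 1e6                         # assumed atmospheric-scale Reynolds number
        dof = Re ** (9 / 4)              # ~3e13 degrees of freedom
        steps = Re ** (3 / 4)            # ~3e4 timesteps
        flops_per_dof_step = 100         # assumed arithmetic per point per step
        total = dof * steps * flops_per_dof_step
        print(f"~{total:.0e} flops")     # ~1e20: about 100 s at a sustained exaflop/s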

  • Re:Exaflops (Score:3, Informative)

    by DragonWriter ( 970822 ) on Wednesday June 23, 2010 @03:02PM (#32668602)

    FLOPS is not an SI unit.

    True, that: FLOPS combines the SI unit (1/s = Hz) with the identity of the thing being measured (floating point operations). It's as if you used KOM as an abbreviation for kilograms of milk.

  • FLOPS, not FLOP (Score:3, Informative)

    by 91degrees ( 207121 ) on Wednesday June 23, 2010 @03:10PM (#32668730) Journal
    You should realise that the "S" stands for seconds. Okay - it doesn't matter that much, but this is meant to be a technical site. The editors should really get this stuff right.
  • by Anonymous Coward on Wednesday June 23, 2010 @03:15PM (#32668832)

    Quick napkin math:

    Rack has 42U

    SuperMicro servers (TwinX) have 2 "blades" per 1U rail slot.

    Each blade has two 6-core Intel Nehalem CPUs, generating approximately 225 GFLOPS per blade, or 450 per U.

    That's 18.9 TFLOPS per rack, consuming a peak of over 78,000 BTU/hr, 600 amps, and 72 kW (breaking the budget).

    Yep, there's a long way to go. My guess is some sort of customizable, massively parallel GPU system. It'll be a bitch to develop for, but that's probably what's required to reach these numbers.
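
    The same napkin math in Python, all figures taken from above (the target is DARPA's 1-petaflop-per-rack goal mentioned earlier in the thread):

        # The napkin math above, spelled out (all figures from the comment).
        u_per_rack = 42
        gflops_per_u = 450                       # two ~225 GFLOPS blades per 1U
        rack_tflops = u_per_rack * gflops_per_u / 1000.0
        print(rack_tflops)                       # 18.9 TFLOPS per rack

        target_tflops = 1000.0                   # DARPA's 1 PFLOPS-per-rack goal
        print(f"{target_tflops / rack_tflops:.0f}x short")   # ~53x to go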

  • by $RANDOMLUSER ( 804576 ) on Wednesday June 23, 2010 @03:22PM (#32669004)
    If I'm North Korea or Al-Qaeda or "Red" China, or any one of a million other defined-as "bad guys", I'm not using RSA or some such; I'm using one-time pads [wikipedia.org] or steganography [wikipedia.org] on any one of a billion different chat boards, probably one where I can post JPEGs: places where the message's location and the encryption itself are all the sender signature needed. It's the bankers and the private citizens (and possibly some foreign diplomatic services) who are using RSA and public-key type ciphers that (might maybe potentially could be) cracked by lots and lots of computing power.

    Meanwhile, this is perfect paranoia-food for the "ECHELON is reading my e-mails and SMS!" types. Thing is, they're probably right.
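
    For reference, a toy sketch of why raw compute doesn't help against a one-time pad (a real pad must be truly random, as long as the message, and never reused):

        import secrets

        # Toy one-time pad: XOR with a truly random, single-use key as long as
        # the message. Without the key, every same-length plaintext is equally
        # likely, so no amount of FLOPS narrows it down.
        message = b"attack at dawn"
        key = secrets.token_bytes(len(message))  # must never be reused
        ciphertext = bytes(m ^ k for m, k in zip(message, key))
        assert bytes(c ^ k for c, k in zip(ciphertext, key)) == message
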
  • Re:Exaflops (Score:3, Informative)

    by Pharmboy ( 216950 ) on Wednesday June 23, 2010 @04:19PM (#32670076) Journal

    A metric assload is roughly equivalent to 2.2 Imperial assloads. Hope that helps.
