
Petaflops? DARPA Seeks Quintillion-Flop Computers

Posted by CmdrTaco
from the that's-a-lotta-flops dept.
coondoggie writes "Not known for taking the demure route, researchers at DARPA this week announced a program aimed at building computers that exceed current peta-scale computers to achieve the mind-altering speed of one quintillion (1,000,000,000,000,000,000) calculations per second. Dubbed extreme scale computing, such machines are needed, DARPA says, to 'meet the relentlessly increasing demands for greater performance, higher energy efficiency, ease of programmability, system dependability, and security.'"


  • by zero.kalvin (1231372) on Wednesday June 23, 2010 @01:46PM (#32667644)
    Call me a tinfoil-hat wearer, but methinks they want a faster way of cracking encryption...
  • by Anonymous Coward on Wednesday June 23, 2010 @01:57PM (#32667768)

    It enables them to conduct espionage and counter-espionage against adversaries such as North Korea and al-Qaeda.

    ...AND protect constitutional rights! More power to you!

  • I Love DARPA (Score:5, Insightful)

    by sonicmerlin (1505111) on Wednesday June 23, 2010 @01:57PM (#32667782)
    They come up with ideas that only ultra-geeks and science fiction nerds could come up with, and then they get billions in funding for it! It's like paradise. The fact that they're actually successful at advancing human technology is just icing on the cake.
  • by John Hasler (414242) on Wednesday June 23, 2010 @02:10PM (#32667934) Homepage

    But you'll have to fully deploy your longer keys far enough in advance of their exaflop cracker that none of the inadequately protected messages already in their possession are still useful to them.

    I suspect that simulations are more interesting to them, though. Think what they'd save on testing if they could fully simulate hypersonic flight and scramjet engines (not that I don't think they'll use this for cracking).

  • by SirGarlon (845873) on Wednesday June 23, 2010 @02:22PM (#32668084)

    Since when the espionage is a GOOD thing!!!!!

    Since September 11, 2001.

    Or you could go back further, to July 26, 1939 [wikipedia.org]. But the real answer is, espionage has been a good thing ever since there have been enemies.

    I for one am all in favor of having fewer enemies. But for the ones that can't be ignored or reconciled, espionage is a Good Thing.

  • by Chowderbags (847952) on Wednesday June 23, 2010 @02:23PM (#32668102)

    I mean, specifically, what can you do on something that fast that you couldn't do on one that's 1,000 (or 1,000,000) times slower? What kind of tasks need that much processing power? For example, you normally hear about them being used for things like weather simulation. Well, what is it about weather simulation that requires so much work?

    Theoretically there's nothing you can't do on a supercomputer that you couldn't do with an ordinary desktop computer (except possibly for memory constraints), but for that matter you could also do everything by hand. The thing is, when your problem space is very large (i.e. calculating all interactions between X number of objects, where X is some huge number, or solving something like the Traveling Salesman Problem), you are limited in your options for getting results faster. If you're lucky, you can find some speedup of your problem (i.e. moving to a better level of O-complexity [O(2^n) -> O(n^2) would be a huge speedup, but doesn't happen often]), or you can toss more resources at it. Yes, it'll still be slow, but if it takes you a year to do on a supercomputer, that's quite a bit better than spending 1000 years waiting on a regular computer.
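    A toy illustration of that kind of complexity-class jump (nothing DARPA-specific, just a sketch): naive recursive Fibonacci does O(2^n) work, while memoizing the exact same recursion makes it O(n), and the wall-clock gap is dramatic even for tiny n.

```python
import time
from functools import lru_cache

def fib_naive(n):
    """O(2^n): recomputes the same subproblems over and over."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """O(n): each subproblem is solved exactly once."""
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

n = 30
t0 = time.perf_counter(); a = fib_naive(n); t_naive = time.perf_counter() - t0
t0 = time.perf_counter(); b = fib_memo(n);  t_memo  = time.perf_counter() - t0
print(a == b)                      # same answer either way
print(t_naive > t_memo)            # the O(2^n) version is vastly slower
```

    When no such algorithmic shortcut exists, the only remaining lever is hardware, which is exactly the "tossing more resources at it" case.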

  • by koxkoxkox (879667) on Wednesday June 23, 2010 @02:36PM (#32668236)

    Take weather simulation:

    At a given point, you have a bunch of physical equations taking a set of parameters at time t and giving you these same parameters at time t+1. Of course, the smaller the time step, the better the result.

    To have the best possible result, you should consider the whole globe at once (think of phenomena like thermohaline circulation, for example). However, you should also use the finest grid possible, to take into account the heterogeneity of the geography, the local variations due to rivers, etc. It is also important to use a three-dimensional model if you want to capture atmospheric circulation, evaporation, etc.
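    A minimal sketch of that time-stepping idea, using a toy 2-D diffusion equation (a real climate model couples dozens of such fields, such as wind, humidity and pressure, in three dimensions):

```python
import numpy as np

def step(u, alpha=0.1):
    """One explicit finite-difference time step of du/dt = alpha * laplacian(u).

    alpha absorbs dt/dx^2 and must stay below 0.25 for stability, so a
    finer grid (smaller dx) also forces a smaller time step dt.
    """
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
    return u + alpha * lap

u = np.zeros((64, 64))
u[32, 32] = 1.0              # a point "heat" source
for _ in range(100):
    u = step(u)              # parameters at t -> parameters at t+1
print(u.sum())               # with periodic boundaries, the total is conserved
```

    The loop is exactly the "parameters at time t give you the same parameters at time t+1" structure described above; the cost is (grid points) x (time steps) x (fields).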

    I forget the exact numbers, but Wikipedia gives an example of a current global climate model using a grid of 500,000 points (see http://en.wikipedia.org/wiki/Global_climate_model [wikipedia.org] ), which is a pretty coarse resolution, working with tiles of tens of thousands of square kilometers.

    With current computing capabilities, we cannot go much further for a global model. This is already an impressive improvement compared to the first models, which were two-dimensional and used very simplified equations, overlooking a large number of important physical mechanisms.

    At the same time, we have satellite data several orders of magnitude more precise. Data from the ASTER satellite have been processed to provide a complete altitude mapping of the globe with a theoretical resolution of 90 m. The vegetation cover can be obtained at a resolution of 8 m using commercial satellites like FORMOSAT-2. Even soil moisture can be measured at a resolution of around 50 km thanks to the new SMOS satellite.

    These data sets are already used at the local level, for example to model the transfer between the soil and the atmosphere, taking the vegetation into account (SVAT modelling). There is no doubt that a global climate model using a finer grid and these data would significantly improve its predictions.
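    To put some rough numbers on the resolution point (back-of-the-envelope only, using an approximate Earth surface area):

```python
EARTH_SURFACE_KM2 = 510_000_000  # approximate surface area of the Earth

# Surface cell count for a few tile edge lengths, ignoring vertical layers.
for tile_km in (200, 50, 10, 1):
    cells = EARTH_SURFACE_KM2 / tile_km**2
    print(f"{tile_km:>4} km tiles -> ~{cells:.2e} surface cells")
```

    Each halving of the tile size quadruples the cell count, and stability then demands a shorter time step on top of that, so matching a 90 m elevation map or 8 m vegetation data in a global model multiplies the work by many orders of magnitude -- which is where the appetite for exaflops comes from.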

  • by epine (68316) on Thursday June 24, 2010 @10:10AM (#32677318)

    The fact that the NSA is still serving a purpose in spite of 'completely secure' key sizes should suggest a fairly obvious conclusion.

    Sweet. Stupidity by obscurity. Shall we integrate the area under the curve of obviousness * tinfoil_coefficient?

    There is an obvious conclusion, but apparently it's not obvious. It's one of those cases where whichever answer you presently hold seems obvious, until one discovers an even more obvious answer. The parent post has been careful to distance itself from any clue as to which rung on the ladder of obviousness it presently occupies, a strategy which suggests an entry-level rung. Think of the cost. I certainly wouldn't want to be a large enough blip on the threat radar to find myself at the center of an exaflop computation. I value my keratin.

    Feynman in Joking has a chapter on safe cracking. He ultimately concludes that "cold cracking" is largely a myth. Almost every safe cracker starts with an in: tampered mechanism, partially guessed combination, faulty mechanicals.

    The bulk of what your average cyber TLA computes would be simple traffic analysis, which, at that scale, is probably not so simple, and involves correlating across networks (cell, internet, house of poozle). One wonders how many initial demerits one earns by connecting through a known onion router.

    Next you have attacks against keys with weak initial entropy, key leakage, or sloppy key management (betcha that's a growth industry). Any cipher which purports to send random bits can be hacked to leak key bits (secretly) in the apparently random nonce values. It's nearly impossible to prove your cipher isn't doing this without access to the source code all the way down to the CPU microcode, and beyond. Huh, a funny thing happened to our masks on the way to the foundry, but the chips seem to run great. From a TLA perspective, this is a useful advantage, because what you end up with is not a level playing field. What you can crack by brute force, someday soon your adversary can also crack by brute force. It's a lot more fun when you have to peel off the anonymous brown wrapper.

    What seems obvious to me is that your average TLA enjoys hiding behind this obviousness meme, and might even participate in its dissemination as a part of a highly successful initiative in distracting paranoids and shallow thinkers from useful analysis. You just have to find a forum where seeming clever is more important than being clever, add water, and stir.

    My favorite local coffee shop is right beside the schizophrenia resource center. If I had the right social hacking skills, I could accomplish this mission by buying the right person who drifts into the coffee shop with a wifi netbook a free coffee a day. "Just keep posting buddy, the Joe's on me."

