Petaflops? DARPA Seeks Quintillion-Flop Computers 185
coondoggie writes "Not known for taking the demure route, researchers at DARPA this week announced a program aimed at building computers that exceed current peta-scale computers to achieve the mind-altering speed of one quintillion (1,000,000,000,000,000,000) calculations per second. Dubbed extreme scale computing, such machines are needed, DARPA says, to 'meet the relentlessly increasing demands for greater performance, higher energy efficiency, ease of programmability, system dependability, and security.'"
Make sense, dammit (Score:5, Informative)
From TFA, written by Michael Cooney and propagated by the summary:
It looks like these "extreme scale computing" systems are needed before things like "ease of programmability" can be achieved. I call bullshit.
The actual notice from DARPA is named Omnipresent High Performance Computing (OHPC) [fbo.gov]. From the first paragraph of that page:
That makes a lot more sense.
Now, will someone please go and smack Michael Cooney up the back of head for writing like that?
Re:Make sense, dammit (Score:5, Informative)
Right. If you actually read the announcement, it's not that they want yet more boondoggle supercomputing centers. What they want is more crunch power in small boxes. Read the actual announcement [fbo.gov] (PDF). See page 17. What they want is 1 petaflop (peak) in one rack, including cooling gear. The rack gets to draw up to 57 kilowatts (!).
Re: (Score:2, Informative)
Quick napkin math:
Rack has 42U
SuperMicro servers (TwinX) have 2 "blades" per 1U rail slot.
Each blade has two six-core Intel Nehalem CPUs, generating approximately 225 GFLOPS per blade, or 450 GFLOPS per U.
That's 18.9 TFLOPS per rack, consuming at peak over 78,000 BTU/hr, 600 amps, and 72 kW (breaking the power budget).
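A quick sanity check of that arithmetic as a sketch (the 225 GFLOPS per blade is the figure above; the ~860 W per blade is just the 72 kW total spread over 84 blades, not a vendor spec):

# Sanity check of the rack arithmetic; per-blade numbers are assumptions.
US_PER_RACK = 42
BLADES_PER_U = 2
GFLOPS_PER_BLADE = 225.0
WATTS_PER_BLADE = 860.0

blades = US_PER_RACK * BLADES_PER_U
tflops = blades * GFLOPS_PER_BLADE / 1000.0
kw = blades * WATTS_PER_BLADE / 1000.0
print("%d blades -> %.1f TFLOPS peak at %.1f kW" % (blades, tflops, kw))
print("Shortfall vs the 1 PFLOP/rack goal: %.0fx" % (1000.0 / tflops))
# 84 blades -> 18.9 TFLOPS at about 72 kW, roughly 50x short of the target
# while already blowing the 57 kW power budget.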
Yep, there's a long way to go. Guessing it'll be some sort of customizable, massively parallel GPU system. It'll be a bitch to develop for, but that's probably what's required to reach these numbers.
Re: (Score:3, Funny)
Re: (Score:2)
how sweet and innocent of them! (Score:5, Insightful)
Re:how sweet and innocent of them! (Score:5, Interesting)
Good luck. I can encrypt something in polynomial time (quadratic, isn't it?) that it takes you exponential time to encrypt.
Re: (Score:3, Insightful)
But you'll have to deploy your longer keys far enough ahead of their exaflop cracker that none of the inadequately protected messages already in their possession are still useful to them.
I suspect that simulations are more interesting to them, though. Think what they'd save on testing if they could fully simulate hypersonic flight and scramjet engines (not that I don't think they'll use this for cracking).
Re: (Score:2)
4096-bit RSA encryption and 256-bit symmetric encryption are way outside the capabilities of any imaginable classical computer.
Now, the problem might be an insecure passphrase used to generate the AES keys...
Re: (Score:2)
Not outside the capabilities of a classical computer, outside the capabilities of known decryption algorithms on conventional computers. The fact that the NSA is still serving a purpose in spite of 'completely secure' key sizes should suggest a fairly obvious conclusion.
Re: (Score:2)
The fact that the NSA is still serving a purpose in spite of 'completely secure' key sizes should suggest a fairly obvious conclusion./quote?
That people are too cheap/lazy/apathetic to bother encrypting stuff?
Re: (Score:2)
Seriously. They don't even properly close their quotes! ;)
Re: (Score:2)
I doubt it. Unless they have unbelievably good attacks, 256 bits gives a WIDE margin of safety.
Schneier estimated that just cycling a counter through 2^220 states requires the energy of a supernova.
Re: (Score:2)
I have long assumed that the NSA has an attack on AES that reduces a 256-bit key to at worst 128 bits of difficulty, and that they have the computing resources to crack 128 bits within an hour.
Re: (Score:2)
Let's see...
128 bits = 2^128 possibilities
2^128 > 2^120 = (2^10)^12 = 1024^12 > 10^36
Supercomputer we're talking about = 10^18 operations/s
Meaning it would take at least 10^18 s (roughly the age of the universe) just to cycle through the 128-bit keyspace; since 2^128 is actually about 3.4 x 10^38, the real figure is closer to 10^20 s.
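The same estimate spelled out, assuming one key trial per operation on a 1e18 ops/s machine:

# Brute-force time for a 128-bit keyspace at exaflop rates (assumed
# one key trial per operation).
keyspace = 2 ** 128
ops_per_second = 10 ** 18
seconds = keyspace / ops_per_second
years = seconds / (3600 * 24 * 365.25)
print("2^128 keys at 1e18 trials/s: ~%.1e s (~%.1e years)" % (seconds, years))
# ~3.4e20 s, about 1e13 years; even the expected half-keyspace search is
# ~5e12 years, versus ~1.4e10 years for the age of the universe.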
Re: (Score:2)
Yeah, the assumption is definitely that the NSA uses custom hardware that does one thing only and is at least 10^12 or so times faster overall. Each device is probably 10^6 times faster than a conventional CPU for this one task, and they presumably built out 10^9 or so devices (general-purpose supercomputers are hard to parallelize to that degree, so they are limited to around 10^4 devices).
obviousness for dummies (Score:3, Insightful)
The fact that the NSA is still serving a purpose in spite of 'completely secure' key sizes should suggest a fairly obvious conclusion.
Sweet. Stupidity by obscurity. Shall we integrate the area under the curve of obviousness * tinfoil_coefficient?
There is an obvious conclusion, but apparently it's not obvious. It's one of those cases where whichever answer you presently hold seems obvious, until one discovers an even more obvious answer. The parent post has been careful to distance itself from any clue as to which rung on the ladder of obviousness it presently occupies, a strategy which suggests one of the entry-level rungs. Think of the cost
Re: (Score:2)
Encrypt, decrypt, what's the difference, when you're talking out your ass?
How the fuck is this "informative"? Who does it "inform"? What does it "inform" them of? That the poster doesn't know the difference between encryption and decryption?
You want "informative"? Here ya go:
From Bruce Schneier's "Applied Cryptography":
But my favorite part....
So the origi
Re: (Score:3, Interesting)
Re:how sweet and innocent of them! (Score:5, Funny)
FTFY
Re: (Score:3, Informative)
Re: (Score:2)
Since we figured out that Arnaud Amalric offered a suboptimal solution.
rj
Re:how sweet and innocent of them! (Score:5, Insightful)
Since September 11, 2001.
Or you could go back further, to July 26, 1939 [wikipedia.org]. But the real answer is, espionage has been a good thing ever since there have been enemies.
I for one am all in favor of having fewer enemies. But for the ones that can't be ignored or reconciled, espionage is a Good Thing.
Re: (Score:2)
Maybe you don't think that was a good thing, but you are definitely in a minority in your opinion. There have been lots of effective uses of espionage, includ
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
No. But if you take away their technology to spy on citizens, you also take away their ability to spy on enemies. Technology is like that. People can use it for Good Things or Bad Things. We learned that in the 20th century.
Re: (Score:2)
I think they want a faster way to perform a DOS attack. They plan to send so many pulses down the line at once that the Ethernet cable vibrates so much it gets unplugged from your server.
Don't believe me? Send a letter to mythbusters.
Re: (Score:3, Funny)
Sorry, but DOS attacks are utterly outdated. Today you use Windows for your attacks.
SCNR
Re: (Score:2)
You're conflating government agencies. If you want to worry that the government is reading your email, you want to talk about the NSA. DARPA is more likely to be building toys for the military.
Ever wonder how they test nuclear weapon designs these days?
Re: (Score:2)
Yes, because they really want to know what kind of porn you look for with SSL Google.
Exaflops (Score:5, Informative)
Quintillion is not an SI prefix. The next step after Peta is Exa.
Re: (Score:3, Informative)
Nope, Quintillion is a quantity, whereas Petaflops, Exaflops etc are rates of calculations per second. Please don't mix your units in your haste to appear smart.
Re: (Score:3, Funny)
Just like people complaining how in Star Wars, Han Solo said he made the Kessel Run in less than 12 parsecs...yes, we know that parsecs are a measure of distance, Solo was talking about being able to complete the race using a shorter route than the standard 18 parsecs, which is why a measure of distance makes sense.
Source [wikia.com].
Disclaimer: some people may shout "retcon" at this explanation, but at this point singling out each instance retconning in the Star Wars universe is a wasted effort.
Re: (Score:2)
1. Hyperspace: distance and time are merely four directions in an orthogonal 4-space. So saying you made it in 12 parsecs when using a hyperdrive is completely correct. It's x^2+y^2+z^2+t^2 = 12^2.
2. I thought everyone knew this.
3. Han shot first, goddammit.
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Interesting)
Quintillion is a different quantity in long-scale countries (10^30) vs. short-scale countries (10^18), which is partly why the SI prefixes were standardized.
Re: (Score:3, Informative)
For the record, there were a bunch of talks at IPDPS 2010 ( http://ipdps.org/ipdps2010/2010_advance_program.html [ipdps.org] ) about building exaflop machines, including a keynote.
Re: (Score:3, Funny)
Re: (Score:3, Informative)
True, that: FLOPS communicates a combination of the SI unit (1/s = Hz) with the identity of the thing being measured (floating point operations). It's like if you had KOM as an abbreviation for kilograms of milk.
Re: (Score:2, Redundant)
A lot of people make this mistake...it's in base 10.
You are thinking of Quibiflops.
Re: (Score:3, Funny)
Stop using antiquated units! The current unit in fashion is Vuvuzelaflops.
Re: (Score:2)
On /., everything is measured in Quibbleflops.
Re: (Score:2)
Regardless, it's a metric assload of processing power. The only obvious reason I can see for this kind of computing power is to render the encryption used by average computers useless.
Re: (Score:2)
$ units
2438 units, 71 prefixes, 32 nonlinear units
You have: 1 ton
You want: metric assload
Unknown unit 'metric'
You want: ^D
$
Re: (Score:3, Informative)
A metric assload is roughly equivalent to 2.2 Imperial assloads. Hope that helps.
Quadrillion? (Score:2)
Ring Ring, we already have those... (Score:2)
Peta-flops (Score:2, Funny)
I'm glad DARPA is finally making a move to make their computing more animal friendly.
Translation (Score:5, Funny)
I want to run Crysis 2 in software rendering mode
I Love DARPA (Score:5, Insightful)
Re:I Love DARPA (Score:5, Informative)
Most people don't realize it, but DARPA can best be described as a few dozen scientists and engineers with large checkbooks and a big travel budget. They go around the country and around the world looking for technologies that are beyond what we can do today but might be possible with the right funding in the right places. Most importantly, they're aware that a large percentage of the projects they fund will end in failure (or rather, will not meet all their goals), but the benefits of the ones that do succeed outweigh the costs.
Re: (Score:3, Interesting)
It's even more interesting than that. If DARPA begins succeeding a lot, DARPA seniors end up having to explain to congress (yes, directly to congress) why it is they aren't forward-leaning enough. I.e., DARPA programs are expected to fail often, and congress uses this failure rate as pro forma information about how "researchy" DARPA is.
Joe.
Re: (Score:2)
Yeah, but you might as well add a few programs that will almost certainly fail, but would be so freakin' sweet if they succeeded. I'll testify before Congress if I screw up and accidentally fund a successful nuclear fusion or artificial intelligence project; I think I could talk my way out of that one.
For security? (Score:4, Funny)
Norton bogs my computer down too but that is just crazy :)
What's the need? (Score:3, Interesting)
First, I'm entirely ignorant of supercomputing. I don't know the first thing about it. I'm asking this out of sheer lack of knowledge in the field:
What do you need a computer that fast for?
I mean, specifically, what can you do on something that fast that you couldn't do on one 1,000 (or 1,000,000) times slower? What kind of tasks need that much processing power? For example, you normally hear about them being used for things like weather simulation. Well, what is it about weather simulation that requires so much work?
The whole idea is fascinating to me, but without ever having even been near the field, I can't imagine what a dataset or algorithm would look like that would take so much power to chew through.
Re:What's the need? (Score:5, Informative)
Simulating exploding hydrogen bombs, weather simulation, brute-force cracking, etc. Basically any distributed project you can think of (see BOINC [berkeley.edu]) can also be done with a supercomputer.
It's a scientific model with a boatload of variables and dependencies. Ask these guys [earthsystemmodeling.org].
Re: (Score:2)
Re: (Score:2)
Well, what is it about weather simulation that requires so much work?
It's a scientific model with a boatload of variables and dependencies. Ask these guys [earthsystemmodeling.org].
In particular, it's a fluid dynamics problem and they tend to be difficult to scale up to distributed computing because of the amount of coupling between adjacent simulation cells. Supercomputers (with their expensive interconnects and very high memory bandwidth) tend to be far better at this sort of problem.
Re:What's the need? (Score:5, Informative)
Well, what is it about weather simulation that requires so much work?
The enormous number of variables, mostly. Weather, nuclear bombs, ocean currents, cryptography, even things as seemingly simple as modeling air flow around an object. If you are looking to develop a model of a process that involves a few thousand variables and you need to know the interaction of those variables several levels deep, you need to make a lot of calculations.
It hasn't been all that long that computers have had the computational power to dominate humans in games as 'simple' as chess.
Re: (Score:3, Interesting)
Actually, there are only a handful of variables in a weather simulation. For a typical cloud-scale simulation you have the three components of wind, moisture, temperature, pressure, and precipitation variables. Say, 13 variables. That is not why you need supercomputers.
The reason you need supercomputers to do weather simulations is all about resolution, both spatial and temporal. Weather simulations break the atmosphere into cubes, and the more cubes you have, the better you resolve the flow. All weather si
Re:What's the need? (Score:4, Insightful)
I mean, specifically, what can you do on something that fast that you couldn't do on one 1,000 (or 1,000,000) times slower? What kind of tasks need that much processing power? For example, you normally hear about them being used for things like weather simulation. Well, what is it about weather simulation that requires so much work?
Theoretically there's nothing you can do on a supercomputer that you couldn't also do on an ordinary desktop computer (memory constraints aside), but for that matter you could also do everything by hand. The thing is, when your problem space is very large (e.g. calculating all interactions between X objects, where X is some huge number, or solving something like the Traveling Salesman Problem), you are limited in your options for getting results faster. If you're lucky, you can find some algorithmic speedup (i.e. a better complexity class; O(2^N) -> O(N^2) would be a huge speedup, but that doesn't happen often), or you can throw more resources at it. Yes, it'll still be slow, but if it takes you a year on a supercomputer, that's quite a bit better than spending 1000 years waiting on a regular computer.
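A toy illustration of that tradeoff, with my own made-up numbers (an exaflop machine and one operation per unit of work):

# An algorithmic improvement from O(2^N) to O(N^2) dwarfs any hardware
# speedup; the figures below are purely illustrative.
OPS_PER_SECOND = 1e18
SECONDS_PER_YEAR = 3600 * 24 * 365.25

for n in (60, 80, 100):
    brute_s = (2.0 ** n) / OPS_PER_SECOND   # O(2^N) work
    clever_s = (n ** 2) / OPS_PER_SECOND    # O(N^2) work
    print("N=%d: O(2^N) ~ %.1e years, O(N^2) ~ %.1e s"
          % (n, brute_s / SECONDS_PER_YEAR, clever_s))
# At N=100, 2^N operations take ~4e4 years even at an exaflop, while N^2 is
# 10,000 operations, i.e. effectively instantaneous.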
Re: (Score:2)
Might have something to do with the billions upon billions of billions of billions of atoms that need to be simulated.
The more processing power one has, the finer the simulation parameters.
Re: (Score:3, Informative)
Detailed, 3-D simulation of things like nuclear explosions and scramjet engines.
Accuracy. Weather Prediction [wikipedia.org]
Re: (Score:2)
I've wondered about this for some time. If countries like the USA have enough nukes to nuke the world several times over, and have had that capability for decades now, how are these simulations useful?
Re: (Score:3, Informative)
Imagine a simulation in 3D space. You model the space with a cube of 100x100x100 grid points. That's one million data points. Now say you have to do some calculation on them which scales quadratically in the number of data points, and say you manage to finish the calculation in one hour on some computer.
OK, but now you notice that those 100 data points in each direction are too inaccurate. You need 1000 points to be reasonably accurate. So now your data set is not one million, but one billion data points. And your O(N
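A sketch of that scaling, calibrated so the 100^3 case takes the one hour assumed above:

# Cost quadratic in the number of grid points; the one-hour baseline for
# the 100^3 grid is the assumption from the post above.
BASE_POINTS = 100 ** 3   # 1e6 points -> 1 hour
BASE_HOURS = 1.0

def hours_for(points_per_side):
    points = points_per_side ** 3
    return BASE_HOURS * (points / BASE_POINTS) ** 2

for side in (100, 200, 1000):
    h = hours_for(side)
    print("%4d^3 grid: %.3g hours (~%.3g years)" % (side, h, h / (24 * 365.25)))
# Refining from 100 to 1000 points per side multiplies the work by
# (10^3)^2 = 10^6, turning one hour into roughly 114 years.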
Re: (Score:2)
Thanks! That's what I was wondering about. So is that the problem they're trying to solve: current models are too coarse and scientists think they can get more accurate results by increasing the points/partitions/whatever?
Re: (Score:3, Interesting)
You almost certainly don't want to wait 114 years to get your results.
You know, back in the day, we had some patience. Plus, the notion that one would have to wait 114 years to get results made us develop better algorithms, not just throw cycles at a problem. Kids these days... Now get off my lawn!
Re:What's the need? (Score:5, Informative)
There are broad classes of algorithms where you can make good use of essentially arbitrary amounts of computing power to get better answers. When doing physical simulations of something like airflow over a jet wing, or the movement of a weather system, or the explosion of a hydrogen bomb, you'll break everything up into tiny units that you treat as monolithic elements whose behavior can be treated relatively simply, and calculate what happens to them over some tiny timescale, call the result the new state of the universe, and repeat. This is called "finite element analysis".
Because you're calculating everything in discrete steps, though, errors creep in and accumulate. The more processing power you have, the more elements you can use and the smaller the time steps you can take, so you get a more accurate answer in the same amount of time. The reason it's unacceptable to do the same calculation but have it go 1,000 or 1,000,000 times slower is that these simulations might already take hours, days, weeks, or even longer. Even the longest DoD contract needs an answer to the behavior of a proposed jet fighter wing in less than 1,000,000 days. :)
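A toy version of that discretize-and-step loop, just to show where the cost comes from (1D heat diffusion with explicit time stepping; nothing anyone actually runs for a fighter wing):

# Finer cells force smaller timesteps, so the work grows quickly with
# resolution. All parameters here are illustrative.
def count_work(n_cells, total_time=0.1, alpha=1.0):
    dx = 1.0 / n_cells
    dt = 0.4 * dx * dx / alpha          # stability limit for explicit stepping
    u = [0.0] * n_cells
    u[n_cells // 2] = 1.0               # initial hot spot in the middle
    steps = int(total_time / dt)
    for _ in range(steps):
        new = u[:]
        for i in range(1, n_cells - 1):
            new[i] = u[i] + alpha * dt / (dx * dx) * (u[i-1] - 2*u[i] + u[i+1])
        u = new
    return steps

for n in (50, 100, 200):
    steps = count_work(n)
    print("%3d cells: %6d timesteps, ~%d cell-updates" % (n, steps, steps * n))
# Doubling the 1D resolution already costs ~8x the work (2x cells, 4x steps);
# in 3D each refinement is far worse, which is where the exaflops go.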
Scientific computing is an area where there will always be a use for more processing power.
There are other areas where it can be important, when you have real time constraints and can't just reduce your accuracy to make it work. I recall a story from advanced algorithms class where a bank was handling so many transactions per day that the time it took to process them all was more than 24 hours. Obviously this was a problem. The solution in that case was to modify the algorithm, but that's not always possible, and you need more computing. This is a little different in that you need the extra power to allow growth, as opposed to science where you could hand them an exaflop computer today and they'd be able to use it to its fullest.
Re: (Score:2)
Going along with what you say... Another thing to consider is that in the process of designing something, you don't just do one simulation and declare it finished. If you knew what the answer would be, y
Re: (Score:2)
Ideally, you will want to iteratively search through combinations of input variables to determine an optimum in terms of output variables.
One thing you can do with enough computing power is work in near real time, interactively steering the simulation towards a situation that is interesting. Gamers will be familiar with why this can be a good idea, but it is very useful when the effect you are actually studying is an emergent one of some physical situation where the input parameters have to be very exact to trigger it. Certain types of mixing of immiscible fluids (on the way to making emulsions) can be very interesting, and the physics there is bot
Re:What's the need? (Score:4, Insightful)
If you take weather simulation :
At a given point, you have a bunch of physical equations taking a set of parameters at time t and giving you these same parameters at time t+1. Of course, the smaller the time step, the better the result.
To get the best possible result, you should consider the whole globe at once (think of phenomena like the thermohaline circulation, for example). However, you should also use the finest grid possible, to take into account the heterogeneity of the geography, the local variations due to rivers, etc. It is also important to use a three-dimensional model if you want to capture atmospheric circulation, evaporation, etc.
I forget the exact numbers, but Wikipedia gives an example of a current global climate model using a grid of 500,000 points (see http://en.wikipedia.org/wiki/Global_climate_model [wikipedia.org] ), which is a pretty coarse resolution, working with tiles of tens of thousands of square kilometers.
With current computing capabilities, we cannot go much further for a global model. This is already an impressive improvement compared to the first models, which were two-dimensional and used very simplified equations, overlooking a large number of important physical mechanisms.
At the same time, we have satellite data several orders of magnitude more precise. Data from ASTER have been processed to provide a complete elevation map of the globe with a theoretical resolution of 90 m. Vegetation cover can be obtained at a resolution of 8 m using commercial satellites like FORMOSAT-2. Even soil moisture can be measured at a resolution of around 50 km thanks to the new SMOS satellite.
These data sets are already used at the local level, for example to model transfers between the soil and the atmosphere, taking into account the vegetation (SVAT modelling). There is no doubt that a global climate model using a finer grid and these data would significantly improve its predictions.
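A rough back-of-the-envelope on that resolution gap (the 50 vertical levels and the round surface-area figure are my own illustrative assumptions):

# Grid columns a global model would need at various horizontal resolutions.
EARTH_SURFACE_M2 = 5.1e14
VERTICAL_LEVELS = 50

for res_m in (100000, 10000, 1000, 90):
    columns = EARTH_SURFACE_M2 / (res_m ** 2)
    cells = columns * VERTICAL_LEVELS
    print("%6d m grid: ~%.1e columns, ~%.1e cells" % (res_m, columns, cells))
# ~100 km spacing gives ~5e4 columns (the coarse models above); matching the
# 90 m ASTER elevation data would need ~6e10 columns, a factor of ~1e6 more.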
Re:What's the need? (Score:4, Informative)
In fluid dynamics simulations (which include weather stuff), there are huge computational problems. I work in the field, so bear with me a little.
The best model we have so far for fluids is to use balance equations (look up the Navier Stokes equations). This means that in order to describe the evolution of a fluid in a given domain, we need to split the domain into small cells, and then integrate numerically the balance equations. To put it simply, you have to integrate numerically a system of ordinary differential equations with many many variables (degrees of freedom).
For a simple but "correct" Navier Stokes simulation, the number of degrees of freedom is proportional to Re^(9/4), where Re is the Reynolds number (the memory requirements are proportional to the number of degrees of freedom). This Reynolds number, for typical systems (like the atmosphere) is of the order of at least 10^4-10^6 (you can look up typical values on wikipedia if you're interested). Furthermore, the number of timesteps needed for a "correct" simulation is proportional to Re^(3/4).
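Plugging those scalings in (constants set to 1, so only the relative growth between Reynolds numbers is meaningful):

# DNS cost scalings quoted above: degrees of freedom ~ Re^(9/4),
# timesteps ~ Re^(3/4).
for Re in (1e4, 1e5, 1e6):
    dof = Re ** (9.0 / 4.0)
    steps = Re ** (3.0 / 4.0)
    print("Re=%.0e: DOF ~ %.1e, timesteps ~ %.1e, total work ~ %.1e"
          % (Re, dof, steps, dof * steps))
# Every 10x in Reynolds number costs ~1000x more total work (9/4 + 3/4 = 3);
# by Re ~ 1e6 the relative work is already ~1e18 units.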
But these are not the most complicated simulations to be run on such machines. Research into issues like controlled nuclear fusion has to address much more demanding problems.
Numerical simulations of physical systems are inherently hard, because their cost grows polynomially with the complexity of the system. However, they are generally cheaper than actual experiments, and you get access to more data.
Re: (Score:2)
Imagine you are simulating weather with an accuracy narrowed down to 1000 cubic meters. That's a cube 10 meters on a side, consider it roughly the size of a house. Not very accurate, right? Because there is a lot of detail going on within those 1000 cubic meters that your simulation is ignoring.
But: it's also a vast quantity of data to consider, even at that level of inaccuracy. Just to simulate the weather over the United States you'd have about 20,740,933,333 cells to compute. 20 billion cells to com
Re: (Score:2)
Brain Simulation (Score:2)
Yes, ominously the article states that it will be running a "self-aware OS".
I'm of the view that there's a good chance that current or near-future supercomputers would be able to simulate a human brain in real-time. This is because there's an awful lot of computational redundancy in real brains, given what they're made from, and given their need to self-construct.
All that's needed is to reverse-engineer the algorithms used by each part of the brain, and to properly connect them up.
Re: (Score:2)
Big thanks to everyone who replied! Those are the kinds of answers I was looking for. I have a friend who farms and he has some enormous machines in his fields. I felt the same way about supercomputers as I did about his farming equipment: "Good grief! That must be useful for something or he wouldn't have bought it, but I can't imagine what I'd ever use such a thing for."
Special thanks to everyone who didn't interpret that as an attack on supercomputing or make "640KB ought to be enough for everybody" jokes
Re: (Score:2)
Re: (Score:2)
The beauty of computers is that they are fast enough that we can use massively iterative processes to get very accurate answers faster than you could possibly do by hand, and in situations where advanced techniques like calculus won't work.
Running with your example of weather simulation; a 'simple' way to do it is to lay out a 3-d grid and capture the current conditions at that grid. Then advance it by (say) a second, with each point being influenced by each point around it, the sun and anything else you ca
Old news (Score:3, Informative)
The top companies competing for the government funds are, not surprisingly, IBM and Cray.
See these two older
jdb2
Re: (Score:2)
Re: (Score:2)
As I rode my motorcycle past the Oak Ridge exit on the interstate on my way to North Carolina, I wondered why computing centers are located where coal is used for power generation, whereas Google places its computing centers where cheap, renewable energy is available. Probably government pork (i.e. "I want this in my district").
Heh, yeah, especially since it's estimated that the power consumption of an exaflop machine would, at a minimum, be 20 megawatts, at least with the projected advancement of current technology.
jdb2
Re: (Score:2)
Re: (Score:2)
Oak Ridge is in the TVA, which is hydro, not coal. The lab is there precisely because of the available power.
Re: (Score:2)
Re: (Score:2)
http://www.tva.gov/power/ [tva.gov]
Still waiting... (Score:2)
...for hellaflops [blogspot.com].
As long as they don't name it "Skynet" (Score:2)
I think we're OK. Maybe.
Prefix Change (Score:2)
Salvage sale? (Score:2)
Since their current petaflop systems are clearly not enough for them, can I pick up a few for $5 apiece at their next salvage sale?
Re: (Score:2)
Sure, just know that anything that ever could have held data ( Hard Drives, RAM, Registers on CPUs, etc. ) will be destroyed first.
yeah, right. (Score:2, Funny)
FLOPS, not FLOP (Score:3, Informative)
They should give Shaw some dough (Score:2)
They should buy a data center and fill it with D. E. Shaw's special purpose hardware for doing particle simulations: http://en.wikipedia.org/wiki/Anton_(computer) [wikipedia.org] , and instead of proposing grants for new software development, propose grants to keep the data center's queue full of interesting chemical simulations to run.
Bus innovation first, please (Score:3, Interesting)
What is really needed is faster *bus speeds*. So many CPUs just sit around waiting for data that sits across the bus. That's where the dramatic throughput improvements lie. Pretty please, DARPA? :)
Re: (Score:3, Informative)
You've been simulated to die in our ongoing war with Eastasia, please report to the gassing chambers promptly to prevent the simulation from experiencing temporal improbabilities.
Re: (Score:2)
Or I could link to the Isaac Asimov story which it's based on.
http://www.veeshanvault.org/shared/morebooks/Asimov,%20Isaac/Asimov,%20Isaac%20-%20Frustration.txt [veeshanvault.org]
Re: (Score:2)
Yeah, different places use different standards.
What I've always wondered is -- what do you call one thousand billions? What do you call two hundred thousand billions? It just seems awkward to have to string so many sizes together, but that's obviously from my perspective of having grown up doing it our way.
Re: (Score:2)
Nono:
1,000,000 = million (10^6) (mega)
1,000,000,000 = milliard (10^9) (giga)
1,000,000,000,000 = billion (10^12) (tera)
1,000,000,000,000,000 = billiard (10^15) (peta)
1,000,000,000,000,000,000 = trillion (10^18) (exa)
So, in other words, they want a trillion-flops thingie.
A quintillion, on the other hand, is 1,000,000,000,000,000,000,000,000,000,000 (10^30), except of course in the US, which uses that silly 'short scale' numbering system.