Supercomputing | The Military | Hardware

Petaflops? DARPA Seeks Quintillion-Flop Computers 185

coondoggie writes "Not known for taking the demure route, researchers at DARPA this week announced a program aimed at building computers that exceed current peta-scale computers to achieve the mind-altering speed of one quintillion (1,000,000,000,000,000,000) calculations per second. Dubbed extreme scale computing, such machines are needed, DARPA says, to 'meet the relentlessly increasing demands for greater performance, higher energy efficiency, ease of programmability, system dependability, and security.'"
This discussion has been archived. No new comments can be posted.

Petaflops? DARPA Seeks Quintillion-Flop Computers

Comments Filter:
  • Make sense, dammit (Score:5, Informative)

    by Lord Grey ( 463613 ) * on Wednesday June 23, 2010 @12:44PM (#32667618)

    From TFA, written by Michael Cooney and propagated by the summary:

    Dubbed extreme scale computing, such machines are needed DARPA says to "meet the relentlessly increasing demands for greater performance, higher energy efficiency, ease of programmability, system dependability and security."

    It looks like these "extreme scale computing" systems are needed before things like "ease of programmability" can be achieved. I call bullshit.

    The actual notice from DARPA is named Omnipresent High Performance Computing (OHPC) [fbo.gov]. From the first paragraph of that page:

    ... To meet the relentlessly increasing demands for greater performance, higher energy efficiency, ease of programmability, system dependability, and security, revolutionary new research, development, and design will be essential to enable new generations of advanced DoD computing system capabilities and new classes of computer applications. Current evolutionary approaches to progress in computer designs are inadequate. ...

    That makes a lot more sense.

    Now, will someone please go and smack Michael Cooney up the back of the head for writing like that?

    • by Animats ( 122034 ) on Wednesday June 23, 2010 @01:02PM (#32667844) Homepage

      Right. If you actually read the announcement, it's not that they want yet more boondoggle supercomputing centers. What they want is more crunch power in small boxes. Read the actual announcement [fbo.gov] (PDF). See page 17. What they want is 1 petaflop (peak) in one rack, including cooling gear. The rack gets to draw up to 57 kilowatts (!).

      • Re: (Score:2, Informative)

        by Anonymous Coward

        Quick napkin math:

        Rack has 42U

        SuperMicro servers (TwinX) have 2 "blades" per 1U rail slot.

        Each blade has two 6-core Intel Nehalem CPUs generating approximately 225 GFLOPS per blade, or 450 per U.

        That's 18.9 TFLOPS per rack, drawing a peak of over 78,000 BTU/hr, 600 amps, and 72 kW (breaking the budget).

        Yep, there's a long way to go. My guess is some sort of customized, massively parallel GPU system. It'll be a bitch to develop for, but probably what's required to reach these numbers.
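
        (As a rough sanity check of that napkin math, a Python sketch; the per-blade GFLOPS, the 72 kW estimate, and the 57 kW budget are taken as stated in the comments above:)

        # Napkin math redone explicitly (all figures assumed as stated above).
        blades_per_u = 2
        gflops_per_blade = 225                       # ~2x 6-core Nehalem per blade, as estimated above
        rack_units = 42
        rack_gflops = rack_units * blades_per_u * gflops_per_blade
        print(rack_gflops / 1e3, "TFLOPS per rack")                  # 18.9
        print(1e6 / rack_gflops, "x short of 1 PFLOPS in one rack")  # ~53x
        # Efficiency gap: DARPA's goal implies ~17.5 GFLOPS/W (1 PFLOPS / 57 kW),
        # while the estimate above works out to ~0.26 GFLOPS/W (18.9 TFLOPS / 72 kW).
        print(1e6 / 57e3, "vs", rack_gflops / 72e3, "GFLOPS per watt")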

      • Re: (Score:3, Funny)

        by evilbessie ( 873633 )
        See the moon there, DARPA want that on a stick, with sugar and a cherry on top if you please.
    • by Yvanhoe ( 564877 )
      Translation: WPA2 is a bit harder to crack than expected but don't worry, we are giving our NSA kids all the tools they need!
  • by zero.kalvin ( 1231372 ) on Wednesday June 23, 2010 @12:46PM (#32667644)
    Call me a tinfoil-hat wearer, but methinks they want a faster way of cracking encryption...
    • by Entropius ( 188861 ) on Wednesday June 23, 2010 @12:49PM (#32667676)

      Good luck. I can encrypt something in polynomial time (quadratic, isn't it?) that it takes you exponential time to decrypt.

      • Re: (Score:3, Insightful)

        by John Hasler ( 414242 )

        But you'll have to have deployed your longer keys long enough before they deploy their exaflop cracker that none of the inadequately protected messages already in their possession are still useful to them.

        I suspect that simulations are more interesting to them, though. Think what they'd save on testing if they could fully simulate hypersonic flight and scramjet engines (not that I don't think they'll use this for cracking).

        • by Cyberax ( 705495 )

          4096-bit RSA encryption and 256-bit symmetric encryption are way outside the capabilities of any imaginable classical computer.

          Now, the problem might be an insecure passphrase used to generate the AES keys...

          • by Surt ( 22457 )

            Not outside the capabilities of a classical computer, outside the capabilities of known decryption algorithms on conventional computers. The fact that the NSA is still serving a purpose in spite of 'completely secure' key sizes should suggest a fairly obvious conclusion.

            • The fact that the NSA is still serving a purpose in spite of 'completely secure' key sizes should suggest a fairly obvious conclusion.

              That people are too cheap/lazy/apathetic to bother encrypting stuff?

            • by Cyberax ( 705495 )

              I doubt it. Unless they have unbelievably good attacks, 256 bits gives a WIDE margin of safety.

              Schneier estimated that just cycling a counter through 2^220 states requires the energy of a supernova.

              • by Surt ( 22457 )

                I have long assumed that the NSA has an attack on AES that is at worst 128 bits of difficulty on a 256-bit key, and that they have the computer resources to crack 128 bits within an hour.

                • by chgros ( 690878 )

                  Let's see...
                  128 bits = 2^128 possibilities
                  2^128 > (2^10)^12 = 1024^12 > 10^36
                  The supercomputer we're talking about does 10^18 operations/s.
                  Meaning it would take at least 10^18 s (about the age of the universe) to cycle through the 128-bit keyspace.
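
                  (For reference, the same arithmetic without the rounding, as a quick Python sketch; the 10^18 ops/s figure is the exaflop target from the article, and "one key tested per operation" is a deliberately generous assumption:)

                  # Exhaustive 128-bit key search on a hypothetical exaflop machine.
                  keys = 2 ** 128                    # ~3.4e38 candidate keys
                  ops_per_second = 1e18              # one exaflop, one key per operation (generous)
                  seconds = keys / ops_per_second    # ~3.4e20 s
                  years = seconds / (3600 * 24 * 365)
                  print(f"{seconds:.1e} s ~ {years:.1e} years")  # ~1.1e13 years, roughly 800x the age of the universe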

                  • by Surt ( 22457 )

                    Yeah, the assumption is definitely that the NSA uses custom hardware that does one thing only, and is at least 10^12 or so faster. Each device is probably 10^6 faster than a conventional cpu for this one task, and they presumably built out 10^9 or so devices (general purpose supercomputers are hard to parallelize to that degree, so are limited to around 10^4 devices).

            • by epine ( 68316 )

              The fact that the NSA is still serving a purpose in spite of 'completely secure' key sizes should suggest a fairly obvious conclusion.

              Sweet. Stupidity by obscurity. Shall we integrate the area under the curve of obviousness * tinfoil_coefficient?

              There is an obvious conclusion, but apparently it's not obvious. It's one of those cases where whichever answer you presently hold seems obvious, until one discovers an even more obvious answer. The parent post has been careful to distance itself from any clue as to which rung on the ladder of obviousness it presently occupies, a strategy which suggests an entry-level rung. Think of the cost

    • Re: (Score:3, Interesting)

      by SirGarlon ( 845873 )
      Actually, the military being able to crack encryption is in some sense a Good Thing. It enables them to conduct espionage and counter-espionage against adversaries such as North Korea and Al-Qaeda. Yeah, that's kind of a Cold War mentality, but what is "cyber warfare" if not Cold War II?
      • by Yetihehe ( 971185 ) on Wednesday June 23, 2010 @12:57PM (#32667774)

        but what is "cyber warfare" if not Cold War 2.0?

        FTFY

      • Re: (Score:3, Informative)

        If I'm North Korea or Al-Qaeda or "Red" China, or any one of a million other defined-as "bad guys", I'm not using RSA or some such, I'm using one-time-pads [wikipedia.org] or steganography [wikipedia.org] on any one of a billion different chat boards, probably one where I can post JPEGs. Places where the message location and encryption itself is all the sender signature it needs. It's the bankers and the private citizens (and possibly some foreign diplomatic services) who are using RSA and public-key type ciphers that (might maybe potent
    • I think they want a faster way to perform a DoS attack. They plan to send so many pulses down the line at once that the ethernet cable vibrates so much it gets unplugged from your server.

      Don't believe me? Send a letter to mythbusters.

    • by mea37 ( 1201159 )

      You're conflating government agencies. If you want to worry that the government is reading your email, you want to talk about the NSA. DARPA is more likely to be building toys for the military.

      Ever wonder how they test nuclear weapon designs these days?

    • by nazsco ( 695026 )

      Yes, because they really want to know what kind of porn you search for over SSL Google.

  • Exaflops (Score:5, Informative)

    by maxwell demon ( 590494 ) on Wednesday June 23, 2010 @12:47PM (#32667654) Journal

    Quintillion is not an SI prefix. The next step after Peta is Exa.

    • Re: (Score:3, Informative)

      by daveime ( 1253762 )

      Nope. Quintillion is a quantity, whereas petaflops, exaflops, etc. are rates of calculations per second. Please don't mix your units in your haste to appear smart.

      • Re: (Score:3, Funny)

        by Pojut ( 1027544 )

        Just like people complaining about how, in Star Wars, Han Solo said he made the Kessel Run in less than 12 parsecs... yes, we know that parsecs are a measure of distance; Solo was talking about being able to complete the run using a shorter route than the standard 18 parsecs, which is why a measure of distance makes sense.

        Source [wikia.com].

        Disclaimer: some people may shout "retcon" at this explanation, but at this point singling out each instance of retconning in the Star Wars universe is a wasted effort.

        • by blair1q ( 305137 )

          1. Hyperspace: distance and time are merely four directions in an orthogonal 4-space. So saying you made it in 12 parsecs when using a hyperdrive is completely correct. It's x^2+y^2+z^2+t^2 = 12^2.

          2. I thought everyone knew this.

          3. Han shot first, goddammit.

      • "quintillion-flop" is also a rate of calculations per second, which is equivalent to the more concise "exaflop". What's the problem?
      • Re: (Score:3, Interesting)

        by amchugh ( 116330 )

        Quintillion is a different quantity in long-scale countries (10^30) than in short-scale countries (10^18), which is partly why the SI prefixes were standardized.

    • Re: (Score:3, Informative)

      by godrik ( 1287354 )

      For the record, there were a number of talks at IPDPS 2010 ( http://ipdps.org/ipdps2010/2010_advance_program.html [ipdps.org] ) about building exaflop machines, including a keynote.

    • Re: (Score:3, Funny)

      by Chowderbags ( 847952 )
      FLOPS is not an SI unit.
      • Re: (Score:3, Informative)

        FLOPS is not an SI unit.

        True, that: FLOPS combines the SI unit (1/s) with the identity of the thing being measured (floating-point operations). It's as if you had KOM as an abbreviation for kilograms of milk.

    • Re: (Score:2, Redundant)

      by gandhi_2 ( 1108023 )

      A lot of people make this mistake...it's in base 10.

      You are thinking of Quibiflops.

    • Regardless, it's a metric assload of processing power. The only obvious reason I can see for this kind of computing power is to render the encryption used by average computers useless.

      • by blair1q ( 305137 )

        $ units
        2438 units, 71 prefixes, 32 nonlinear units

        You have: 1 ton
        You want: metric assload
        Unknown unit 'metric'
        You want: ^D
        $

  • Call me when they get to googleflops ;-)
  • Peta-flops (Score:2, Funny)

    by Anonymous Coward

    I'm glad DARPA is finally making a move to make their computing more animal friendly.

  • Translation (Score:5, Funny)

    by Rik Sweeney ( 471717 ) on Wednesday June 23, 2010 @12:48PM (#32667668) Homepage

    I want to run Crysis 2 in software rendering mode

  • I Love DARPA (Score:5, Insightful)

    by sonicmerlin ( 1505111 ) on Wednesday June 23, 2010 @12:57PM (#32667782)
    They come up with ideas that only ultra-geeks and science fiction nerds could come up with, and then they get billions in funding for it! It's like paradise. The fact that they're actually successful at advancing human technology is just icing on the cake.
    • Re:I Love DARPA (Score:5, Informative)

      by MozeeToby ( 1163751 ) on Wednesday June 23, 2010 @01:24PM (#32668116)

      Most people don't realize it, but DARPA can best be described as a few dozen scientists and engineers with large checkbooks and a big travel budget. They go around the country and around the world looking for technologies that are beyond what we can do today but might be possible with the right funding in the right places. Most importantly, they're aware that a large percentage of the projects they fund will end in failure (or rather, will not meet all their goals), but the benefits from the ones that succeed outweigh the costs of the ones that don't.

      • Re: (Score:3, Interesting)

        by Courageous ( 228506 )

        It's even more interesting than that. If DARPA starts succeeding a lot, DARPA seniors end up having to explain to Congress (yes, directly to Congress) why they aren't forward-leaning enough. I.e., DARPA programs are expected to fail often, and Congress uses the failure rate as pro forma information about how "researchy" DARPA is.

        Joe.

  • by LinuxInDallas ( 73952 ) on Wednesday June 23, 2010 @01:00PM (#32667806)

    Norton bogs my computer down too but that is just crazy :)

  • What's the need? (Score:3, Interesting)

    by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Wednesday June 23, 2010 @01:00PM (#32667810) Homepage Journal

    First, I'm entirely ignorant of supercomputing. I don't know the first thing about it. I'm asking this out of sheer lack of knowledge in the field:

    What do you need a computer that fast for?

    I mean, specifically, what can you do on something that fast that you couldn't do on one 1,000 (or 1,000,000) times slower? What kind of tasks need that much processing power? For example, you normally hear about them being used for things like weather simulation. Well, what is it about weather simulation that requires so much work?

    The whole idea is fascinating to me, but without ever having even been near the field, I can't imagine what a dataset or algorithm would look like that would take so much power to chew through.

    • Re:What's the need? (Score:5, Informative)

      by Yoozer ( 1055188 ) on Wednesday June 23, 2010 @01:21PM (#32668072) Homepage

      What do you need a computer that fast for?

      Simulating exploding hydrogen bombs, weather simulation, brute-force cracking, etc. Basically any distributed project you can think of (see BOINC [berkeley.edu]) can also be done with a supercomputer.

      Well, what is it about weather simulation that requires so much work?

      It's a scientific model with a boatload of variables and dependencies. Ask these guys [earthsystemmodeling.org].

      • Don't forget reconstructing collider data. We had several thousand Linux boxes at FNAL that would take data from the CMS detector at the LHC and reconstruct events based on the data sent down the pipe to us from CERN.
      • by dkf ( 304284 )

        Well, what is it about weather simulation that requires so much work?

        It's a scientific model with a boatload of variables and dependencies. Ask these guys [earthsystemmodeling.org].

        In particular, it's a fluid dynamics problem and they tend to be difficult to scale up to distributed computing because of the amount of coupling between adjacent simulation cells. Supercomputers (with their expensive interconnects and very high memory bandwidth) tend to be far better at this sort of problem.

    • Re:What's the need? (Score:5, Informative)

      by Hijacked Public ( 999535 ) on Wednesday June 23, 2010 @01:22PM (#32668086)

      Well, what is it about weather simulation that requires so much work?

      The enormous number of variables, mostly. Weather, nuclear bombs, ocean currents, cryptography, even things as seemingly simple as modeling the air flow around an object. If you are looking to develop a model of a process that involves a few thousand variables, and you need to know the interactions of those variables several levels deep, you need to make a lot of calculations.

      It hasn't been all that long since computers have had the computational power to dominate humans in games as 'simple' as chess.

      • Re: (Score:3, Interesting)

        by Orp ( 6583 )

        Actually, there are only a handful of variables in a weather simulation. For a typical cloud-scale simulation you have the three components of the wind, plus moisture, temperature, pressure, and several precipitation variables. Say, 13 variables. That is not why you need supercomputers.

        The reason you need supercomputers to do weather simulations is all about resolution, both spatial and temporal. Weather simulations break the atmosphere into cubes, and the more cubes you have, the better you resolve the flow. All weather si

    • by Chowderbags ( 847952 ) on Wednesday June 23, 2010 @01:23PM (#32668102)

      I mean, specifically, what can you do on something that fast that you couldn't do on one 1,000 (or 1,000,000) times slower? What kind of tasks need that much processing power? For example, you normally hear about them being used for things like weather simulation. Well, what is it about weather simulation that requires so much work?

      Theoretically there's nothing you can do on a supercomputer that you couldn't also do on an ordinary desktop computer (except possibly for memory constraints); for that matter, you could also do everything by hand. The thing is, when your problem space is very large (i.e. calculating all interactions between X objects, where X is some huge number, or solving something like the Traveling Salesman Problem), you are limited in your options for getting results faster. If you're lucky, you can find some speedup of your problem (i.e. going to a better level of O-complexity; O(2^n) -> O(n^2) would be a huge speedup, but doesn't happen often), or you can toss more resources at it. Yes, it'll still be slow, but if it takes you a year on a supercomputer, that's quite a bit better than spending 1,000 years waiting on a regular computer.
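
      (As a toy illustration of the point about O-complexity, with hypothetical operation counts, a quick Python sketch:)

      # Compare exponential vs quadratic work for a modest problem size.
      n = 60
      exponential = 2 ** n      # ~1.2e18 ops: about a second on an exaflop machine
      quadratic = n ** 2        # 3,600 ops: instant on anything
      print(exponential / quadratic)    # ~3e14: the win from the better algorithm
      # A 1000x faster computer (~2^10) only buys an O(2^n) problem about 10 more units of n.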

    • Well, what is it about weather simulation that requires so much work?

      Might have something to do with the billions upon billions of billions of billions of atoms that need to be simulated.

      The more processing power one has, the finer the simulation parameters.

    • Re: (Score:3, Informative)

      by John Hasler ( 414242 )

      I mean, specifically, what can you do on something that fast that you couldn't do on one 1,000 (or 1,000,000) times slower? What kind of tasks need that much processing power?

      Detailed, 3-D simulation of things like nuclear explosions and scramjet engines.

      For example, you normally hear about them being used for things like weather simulation. Well, what is it about weather simulation that requires so much work?

      Accuracy. Weather Prediction [wikipedia.org]

      • Detailed, 3-D simulation of things like nuclear explosions

        I've wondered about this for some time. If countries like the USA have enough nukes to nuke the world several times over, and have had that capability for decades now, how are these simulations useful?

    • Re: (Score:3, Informative)

      Imagine a simulation in 3-D space. You model the space by a cube of 100x100x100 grid points. That's one million data points. Now say you have to do some calculation on them which scales quadratically in the number of data points. Say you manage to finish the calculation in one hour on some computer.

      OK, but now you notice that 100 data points in each direction is too inaccurate. You need 1000 points per direction to be reasonably accurate. So now your data set is not one million, but one billion data points. And your O(N^2) calculation now takes a million times as long: on the order of 114 years.
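
      (The arithmetic behind that, as a minimal Python sketch using the same assumed numbers:)

      # Refining a 3-D grid from 100^3 to 1000^3 points under an O(N^2) algorithm.
      coarse = 100 ** 3                 # 1e6 data points, assumed to take 1 hour
      fine = 1000 ** 3                  # 1e9 data points
      factor = (fine / coarse) ** 2     # O(N^2): 1e6 times more work
      hours = 1.0 * factor
      print(hours / (24 * 365), "years")   # ~114 years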

      • Thanks! That's what I was wondering about. So is that the problem they're trying to solve: current models are too coarse and scientists think they can get more accurate results by increasing the points/partitions/whatever?

      • Re: (Score:3, Interesting)

        You almost certainly don't want to wait 114 years to get your results.

        You know, back in the day, we had some patience. Plus, the notion that one would have to wait 114 years to get results made us develop better algorithms, not just throw cycles at a problem. Kids these days... Now get off my lawn!

    • Re:What's the need? (Score:5, Informative)

      by Chris Burke ( 6130 ) on Wednesday June 23, 2010 @01:36PM (#32668234) Homepage

      There are broad classes of algorithms where you can make good use of essentially arbitrary amounts of computing power to get better answers. When doing physical simulations of something like airflow over a jet wing, or the movement of a weather system, or the explosion of a hydrogen bomb, you'll break everything up into tiny units that you treat as monolithic elements whose behavior can be treated relatively simply, and calculate what happens to them over some tiny timescale, call the result the new state of the universe, and repeat. This is called "finite element analysis".

      Because you're calculating everything in discrete steps, though, errors creep in and accumulate. The more processing power you have, the more elements you can use and the smaller time scales you can calculate over and get a more accurate answer in the same amount of time. The reason it's unacceptable to do the same calculation but have it go 1,000 or 1,000,000 times slower is that these simulations might already take hours, days, weeks, or even longer. Even the longest DoD contract needs an answer to the behavior of a proposed jet fighter wing in less than 1,000,000 days. :)

      Scientific computing is an area where there will always be a use for more processing power.

      There are other areas where it can be important, when you have real-time constraints and can't just reduce your accuracy to make it work. I recall a story from an advanced algorithms class where a bank was handling so many transactions per day that the time it took to process them all was more than 24 hours. Obviously this was a problem. The solution in that case was to modify the algorithm, but that's not always possible, and then you need more computing power. This is a little different in that you need the extra power to allow growth, as opposed to science, where you could hand them an exaflop computer today and they'd be able to use it to its fullest.
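
      (A toy version of that kind of time-stepped simulation, done as a 1-D heat equation with finite differences rather than true finite elements, but with the same cost structure: work grows with elements times timesteps. A Python sketch:)

      # Explicit time-stepping on a 1-D grid: each cell is updated from its neighbours.
      def simulate(n_cells, n_steps, alpha=0.4):
          u = [0.0] * n_cells
          u[n_cells // 2] = 1.0                      # initial hot spot
          for _ in range(n_steps):
              new = u[:]
              for i in range(1, n_cells - 1):        # each cell only sees its neighbours
                  new[i] = u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
              u = new
          return u

      # Doubling the resolution doubles the cells and (for numerical stability) roughly
      # quadruples the timesteps needed to cover the same physical time: ~8x the work
      # for 2x the detail, and it only gets worse in 3-D.
      simulate(100, 1_000)
      simulate(200, 4_000)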

      • The reason it's unacceptable to do the same calculation but have it go 1,000 or 1,000,000 times slower is that these simulations might already take hours, days, weeks, or even longer. Even the longest DoD contract needs an answer to the behavior of a proposed jet fighter wing in less than 1,000,000 days. :)

        Going along with what you say... Another thing to consider is that in the process of designing something, you don't just do one simulation and declare it finished. If you knew what the answer would be, you wouldn't need to run the simulation at all. Ideally, you will want to iteratively search through combinations of input variables to determine an optimum in terms of output variables.

        • by dkf ( 304284 )

          Ideally, you will want to iteratively search through combinations of input variables to determine an optimum in terms of output variables.

          One thing you can do with enough computing power is work in near real time, interactively steering the simulation towards a situation that is interesting. Gamers will be familiar with why this can be a good idea, but it very useful when the effect you are actually studying is an emergent one of some physical situation where the input parameters have to be very exact to trigger. Certain types of mixing of immiscible fluids (on the way to making emulsions) can be very interesting, and the physics there is bot

    • by koxkoxkox ( 879667 ) on Wednesday June 23, 2010 @01:36PM (#32668236)

      If you take weather simulation:

      At a given point, you have a bunch of physical equations taking a set of parameters at time t and giving you the same parameters at time t+1. Of course, the smaller the time step, the better the result.

      To get the best possible result, you should consider the whole globe at once (think of phenomena like the thermohaline circulation, for example). However, you should also use the finest grid possible, to take into account the heterogeneity of the geography, the local variations due to rivers, etc. It is also important to use a three-dimensional model if you want to capture atmospheric circulation, evaporation, etc.

      I forget the exact numbers, but Wikipedia gives an example of a current global climate model using a grid of 500,000 points (see http://en.wikipedia.org/wiki/Global_climate_model [wikipedia.org] ), which is a pretty coarse resolution: tiles of tens of thousands of square kilometers.

      With current computing capabilities, we cannot go much further for a global model. This is already an impressive improvement compared to the first models, which were two-dimensional and used very simplified equations, overlooking a large number of important physical mechanisms.

      At the same time, we have satellite data several orders of magnitude more precise. Data from the ASTER satellite have been processed to provide a complete elevation map of the globe with a theoretical resolution of 90 m. Vegetation cover can be obtained at a resolution of 8 m using commercial satellites like FORMOSAT-2. Even soil moisture can be measured at a resolution of around 50 km thanks to the new SMOS satellite.

      These data sets are already used at the local level, for example to model the transfer between the soil and the atmosphere, taking the vegetation into account (SVAT modelling). There is no doubt that a global climate model using a finer grid and these data would significantly improve its predictions.
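
      (Rough numbers behind that, as a Python sketch; the 20 vertical levels are an assumption, and the 500,000-point total is the Wikipedia example cited above:)

      # How coarse is a 500,000-point global grid, and what would satellite resolution cost?
      earth_surface_km2 = 510e6
      total_points = 500_000
      vertical_levels = 20                               # assumed
      columns = total_points / vertical_levels           # ~25,000 surface tiles
      tile_km2 = earth_surface_km2 / columns             # ~20,000 km^2 per tile
      print(tile_km2, "km^2 per tile, ~", tile_km2 ** 0.5, "km on a side")
      # Matching the ~90 m ASTER elevation data would need roughly
      # (143 km / 0.09 km)^2, i.e. ~2.5 million times more columns per level.
      print((tile_km2 ** 0.5 / 0.09) ** 2)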

    • Re:What's the need? (Score:4, Informative)

      by chichilalescu ( 1647065 ) on Wednesday June 23, 2010 @01:42PM (#32668336) Homepage Journal

      In fluid dynamics simulations (which include weather stuff), there are huge computational problems. I work in the field, so bear with me a little.

      The best model we have so far for fluids is to use balance equations (look up the Navier-Stokes equations). This means that in order to describe the evolution of a fluid in a given domain, we need to split the domain into small cells and then numerically integrate the balance equations. To put it simply, you have to numerically integrate a system of ordinary differential equations with many, many variables (degrees of freedom).
      For a simple but "correct" Navier-Stokes simulation, the number of degrees of freedom is proportional to Re^(9/4), where Re is the Reynolds number (the memory requirements are proportional to the number of degrees of freedom). This Reynolds number is, for typical systems like the atmosphere, of the order of at least 10^4-10^6 (you can look up typical values on Wikipedia if you're interested). Furthermore, the number of timesteps needed for a "correct" simulation is proportional to Re^(3/4).

      But these are not the most complicated simulations that are to be run on such machines. Research on issues like controlled nuclear fusion needs to address much more demanding problems.

      Numerical simulations of physical systems are inherently hard, because they scale polynomially with their complexity. However, they are generally cheaper than actual experiments, and you have access to more data.
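
      (Plugging a couple of Reynolds numbers into those scalings, as a Python sketch that ignores the constants of proportionality:)

      # DOF ~ Re^(9/4), timesteps ~ Re^(3/4), so total cell-updates ~ Re^3.
      for Re in (1e4, 1e6):
          dof = Re ** (9 / 4)       # degrees of freedom (and roughly memory)
          steps = Re ** (3 / 4)     # number of timesteps
          print(f"Re={Re:.0e}: ~{dof:.1e} DOF, ~{steps:.1e} timesteps, ~{dof * steps:.1e} updates")
      # Re=1e6 gives ~1e18 cell-updates in total, which is the regime where
      # exaflop-class machines start to matter.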

    • by Surt ( 22457 )

      Imagine you are simulating weather at a resolution of 1,000 cubic meters. That's a cube 10 meters on a side; consider it roughly the size of a house. Not very accurate, right? Because there is a lot of detail going on within those 1,000 cubic meters that your simulation is ignoring.

      But: it's also a vast quantity of data to consider, even at that level of inaccuracy. Just to simulate the weather over the united states you'd have about 20,740,933,333 cells to compute. 20 Billion cells to com

    • by GenP ( 686381 )
      Real-time (or better!) simulation of biological systems [wikipedia.org].
      • Yes, ominously the article states that it will be running a "self-aware OS".

        I'm of the view that there's a good chance that current or near-future supercomputers would be able to simulate a human brain in real-time. This is because there's an awful lot of computational redundancy in real brains, given what they're made from, and given their need to self-construct.

        All that's needed is to reverse-engineer the algorithms used by each part of the brain, and to properly connect them up.

    • Big thanks to everyone who replied! Those are the kinds of answers I was looking for. I have a friend who farms and he has some enormous machines in his fields. I felt the same way about supercomputers as I did about his farming equipment: "Good grief! That must be useful for something or he wouldn't have bought it, but I can't imagine what I'd ever use such a thing for."

      Special thanks to everyone who didn't interpret that as an attack on supercomputing or make "640KB ought to be enough for everybody" jokes.

    • Resolutions and frame rates to make a grown gamer weep.
    • The beauty of computers is that they are fast enough that we can use massively iterative processes to get very accurate answers faster than you could possibly do by hand, and in situations where advanced techniques like calculus won't work.

      Running with your example of weather simulation; a 'simple' way to do it is to lay out a 3-d grid and capture the current conditions at that grid. Then advance it by (say) a second, with each point being influenced by each point around it, the sun and anything else you ca

  • Old news (Score:3, Informative)

    by jdb2 ( 800046 ) * on Wednesday June 23, 2010 @01:00PM (#32667814) Journal
    The DOE, as well as Oak Ridge, Los Alamos, and Sandia National Laboratories, already has programs in place to develop an "exascale" system by 2018 (the date at which Moore's Law predicts such systems will become possible).
    The top companies competing for the government funds are, not surprisingly, IBM and Cray.

    See these two older /. stories here [slashdot.org] and here [slashdot.org].

    jdb2
    • As I rode my motorcycle past the Oak Ridge exit on the interstate on my way to North Carolina, I wondered why computing centers are located where coal is used for power generation, whereas Google places its computing centers where cheap, renewable energy is available. Probably government pork (i.e., "I want this in my district").
      • by jdb2 ( 800046 ) *

        As I rode my motorcycle past the Oak Ridge exit on the interstate on my way to North Carolina, I wondered why computing centers are located where coal is used for power generation, whereas Google places its computing centers where cheap, renewable energy is available. Probably government pork (i.e., "I want this in my district").

        Heh, yeah, especially since it's estimated that the power consumption of an exaflop machine would, at a minimum, be 20 megawatts, at least with the projected advancement of current technology.

        jdb2

        • Some of GE's newest wind turbines generate upwards of 3.6 MW when spinning at ideal speed (13-17 mph). So that's what, 6 turbines to feed this massive computing power? (Yes, I know, the wind doesn't always blow, and a wind park rarely generates its nameplate capacity.) Coal power can bite my shiny metal ass.
      • by bws111 ( 1216812 )

        Oak Ridge is in the TVA, which is hydro, not coal. The lab is there precisely because of the available power.

  • ...for hellaflops [blogspot.com].

  • That's a little odd, switching from an SI prefix (metric), which mostly uses Greek roots (petaflops), to the short scale, which uses Latin (quintillion-flop).
  • Since their current petaflop systems are clearly not enough for them, can I pick up a few for $5 a piece at their next salvage sale?

    • Since their current petaflop systems are clearly not enough for them, can I pick up a few for $5 a piece at their next salvage sale?

      Sure, just know that anything that ever could have held data (hard drives, RAM, registers on CPUs, etc.) will be destroyed first.

  • Pfff, old news. It will produce 42 as final output, and then we'll have it build another machine capable of performing one peta-quazillion calculations per second.
  • FLOPS, not FLOP (Score:3, Informative)

    by 91degrees ( 207121 ) on Wednesday June 23, 2010 @02:10PM (#32668730) Journal
    You should realise that the "S" stands for seconds. Okay - it doesn't matter that much, but this is meant to be a technical site. The editors should really get this stuff right.
  • They should buy a data center and fill it with D. E. Shaw's special purpose hardware for doing particle simulations: http://en.wikipedia.org/wiki/Anton_(computer) [wikipedia.org] , and instead of proposing grants for new software development, propose grants to keep the data center's queue full of interesting chemical simulations to run.

  • by BoldAndBusted ( 679561 ) on Wednesday June 23, 2010 @09:53PM (#32673342) Homepage

    What is really needed is faster *bus speeds*. So many CPUs just sit around waiting for data that sits across the bus. That's where the dramatic throughput improvements lie. Pretty please, DARPA? :)
