
DARPA Targets Computing's Achilles Heel: Power

Posted by timothy
from the never-a-good-time-to-buy-a-computer dept.
coondoggie writes "The power required to increase computing performance, especially in embedded or sensor systems, has become a serious constraint and is restricting the potential of future systems. Technologists from the Defense Advanced Research Projects Agency are looking for an ambitious answer to the problem and will next month detail a new program they expect will develop power technologies that could boost system energy efficiency from today's roughly 1 GFLOPS/watt to 75 GFLOPS/watt."
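To put the two efficiency figures from the summary in perspective, here is a quick back-of-the-envelope sketch (the 1 and 75 GFLOPS/watt numbers come from the summary; the petaflop workload is just an illustrative scale):

```python
# Power needed to sustain a given compute rate at a given efficiency.
def power_watts(flops, gflops_per_watt):
    return flops / (gflops_per_watt * 1e9)

PETAFLOP = 1e15  # 10^15 floating-point operations per second

# Today's baseline: ~1 GFLOPS/watt
today = power_watts(PETAFLOP, 1)    # 1,000,000 W = 1 MW
# DARPA's stated target: 75 GFLOPS/watt
target = power_watts(PETAFLOP, 75)  # ~13,300 W = ~13 kW

print(f"1 PFLOPS today: {today / 1e6:.1f} MW")
print(f"1 PFLOPS at 75 GFLOPS/W: {target / 1e3:.1f} kW")
```

The same petaflop of compute drops from a megawatt to roughly the draw of a few household circuits.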
  • by FooAtWFU (699187) on Sunday January 29, 2012 @03:09PM (#38858823) Homepage

    It occurred to me the other day that, while I have been programming and working with network monitoring tools and the like for a while, and I can get an email alert (or text message) whenever a piece of equipment goes down, the rest of the world doesn't have that sort of capability. A big chunk of California Highway 1 could fall into the ocean, and people could fall off after it, and no one would notice until someone called it in. If my hard disk is on fire, I can get a message, but if the woods are on fire, you have to wait for someone to see the smoke.

    Sensors and the like are pretty awesome to have.

  • by stevelinton (4044) <sal@dcs.st-and.ac.uk> on Sunday January 29, 2012 @03:21PM (#38858895) Homepage

    In a sense. There is a widespread view that we will need 1-exaflop supercomputers by roughly 2019 or 2020 for a whole range of applications, from aircraft design and biochemistry to processing data from new instruments like the Square Kilometre Array. On current trends, such a computer would need gigawatts of power (literally), which among other things would force it to be located right next to a large power station not needed for other purposes. This is felt to be a bit of a problem, and this DARPA initiative is just one small part of the effort to tackle it and get the exaflop machine down to 50 MW or so, which is the most that can be routinely supplied by standard infrastructure.
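The parent's numbers check out. Taking the exaflop goal and the 50 MW budget from the comment (the rest is arithmetic):

```python
EXAFLOP = 1e18  # 10^18 floating-point operations per second

# At today's ~1 GFLOPS/watt, an exaflop machine draws:
power_today_w = EXAFLOP / 1e9  # 1e9 W = 1 GW, literally a power station's output

# To fit the 50 MW infrastructure budget, the minimum efficiency is:
min_eff_gflops_per_w = EXAFLOP / 50e6 / 1e9  # 20 GFLOPS/watt

print(f"{power_today_w / 1e9:.0f} GW today; need >= "
      f"{min_eff_gflops_per_w:.0f} GFLOPS/W to fit in 50 MW")
```

So even hitting 20 GFLOPS/watt, well short of DARPA's 75, would already bring an exaflop machine within standard infrastructure.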

  • Turing Tax (Score:5, Interesting)

    by Wierdy1024 (902573) on Sunday January 29, 2012 @03:31PM (#38858969)

    The amount of computation done per unit energy isn't really the issue. The problem is the amount of _USEFUL_ computation done per unit energy.

    The majority of power in a modern system goes into moving data around and other tasks which are not the actual desired computation. Examples include incrementing the program counter, figuring out instruction dependencies, and moving data between levels of cache. The actual computation on the data is tiny in comparison.

    Why do we do this, then? Most of the power goes to what is informally called the "Turing Tax" - the extra work required to allow a given processor to be general purpose, i.e. to compute anything. A single-purpose piece of hardware can only do one thing, but it is vastly more efficient, because all the power used figuring out which bits of data need to go where can be left out. Consider it like the difference between a road network that lets you go anywhere and a straight road with no junctions between your house and your work. One is general purpose (you can go anywhere); the other is only good for one thing, but much quicker and more efficient.

    To get nearer our goal, computers are getting components that are less flexible. Less flexibility means less Turing Tax. For example video encoder cores can do massive amounts of computation, yet they can only encode video - nothing else. For comparison, an HD video camera can record 1080p video in real time with only a couple of Watts. A PC (without hardware encoder) would take 15 mins or so to encode each minute of HD video, using far more power along the way.
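The energy gap in that video example is easy to quantify. The ~2 W encoder and the 15-minutes-per-minute software encode time are from the comment above; the ~100 W CPU draw is my own assumption for the sketch:

```python
# Energy to encode one minute of 1080p video, back-of-the-envelope.
# Fixed-function hardware encoder: ~2 W, running in real time.
hw_energy_j = 2 * 60             # 120 J per minute of video

# General-purpose CPU: ~100 W assumed, 15 minutes per minute of video.
cpu_energy_j = 100 * 15 * 60     # 90,000 J per minute of video

ratio = cpu_energy_j / hw_energy_j
print(f"Software encode uses ~{ratio:.0f}x the energy of the hardware encoder")
```

Even if the CPU figure is off by a factor of two either way, the fixed-function block wins by two to three orders of magnitude, which is the whole point about the Turing Tax.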

    The future of low power computing is to find clever ways of making special purpose hardware to do the most computationally heavy stuff such that the power hungry general purpose processors have less stuff left to do.

  • by Luckyo (1726890) on Sunday January 29, 2012 @06:11PM (#38859905)

    The problem with lithium is that it isn't mushrooms and berries: you can't just walk in and pick it up. It's also not oil: you can't just put a hole in the ground, connect it to the pumping machinery, and have product flow out. You need actual ore mines, with huge, easy-to-sabotage, hard-to-fix machinery.

    And finally, it's solid and heavy: a total bitch to move out of the center of a war-torn nation whose people are the world's best specialists in asymmetric warfare, fighting against you both economically and in terms of general feasibility.

  • Re:Turing Tax (Score:5, Interesting)

    by Kjella (173770) on Sunday January 29, 2012 @08:42PM (#38860617) Homepage

    To get nearer our goal, computers are getting components that are less flexible.

    Actually, computers have lost lots of dedicated processing units because it just wasn't worth doing in dedicated hardware; that's where, for example, softmodems (aka winmodems) came from. And with GPUs going from fixed pipelines to programmable shader units, they too have gone the other way. Dedicated hardware only works if you are doing a large number of exactly defined calculations from a well-established standard, like AES or H.264. Even in a supercomputer the job just isn't static enough: if the researchers have to tweak the algorithm, are you going to build a new computer? You can have parameters, but the moment they say "oh, and we have to add a new correction factor here" you're totally screwed. Not going to happen.
