DARPA Targets Computing's Achilles Heel: Power
coondoggie writes "The power required to increase computing performance, especially in embedded or sensor systems, has become a serious constraint and is restricting the potential of future systems. Technologists from the Defense Advanced Research Projects Agency are looking for an ambitious answer to the problem and will next month detail a new program that DARPA expects will develop power technologies capable of boosting system power efficiency from today's 1 GFLOPS/watt to 75 GFLOPS/watt."
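Converting those FLOPS-per-watt figures into energy per operation makes the scale of the goal concrete. The sketch below is nothing more than that unit conversion, using only the numbers from the summary:

```python
# Energy per floating-point operation implied by a FLOPS/watt figure.
# 1 GFLOPS/watt = 10^9 floating-point operations per joule.
def joules_per_flop(gflops_per_watt: float) -> float:
    return 1.0 / (gflops_per_watt * 1e9)

today = joules_per_flop(1.0)    # ~1 nJ per FLOP
target = joules_per_flop(75.0)  # ~13 pJ per FLOP
print(f"today: {today:.2e} J/FLOP, DARPA target: {target:.2e} J/FLOP")
```

In other words, the program is asking for each operation to cost roughly a seventy-fifth of the energy it does today.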
let me answer that with a question (Score:2)
Re:let me answer that with a question (Score:5, Insightful)
No, the problem is getting hold of raw materials for batteries. Mobile computing is on the rise and the West doesn't want to be too dependent on foreign mineral deposits. More efficient computers = smaller batteries = smaller amounts of lithium etc. needed.
Re: (Score:1)
Re: (Score:3)
Do you realize how much CPU is required to decode h.264 1080p pr0n? What's the use of a laptop without it?
We used to do that on a 200MHz dual-core ARM with some hardware decoding assist. If I remember correctly the whole system used less than 1.5W, and much of that was the video encoder for the TV (we were using analogue component at the time, not HDMI).
And with GPU assist an Atom with a low-end GPU can happily play 1080P H.264.
Re: (Score:2)
In other words, you were outputting 480i. And your source was probably not better than 480p.
Uh, no. Amongst other things I was writing the drivers to control the video output, so I think I know what we were displaying.
Re:let me answer that with a question (Score:5, Informative)
And with GPU assist an Atom with a low-end GPU can happily play 1080P H.264.
Actually that depends on the bitrate of the encoding far more than whether it's "1080p" or not. I've seen plenty of "1080p H.264 video" that's got lousy quality the moment there's any action.
Not to mention what profile of h.264 was being used. High Profile requires much more computational power than Main. We're also assuming the video can be GPU accelerated. You can't just take any h.264 video and get hardware acceleration; the video has to be encoded following certain rules about bitrate, b-frames, etc., otherwise it will all be decoded in software.
Re: (Score:2)
Sorry, but the natural assumption that power consumption decreases with decreasing transistor sizes went out the window pretty much right around the 90nm node. That was the inflection point where leakage went from being a minor nuisance to a major contributor comparable to switching power. Leakage goes up with decreasing transistor size, and so now it's a struggle to make sure that the new generation of parts uses merely the same amount of power as the previous generation.
Lowering the power of devices today
Re:let me answer that with a question (Score:5, Interesting)
It occurred to me the other day that, while I have been programming and working with network monitoring tools and the like for a while, and I can get an email alert (or text message) whenever a piece of equipment goes down, the rest of the world doesn't have that sort of capability. A big chunk of California Highway 1 could fall into the ocean, and people could fall off after it, and no one would notice until someone called it in. If my hard disk is on fire, I can get a message, but if the woods are on fire, you need to wait for someone to see the smoke.
Sensors and the like are pretty awesome to have.
Re: (Score:2)
The problem is connectivity with someone who cares. The last mile is notoriously expensive, even with wireless. You could put lots of sensors along Hwy 1, but you'd need something to say where it started and stopped sliding into the ocean. You can actually run a piece of wire, calibrate it, and use two of them to cipher stop and start by using time domain reflectometry -- the technique used to find data cable problems. Somewhere, that wire needs to be connected so that a computer will cough an alert when condit
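For anyone curious how the TDR part works: the fault distance falls out of the round-trip time of a reflected pulse. A minimal sketch, assuming a typical velocity factor for copper cable (the 0.66 value is illustrative, not from the parent post):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def fault_distance_m(round_trip_s: float, velocity_factor: float = 0.66) -> float:
    """Distance to a cable fault, given the echo's round-trip time.

    The pulse travels to the fault and back, hence the division by 2.
    velocity_factor is the fraction of c at which signals propagate in
    the cable; ~0.66 is typical for coax but varies by cable type.
    """
    return velocity_factor * C * round_trip_s / 2.0

# A reflection arriving 10 microseconds after the pulse was sent:
print(f"{fault_distance_m(10e-6):.0f} m")  # ~989 m down the wire
```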
Re: (Score:1)
Ah, and they are cheap as well!
1984 called. (Score:2)
Sensors and the like are pretty awesome to have.
Indeed.
- BIG BROTHER
Re: (Score:1)
Re: (Score:2)
Why do we need so much energy? Why do we need so much processing power? Why do we need so much stuff?
All those questions are along the same lines. They all have the same answer. To the best of my knowledge the answer is either a deterministic one based on Darwinism, or "people are addicted to power".
Re: (Score:2)
Indeed. The implicit assumption is that the utility of an embedded device is linear with the performance of the device.
But of course, a new media format needs to be introduced sooner or later, since the prices of Blu-ray are already starting to lose their "premium" justification and become just plain ordinary.
Re: (Score:1)
I don't know if DARPA has other things in mind, but the main reason most research into the power efficiency of computing gets done is that the energy consumption of supercomputing clusters is becoming more and more of both a cost and a logistical problem. In fact, in the gov't-sponsored research on what it'll take for us to develop an exaflop computer (two years old now, I grant you), significantly increased power efficiency is considered absolutely necessary. Mobile computing is more of an inspiration for ef
Re: (Score:2)
Re:let me answer that with a question (Score:4, Interesting)
The problem with lithium is that it isn't mushrooms and berries. You can't just walk in and pick it up. It's also not oil. You can't just put a hole in the ground, connect it to the pumping machinery and have oil. You need actual ore mines, with huge, easy-to-sabotage, hard-to-fix machinery.
And finally, it's solid and heavy. It's a total bitch to move from the center of a war-torn nation that has the world's best specialists in asymmetric warfare fighting against you - both economically and in terms of general feasibility.
Re:let me answer that with a question (Score:5, Informative)
In a pinch you can extract lithium from sea water. That's basically what a lithium deposit is... an old sea that dried up and left the salts. Lithium isn't a big fraction of a battery's cost, weight or volume. Please everyone stop being silly. The cobalt that is often used in lithium batteries is far more expensive, rare and used in larger proportions. We just don't call them cobalt batteries so no one knows about that part.
And my mod points just expired. (Score:1)
Hope someone else bumps you.
Re: (Score:2)
In a pinch, you can extract gold from sea water as well. That doesn't make it viable to do so either.
Re:let me answer that with a question (Score:5, Informative)
The problem is not just generating the power... it's delivering it and consuming it without breaking/melting. And that's what they're getting at here - getting more FLOPS per watt... not finding out how to push more watts into a system. A silly amount of the energy going into a supercomputer comes out as heat... and a silly amount of energy is then used to remove that heat. Hopefully, by significantly improving the energy efficiency of chips and systems, we can make them a lot more powerful without them needing a whole lot more power. And I haven't even mentioned the mobile/embedded side of the spectrum, where it's about battery life and comfortable operating temperatures... the same energy efficiency goals apply.
This is the sort of thing we over the pond are very interested in too. Like for example *cough* the Microelectronics Research Group [bris.ac.uk] that I'm a part of.
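The cooling overhead the parent mentions is what data centres track as PUE (power usage effectiveness): total facility power divided by the power that actually reaches the computing hardware. A small illustration, with made-up but plausible numbers:

```python
def total_facility_power_mw(it_load_mw: float, pue: float) -> float:
    """Total power drawn for a given IT load at a given PUE.

    PUE = total facility power / IT equipment power, so everything
    above 1.0 is overhead (mostly cooling and power delivery).
    """
    return it_load_mw * pue

# A hypothetical 10 MW supercomputer: a mediocre PUE of 1.8 versus a
# good 1.1 means 7 MW of difference, spent mostly on removing heat.
for pue in (1.8, 1.1):
    print(f"PUE {pue}: {total_facility_power_mw(10.0, pue):.0f} MW total")
```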
Re: (Score:3)
Erm, not to be overly pedantic, but isn't *all* of the energy consumed by a supercomputer (or any other device) eventually converted into heat? First law of thermodynamics and all that?
Re:let me answer that with a question (Score:5, Interesting)
In a sense. There is a widespread view that we will need 1-exaflop supercomputers by roughly 2019 or 2020 for a whole range of applications, from aircraft design and biochemistry to processing data from new instruments like the Square Kilometre Array. On current trends, such a computer will need gigawatts of power (literally), which amongst other things would force it to be located right next to a large power station that wasn't needed for other purposes. This is felt to be a bit of a problem, and this DARPA initiative is just one small part of the effort to tackle this and get the exaflop machine down to 50MW or so, which is the most that can be routinely supplied by standard infrastructure.
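The parent's gigawatt figure follows directly from the efficiency numbers in the summary; here is that arithmetic spelled out (the 50MW ceiling is the parent's number, everything else is unit conversion):

```python
EXAFLOP = 1e18  # floating-point operations per second

def system_power_mw(flops: float, gflops_per_watt: float) -> float:
    """Power draw of a system of a given speed at a given efficiency."""
    return flops / (gflops_per_watt * 1e9) / 1e6

print(system_power_mw(EXAFLOP, 1.0))   # today's ~1 GFLOPS/W: 1000 MW, a full gigawatt
print(system_power_mw(EXAFLOP, 20.0))  # 20 GFLOPS/W: 50 MW, the infrastructure ceiling
print(system_power_mw(EXAFLOP, 75.0))  # DARPA's 75 GFLOPS/W target: ~13 MW
```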
Re: (Score:2)
With an exaflop computer, simulating the human brain is looking like it might be possible. If we can get a simulated brain working as well as a real brain, there's a good chance we can make it better too, because our simulated brain won't have the constraints that real brains have (i.e. not limited by power/food/oxygen supply, not limited by relatively slow neurones, and doesn't have to deal with cell repair and disease)
Basically, if current models of the brain are anywhere near correct, and current estimates
Re: (Score:2)
what makes a brain "better"? thinking faster, or thinking better thoughts?
Re: (Score:2)
What's your point, if you even have one? Just pouting? If you wanna be all relativistic, faster computing doesn't really help with the heat death of the universe so it's an exercise in futility, as are "good thoughts" no matter how they're defined. My point is, we're already derping with our current "hardware", why would supercomputers be put to any better use?
Re: (Score:3)
With an exaflop computer, simulating the human brain is looking like it might be possible.
Take a moment, relax, and then try to answer this question: What does computational speed have to do with it?
The point is that simulations are not linked to computational speed. Some simulations that we do today are performed thousands of times faster than "reality" while most others that we do today are performed thousands, or even millions of times slower. The speed of the simulation is irrelevant to their existence, so stop pretending that speed has any sort of importance to simulating something like
Re: (Score:1)
Fair enough, although it shouldn't be forgotten that just the memory requirement for a simulation on the scale of an entire human brain is huge (the specific order of magnitude necessary for such a computation is unknown as it is unclear at precisely what level the human brain does its computation). A modern supercomputer can't simulate a human brain even at a trillion+ times slowdown due to simply not having the memory for the computation.
Furthermore, for medical use, a million times slowdown on a simulati
Re: (Score:2)
If you simulate a human brain a trillion times slower than realtime, and want to spend 10 simulated years teaching it stuff, you're going to be a very old man by the time your experiments complete...
Speed is important...
Re: (Score:2)
I think the point is that we already have human brains that we can teach. There's no point having a computer pulling down a whole power station's worth of power just to simulate what is in the end only another human brain.
I am interested in AI and physics simulations myself so I'm not trying to say that simulating a brain isn't an interesting goal that might have something to teach us - but IMO if your end goal is useful intelligence for using in everyday life, there is no point in it. We already have billi
Re: (Score:2)
Some simulations that we do today are performed thousands of times faster than "reality" while most others that we do today are performed thousands, or even millions of times slower. The speed of the simulation is irrelevant to their existence
For a simulation to be useful it must reach desired results in a reasonable amount of time. If you are simulating something that only takes a few milliseconds in real life, then a simulation that runs a thousand times slower than reality will still feel basically instant, and one a million times slower will be done in around an hour. OTOH, if you are simulating something that takes years in real life, then with a 1000-times slowdown your simulation will be running for millennia.
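The point is easy to make quantitative; this sketch uses only the slowdown factors already mentioned in the thread:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def wall_clock_years(simulated_seconds: float, slowdown: float) -> float:
    """Real time needed to simulate a given span at a given slowdown."""
    return simulated_seconds * slowdown / SECONDS_PER_YEAR

# A few milliseconds at a million-times slowdown: under an hour.
print(wall_clock_years(3e-3, 1e6) * SECONDS_PER_YEAR / 60, "minutes")  # ~50 minutes
# Ten simulated years at a 1000x slowdown: ten millennia of real time.
print(wall_clock_years(10 * SECONDS_PER_YEAR, 1e3), "years")           # 10,000 years
```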
Re: (Score:2)
Exaflop computer
- limited by power constraints (as per this article)
- limited by connectivity (nowhere near as many connections as a neurone)
- limited by lack of unit repair (has downtime when repair needed)
- limited by possibility of rogue programs, and damage
Slow neurones and the slow links between them don't actually seem to be an issue ...?
Seems more limited than a brain to me ...?
Re: (Score:2)
But each of those limitations improves approximately with Moore's Law. The brain hasn't changed much this century. At some point one will surpass the other.
Re: (Score:2)
Computers have been faster than brains for most of their history; the thing that seems to matter is not speed but connections ...
Computers still have relatively limited numbers of these (compared to brains)
More of what we have now does not seem to be the solution, we are just getting power-hungry behemoths that are very good at hyper-complex tasks but still no good at what we think is simple...
Moore's Law has a limit, we are nearly at the atomic scale and quantum effects are becoming more and more of an i
Re: (Score:2)
With an exaflop computer, simulating the human brain is looking like it might be possible.
It's looking like it's going to be rather more complex than that. Human brains use lots of power (for a biological system) and they do that not by being able to switch circuits very rapidly, but rather by being massively parallel. How to map that into silicon is going to be really challenging because it will require a totally different approach to the current ones; dealing with failures of individual components will be really a large part of the problem. To what extent will the power consumption itself prov
Re: (Score:2)
dealing with failures of individual components will be really a large part of the problem.
Highly doubtful, since the brain itself is very sloppy about the whole process. Neurons don't fire at exact thresholds, frequent permanent damage events plague them as we go through life, and even diet can have measurable effects on brain chemistry that affect how signals propagate as well as cause damage.
What I'm saying is that there is clearly an extremely high degree of redundancy built into brains because of the reality of physical randomness, and that there is no reason to believe that any small part
Re: (Score:2)
With an exaflop computer, simulating the human brain is looking like it might be possible.
The main problem of simulating a brain isn't the computational power required.
Re:let me answer that with a question (Score:5, Insightful)
Considering energy does not come cheap, there is a very good commercial reason to save on one of the larger costs in computing (or any other activity).
And even though the US hosts the world leaders in denial of CO2-related climate change, it is still an ever more important consideration for many people, even in the US.
Re: (Score:2)
Every aspect of fuel use and cooling is being looked at: from HQ servers and air conditioning, to servers in a tank, to sensor networks.
They all need lots of electrical power that comes from very long fuel supply networks.
Re: (Score:2)
The word you are looking for is "crisis".
Crysis is a pun based on the Crytek company name and the aforementioned word.
Turing Tax (Score:5, Interesting)
The amount of computation done per unit of energy isn't really the issue. Instead, the problem is the amount of _USEFUL_ computation done per unit of energy.
The majority of power in a modern system goes into moving data around and other tasks which are not the actual desired computation. Examples of this are incrementing the program counter, figuring out instruction dependencies, and moving data between levels of caches. The actual computation on the data is tiny in comparison.
Why do we do this then? Most of the power goes to what is informally called the "Turing Tax" - the extra things required to allow a given processor to be general purpose, i.e. to compute anything. A single-purpose piece of hardware can only do one thing, but is vastly more efficient, because all the logic for figuring out which bits of data need to go where can be left out. Consider it like the difference between a road network that lets you go anywhere and a road with no junctions in a straight line between your house and your work. One is general purpose (you can go anywhere), the other is only good for one thing, but much quicker and more efficient.
To get nearer our goal, computers are getting components that are less flexible. Less flexibility means less Turing Tax. For example video encoder cores can do massive amounts of computation, yet they can only encode video - nothing else. For comparison, an HD video camera can record 1080p video in real time with only a couple of Watts. A PC (without hardware encoder) would take 15 mins or so to encode each minute of HD video, using far more power along the way.
The future of low power computing is to find clever ways of making special purpose hardware to do the most computationally heavy stuff such that the power hungry general purpose processors have less stuff left to do.
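Published energy-per-operation estimates back this up. The figures below are order-of-magnitude ballpark numbers of the kind quoted in the architecture literature (e.g. Horowitz's ISSCC keynote), reproduced from memory, so treat the exact values as illustrative rather than authoritative:

```python
# Approximate energy per operation, in picojoules (ballpark, ~45 nm era).
ENERGY_PJ = {
    "64-bit floating-point multiply-add": 50,
    "64-bit read from on-chip SRAM cache": 10,
    "move 64 bits across the chip": 100,
    "64-bit read from off-chip DRAM": 2000,
}

flop = ENERGY_PJ["64-bit floating-point multiply-add"]
for op, pj in ENERGY_PJ.items():
    print(f"{op}: {pj} pJ ({pj / flop:.1f}x the cost of the arithmetic)")
```

Whatever the precise numbers, the ordering is the point: fetching the operands can cost tens of times more than computing with them.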
Re: (Score:3, Informative)
For comparison, an HD video camera can record 1080p video in real time with only a couple of Watts. A PC (without hardware encoder) would take 15 mins or so to encode each minute of HD video, using far more power along the way.
While it makes your point, you're actually off by orders of magnitude on both: a modern PC can easily encode at 2-4x realtime for 1080p... and a good hardware encoder often uses less than 100 milliwatts. A typical rule of thumb is that dedicated hardware is roughly 1000 times more e
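To put rough numbers on that rule of thumb (the PC wattage below is a hypothetical figure added for illustration, not from either post):

```python
def joules_per_video_minute(encode_power_w: float, speed_x_realtime: float) -> float:
    """Energy to encode one minute of footage at a given speed multiple."""
    return encode_power_w * 60.0 / speed_x_realtime

hw = joules_per_video_minute(0.1, 1.0)   # ~100 mW dedicated encoder at realtime
sw = joules_per_video_minute(60.0, 3.0)  # hypothetical 60 W of CPU at 3x realtime
print(f"hardware: {hw:.0f} J/min, software: {sw:.0f} J/min, ratio: {sw / hw:.0f}x")
```

Even with generous assumptions for the software encoder, the ratio lands in the hundreds, consistent with the 1000x figure as an order-of-magnitude rule.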
Re: (Score:2)
You are indeed correct - it all depends on the codecs, desired PSNR and bits/pixel available. For modern codecs, the motion search is the bit that takes most of the computation, and doing it better is a super-linear complexity operation - hence both your numbers and mine could be correct, just for different desired output qualities.
The ratio though is a good approximate rule of thumb. I wonder how this ratio has changed as time has moved on? I suspect it may have become bigger as software focus has move
But you lose flexibility (Score:3)
If you want to talk about encoding, anime fansubbers are at the forefront. The latest is 10-bit encoding. It has a lot of benefits, but its main downside is that there is no hardware for it; you need to run it on the CPU. Someday hardware like a GPU might support it, but that takes far too long to stay current.
That is the reason the general purpose CPU has won out so far, why mobile phones and tablets come with them as the main computing unit, because keeping up in hardware with the latest developments j
Re: (Score:2)
Re: (Score:2)
A couple more words: Power Hog.
- at least when compared to ASICs. But there are new developments in the area, see Silicon Blue Technologies [wikipedia.org]. It will be interesting to see how things work out in the future. Looks like all the players are trying to create power efficient FPGAs.
Re: (Score:2)
And some words for you: volume and change. If you have a large enough application in the sense that you need millions of the things, and the application is set in stone forever more, then ASICs are fine. If you ever intend to change it, or your run is small, FPGAs are a better choice.
Re: (Score:2)
or your run is small, FPGAs are a better choice
Yes, but the topic of discussion is power consumption not purchase price. My point was that FPGAs do not solve the problem of power consumption - at least not yet. They are getting better but then so are ASICs.
It appears that, looking forward, the best solution will be a combination of the two techniques. Specialized ASIC components glued together with FPGA elements. Most FPGA manufacturers already do this to a limited extent. It is common to see embedded CPUs in FPGAs - and I'm not referring to sof
Re: (Score:2)
Yes, but the topic of discussion is power consumption not purchase price.
Power consumption is part of it but I don't think you can draw reasonable conclusions from power consumption alone. It's important but so are upfront cost and flexibility.
My point was that FPGAs do not solve the problem of power consumption - at least not yet. They are getting better but then so are ASICs.
Yes, ASICs are the most power-efficient way of performing a repetitive computation task because they have neither the data-pushing overhead of CPUs/GPUs nor the reconfigurable wiring overhead of FPGAs. However, putting a design into an ASIC is expensive, so it's only practical if you want a lot of copies of the design, plan to run each cop
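The usual way to frame that trade-off is a break-even volume: the ASIC's NRE (design and mask) cost amortised against its lower unit cost. A sketch with entirely hypothetical prices:

```python
def break_even_units(asic_nre: float, fpga_unit: float, asic_unit: float) -> float:
    """Volume above which an ASIC beats an FPGA on total cost.

    Solves asic_nre + n * asic_unit = n * fpga_unit for n. All the
    example inputs are hypothetical; real NRE varies enormously by
    process node and design complexity.
    """
    return asic_nre / (fpga_unit - asic_unit)

# E.g. $2M NRE, $80 per FPGA, $8 per ASIC: ~28,000 units to break even.
print(f"{break_even_units(2_000_000, 80, 8):,.0f} units")
```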
Re: (Score:2)
Less flexibility means less Turing Tax. For example video encoder cores can do massive amounts of computation, yet they can only encode video - nothing else.
And a Turing machine makes sense when transistors are expensive. But what's the actual cost of adding an h.264 encoder to a hardware die today? I bet it's cheaper than the electricity cost for doing much encoding over the ownership time of the part.
I suppose DSP's, VMX, MMX, SSE, etc. can all be seen as ways this has held true over time as transistor
Re:Turing Tax (Score:5, Interesting)
To get nearer our goal, computers are getting components that are less flexible.
Actually, computers have lost lots of dedicated processing units because it just wasn't worth doing in dedicated hardware, that's where for example softmodems (aka winmodems) came from. And with GPUs going from fixed pipelines to programmable shader units, they too have gone the other way. Dedicated hardware only works if you are doing a large number of exactly defined calculations from a well established standard, like say AES or H.264. Even in a supercomputer the job just isn't static enough; if the researchers have to tweak the algorithm, are you going to build a new computer? You have parameters, but the moment they say "oh and we have to add a new correction factor here" you're totally screwed. Not going to happen.
Re: (Score:2)
It's cyclical - going from specialized to general to specialized, etc.
Early computers used character generator chips - specialized processors that took ASCII(ish) inputs and generated the onscreen information. This evolve
Computronium (Score:2)
Re: (Score:2)
First we'll need nuclear fusion and some kind of autonomous robots.
And if we ever want that brain to be ours, we'd better get a deep understanding of neurology and neural implants before we get those autonomous robots...
Re: (Score:2)
Re: (Score:3)
Getting the level of detail from a brain you'd need to simulate it might be less a matter of implants than destructive readout. Slice-and-scan.
Well, right. That also eliminates the potential issues from having duplicate persons in virtual space and meat space.
they should talk to TI (Score:3, Funny)
TI's line of MSP430 chips run using little solar cells. hell, they practically run on their own self-esteem. so scale that technology and bam, you got a super computer that runs on a couple AA batteries.
I would be impressed (Score:1)
Re: (Score:2)
Re: (Score:2)
Haha. Look up Clinton selling the Chinese our missile guidance technology. It enabled the Chinese to build a space program and provide cheaper launch services, and also gave them the essentials to build accurate ICBMs. Some folks considered it treasonous.
Re: (Score:2)
Or are you talking about Hughes and Loral, who ILLEGALLY transferred tech to China and were CONVICTED of such?
There IS treason, but it sure as heck was not Clinton. Sadly, the treason continues to this day.
Re: (Score:2)
A few quotes:
Some guy with an axe to grind about Obama doing the same thing [onecitizenspeaking.com]:
In 1996, President Bill Clinton personally signed an executive order transferring control of satellite technology to the Department of Commerce; thus releasing restraints on a wide variety of sophisticated space and missile technology which were then exported to China.
CNN, 1998/05/22 [cnn.com]:
WASHINGTON (May 22) -- President Bill Clinton on Friday defended a controversial satellite deal with China, even as White House officials delivered documents to the House International Relations Committee about the arrangement.
The president said the deal to launch U.S. satellites on rockets owned by other nations was "correct" and "based on what I thought was in the national interest and supportive of our national security."
Newsmax, 2003/9/29 [newsmax.com]:
Newly declassified documents show that President Bill Clinton personally approved the transfer to China of advanced space technology that can be used for nuclear combat.
The documents show that in 1996 Clinton approved the export of radiation hardened chip sets to China. The specialized chips are necessary for fighting a nuclear war.
"Waivers may be granted upon a national interest determination," states a Commerce Department document titled "U.S. Sanctions on China."
"The President has approved a series of satellite related waivers in recent months, most recently in November, 1996 for export of radiation hardened chip sets for a Chinese meteorological satellite," noted the Commerce Department documents.
These special computer chips are designed to function while being bombarded by intense radiation. Radiation hardened chips are considered critical for atomic warfare and are required by advanced nuclear tipped missiles.
Judicial Watch, a Washington-based political watchdog group, obtained the documents through the Freedom of Information Act.
As I recall from the time, a lot of folks in the military and intelligence communities who were 'in the loop' were really vocal about this. It's been a long time so I don't recall too many details, so this will have to do.
Re: (Score:2)
if this was applied to American companies and western manufacturing ONLY. Sadly, the neo-cons will push for this to be applied to everybody, esp. China.
Yeah, I hate it when those xenophobic racist neo-cons push to share our advances with China (who has the fastest-growing need for power and one of the dirtiest power sources, namely coal). Hell, a move like that just might reduce poverty AND pollution, and no one wants that!
Thank god we liberals know that the only way to make the world a better place is to reflexively oppose not just everything the conservatives do, but everything we imagine they might do!
Re: (Score:2)
Re: (Score:2)
But how did "neo-cons" get involved? I didn't see them (or any mention of politics at all) anywhere in this story until you conjured them up out of thin air. Let me guess: the reason you're not a liberal is because the liberals are way too far to the right - correct?
Besides, the last time I looked, the neo-cons had been out of power for several years, so I don't think you have anything to worry about.
P.S. Neo-co
Re: (Score:2)
Re: (Score:2)
the problem is finding a superconductor that will operate at room temperature
That insight, and $4.99, will get you a cup of coffee at Starbucks.
Maybe if we had a bunch of high-power supercomputers (ideally with low power consumption), we could run more atomic- or quantum-level simulations, research, etc, and find such a room-temperature superconductor!
P.S. On a related note, I have a great idea for improving cities, reducing pollution, and eliminating commute times: just invent teleportation! It would be much more effective than wasting time with incremental side project like
Power Consumption (Score:2)
Re: (Score:2, Informative)
P = C*V^2*f, where P is power in watts, C is capacitance in farads, V is voltage in volts, and f is frequency in hertz. C is kind of hard to measure, and is dynamic depending on processor load. A design value can be determined from processor data sheets.
Power is only consumed in MOS transistors during transitions, to the value I = C*dv/dt, where C is the overall transistor capacitance to the power supply, in this instance. If dv is 0, i.e., at a stable logic
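For anyone who wants to plug numbers into that formula, a minimal sketch (the example values are hypothetical, chosen only to land in a realistic range for a desktop CPU):

```python
def dynamic_power_w(c_farads: float, v_volts: float, f_hz: float) -> float:
    """Dynamic switching power, P = C * V^2 * f.

    C is the chip's effective switched capacitance (activity factor
    folded in), which is why it varies with processor load.
    """
    return c_farads * v_volts**2 * f_hz

# Hypothetical chip: 20 nF effective capacitance, 1.0 V core, 3 GHz.
print(dynamic_power_w(20e-9, 1.0, 3e9), "W")  # -> 60.0 W
```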
Re: (Score:2)
You forgot leakage current, which in modern designs is comparable to the switching power.
Yeah, chip makers really wish power was only consumed on the transitions.
But, to answer the original question about a theoretical limit, yes there is. Turning disorder into order takes energy, so you can approach it from a thermodynamics standpoint.
If your computer solely makes use of reversible calculations, you can reduce the power consumed to arbitrarily low levels using adiabatic circuits. Unfortunately "arbitrarily low power" comes commensurate with "arbitrarily long computation time", so it's not necessarily a way to get TFLOPS/watt.
Re: (Score:1)
Sustainability=reliability (Score:1)
Truth is stranger than fiction (Score:2)
the matrix is coming (Score:1)