
DARPA Targets Computing's Achilles Heel: Power

coondoggie writes "The power required to increase computing performance, especially in embedded or sensor systems, has become a serious constraint and is restricting the potential of future systems. Technologists at the Defense Advanced Research Projects Agency are looking for an ambitious answer to the problem and will next month detail a new program they expect will develop power technologies that could boost system efficiency from today's 1 GFLOPS/watt to 75 GFLOPS/watt."
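
For a sense of scale, here is a rough back-of-envelope sketch of what that efficiency jump would mean; the 1 PFLOPS system size is an illustrative assumption, not something from the program announcement.

    # Rough arithmetic for the efficiency targets quoted above.
    # The 1 PFLOPS system size is an assumed, illustrative figure.

    target_flops = 1e15          # 1 PFLOPS, assumed system size
    today = 1e9                  # ~1 GFLOPS per watt today
    goal = 75e9                  # 75 GFLOPS per watt, the stated target

    print(f"Power at 1 GFLOPS/W:  {target_flops / today / 1e6:.1f} MW")   # 1.0 MW
    print(f"Power at 75 GFLOPS/W: {target_flops / goal / 1e3:.1f} kW")    # ~13.3 kW
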
  • by unts ( 754160 ) on Sunday January 29, 2012 @02:51PM (#38858745) Journal

    The problem is not just generating the power... it's delivering it and consuming it without breaking/melting. And that's what they're getting at here - getting more FLOPS per watt... not finding out how to push more watts into a system. A silly amount of the energy going into a supercomputer comes out as heat... and a silly amount of energy is then used to remove that heat (see the rough sketch after this comment). Hopefully, by significantly improving the energy efficiency of chips and systems, we can make them a lot more powerful without them needing a whole lot more power. And I haven't even mentioned the mobile/embedded side of the spectrum where it's about battery life and comfortable operating temperatures... the same energy efficiency goals apply.

    This is the sort of thing we over the pond are very interested in too. Like for example *cough* the Microelectronics Research Group [bris.ac.uk] that I'm a part of.
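
    A rough sketch of the cooling-overhead point above, using an assumed data center PUE (power usage effectiveness) figure rather than measured numbers:

        # Facility power = IT power * PUE; everything above 1.0 is overhead,
        # most of it cooling. Both numbers below are assumptions for illustration.

        it_power_mw = 5.0    # assumed power drawn by the compute hardware, MW
        pue = 1.8            # assumed power usage effectiveness of the facility

        facility_mw = it_power_mw * pue
        overhead_mw = facility_mw - it_power_mw

        print(f"Facility draw: {facility_mw:.1f} MW")      # 9.0 MW
        print(f"Cooling/overhead: {overhead_mw:.1f} MW")   # 4.0 MW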

  • Re:Turing Tax (Score:3, Informative)

    by Anonymous Coward on Sunday January 29, 2012 @04:11PM (#38859209)

    For comparison, an HD video camera can record 1080p video in real time with only a couple of Watts. A PC (without hardware encoder) would take 15 mins or so to encode each minute of HD video, using far more power along the way.

    While it makes your point, you're actually off by orders of magnitude on both: a modern PC can easily encode at 2-4x realtime for 1080p... and a good hardware encoder often uses less than 100 milliwatts. A typical rule of thumb is that dedicated hardware is roughly 1000 times more efficient, power-wise, than a CPU performing the same task.
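
    Putting rough numbers on that comparison, here is a minimal sketch; the CPU package power and software encode speed are assumed round figures, only the ~100 mW hardware-encoder figure comes from the comment above.

        # Energy to encode one minute of 1080p video, software vs. dedicated hardware.
        # All figures are assumed round numbers for illustration.

        clip_minutes = 1.0

        cpu_watts = 80.0     # assumed package power while encoding
        cpu_speed = 2.0      # assumed 2x realtime software encode
        cpu_joules = cpu_watts * (clip_minutes / cpu_speed) * 60

        hw_watts = 0.1       # ~100 mW dedicated encoder, per the comment
        hw_speed = 1.0       # realtime
        hw_joules = hw_watts * (clip_minutes / hw_speed) * 60

        print(f"Software encode: {cpu_joules:.0f} J")   # 2400 J
        print(f"Hardware encode: {hw_joules:.0f} J")    # 6 J
        print(f"Ratio: {cpu_joules / hw_joules:.0f}x")  # ~400x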

  • by Anonymous Coward on Sunday January 29, 2012 @07:05PM (#38860191)

    In a pinch you can extract lithium from sea water. That's basically what a lithium deposit is... an old sea that dried up and left the salts. Lithium isn't a big fraction of a battery's cost, weight or volume. Please everyone stop being silly. The cobalt that is often used in lithium batteries is far more expensive, rare and used in larger proportions. We just don't call them cobalt batteries so no one knows about that part.

  • Re:Power Consumption (Score:2, Informative)

    by alreaud ( 2529304 ) <alreaud@happycattech.com> on Sunday January 29, 2012 @08:56PM (#38860689)
    Yes, and it's actually very simple (in SI units):

    P = C*V^2*f, where P is power in watts, C is capacitance in farads, V is voltage in volts, and f is frequency in hertz. C is kind of hard to measure, and is dynamic depending on processor load. A design value can be determined from processor data sheets.

    Dynamic power is only consumed in CMOS transistors during switching transitions, with current I = C*dv/dt, where C is, in this instance, the overall transistor capacitance seen by the power supply. If dv is 0, i.e., at a stable logic level, then I must also be 0, and hence the power dissipation P = I*V must be zero as well (leakage aside).

    At 1.3 V and 2.8 GHz, the V^2*f multiplier works out to about 4.73*10^9, which is significant for a 100-million-transistor microprocessor even if the overall capacitance per transistor is only on the order of femtofarads.
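
    To put rough numbers on that formula, here is a minimal sketch using the figures from this comment; the per-transistor capacitance and the fraction of transistors switching each cycle are assumed values, not measurements.

        # Dynamic CMOS power: P = C * V^2 * f (activity factor folded into C).
        # Voltage and frequency follow the comment; capacitance per transistor
        # and the switching activity are assumed values.

        V = 1.3                    # supply voltage, volts
        f = 2.8e9                  # clock frequency, hertz
        transistors = 100e6
        c_per_transistor = 1e-15   # ~1 femtofarad, assumed
        activity = 0.1             # assumed fraction switching per cycle

        C_eff = transistors * c_per_transistor * activity   # switched capacitance, farads
        P = C_eff * V**2 * f                                 # dynamic power, watts

        print(f"V^2 * f multiplier: {V**2 * f:.3g}")         # ~4.73e9
        print(f"Effective capacitance: {C_eff:.3g} F")       # 1e-08 F
        print(f"Dynamic power: {P:.3g} W")                   # ~47 W
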
  • by SeaFox ( 739806 ) on Sunday January 29, 2012 @09:52PM (#38861001)

    And with GPU assist an Atom with a low-end GPU can happily play 1080P H.264.

    Actually that depends on the bitrate of the encoding far more than whether it's "1080p" or not. I've seen plenty of "1080p H.264 video" that's got lousy quality the moment there's any action.

    Not to mention what profile of H.264 was being used. High Profile requires much more computational power than Main. We're also assuming the video can be GPU accelerated. You can't just take any H.264 video and get hardware acceleration; the video has to be encoded following certain rules about bitrate, B-frames, etc., otherwise it will all be decoded in software.
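
    A minimal sketch of how you might check whether a file is even a candidate for hardware decode, assuming ffprobe is installed; the accepted profiles, the level cutoff, and the file name are assumptions that vary by decoder.

        import json, subprocess

        # Query the first video stream's codec, profile and level with ffprobe.
        cmd = [
            "ffprobe", "-v", "error", "-select_streams", "v:0",
            "-show_entries", "stream=codec_name,profile,level",
            "-of", "json", "input.mp4",   # hypothetical file name
        ]
        stream = json.loads(subprocess.check_output(cmd))["streams"][0]

        # Assumed limits for an older GPU decoder; real limits vary by hardware.
        ok_profiles = {"Baseline", "Constrained Baseline", "Main", "High"}
        max_level = 41                    # H.264 level 4.1 (ffprobe reports 41)

        hw_candidate = (
            stream["codec_name"] == "h264"
            and stream.get("profile") in ok_profiles
            and int(stream.get("level", 0)) <= max_level
        )
        print("Hardware decode candidate:", hw_candidate)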
