
'Approximate Computing' Saves Energy (154 comments)

hessian writes "According to a news release from Purdue University, 'Researchers are developing computers capable of "approximate computing" to perform calculations good enough for certain tasks that don't require perfect accuracy, potentially doubling efficiency and reducing energy consumption. "The need for approximate computing is driven by two factors: a fundamental shift in the nature of computing workloads, and the need for new sources of efficiency," said Anand Raghunathan, a Purdue Professor of Electrical and Computer Engineering, who has been working in the field for about five years. "Computers were first designed to be precise calculators that solved problems where they were expected to produce an exact numerical value. However, the demand for computing today is driven by very different applications. Mobile and embedded devices need to process richer media, and are getting smarter – understanding us, being more context-aware and having more natural user interfaces. ... The nature of these computations is different from the traditional computations where you need a precise answer."' What's interesting here is that this is how our brains work."
This discussion has been archived. No new comments can be posted.


  • Analog (Score:5, Interesting)

    by Nerdfest (867930) on Wednesday December 18, 2013 @04:47PM (#45729675)

    This is also how analog computers work. They're extremely fast and efficient, but imprecise. The approach had a bit of traction in the old days, but interest seems to have died off.

  • Heard this before (Score:2, Interesting)

    by Animats (122034) on Wednesday December 18, 2013 @04:50PM (#45729705) Homepage

    Heard this one before. On Slashdot, even. Yes, you can do it. No, you don't want to. Remember when LCDs came with a few dead pixels? There used to be a market for DRAM with bad bits for phone answering machines and buffers in low-end CD players. That's essentially over.

    Working around bad bits in storage devices is common; just about everything has error correction now. For applications where error correction is feasible, this works. Outside that area, there's a modest savings in cost and power consumption in exchange for a big gain in headaches.

  • by raddan (519638) * on Wednesday December 18, 2013 @05:31PM (#45730195)
    Not to mention floating-point computation [wikipedia.org], numerical analysis [wikipedia.org], anytime algorithms [wikipedia.org], and classic randomized algorithms like Monte Carlo algorithms [wikipedia.org]. Approximate computing has been around for ages. The typical scenario is to save computation, nowadays expressed in terms of asymptotic complexity ("Big O"). Sometimes (as is the case with floating point), this tradeoff is necessary to make the problem tractable (e.g., numerical integration is much cheaper than symbolic integration).

    The only new idea here is using approximate computing specifically in trading high precision for lower power. The research has less to do with new algorithms and more to do with new applications of classic algorithms.
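The compute-for-accuracy tradeoff in the classic randomized algorithms mentioned above can be sketched with a Monte Carlo estimate of pi (an illustrative example, not from the comment): each additional sample costs work, and the error shrinks only as roughly 1/sqrt(n), so you pick the precision you can afford.

```python
import random

def approx_pi(samples):
    """Estimate pi by sampling random points in the unit square and
    counting how many fall inside the quarter circle of radius 1."""
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

# More samples buy more accuracy; error shrinks roughly as 1/sqrt(n).
random.seed(0)
for n in (100, 10_000, 1_000_000):
    print(n, approx_pi(n))
```

This is the anytime-algorithm shape: you can stop at any sample count and get an answer whose expected error you can bound, trading computation (and hence energy) for precision.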
  • Half-precision (Score:4, Interesting)

    by michaelmalak (91262) <michael@michaelmalak.com> on Wednesday December 18, 2013 @05:33PM (#45730219) Homepage
    GPUs have already introduced half-precision [wikipedia.org] -- 16-bit floats. An earlier 2011 paper [ieee.org] by the same author as the one in this Slashdot summary cites a power savings of 60% for an "approximate computing" adder, which isn't that much better than just going with 16-bit floats. I suppose both could be combined for even greater power savings, but my gut feeling is that I would have expected even more power savings once the severe constraint of exact results is discarded.
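The precision cost of the 16-bit floats mentioned above is easy to see from Python itself: the `struct` module's `'e'` format code packs an IEEE 754 half-precision value, so round-tripping through it shows what survives the 10-bit significand (a minimal sketch; the helper name `to_half` is mine).

```python
import struct

def to_half(x):
    """Round a Python float to IEEE 754 half precision (binary16)
    and back, exposing the precision lost in the conversion."""
    return struct.unpack('e', struct.pack('e', x))[0]

# Half precision keeps about 3 decimal digits (10 significand bits),
# so small relative errors appear immediately:
print(to_half(0.1))      # close to, but not exactly, 0.1
print(to_half(3.14159))  # a nearby representable value
print(to_half(1.0))      # powers of two survive exactly
```

For media and perception workloads this error is usually invisible, which is exactly the argument for spending 16 bits instead of 32 or 64.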
  • by Ottibus (753944) on Wednesday December 18, 2013 @05:46PM (#45730367)

    The problem with this approach is that the energy used for computation is a relatively small part of the whole. Much more energy is spent on fetching instructions, decoding instructions, fetching data, predicting branches, managing caches and many other processes. And the addition of approximate arithmetic increases the area and leakage of the processor, which increases energy consumption for all programs.

    Approximate computation is already widely used in media and numerical applications, but it is far from clear that it is a good idea to put approximate arithmetic circuits in a standard processor.
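To make the "approximate arithmetic circuit" idea concrete: one well-known design from the approximate-adder literature is the lower-part OR adder, which ORs the low k bits instead of adding them, eliminating the carry chain (and its energy and delay) in the low part. A bit-level sketch in Python (my illustration of the technique, not circuitry from the article):

```python
def loa_add(a, b, k):
    """Lower-part OR adder: approximate the low k bits with a
    carry-free bitwise OR, and add the high bits exactly.
    The absolute error is bounded by 2**k."""
    mask = (1 << k) - 1
    low = (a & mask) | (b & mask)       # cheap, carry-free approximation
    high = ((a >> k) + (b >> k)) << k   # exact addition on the high part
    return high | low

print(loa_add(100, 27, 0), 100 + 27)   # k = 0 degenerates to an exact add
print(loa_add(100, 20, 4), 100 + 20)   # k = 4: small, bounded error
```

The hardware appeal is that the carry chain is the slow, energy-hungry part of an adder; the software model just shows the error behavior: OR never exceeds the true low-part sum, so the result is at most 2**k below the exact answer.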

"And do you think (fop that I am) that I could be the Scarlet Pumpernickel?" -- Looney Tunes, The Scarlet Pumpernickel (1950, Chuck Jones)