'Approximate Computing' Saves Energy

hessian writes "According to a news release from Purdue University, 'Researchers are developing computers capable of "approximate computing" to perform calculations good enough for certain tasks that don't require perfect accuracy, potentially doubling efficiency and reducing energy consumption. "The need for approximate computing is driven by two factors: a fundamental shift in the nature of computing workloads, and the need for new sources of efficiency," said Anand Raghunathan, a Purdue Professor of Electrical and Computer Engineering, who has been working in the field for about five years. "Computers were first designed to be precise calculators that solved problems where they were expected to produce an exact numerical value. However, the demand for computing today is driven by very different applications. Mobile and embedded devices need to process richer media, and are getting smarter – understanding us, being more context-aware and having more natural user interfaces. ... The nature of these computations is different from the traditional computations where you need a precise answer."' What's interesting here is that this is how our brains work."
This discussion has been archived. No new comments can be posted.

  • by i kan reed ( 749298 ) on Wednesday December 18, 2013 @03:42PM (#45729633) Homepage Journal

    The majority of CPU cycles in data centers is going to be looking up and filtering specific records in databases (or maybe parsing files, if you're into that). They can possibly save energy on a few specific kinds of scientific computing.

    • by l2718 ( 514756 ) on Wednesday December 18, 2013 @03:51PM (#45729711)

      This is not about data centers and databases. This is about scientific computation -- video and audio playback, physics simulation, and the like.

      Doing a computation approximately first and then refining the results only in the parts where more accuracy is needed is an old idea; one manifestation is multigrid [wikipedia.org] algorithms.

      • Isn't Newton-Raphson an "approximation"?
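
        For illustration, a minimal Python sketch of Newton-Raphson used exactly that way: start from a rough guess and refine only until the answer is good enough for the caller. The function name and the tolerances are invented for this example.

        def approx_sqrt(x, tol=1e-3):
            # Newton-Raphson: refine a rough guess until it is "good enough".
            guess = x if x > 1 else 1.0
            while abs(guess * guess - x) > tol:
                guess = 0.5 * (guess + x / guess)
            return guess

        approx_sqrt(2, tol=1e-1)    # ~1.4167 after two refinements: cheap
        approx_sqrt(2, tol=1e-12)   # ~1.41421356237: more iterations, more work
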
      • by raddan ( 519638 ) * on Wednesday December 18, 2013 @04:31PM (#45730195)
        Not to mention floating-point computation [wikipedia.org], numerical analysis [wikipedia.org], anytime algorithms [wikipedia.org], and classic randomized algorithms like Monte Carlo algorithms [wikipedia.org]. Approximate computing has been around for ages. The typical scenario is to save computation, nowadays expressed in terms of asymptotic complexity ("Big O"). Sometimes (as is the case with floating point), this tradeoff is necessary to make the problem tractable (e.g., numerical integration is much cheaper than symbolic integration).

        The only new idea here is using approximate computing specifically in trading high precision for lower power. The research has less to do with new algorithms and more to do with new applications of classic algorithms.
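
        A small Python sketch of that accuracy-for-work trade, using Monte Carlo integration: the error shrinks roughly as 1/sqrt(n), so every extra digit of accuracy costs real computation. The integrand here is arbitrary, chosen only because the exact answer is known.

        import math
        import random

        def mc_integrate(f, a, b, n):
            # Estimate the integral of f over [a, b] from n random samples.
            # More samples buy more accuracy; error ~ O(1/sqrt(n)).
            total = sum(f(random.uniform(a, b)) for _ in range(n))
            return (b - a) * total / n

        # The integral of sin(x) over [0, pi] is exactly 2.
        mc_integrate(math.sin, 0, math.pi, 100)      # rough and cheap
        mc_integrate(math.sin, 0, math.pi, 100_000)  # closer, ~1000x the work
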
        • by tedgyz ( 515156 )

          Holy crap dude - you hit the nail on the head, but my brain went primal when you brought up the "Big O".

          • by raddan ( 519638 ) *
            My mind inevitably goes to this [youtube.com] when someone says "Big O". Makes being a computer scientist somewhat difficult.
        • The Big O formalism actually bounds the error quite precisely. In contrast, approximate computing does not offer any bound for the error (or perhaps only in the statistical sense).

    • Even OLAP could probably profit from this. Sometimes it doesn't matter whether the response to the question "does my profit increase correlate strongly with my sales behavior X" is "definitely yes, by 0.87" or "definitely yes, by 0.86"; the important thing is that it isn't "most likely no, by 0.03".

      Also, in the era of heterogeneous machines, you ought to have a choice in that.
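
      A sketch of that kind of "good enough" OLAP answer in Python, estimating the correlation from a random sample of rows rather than the whole table. The column lists and sample size are placeholders, and statistics.correlation needs Python 3.10+.

      import random
      from statistics import correlation  # Python 3.10+

      def approx_corr(xs, ys, sample_size=10_000):
          # Pearson's r from a random sample of row indices: plenty to tell
          # "definitely yes, ~0.87" from "most likely no, ~0.03", even if the
          # second decimal place wobbles a bit.
          idx = random.sample(range(len(xs)), min(sample_size, len(xs)))
          return correlation([xs[i] for i in idx], [ys[i] for i in idx])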

    • Re:meanwhile... (Score:5, Informative)

      by ron_ivi ( 607351 ) <sdotno@NOSpAM.cheapcomplexdevices.com> on Wednesday December 18, 2013 @04:20PM (#45730035)

      The majority of CPU cycles in data centers is going to be looking up and filtering specific records in databases

      Approximate Computing is especially interesting in databases. One of the coolest projects in this space is Berkeley AMPLab's BlinkDB [blinkdb.org]. Their canonical example

      SELECT avg(sessionTime) FROM Table WHERE city='San Francisco' ERROR 0.1 CONFIDENCE 95%

      should give you a good idea of how/why it's useful.

      Their benchmarks show that Approximate Computing to 1% error is about 100X faster than Hive on Hadoop.
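
      Roughly what such a query does under the hood: sample instead of scanning everything, and report an error bound alongside the estimate. This is only a sketch of the sampling idea in Python, not BlinkDB's actual implementation.

      import random
      import statistics

      def approx_avg(values, sample_size=1_000, z=1.96):
          # Estimate the mean from a random sample and attach a ~95%
          # confidence interval (normal approximation).
          sample = random.sample(values, min(sample_size, len(values)))
          mean = statistics.mean(sample)
          half_width = z * statistics.stdev(sample) / len(sample) ** 0.5
          return mean, half_width   # e.g. (312.4, 3.1) => 312.4 +/- 3.1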

    • by lgw ( 121541 ) on Wednesday December 18, 2013 @04:21PM (#45730071) Journal

      Currently Slashdot is displaying ads for me along with the "disable ads" checkbox checked. Perhaps "approximate computing" is farther along than I imagined!

      • Currently Slashdot is displaying ads for me along with the "disable ads" checkbox checked. Perhaps "approximate computing" is farther along than I imagined!

        We've had approximate computing since the earliest days of the Pentium CPU.

        • We've had approximate computing since the earliest days of the Pentium CPU.

          My favorite joke of that era was
          I am Pentium of Borg.
          Arithmetic is irrelevant.
          Division is futile.
          You will be approximated!

      • " Perhaps "approximate computing" is farther along than I imagined!"

        Indeed, Excel has been doing it for 20 years.

        • Partly because MS shoved every feature imaginable in there, and then some. I'm surprised there's not a feature-complete implementation of Emacs somewhere in there (but I wouldn't be surprised if there was).
      • by formfeed ( 703859 ) on Wednesday December 18, 2013 @05:01PM (#45730599)

        Currently Slashdot is displaying ads for me along with the "disable ads" checkbox checked. Perhaps "approximate computing" is farther along than I imagined!

        Sorry, that was my fault. I didn't have my ad-block disabled. They must have sent them to you instead.
        Just send them to me and I will look at them.

  • Analog (Score:5, Interesting)

    by Nerdfest ( 867930 ) on Wednesday December 18, 2013 @03:47PM (#45729675)

    This is also how analog computers work. They're extremely fast and efficient, but imprecise. It had a bit of traction in the old days, but interest seems to have died off.

    • This was my immediate reaction, as well. Analog computers do some things extremely well, and faster than could be done digitally. Absolute accuracy may not be possible, but plenty-good-enough accuracy is achievable for a lot of different types of problems. Back in the 1970s I worked for a small company as their chief digital/software guy. The owner of the company was a wizard at analog electronics, and instilled in me a solid respect for what can be done with analog computing.
      • Absolute accuracy may not be possible, but plenty-good-enough accuracy is achievable for a lot of different types of problems.

        The same can be said of digital computers.

    • by ezdiy ( 2717051 )
      They live on, reincarnated in MLC NAND flash cells [wikipedia.org], exactly because flash was the first thing to reach lithography cost limits. It's not actually true analog, but close enough to keep precision.
    • Re:Analog (Score:4, Informative)

      by bobbied ( 2522392 ) on Wednesday December 18, 2013 @04:54PM (#45730493)

      This is also how analog computers work. They're extremely fast and efficient, but imprecise. It had a bit of traction in the old days, but interest seems to have died off.

      Analog is not imprecise. Analog computing can be very precise and very fast for complex transfer functions. The problem with analog is that it is hard to change the output function, and it is subject to changes in the derived output caused by things like temperature changes or induced noise. So the issue is not about precision.

      • it is subject to changes in the derived output caused by things like temperature changes or induced noise.

        and

        Analog is not imprecise.

        does not compute.
        If the output is influenced by temperature and noise then it is imprecise.
        If a value should be 6.586789, but due to the temperature the output is 6.586913, then it has an error of 0.000124.

    • Since in an Analogue computer every bit now contains an infinite amount of information, instead of just one, I imagine it would be incredibly fast.

      And since every decimal is already stored in approximate form, in a normal computer, I cannot imagine it being that different.

      • Since in an Analogue computer every bit now contains an infinite amount of information, instead of just one, I imagine it would be incredibly fast.

        What is this I don't even.

        • Binary: 1 bit can be 2 values, and contains that absolute minimal amount of information possible (true or false).
          Decimal: 1 bit can be one of 10 different values, so five times more information is present in a single bit. So information is sent and computed far faster.
          Analogue: 1 bit can be an infinite amount of values, so an infinite amount more information can be sent in a single bit. So information is sent and computed far far faster.

          • Decimal: 1 bit can be one of 10 different values, so five times more information is present in a single bit.

            No, that's not what a bit is. 'Bit' is short for 'binary digit'. A bit can, by definition, only hold one of two possible states. It is a fundamental unit of information. A decimal digit comprises multiple bits. Somewhere between 3 and 4 bits per decimal digit.

            • I understand that, but "1 bit can be one of 10 different values" was more understandable in my opinion than "1 computational value can be one of 10 different values"

              You did not give me the correct alternative because one does not really exist, as far as I know.

              a single data point is a decent alternative.
              "Value" would work in some instances, but not if you are already using that word to mean something else in the very same sentence.

              Bit is what we use to call a single burst of information in a computer now. A

              • And if non-binary computers started to become more popular, I would not be surprised if the definition of "bit" expanded to include them.

                It won't, because they won't.

              • You did not give me the correct alternative because one does not really exist, as far as I know.

                I think I found it - it's called a ban [wikipedia.org].

          • by khallow ( 566160 )

            Analogue: 1 bit can be an infinite amount of values, so an infinite amount more information can be sent in a single bit.

            Except that you don't have the ability to measure an infinite spread of values. In reality, it's finite information too.

          • Binary: 1 bit can be 2 values, and contains that absolute minimal amount of information possible

            That's the last correct statement in your post.

            Decimal: 1 bit can be one of 10 different values, so five times more information is present in a single bit.

            You mean one digit. "Bit" has no other definition than the one you've given above.

            So information is sent and computed far faster.

            No, this simply isn't true. The bit is the fundamental unit of information. You can't transmit data faster simply by declaring it to be decimal/hexadecimal/analogue. All of those things are still, fundamentally, measured in bits. You might as well argue that since you can lift a package that weighs 1kg, you could just as easily lift a package that weighs 1000kg because it's still

            • No, that is how an analogue computer works. The analogue values are not represented by bits; the signals are analogue.

              Also "digit" is how you represent a decimal number, it does not imply the same computational signal type things, and cannot be used for analogue in any way.

              And defining "bit" as a "basic[/fundamental/indivisible] unit of information in computing" is not far off.

          • Ooh ooh, found another mistake (sorry).

            Decimal: 1 [decimal digit] can be one of 10 different values, so five times more information is present in a single [decimal digit].

            Just because it can represent five times more states, doesn't make it five times more information. It's about 3.322 times more information (as measured in bits).

            1 bit can be one of 2 states.
            3 bits can take one of 8 states - four times as many states, but only three times the number of bits.
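
            The 3.322 figure is just log2(10); a quick check in Python:

            import math

            math.log2(10)   # 3.3219... bits of information per decimal digit
            math.log2(8)    # 3.0 bits: 3 binary digits cover the same 8 states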

    • This is also how analog computers work. They're extremely fast and efficient, but imprecise.

      On the contrary - they can be extremely precise. Analog computing elements were part of both the Saturn V and Apollo CSM stack guidance & navigation systems for example. Analog systems were replaced by digital systems for a wide variety of reasons, but accuracy was not among them.

  • by EmagGeek ( 574360 ) on Wednesday December 18, 2013 @03:48PM (#45729681) Journal

    We're teaching our kids that 2+2 equals whatever they feel it is equal to, as long as they are happy. What do we need with accuracy anymore?

    • by l2718 ( 514756 )

      I don't think you appreciate the point. In most cases, rather than multiplying 152343 x 1534324, you might as well just multiply (15x10^4) x (15x10^5) = 225x10^9. And to understand this you need to be very comfortable with what 2+2 equals exactly.
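
      A sketch of that estimate in Python: round each factor to a couple of significant figures, then multiply. The significant-figure count is an arbitrary choice for illustration.

      import math

      def round_sig(x, sig=2):
          # Keep only the leading `sig` digits of x.
          exp = math.floor(math.log10(abs(x))) - (sig - 1)
          return round(x / 10**exp) * 10**exp

      round_sig(152343) * round_sig(1534324)
      # 150000 * 1500000 = 225000000000, vs. the exact 233743521132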

    • We're teaching our kids that 2+2 equals whatever they feel it is equal to, as long as they are happy. What do we need with accuracy anymore?

      Indeed... what's 3/9 + 3/9 + 3/9 after all? Does it approach 1, or is it actually 1? Do we care? Are we happy?

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Where the hell did you get that from? Oh yeah, it's the talking points about the common core approach. Too bad that is nothing like what the common core says. Find a single place where any proponent of the common core said something like that and I'll show you a quote mine where they really said "it is understanding the process that is important, of which the final answer is just a small part because computation errors can be corrected."

  • by Anonymous Coward

    I find it interesting that most science fiction portrays an AI of some sort as having all of the advantages of sentience (creativity, adaptability, intuition) while also retaining the advantages of a modern computer (perfect recall, high computational accuracy, etc.). This kind of suggests that with a future AI, maybe that would not be the case; maybe the requirements for adaptability and creativity place sufficient demands on a system's (biological or electronic) resources that you couldn't have such a

    • However, being artificial brains, they can be connected both to analogue or imperfect-digital "brains" and to precise digital systems. In the same way that you can use a computer, but much closer and more immediate. Best of both worlds.

  • Heard this before (Score:2, Interesting)

    by Animats ( 122034 )

    Heard this one before. On Slashdot, even. Yes, you can do it. No, you don't want to. Remember when LCDs came with a few dead pixels? There used to be a market for DRAM with bad bits for phone answering machines and buffers in low-end CD players. That's essentially over.

    Working around bad bits in storage devices is common; just about everything has error correction now. For applications where error correction is feasible, this works. Outside that area, there's some gain in cost and power consumption in ex

    • As I recall, the greatest energy savings came when the answer did not matter at all and the PC was unplugged from the wall, right?
  • Q: What do you call a series of FDIV instructions on a Pentium?
    A1: Successive approximations.
    A2: A random number generator


    Hey, folks, I can keep this up all day.
    http://www.netjeff.com/humor/item.cgi?file=PentiumJokes [netjeff.com]

  • Been there (Score:5, Funny)

    by frovingslosh ( 582462 ) on Wednesday December 18, 2013 @03:57PM (#45729781)
    I remember Intel doing something like this back in the days of the 386, except without the energy savings.
  • Fuzzy Logic anyone? (Score:4, Informative)

    by kbdd ( 823155 ) on Wednesday December 18, 2013 @03:57PM (#45729787) Homepage
    Fuzzy logic was also supposed to save energy (in the form of requiring less advanced processors) by replacing computation-intensive closed-loop systems with table-driven approximate logic.

    While the concept was interesting, it did not really catch on. Progress in silicon devices simply made it unnecessary. It ended up being used as a buzzword for a few years and then quietly died away.

    I wonder if this is going to follow the same trend.

    • FWIW a typical Intel processor now uses huge tables for multiplication and division.
    • I wonder if this is going to follow the same trend.

      It's quite possible that if I didn't have to use this @#1%ing approximate computer I could definitively answer that question.

    • Fuzzy logic has nothing specifically to do with tables, but rather with approximate truth values. Standard probability is a perfectly respectable fuzzy logic, and doesn't need tables.

      I'm not remembering what it was supposed to do as far as efficiency goes, though.
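
      For reference, the classic (Zadeh) fuzzy connectives on truth values in [0, 1] need nothing but min and max. A tiny Python sketch, with made-up membership degrees:

      def fuzzy_and(a, b):
          return min(a, b)     # truth values are degrees between 0 and 1

      def fuzzy_or(a, b):
          return max(a, b)

      def fuzzy_not(a):
          return 1.0 - a

      warm, humid = 0.7, 0.4               # made-up membership degrees
      fuzzy_and(warm, humid)               # 0.4
      fuzzy_or(warm, fuzzy_not(humid))     # 0.7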

  • I do this all the time. People are sometimes surprised when I can calculate an answer in a couple of seconds that takes other people half a minute or more, and my answer is within a few integers (or
    Saves me energy, too.
  • Mai spel checkar allreddy wurks dis weigh....

  • "If I asked you to divide 500 by 21 and I asked you whether the answer is greater than one, you would say yes right away," Raghunathan said. "You are doing division but not to the full accuracy. If I asked you whether it is greater than 30, you would probably take a little longer, but if I ask you if it's greater than 23, you might have to think even harder. The application context dictates different levels of effort, and humans are capable of this scalable approach, but computer software and hardware are n

    • However, multiplying "simpler" numbers might be faster. For example, I can multiply 20*30 in my head faster than 21.3625*29.7482 (YMMV). Rounding 21.3625*29.7482 to 20*30 might be "good enough" for many purposes, and you can even go back and keep more digits for a small number of cases where it's too close to call with the approximation.
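
      A sketch of that fallback pattern in Python: answer from a rough estimate when it is clearly one-sided, and recompute exactly only when it is too close to call. The slack factor is an arbitrary choice for illustration.

      def product_exceeds(a, b, limit, slack=0.2):
          # Cheap first pass: multiply the rounded factors.
          estimate = round(a) * round(b)
          if estimate > limit * (1 + slack):
              return True               # clearly over the limit
          if estimate < limit * (1 - slack):
              return False              # clearly under it
          return a * b > limit          # too close to call: do it exactly

      product_exceeds(21.3625, 29.7482, 500)   # 21*30 = 630 >> 500: True
      product_exceeds(21.3625, 29.7482, 640)   # close call: exact (~635.5), False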

    • I do not see why they needed to modify their instruction set to realize such gains.

      It was just a generic example to give the casual reader a basic grasp of the idea, not a specific scenario they'll be applying their process to.

  • It sounds like they just invented analog.

  • Half-precision (Score:4, Interesting)

    by michaelmalak ( 91262 ) <michael@michaelmalak.com> on Wednesday December 18, 2013 @04:33PM (#45730219) Homepage
    GPUs have already introduced half-precision [wikipedia.org] -- 16-bit floats. An earlier 2011 paper [ieee.org] by the same author as the one in this Slashdot summary cites a power savings of 60% for an "approximate computing" adder, which isn't that much better than just going with 16-bit floats. I suppose both could be combined for even greater power savings, but my gut feeling is that I would have expected even more power savings once the severe constraint of exact results is discarded.
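
    For a feel of what half precision gives up, a quick NumPy check (purely illustrative; roughly three decimal digits of significand):

    import numpy as np

    pi64 = np.float64(np.pi)
    pi16 = np.float16(pi64)          # half precision: 11 significand bits
    float(pi16)                      # 3.140625
    float(pi64) - float(pi16)        # ~9.7e-4 of rounding error
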
  • And this is why we have thousands and thousands of approximation algorithms. Computers do the work perfectly precisely, except when we are talking about decimal numbers, and if you do not need perfect precision you just program in an approximate algorithm.

    I do not think you will ever do any better than picking the best mathematical algorithm for your problem, instead of just relying on lazy computers.

    • by tlhIngan ( 30335 )

      And this is why we have thousands and thousands of approximation algorithms. Computers do the work perfectly precisely, except when we are talking about decimal numbers, and if you do not need perfect precision you just program in an approximate algorithm.

      I do not think you will ever do any better than picking the best mathematical algorithm for your problem, instead of just relying on lazy computers.

      No, it's not. Approximation algorithms use exact computations and model approximation. The problem is using

      • If you don't care about the exact value, you can use a specific algorithmic approximation, which normally gives you many orders of magnitude less computation time.

  • From TFS:

    'Researchers are developing computers capable of "approximate computing" to perform calculations good enough for certain tasks that don't require perfect accuracy, potentially doubling efficiency and reducing energy consumption.'

    I am, for one, welcoming our new approximately accurate, longer-range drone overlords.

  • SHA256 double-hash applications were probably the first to use this on a massive scale. It's actually OK to ramp clock/voltage up 50% and get a 30% higher hash rate at the cost of 5% wrong answers (and a halved MTBF). An ASIC miner chip giving wrong answers now and then because of an imperfect mask process (even before overclocking) is common too.

    However, the numbers for standard-cell ASIC design don't seem very favourable, certainly not "doubling", much less energy saving (on the contrary, at a ballpark 10-30% OC you reach the point of diminish
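
    The reason mining tolerates flaky hardware is that every candidate answer is cheap to re-verify in exact logic before it is used. A Python sketch of that re-check (header, nonce and target are placeholders, not a real mining protocol):

    import hashlib

    def double_sha256(data: bytes) -> bytes:
        # Bitcoin-style double SHA-256.
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def accept_share(header: bytes, nonce: int, target: int) -> bool:
        # Re-check a candidate produced by (possibly error-prone) hardware.
        # A wrong answer from an overclocked ASIC is simply rejected here,
        # so occasional hardware errors only waste a little work.
        digest = double_sha256(header + nonce.to_bytes(4, "little"))
        return int.from_bytes(digest, "little") < target
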
  • by Anonymous Coward

    Fuzzy logic and all that jazz really should be used more in computing.
    It could save considerable power, or allow for far more computation in a smaller space.

    So much stuff in computing only requires decently accurate results. Some requires even less accurate results.
    If something was off by one pixel during one frame when it was moving, big deal, no loss.

    Not to mention how great it would be for the sake of procedural noise.
    You want something that isn't too random, but is just one value messed up a little, th

  • by Ottibus ( 753944 ) on Wednesday December 18, 2013 @04:46PM (#45730367)

    The problem with this approach is that the energy used for computation is a relatively small part of the whole. Much more energy is spent on fetching instructions, decoding instructions, fetching data, predicting branches, managing caches, and many other processes. And the addition of approximate arithmetic increases the area and leakage of the processor, which increases energy consumption for all programs.

    Approximate computation is already widely used in media and numerical applications, but it is far from clear that it is a good idea to put approximate arithmetic circuits in a standard processor.

  • by DigitAl56K ( 805623 ) on Wednesday December 18, 2013 @04:56PM (#45730533)

    .. this story or a slight variant gets reposted to Slashdot in one form or another.

  • by hamster_nz ( 656572 ) on Wednesday December 18, 2013 @05:11PM (#45730749)

    Due to ROM and cost limitations, the original Sinclair Scientific calculator only produced approximate answers, maybe to three or four digits.
    This was far more accurate than the answers given by a slide rule....

    For more info, have a look at this page: Reversing Sinclair's amazing 1974 calculator hack - half the ROM of the HP-35 [righto.com]

  • 1. collect museum-piece Pentium systems
    2. exploit FDIV bug
    3. submit blurb to Slashdot
    4. ...
    5. Profit!

  • This post was going to contain something insightful and funny, but because I'm using an approximate computer, it contains neither.
  • Politicians and journalists compute approximately most of the time.
  • This reminded me of soft heaps. http://en.wikipedia.org/wiki/Soft_heap [wikipedia.org]
    Basically it creates a heap, but some of the elements get "corrupted" and are not where they would be in a proper heap. Oddly enough, this can be used to write deterministic algorithms, and it provides the best known complexity for finding minimum spanning trees.
  • It's not correct to say that because approximate (serial, digital) computations don't use accurate (serial, digital) computation, and our brains don't use accurate (serial, digital) computation either, then our brains use approximate (serial, digital) computation.

    This is just as logical as saying that because green is not blue, and red is not blue, therefore green is red.
