'Inexact' Chips Save Power By Fudging the Math 325

Barence writes "Computer scientists have unveiled a computer chip that turns traditional thinking about mathematical accuracy on its head by fudging calculations. The concept works by allowing processing components — such as hardware for adding and multiplying numbers — to make a few mistakes, which means they are not working as hard, and so use less power and get through tasks more quickly. The Rice University researchers say prototypes are 15 times more efficient and could be used in some applications without having a negative effect."

  • Prediction (Score:5, Funny)

    by Anonymous Coward on Friday May 18, 2012 @09:17AM (#40040643)

    37 posts about the Pentium division bug.

    • by Anonymous Coward on Friday May 18, 2012 @09:20AM (#40040673)

      You just deprived someone of their +5 Funny, you bastard.

      • by Chrisq ( 894406 ) on Friday May 18, 2012 @09:57AM (#40041157)

        You just deprived someone of their +5 Funny, you bastard.

        My computer makes it a +4.7 funny.

        • That's too precise.

        • Re: (Score:3, Insightful)

          Comment removed based on user account deletion
          • The math processor wastes even more power, i.e., two processors running simultaneously. Even if you've got one idle and sleeping, it's sucking up more power than no processor.

            The big problem in some cases would be having standards-compliant IEEE floating point. The basic calculation may be fast, but then there's this chunk of overhead involved to check for range/overflow/underflow exceptions. When this extra part is done in software you can use libraries that skip it, but sometimes hardware will do these checks regardless.
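A rough software-level analogue of the point above (an illustration only, not what the Rice hardware does): NumPy performs these range/overflow checks in software and lets you switch them off, which is essentially the "skip the checking" trade-off the comment describes.

```python
import numpy as np

x = np.array([1e308])

# Default behaviour: NumPy's software range checks notice the overflow and warn.
np.seterr(over="warn")
y = x * 10    # RuntimeWarning: overflow encountered; result is [inf]

# The checks can be switched off; the multiply still happens, just unwatched.
np.seterr(over="ignore")
z = x * 10    # silently [inf]

print(y, z)
```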

    • Damn. You beat me to it.

    • by Anonymous Coward on Friday May 18, 2012 @09:33AM (#40040843)

      36.9999995796 posts about the Pentium division bug.

      Fixed that for you.

    • by Zordak ( 123132 )

      37 posts about the Pentium division bug.

      By my estimation, at least half of the Slashdot readership isn't even old enough to remember the Pentium division bug.

      • Re: (Score:3, Funny)

        by Anonymous Coward

        37 posts about the Pentium division bug.

        By my estimation, at least half of the Slashdot readership isn't even old enough to remember the Pentium division bug.

        You're making the somewhat unsupportable assumption that Slashdot is attracting younger new readers somehow.

    • by Woogiemonger ( 628172 ) on Friday May 18, 2012 @10:16AM (#40041407)

      37 posts about the Pentium division bug.

      37! In a row?

  • Graphics cards (Score:3, Interesting)

    by Anonymous Coward on Friday May 18, 2012 @09:17AM (#40040645)

    Don't they do this too, but fudge the maths so they can be a bit faster?

    Hence the CUDA stuff needed special modes to operate with IEEE floats, etc...

    • Re:Graphics cards (Score:4, Insightful)

      by Anonymous Coward on Friday May 18, 2012 @09:28AM (#40040779)

      Big difference between not dealing with full precision and encouraging erroneous behavior by trimming infrequently used chunks of hardware.

  • by 3.5 stripes ( 578410 ) on Friday May 18, 2012 @09:19AM (#40040667)

    I wish I could say reading the article gave me some insight as to where it fudges, but they kinda left it out.

  • by foobsr ( 693224 ) on Friday May 18, 2012 @09:21AM (#40040685) Homepage Journal
    Will these chips also boost attempts at achieving AI again?

    CC.

    • What exactly does imprecise math have to do with AI? Intelligent systems may use heuristics, but that is not the equivalent (far from it) of not doing proper math.

      • Re:AI Chip (Score:5, Insightful)

        by trum4n ( 982031 ) on Friday May 18, 2012 @09:33AM (#40040845)
        Humans tend to do fast imprecise math to decide when to cross the street. It looks like that car won't hit me, but I can't say it's going to take 4.865 seconds for it to get to the crosswalk. Estimations, even if fudged and almost completely wrong, should play a massive role in AI.
        • Humans aren't doing math at all in those situations. Hence, dumbing down a computer's math will not make it more "intelligent". Intelligent systems that are being designed today take advantage of a myriad of techniques developed over the past few decades. Path finding systems use different types of tree search algorithms. Self-driving cars will use a pair of cameras to judge the distance and relative speed of external objects. In neither of these cases does imprecise math help the intelligence of the system.

          • We're not doing math? What is it we're doing then?

          • In neither of these cases does imprecise math help the intelligence of the system.

            It might, if the fuzzy math enables calculations 10,000 times a second instead of 10 times a second.

          • Re:AI Chip (Score:4, Interesting)

            by SharpFang ( 651121 ) on Friday May 18, 2012 @10:03AM (#40041247) Homepage Journal

            Your definition of math is very limited. Descriptive Geometry is math too.

            Path finding systems may use an imprecise weight function when making decisions - calculating weights is a major burden.

            Using cameras involves image analysis. In essence, you're working with what is a bunch of noise, and you analyze it for meaningful data by averaging over a whole bunch of unreliable samples. If you can do it 15 times faster at the cost of introducing 5% more noise in the input stage, you're a happy man.

            In essence, if the input data is burdened by noise of any kind (and only "pure" data, like text typed in or read from disk, isn't; any real-world data like sensor readouts, images or audio contains noise), the algorithm must already be resistant to that noise, and a little more of it coming from math mistakes can't break it. Only after the algorithm reduces, say, 1MB of raw pixels into 400 bytes of vectorized obstacles might you want to be more precise... and even then small skews won't break it completely. (A toy numerical check of this follows the comment.)

            Also, what about genetic algorithms, where "mistakes" are introduced into the data artificially? They are very good at some classes of problems, and unreliable calculations at certain points would probably be advantageous to the final outcome.
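A toy numerical check of the noise argument above (the noise levels, sample count, and the Gaussian error model are made up for illustration): once you are already averaging over thousands of noisy sensor samples, a small amount of extra error from inexact arithmetic barely moves the result.

```python
import random

def estimate(true_value, n_samples, sensor_noise, arithmetic_noise=0.0):
    """Average n noisy sensor samples; arithmetic_noise models the extra error
    an inexact adder would inject on top of the sensor noise itself."""
    total = 0.0
    for _ in range(n_samples):
        sample = true_value + random.gauss(0.0, sensor_noise)
        sample += random.gauss(0.0, arithmetic_noise)  # the "fudged math"
        total += sample
    return total / n_samples

random.seed(0)
exact_math = estimate(42.0, 10_000, sensor_noise=1.0)
fudged_math = estimate(42.0, 10_000, sensor_noise=1.0, arithmetic_noise=0.05)

# Both estimates land within a few hundredths of the truth: 5% extra arithmetic
# noise disappears into the averaging the sensor noise already forces on you.
print(abs(exact_math - 42.0), abs(fudged_math - 42.0))
```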

          • As many have said below, your brain is indeed doing math - what it's not doing is "computation".

            Most of the discussions in this thread are forgetting that important difference. The applications for which this type of chip will be useful are those in which the exact value of something is not important, but the relationships between values are. For instance, if you're implementing a control system algorithm, you don't care that the value of your integration is something specific, but you do care that it will

      • Neural networks, say? It's not like real brains, being analog computers, work in a deterministic manner anyway.
  • by SJHillman ( 1966756 ) on Friday May 18, 2012 @09:21AM (#40040691)

    These chips will, of course, be aimed at government markets.

    • by who_stole_my_kidneys ( 1956012 ) on Friday May 18, 2012 @09:25AM (#40040741)
      where accuracy is just some word that gets in the way.
    • who will be using them in the next generation of missile guidance systems.

      So they'll be able to put a warhead through a window still, just that they don't know if it'll be $Dictator's window or the kindergarten next door... oh, wait.

    • I can see a notice from the IRS right now.

      Hello, .

      By our records, we have determined that you are late on your taxes.

      Below is the amount you owe to the Federal Government.

      Your taxed income is: NaN DIVISION BY ZERO

      Thank you for your time.

  • First Post! (Score:5, Funny)

    by MyLongNickName ( 822545 ) on Friday May 18, 2012 @09:24AM (#40040731) Journal

    This is first post according to my new power-efficient computer!

  • that strategy has always worked for me.

  • I can already hear people arguing "no, no, I did not fudge with the numbers, it's the computer chip" :)

    • next Mars shot?

    • I can already hear people arguing "no, no, I did not fudge with the numbers, it's the computer chip" :)

      I think they must have used one of those in the computer they used to decide if Greece was up to joining the Euro.

  • At least our eventual computer overlords won't be able to count accurately to be sure they've eliminated all of us...

  • From here on out, I'm requiring my chips to show their work. And it better not look the same as the work of that northbridge chip you're sitting next to.

  • by CastrTroy ( 595695 ) on Friday May 18, 2012 @09:30AM (#40040799)
    Seems like nothing new to me. Floating point binary math is basically used for the same reason. It gives us an answer that's close enough, without requiring too much computation time. And it causes all sorts of fun, since even simple numbers like 0.1 can't be represented exactly in binary floating point. Binary floating point works well for scientific apps, but fails quite badly at financial apps. I think this is basically taking floating point to the next level, where the calculations are even more off. Which might work for certain applications, but for other types of applications would be completely catastrophic. What really bothers me is languages and platforms that provide no ability to work with numbers in a decimal representation.
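A minimal illustration of the 0.1 example in the comment above, in Python (one platform that does offer a decimal representation alongside binary floating point):

```python
from decimal import Decimal

# Binary floating point cannot represent 0.1 exactly, so repeated addition drifts:
total = sum(0.1 for _ in range(10))
print(total)         # 0.9999999999999999
print(total == 1.0)  # False

# A decimal representation keeps exact tenths, which is what financial code wants:
exact = sum(Decimal("0.1") for _ in range(10))
print(exact)         # 1.0
print(exact == 1)    # True
```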
  • by bjourne ( 1034822 ) on Friday May 18, 2012 @09:30AM (#40040801) Homepage Journal
    Before someone comes up with that stupid remark: not much. :) If the chips are 15 times as efficient as normal ones, it means that you could run, for instance, four in parallel and rerun each calculation on which one of them differs. That way you would get both accurate calculations and power savings. Modify the number of chips running in parallel depending on the accuracy and efficiency needed. (A sketch of this idea follows the reply below.)
    • by Hentes ( 2461350 ) on Friday May 18, 2012 @10:00AM (#40041211)

      If I'm reading the article right, the chips are still deterministic, they just don't care about a few rare edge cases. So whether there is an error or not depends on the input, and in your case all four chips will make the same mistake. What you could try is modify the input a little for each rerun and try to interpolate the result from that, but that won't give you perfect accuracy.
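A sketch of the two ideas in this subthread, using a toy, deterministic "inexact" adder (the truncation step and input values are invented for illustration): identical redundant runs never disagree, as the reply points out, while dithering the input across reruns recovers the exact answer on average, though never with perfect accuracy per run.

```python
import math
import random

STEP = 0.01  # the low-order part the toy "inexact" adder throws away

def inexact_add(a, b, step=STEP):
    """Toy inexact adder: deterministically truncates the sum to a coarse grid."""
    return math.floor((a + b) / step) * step

a, b = 1.2345, 2.3546
exact = a + b                                      # 3.5891

# The redundancy scheme meets the determinism caveat: four identical "chips"
# return the same wrong answer, so a disagreement check never fires.
redundant = [inexact_add(a, b) for _ in range(4)]  # [3.58, 3.58, 3.58, 3.58]

# Perturbing the input a little on each rerun (dithering): with an offset drawn
# uniformly over one truncation step, the average is an unbiased estimate of the
# exact sum, even though any single rerun is still off.
random.seed(1)
reruns = [inexact_add(a + random.uniform(0.0, STEP), b) for _ in range(200)]
dithered = sum(reruns) / len(reruns)

print(f"exact={exact:.4f}  redundant={redundant[0]:.4f}  dithered mean={dithered:.4f}")
```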

  • PI (Score:5, Funny)

    by Rik Sweeney ( 471717 ) on Friday May 18, 2012 @09:30AM (#40040809) Homepage

    "This isn't so much a circle as a square, what the hell's going on?!"
    "Oh, that's because the chip in your machine doesn't accurately define PI, it rounds the value up"
    "To what?"
    "4"

  • by Ravensfire ( 209905 ) on Friday May 18, 2012 @09:36AM (#40040885) Homepage

    Hmm, seems this has been used by The Fed and European Central Bank for quite a while now.

  • by Corson ( 746347 ) on Friday May 18, 2012 @09:38AM (#40040923)
    In more recent news, computer scientists determined that monkeys can get the same job done even faster, and by using even less power, and by making, um... a lot more mistakes.
  • Where I work, we call this "the much-faster 'wrong' algorithm". It's frequently a side-effect of overly-enthusiastic attempts at optimization, sometimes by people, and sometimes by compilers.

  • Hasn't this concept been around since the 60's?
  • by paleo2002 ( 1079697 ) on Friday May 18, 2012 @09:44AM (#40040989)
    This is exactly the problem with American chips lately. They're too lazy to put any effort into their work. Sure, they're "saving energy" but that just means they're going to become even more obese. Chips from many Asian manufacturers are already much more accurate and efficient than American ones. We need to encourage American chips to be more interested in STEM fields if we're ever going to turn our economy around!
  • by tomhath ( 637240 ) on Friday May 18, 2012 @09:44AM (#40040995)

    the concept works by allowing processing components — such as hardware for adding and multiplying numbers — to make a few mistakes, which means they are not working as hard

    But my math teacher didn't understand the important difference between efficient and lazy.

  • by rodrigoandrade ( 713371 ) on Friday May 18, 2012 @09:44AM (#40041005)

    The concept works by allowing processing components — such as hardware for adding and multiplying numbers — to make a few mistakes, which means they are not working as hard, and so use less power and get through tasks more quickly.

    This concept was used a lot back in my high school.

  • This device has already been invented by a certain Tom Peters. I present: THE CALCUCORN:

    http://video.google.com/videoplay?docid=-7064178994016272127 [google.com]

    (skip to 7:15)

  • by dedmorris ( 1137577 ) on Friday May 18, 2012 @09:54AM (#40041119)
    I do the same thing.

    I write 15 times as much code by not bothering to fix the mistakes.

  • Of course, we can't trust that number if it was run on the chips in question. What is the margin of error? Plus or minus two to the fourth power?
  • by TheSkepticalOptimist ( 898384 ) on Friday May 18, 2012 @09:57AM (#40041159)

    Wow, so the goal of being Green in the future is to introduce more bugs into hardware to save power. While I am sure there are limited uses for this kind of "math", in general I don't believe these chips will have widespread adoption, because mathematical accuracy, at least for integer values, is kind of critical for most applications. It's hard enough for developers to predict the random and idiotic nature of the users of their software; now they have to build protection against hardware throwing them random results.

    This instantly reminded me of a developer that claimed a 1200% improvement in performance after he optimized some code. The developer wasn't particularly skilled and some senior level guys had already optimized the performance about as far as it could be taken, so we were dubious. We found after a code review that basically this developer had improved the efficiency of the software by skipping some critical, intensive calculations that were the point of the software.

    Sure, you could claim that this optimization is greener than the original code because the CPU is not working as hard, but if you are not going to get the expected results, f*ck being green.

  • As much as good enough might save power, good enough doesn't cut it once you start dealing with currency. As long as these stay away from accountants and banks that's fine, but whether they will is the issue.
  • by Freddybear ( 1805256 ) on Friday May 18, 2012 @10:04AM (#40041261)

    Call it the "Close enough for government work" chip.

  • by gman003 ( 1693318 ) on Friday May 18, 2012 @10:12AM (#40041351)

    Video game graphics could probably benefit from this. Very few people will notice that one pixel is #FA1003 instead of #FC1102, especially when it's replaced 16ms (or, worst-case, 33ms) later with yet another color. It might actually make things "better" by making the rendering seem more analog. Many games are "wasting" power adding film grain or bokeh depth-of-field or lens flares or vignette, simulating the imperfections of analog systems to make their graphics less artificial-looking. If you can get a "better" look while using *less* power, all the better. (A quick check of how close those two colours are follows this comment.)

    Actually, I seem to recall hearing about this earlier. For some reason I want to say nVidia specifically has been looking into this.
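For scale, the two example colours in the comment above differ by only a couple of counts per 8-bit channel (a quick check; the hex values are taken from the comment):

```python
def channel_deltas(hex_a: str, hex_b: str):
    """Per-channel difference between two 24-bit RGB colours given as hex strings."""
    a = [int(hex_a[i:i + 2], 16) for i in (0, 2, 4)]
    b = [int(hex_b[i:i + 2], 16) for i in (0, 2, 4)]
    return [x - y for x, y in zip(a, b)]

print(channel_deltas("FA1003", "FC1102"))  # [-2, -1, 1], out of 0..255 per channel
```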

  • What!? That rocket was NOWHERE NEAR ME. Wait, why is everything FROZEN?!
    Connection Terminated. Desynch error rate exceeded.

    Oh sure we'll just snapshot the whole flippin' gamestate to the clients and do reconciliation -- But that's just wrong.
    Error propagation, non-determinism, etc. etc. This is OK for GPU stuff that ONLY draws pixels. Anything that affects gameplay could only be done server side with dumb clients, but not for any real detailed worlds (just ask the Second Life devs) -- Without deterministic client-side prediction you need MUCH higher bandwidth and latency of less than 30ms to get an equivalent experience. The size of the game state in game worlds has been increasing geometrically (in PCs it still grows; consoles hit limits due to ridiculously long cycles and outdated HW); determinism and pseudo-randomness help keep the required sync-state bandwidth low. Oh, I guess I could use less precise computations for SOME particle effects (non-damaging stuff), but you know what? I'M ALREADY DOING THAT.

    What's that you say? The errors could be deterministic? Oh really... well, then what the hell is the point? Why not just use SMALLER NUMBERS and let the PROGRAMMER decide what the precision should be. It's like no one's heard of short int or 16-bit processors. Give a dedicated path for smaller numbers, and keep us from being penalised when we use them (currently, 16-bit instructions are performed in 32-bit or 64-bit then trimmed back down). Some GPU stuff already has HALF PRECISION floats. Optimise that path and STFU about fuzzy math, you sound moronic...
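The half-precision path mentioned above is already a deliberate, deterministic precision cut; a quick look at how much gets discarded (NumPy used here purely for illustration):

```python
import numpy as np

x = np.float32(3.14159265)

# float16 keeps roughly 3 significant decimal digits, and the rounding is deterministic:
h = np.float16(x)
print(float(x))                     # 3.1415927410125732
print(float(h))                     # 3.140625
print(h == np.float16(3.14159265))  # True: every run loses exactly the same bits
```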

  • I can imagine a few cases where it could be allowed, based on mathematical proof in advance that the error level would be acceptable.

    Audio/video playback in a noisy environment
    Processing similar to PageRank and the recently announced NetRank for biochemical analysis might be able to produce better results for a given cost in electricity. In other words, deeper graph analysis traded for fewer significant digits
    CPU-controlled activities that depend on statistics and sensors, for example street light control, or voice/gesture-based activation of lighting
    Applications in which low power is the most important thing, especially if the output is meant for a human brain, which already operates on a lossy basis. A wristwatch might use less power if it is allowed to be correct within plus or minus 15 seconds.

  • I cannot wait until these chips start doing high frequency trading in the financial markets....
