'Approximate Computing' Saves Energy
hessian writes "According to a news release from Purdue University, 'Researchers are developing computers capable of "approximate computing" to perform calculations good enough for certain tasks that don't require perfect accuracy, potentially doubling efficiency and reducing energy consumption. "The need for approximate computing is driven by two factors: a fundamental shift in the nature of computing workloads, and the need for new sources of efficiency," said Anand Raghunathan, a Purdue Professor of Electrical and Computer Engineering, who has been working in the field for about five years. "Computers were first designed to be precise calculators that solved problems where they were expected to produce an exact numerical value. However, the demand for computing today is driven by very different applications. Mobile and embedded devices need to process richer media, and are getting smarter – understanding us, being more context-aware and having more natural user interfaces. ... The nature of these computations is different from the traditional computations where you need a precise answer."' What's interesting here is that this is how our brains work."
meanwhile... (Score:4)
The majority of CPU cycles in data centers goes to looking up and filtering specific records in databases (or maybe parsing files, if you're into that). They can possibly save energy on a few specific kinds of scientific computing.
Numerical computation is pervasive (Score:5, Informative)
This is not about data centers and databases. This is about scientific computation -- video and audio playback, physics simulation, and the like.
Doing a computation approximately first, and then refining the results only in the parts where more accuracy is useful, is an old idea; one manifestation is multigrid [wikipedia.org] algorithms.
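A toy Python sketch of that coarse-first, refine-later flavor (my own example using NumPy, not anything from TFA):

```python
import numpy as np

def jacobi(u, f, h, sweeps):
    # A few Jacobi relaxation sweeps on -u'' = f with fixed endpoints.
    for _ in range(sweeps):
        u[1:-1] = 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def two_grid(f_fine, sweeps=50):
    # Solve cheaply on a coarse grid first, interpolate, then spend the
    # expensive fine-grid work only on polishing the approximate answer.
    n = len(f_fine)
    h = 1.0 / (n - 1)
    f_coarse = f_fine[::2]                      # every other point
    u_coarse = jacobi(np.zeros_like(f_coarse), f_coarse, 2 * h, sweeps)
    x_fine = np.linspace(0.0, 1.0, n)
    x_coarse = np.linspace(0.0, 1.0, len(f_coarse))
    u_fine = np.interp(x_fine, x_coarse, u_coarse)
    return jacobi(u_fine, f_fine, h, sweeps)    # refine on the fine grid

x = np.linspace(0.0, 1.0, 65)
u = two_grid(np.pi ** 2 * np.sin(np.pi * x))    # exact solution: sin(pi x)
print(abs(u - np.sin(np.pi * x)).max())         # small residual error
```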
Re: (Score:3)
Re: (Score:1)
Re: (Score:3)
Welcome to half-assed computing.
Which half?
Re: (Score:3)
Re:Numerical computation is pervasive (Score:5, Interesting)
The only new idea here is using approximate computing specifically to trade precision for lower power. The research has less to do with new algorithms and more to do with new applications of classic algorithms.
Re: (Score:2)
Holy crap dude - you hit the nail on the head, but my brain went primal when you brought up the "Big O".
Re: (Score:2)
Re: (Score:2)
The Big O formalism actually bounds the error quite precisely. In contrast, approximate computing does not offer any bound for the error (or perhaps only in the statistical sense).
Re: (Score:2)
we will see this used on things where it shouldn't be, such as financial transactions.
I, for one, am OK with 1/10000th of a dollar accuracy.
Heck, within 1/640th of a cent ought to be good enough for anybody.
Re: (Score:2)
Then you are creating a problem if it goes into financial transactions. If each transaction is computed 1/10000 less (truncated), how much would it be off after 1 million transactions? Also, even a 1-cent difference in accounting will need adjustment in multiple places in order to prove out the difference. You are thinking too narrowly.
Back to the GP, I do NOT see how this would be put into financial transactions anywhere. Also, it is a rule of thumb to separate the decimal part from the whole-value currency. Simple transaction com
Re: (Score:2)
how much would it be off after 1 million transactions?
That depends entirely on the dataset.
If all the transactions are dealing in units no smaller than $.01, you should see no error truncating to the nearest $.0001, no matter the number of transactions.
If you're worried, just create an account called "Salami" and post the rounding errors there.
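That claim is easy to sanity-check in Python (the amounts and count are made up):

```python
from decimal import Decimal, ROUND_DOWN

def truncate(amount, places=4):
    # Truncate (round toward zero) to 1/10000 of a dollar.
    return amount.quantize(Decimal(10) ** -places, rounding=ROUND_DOWN)

# A million whole-cent transactions: truncation to $0.0001 changes nothing.
amounts = (Decimal("19.99") for _ in range(1_000_000))
drift = sum(truncate(a) - a for a in amounts)
print(drift)  # 0.0000 -- no accumulated error
```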
Re: (Score:2)
we will just drop that leftover parts of cent to o (Score:2)
we will just drop those leftover parts of a cent into our own account.
Re: (Score:2)
1/10000th of a BitCoin will be worth far more than a dollar and will only keep going up.
Apart from when it goes down [slashdot.org]
Re: (Score:2)
Even OLAP could probably profit from this. Sometimes, it doesn't matter whether the response to the question "does my profit increase correlate strongly with my sales behavior X" is "definitely yes, by 0.87" or "definitely yes, by 0.86"; the important thing is that it isn't "most likely no, by 0.03".
Also, in the era of heterogeneous machines, you ought to have a choice in that.
Re:meanwhile... (Score:5, Informative)
The majority of CPU cycles in data centers goes to looking up and filtering specific records in databases
Approximate Computing is especially interesting in databases. One of the coolest projects in this space is Berkeley AMPLab's BlinkDB [blinkdb.org]. Their canonical example, sketched below, should give you a good idea of how/why it's useful.
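For reference, the canonical example in the BlinkDB paper is an aggregate query with an explicit response-time (or error) bound. A sketch of that query form, held as a plain string since BlinkDB extends HiveQL (table and column names follow the paper's example; treat the exact syntax as approximate):

```python
# Sketch of BlinkDB's bounded-response-time query form. The WITHIN clause
# asks the engine to answer from a pre-built sample inside the time budget
# and report an error bar, rather than scanning the full table exactly.
query = """
    SELECT avg(sessionTime)
    FROM Sessions
    WHERE city = 'San Francisco'
    WITHIN 1 SECONDS
"""
```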
Their benchmarks show that Approximate Computing to 1% error is about 100X faster than Hive on Hadoop.
Re:meanwhile... (Score:5, Informative)
BlinkDB: Queries with Bounded Errors and Bounded Response Times on Very Large Data
Re:meanwhile... (Score:5, Funny)
Currently Slashdot is displaying ads for me along with the "disable ads" checkbox checked. Perhaps "approximate computing" is farther along than I imagined!
Re: (Score:2)
Currently Slashdot is displaying ads for me along with the "disable ads" checkbox checked. Perhaps "approximate computing" is farther along than I imagined!
We've had approximate computing since the earliest days of the Pentium CPU.
Re: (Score:2)
We've had approximate computing since the earliest days of the Pentium CPU.
My favorite joke of that era was
I am Pentium of Borg.
Arithmetic is irrelevant.
Division is futile.
You will be approximated!
Re: (Score:2)
" Perhaps "approximate computing" is farther along than I imagined!"
Indeed, Excel has been doing it for 20 years.
Re: (Score:2)
Re: (Score:2)
The reason Excel is a large program is that the CPU is imperfect.
Re:meanwhile... (Score:5, Funny)
Currently Slashdot is displaying ads for me along with the "disable ads" checkbox checked. Perhaps "approximate computing" is farther along than I imagined!
Sorry, that was my fault. I didn't have my ad-block disabled. They must have sent them to you instead.
Just send them to me and I will look at them.
Analog (Score:5, Interesting)
This is also how analog computers work. They're extremely fast and efficient, but imprecise. The idea had a bit of traction in the old days, but interest seems to have died off.
Re: (Score:3)
Re: (Score:2)
Absolute accuracy may not be possible, but plenty-good-enough accuracy is achievable for a lot of different types of problems.
The same can be said of digital computers.
Re: (Score:1)
Re:Analog (Score:4, Informative)
This is also how analog computers work. They're extremely fast and efficient, but imprecise. It had a bit of traction in the old days, but interest seems to have died off.
Analog is not imprecise. Analog computing can be very precise and very fast for complex transfer functions. The problem with analog is that it is hard to change the output function easily and it is subject to changes in the derived output caused by things like temperature changes or induced noise. So the issue is not about precision.
Re: (Score:2)
it is subject to changes in the derived output caused by things like temperature changes or induced noise.
and
Analog is not imprecise.
does not compute.
If the output is influenced by temperature and noise, then it is imprecise.
If a value should be 6.586789 but, due to the temperature, the output is 6.586913, then it has an error of 0.000124.
Re: (Score:2)
Since in an Analogue computer every bit now contains an infinite amount of information, instead of just one, I imagine it would be incredibly fast.
And since every decimal is already stored in approximate form, in a normal computer, I cannot imagine it being that different.
Re: (Score:2)
Since in an Analogue computer every bit now contains an infinite amount of information, instead of just one, I imagine it would be incredibly fast.
What is this I don't even.
Re: (Score:2)
Binary: 1 bit can be 2 values, and contains the absolute minimal amount of information possible (true or false).
Decimal: 1 bit can be one of 10 different values, so five times more information is present in a single bit. So information is sent and computed far faster.
Analogue: 1 bit can be an infinite amount of values, so an infinite amount more information can be sent in a single bit. So information is sent and computed far far faster.
Re: (Score:1)
Decimal: 1 bit can be one of 10 different values, so five times more information is present in a single bit.
No, that's not what a bit is. 'Bit' is short for 'binary digit'. A bit can, by definition, only hold one of two possible states. It is a fundamental unit of information. A decimal digit comprises multiple bits. Somewhere between 3 and 4 bits per decimal digit.
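One line of Python makes "between 3 and 4" precise:

```python
import math

# Information content of one decimal digit, measured in bits:
print(math.log2(10))  # 3.3219... -- between 3 and 4, as stated
```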
Re: (Score:2)
I understand that, but "1 bit can be one of 10 different values" was more understandable, in my opinion, than "1 computational value can be one of 10 different values".
You did not give me the correct alternative because one does not really exist, as far as I know.
A single data point is a decent alternative.
"Value" would work in some instances, but not if you are already using that word to mean something else in the very same sentence.
"Bit" is what we now use to refer to a single burst of information in a computer. A
Re: (Score:2)
And if non binary computers started to be more popular I would not be surprised if the definition of bit expanded to include them.
It won't, because they won't.
Re: (Score:2)
You did not give me the correct alternative because one does not really exist, as far as I know.
I think I found it - it's called a ban [wikipedia.org].
Re: (Score:2)
Analogue: 1 bit can be an infinite amount of values, so an infinite amount more information can be sent in a single bit.
Except that you don't have the ability to measure an infinite spread of values. In reality, it's finite information too.
Re: (Score:2)
Binary: 1 bit can be 2 values, and contains the absolute minimal amount of information possible
That's the last correct statement in your post.
Decimal: 1 bit can be one of 10 different values, so five times more information is present in a single bit.
You mean one digit. "Bit" has no other definition than the one you've given above.
So information is sent and computed far faster.
No, this simply isn't true. The bit is the fundamental unit of information. You can't transmit data faster simply by declaring it to be decimal/hexadecimal/analogue. All of those things are still, fundamentally, measured in bits. You might as well argue that since you can lift a package that weighs 1kg, you could just as easily lift a package that weighs 1000kg because it's still
Re: (Score:2)
No, that is how an analogue computer works. The analogue values are not represented by bits; the signals are analogue.
Also, "digit" is how you represent a decimal number; it does not imply the same kind of computational signal, and cannot be used for analogue in any way.
And defining "bit" as a "basic[/fundamental/indivisible] unit of information in computing" is not far off.
Re: (Score:2)
Ooh ooh, found another mistake (sorry).
Decimal: 1 [decimal digit] can be one of 10 different values, so five times more information is present in a single [decimal digit].
Just because it can represent five times more states, doesn't make it five times more information. It's about 3.322 times more information (as measured in bits).
1 bit can be one of 2 states.
3 bits can take one of 8 states - four times as many states, but only three times the number of bits.
Re: (Score:2)
On the contrary - they can be extremely precise. Analog computing elements were part of both the Saturn V and Apollo CSM stack guidance & navigation systems for example. Analog systems were replaced by digital systems for a wide variety of reasons, but accuracy was not among them.
Re: (Score:2)
Seems like a totally impractical system.
You are probably already using an algorithm that produces approximate results; add on top of that a computer that routinely makes mistakes in the name of speed.
You would think that sometimes the stars would align and you would get a result that is just completely wrong.
Re: Analog (Score:2)
Zero voltage? Zero leakage current? Your transistors sound ideal ....
Accuracy isn't important anymore (Score:4, Insightful)
We're teaching our kids that 2+2 equals whatever they feel it is equal to, as long as they are happy. What do we need with accuracy anymore?
Re: (Score:2)
I don't think you appreciate the point. In most cases, rather than multiplying 152343 x 1534324, you might as well just multiply 15x10^4 x 15x10^5 = 225x10^9. And to understand this you need to be very comfortable with what 2+2 equals exactly.
Re: (Score:1)
We're teaching our kids that 2+2 equals whatever they feel it is equal to, as long as they are happy. What do we need with accuracy anymore?
Indeed... what's 3/9 + 3/9 + 3/9 after all? Does it approach 1, or is it actually 1? Do we care? Are we happy?
Re: (Score:2)
3/9 is 0.3* in decimal, which is an infinitely repeating 3. Add 3 of those together, you get an infinitely repeating 9, which, while it approaches 1 using concrete values, is not precisely 1, for the standard definition of 1. However, using approximate computing or general notation, they're the same for all intents and purposes.
This gets even more interesting when you use a different base such as binary, that doesn't have the same issues with notational conversion as base 10. Base 12 is also useful here.
Re: (Score:1)
Oh yes, and an alternative is to argue that 3/9 is in fact equivalent to 0.4 -- but 0.4 * 3 = 1.2, not 1. Or, you could argue that 3/9 is always 1/3 and has no decimal representation, as infinite sequences aren't actually representable (at which point numbers like pi become a bit of an issue, as they have no finite representation in any number base that we know of).
Re: (Score:2)
"10" in base Pi.
Sorry, couldn't resist.
-l
Re: (Score:1)
I disagree, and will continue to unless you can provide a proof. An infinitely repeating number approaches the next value, and can be treated as equivalent for all intents and purposes, but the values are not strictly the same. I do agree about your actual argument however (assumptions, concepts, meaning of a decimal numeral, etc.).
Look at the series 0/infinity, 1/infinity, 2/infinity etc. -- these values, in our decimal notation, are all 0. However, when found in the middle of a calculation, multiplied
Re: (Score:2)
A proof? Okay, here goes. Define f(n) as a number with the decimal representation of "0." followed by n "9"s. The value of 0.9999... with infinitely many 9s is the value of f(n) as n goes to infinity, and I'll just call it x for now. For any epsilon > 0, we can find an integer n0 such that, for all n >= n0, the absolute value of f(n) - x is smaller than epsilon. That's what we mean by the value as n goes to infinity.
Now look at what happens if we assume x == 1. Then (if I've done
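The truncated computation is easy to fill in (my notation, not necessarily the poster's):

```latex
f(n) = \sum_{k=1}^{n} 9 \cdot 10^{-k} = 1 - 10^{-n},
\qquad
|f(n) - 1| = 10^{-n} < \epsilon \quad \text{whenever } n > \log_{10}(1/\epsilon),
```

so x = 1 satisfies the definition of the limit, and 0.999... is exactly 1.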
Re: (Score:2, Insightful)
Where the hell did you get that from? Oh yeah, it's the talking points about the Common Core approach. Too bad that is nothing like what the Common Core says. Find a single place where any proponent of the Common Core said something like that and I'll show you a quote mine where they really said "it is understanding the process that is important, of which the final answer is just a small part, because computation errors can be corrected."
Future AIs (Score:1)
I find it interesting that most science fiction portrays an AI as having all of the advantages of sentience (creativity, adaptability, intuition) while also retaining the advantages of a modern computer (perfect recall, high computational accuracy, etc.). This kind of suggests that with a future AI, maybe that would not be the case; maybe the requirements for adaptability and creativity place sufficient demands on a system's (biological or electronic) resources that you couldn't have such a
Re: (Score:2)
However, being artificial brains, they can be connected both to analogue or imperfect-digital "brains" and to precise digital systems, in the same way that you can use a computer, but much closer and more immediate. Best of both worlds.
Heard this before (Score:2, Interesting)
Heard this one before. On Slashdot, even. Yes, you can do it. No, you don't want to. Remember when LCDs came with a few dead pixels? There used to be a market for DRAM with bad bits for phone answering machines and buffers in low-end CD players. That's essentially over.
Working around bad bits in storage devices is common; just about everything has error correction now. For applications where error correction is feasible, this works. Outside that area, there's some gain in cost and power consumption in ex
Re: Heard this before (Score:1)
Re: (Score:1)
JPEG is not a scaling algorithm. It is a (lossy) image compression format. By 'compression', I mean it allows you to compress the file size (measured in bytes) -- not the image dimensions (measured in pixels). It has nothing to do, really, with resizing an image.
Scaling algorithms are things like point sampling, bilinear interpolation, bicubic interpolation, etc.
Cue the hoary old Intel Pentium jokes in 3...2...1 (Score:3)
A1: Successive approximations.
A2: A random number generator.
Hey, folks, I can keep this up all day.
http://www.netjeff.com/humor/item.cgi?file=PentiumJokes [netjeff.com]
Been there (Score:5, Funny)
Re:Been there (Score:4, Funny)
I remember Intel doing something like this back in the days of the 386, except without the energy savings.
Actually it was the Pentium [wikipedia.org], which was a precursor to these processors.
Near enough... he was saving mental energy by settling for the approximately correct answer.
Fuzzy Logic anyone? (Score:4, Informative)
While the concept was interesting, it never really caught on. Progress in silicon devices simply made it unnecessary. It ended up being used as a buzzword for a few years and then quietly died away.
I wonder if this is going to follow the same trend.
Re: (Score:2)
Re: (Score:2)
I wonder if this is going to follow the same trend.
It's quite possible that if I didn't have to use this @#1%ing approximate computer I could definitively answer that question.
Re: (Score:2)
Fuzzy logic has nothing specifically to do with tables, but rather with approximate truth values. Standard probability is a perfectly respectable fuzzy logic, and doesn't need tables.
I don't remember what it was supposed to do as far as efficiency goes, though.
I Use This (Score:2)
Saves me energy, too.
Old noos... (Score:2)
Mai spel checkar allreddy wurks dis weigh....
It's not hard (Score:2)
Re: (Score:2)
However, multiplying "simpler" numbers might be faster. For example, I can multiply 20*30 in my head faster than 21.3625*29.7482 (YMMV). Rounding 21.3625*29.7482 to 20*30 might be "good enough" for many purposes, and you can even go back and keep more digits for a small number of cases where it's too close to call with the approximation.
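That round-first, refine-only-when-needed idea is easy to sketch in Python (the function names and the one-significant-digit rounding are my own choices, not anything from the article):

```python
import math

def round_sig(x, digits=1):
    # Round x to the given number of significant digits.
    if x == 0:
        return 0.0
    scale = 10 ** (math.floor(math.log10(abs(x))) - digits + 1)
    return round(x / scale) * scale

def approx_mul(x, y, rel_tol=0.1):
    # Multiply rough-and-cheap first; redo exactly only when the
    # worst-case rounding error could exceed the tolerance.
    if x == 0 or y == 0:
        return 0.0
    ax, ay = round_sig(x), round_sig(y)
    err = abs(ax - x) / abs(x) + abs(ay - y) / abs(y)
    return ax * ay if err <= rel_tol else x * y

print(approx_mul(21.3625, 29.7482))  # 600.0 -- the "good enough" path
```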
Re: (Score:2)
I do not see why they needed to modify their instruction set to realize such gains.
It was just a generic example to give the casual reader a basic grasp of the idea, not a specific scenario they'll be applying their process to.
Analog (Score:2)
It sounds like they just invented analog.
Half-precision (Score:4, Interesting)
Re: (Score:2)
Is that not what Approximation Algorithms are for? (Score:2)
And this is why we have thousands and thousands of approximation algorithms. Computers do the work perfectly precisely, except when we are talking about decimal numbers, and if you do not need perfect precision you just program in an approximation algorithm.
I do not think you will ever do better than picking the best mathematical algorithm for your problem, instead of just relying on lazy computers.
Re: (Score:2)
No, it's not. Approximation algorithms use exact computations and model approximation. The problem is using
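To make that distinction concrete, here is a textbook approximation algorithm as a Python sketch: every arithmetic step is exact; only the answer quality is approximate, with a provable factor-2 bound.

```python
def vertex_cover_2approx(edges):
    # Greedy matching-based 2-approximation for minimum vertex cover:
    # take both endpoints of any still-uncovered edge. The result is
    # always a valid cover, at most twice the size of the optimal one.
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4)]))  # e.g. {1, 2, 3, 4}
```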
Re: (Score:2)
If you don't care about the exact value, you can use a specific algorithmic approximation, which normally gives you many orders of magnitude less computation time.
Drones (Score:2)
'Researchers are developing computers capable of "approximate computing" to perform calculations good enough for certain tasks that don't require perfect accuracy, potentially doubling efficiency and reducing energy consumption.'
I am, for one, welcoming our new approximately accurate, longer-range drone overlords.
Overclocked GPUs, ASIC, analog? (Score:1)
However, the numbers for standard-cell ASIC design don't seem very favourable, certainly not "doubling", much less energy saving (on the contrary, at a ballpark 10-30% of OC you reach the point of diminish
Glad this is happening (Score:1)
Fuzzy logic and all that jazz really should be used more in computing.
It could save considerable power, or allow for far more computation in a smaller space.
So much stuff in computing only requires decently accurate results. Some requires even less accurate results.
If something was off by one pixel during one frame when it was moving, big deal, no loss.
Not to mention how great it would be for the sake of procedural noise.
You want something that isn't too random, but is just one value messed up a little, th
Computation is not the big energy drain (Score:4, Interesting)
The problem with this approach is that the energy used for computation is a relatively small part of the whole. Much more energy is spent on fetching instructions, decoding instructions, fetching data, predicting branches, managing caches and many other processes. And the addition of approximate arithmetic increases the area and leakage of the processor, which increases energy consumption for all programs.
Approximate computation is already widely used in media and numerical applications, but it is far from clear that it is a good idea to put approximate arithmetic circuits in a standard processor.
Approximately once a month.. (Score:3)
.. this story or a slight variant gets reposted to Slashdot in one form or another.
Clive Sinclair did this in 1974. (Score:5, Informative)
Due to ROM and cost limitations, the original Sinclair Scientific calculator only produced approximate answers, maybe to three or four digits.
This was far more accurate than the answers given by a slide rule....
For more info have a look at this page Reversing Sinclair's amazing 1974 calculator hack - half the ROM of the HP-35 [righto.com]
I see what they did here (Score:1)
1. collect museum-piece Pentium systems ...
2. exploit FDIV bug
3. submit blurb to Slashdot
4.
5. Profit!
Approximate computer (Score:1)
Nothing New (Score:1)
Soft heaps (Score:2)
Basically it creates a heap, but some of the elements get corrupted and are not in the proper place they would occupy in a proper heap. Oddly enough, this can be used to write deterministic algorithms, and it provides the best known complexity for finding minimum spanning trees.
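For reference, the guarantee as I recall it from Chazelle's soft heap paper (treat the exact constants as approximate):

```latex
\text{With error rate } \varepsilon \in (0, \tfrac{1}{2}]:\quad
\text{insert runs in amortized } O(\log(1/\varepsilon)) \text{ time,}
\text{ all other operations in amortized } O(1) \text{, and at most }
\varepsilon n \text{ of the } n \text{ inserted keys are ever corrupted (increased).}
```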
Not how our brains work (Score:2)
It's not correct to say that because approximate (serial, digital) computations don't use accurate (serial, digital) computation, and our brains don't use accurate (serial, digital) computation either, then our brains use approximate (serial, digital) computation.
This is just as logical as saying that because green is not blue, and red is not blue, therefore green is red.
Re: (Score:2)
Re: (Score:2)
Congratulations, you've just described fixed-point arithmetic [wikipedia.org].
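For anyone who hasn't met the term, a minimal Python sketch of the idea (the scale factor and names are mine):

```python
# Toy fixed-point: store dollar amounts as integer ten-thousandths.
SCALE = 10_000

def to_fixed(x):
    return round(x * SCALE)

def fixed_mul(a, b):
    # Truncating multiply: rescale the double-width product back down.
    return (a * b) // SCALE

price = to_fixed(19.99)
with_tax = fixed_mul(price, to_fixed(1.08))
print(with_tax / SCALE)  # 21.5892
```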
Re:It's a nice thought (Score:4, Funny)
Re: (Score:2)
First the precision used for computations is limited by both RAM and CPU power.
And so what? That's actually a part of what I had in mind. "Arbitrary precision" means exactly what it says: for any given finite precision, there exists an amount of space and time in which the computation of a (computable) number can be successfully completed.
And the second: for a lot of computations, infinite precision is possible, feasible, and used. For example, computing 2+2 or 17/2.
These are not general cases, and I don't think that "computing with infinite precision" and "computing a subset of desirable results with infinite precision" are synonyms. There's a bigger snag in the general case. Not every real number can be comput
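That "any given finite precision" reading is easy to demonstrate with Python's standard library (the digit count is arbitrary):

```python
from decimal import Decimal, getcontext

def sqrt2(digits):
    # Compute sqrt(2) to any requested number of significant digits.
    getcontext().prec = digits + 2  # a couple of guard digits
    return Decimal(2).sqrt()

print(sqrt2(50))  # 50-digit approximation; ask for more and you get more
```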
Re: (Score:2)
Except that's what these researchers are doing. They're building new instructions that perform faster but produce lower precision results.
Re: (Score:1)
Pshaw, I had one in the 4th grade. It was called a "slide rule" and I used it because I suck at memorization. Who needs multiplication tables when you have a handy tool the teacher doesn't know how to use or what it actually does?
Re: (Score:3)
Re: (Score:2)
But it's ultimately impossible to build a computer that calculates with arbitrary precision.
Excuse me, but not quite. Assuming you don't mean absolute precision, we already use multiple-precision calculations based on need, speed, or memory footprint. We have multiple sizes of floating-point representations, as well as integers of varying sizes. Plus, there is nothing that prevents you from doing X-bit floating-point calculations if you wanted.
Re: (Score:1)
You can already do this using pseudo-random number generators. While pseudo-random numbers may not be random enough for certain scientific computation purposes, they are more than adequate for gaming. There seems to be a common misconception that computers are incapable of producing randomness. Pseudo-random number-generating algorithms, seeded with simple things like the system time and keyboard events, are good enough for 99% of common everyday computing tasks.
The advantage of this 'approximate computing'
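A minimal illustration of the time-seeded generators described above (standard library; the dice example is mine):

```python
import random
import time

# Seed a pseudo-random generator with the current time. Keeping the seed
# makes the stream reproducible -- plenty random for games, per the parent.
seed = int(time.time())
rng = random.Random(seed)
print(seed, [rng.randint(1, 6) for _ in range(5)])  # five dice rolls
```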
Re:Finally some better 'Ai' (Score:4, Funny)
It's just another example of the 'Approximate Spelling' technique. The parent poster is illustrating significant savings in mental energy.
Re: (Score:2)
Physics doesn't care about complete precision
But if I don't know how precisely I know a particle's momentum, how can I tell how vague I have to be about its position?