Unpredictability in Future Microprocessors

prostoalex writes "A Business Week article says that the increase in chip speeds and in the number of transistors on a single microprocessor leads to varying degrees of unpredictability, which used to be a no-no word in the microprocessor world. However, according to scientists from Georgia Tech's Center for Research in Embedded Systems & Technology, unpredictability can become a great asset, leading to energy conservation and increased computation speeds."
  • by Electroly ( 708000 ) on Saturday February 12, 2005 @09:28PM (#11656133)
    Three cheers for entropy!
  • Well... (Score:4, Funny)

    by Realistic_Dragon ( 655151 ) on Saturday February 12, 2005 @09:29PM (#11656140) Homepage
    I'd be a lot more trusting of their results if they had worked it out on a processor with 100% certainty.
    • Re:Well... (Score:2, Funny)

      by thpr ( 786837 )
      I'd be a lot more trusting of their results if they had worked it out on a processor with 100% certainty.

      Think of the potential heartburn for the CEOs and CFOs who might have to sign off on the financial statements (à la Sarbanes-Oxley) after the calculations were done using one of these processors... :*)

  • Soo (Score:5, Funny)

    by NIK282000 ( 737852 ) on Saturday February 12, 2005 @09:30PM (#11656153) Homepage Journal
    Will the number of Windows errors increase, or will they just occur at even more improbable times?
  • Another use (Score:3, Insightful)

    by physicsphairy ( 720718 ) on Saturday February 12, 2005 @09:35PM (#11656184)
    unpredictability becomes a great asset leading to energy conservation and increased computation speeds

    Probably an even bigger boon for encryption and key-generation.

    • Re:Another use (Score:4, Insightful)

      by thpr ( 786837 ) on Saturday February 12, 2005 @10:04PM (#11656361)
      Probably an even bigger boon for encryption and key-generation.

      I vote key-generation and not encryption. Otherwise, how would you decrypt it? (given that the encryption and decryption would be non-deterministic with one of these...)

      • Re:Another use (Score:3, Informative)

        by PingPongBoy ( 303994 )
        Probably an even bigger boon for encryption and key-generation.
        I vote key-generation and not encryption. Otherwise, how would you decrypt it?


        Unpredictability really is useful for encryption, because good random numbers are essential to strong encryption.

        The second application that comes to mind is the one-time pad. Of course, you have to save the random pad data somewhere, but you always had to do that. The unpredictability just makes the one-time pad that much better.

        Random numbers may be used to genera
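
        For concreteness, here is a minimal one-time-pad sketch (my own example, not from the article): the pad comes from os.urandom as a stand-in for whatever hardware entropy source such a chip might expose, and decryption is the same XOR, which is why the pad has to be stored.

        import os

        def otp_xor(data: bytes, pad: bytes) -> bytes:
            # XOR each byte with the corresponding pad byte. The pad must be
            # truly random, at least as long as the message, and never reused.
            assert len(pad) >= len(data)
            return bytes(d ^ k for d, k in zip(data, pad))

        message = b"attack at dawn"
        pad = os.urandom(len(message))   # stand-in for a hardware entropy source
        ciphertext = otp_xor(message, pad)
        assert otp_xor(ciphertext, pad) == message   # the same XOR decrypts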
    • Is it just an accident (pun intended) that Slashdot ran this story [slashdot.org] at the same time as the current one on including random processes on-die?

      Kinda spooky (or is it the global collective consciousness expressing its desire for us, the technical geeky guys, to do its will)...
  • by Anonymous Coward
    "unpredictability becomes a great asset leading to energy conservation and increased computation speeds."

    When robots have this "unpredictability", tell me not to worry!
  • by swg101 ( 571879 ) on Saturday February 12, 2005 @09:37PM (#11656198)
    Degrees of probability and uncertainty have been a given in the communications industry for quite some time. This just seems to be pointing out that the same ideas can be applied to the actual processing of the data.

    Now that I think about it, it does seem to make some sense. I am not sure that I would want to program on such a chip right now though (I imagine that debugging could become a nightmare really quickly!).
  • by rhaikh ( 856971 ) on Saturday February 12, 2005 @09:39PM (#11656212)
    Well, there's a 99.99% chance that the airbag shouldn't be deployed right now; I'm just gonna disregard that "1".
  • TFA (Score:5, Interesting)

    by shirai ( 42309 ) on Saturday February 12, 2005 @09:42PM (#11656222) Homepage
    It's an interesting idea, but I think a lot of research would have to go into this, and here's what I mean.

    The article is right that certain things don't need 100% accuracy and that small variations in the answers can still yield very good results. This could be important when time matters more than perfect accuracy.

    That said, how do we know if the variations are small? Only 1 bit can change a huge negative number into a huge positive number in a standard integer (Okay, I haven't looked at the bit layout of an integer lately but I think it's encoded like this. If not, you still get my point right?).

    So perhaps then this idea sort of works when we are aggregating lots of small calculated numbers but then switch to a traditional chip to add them together.

    You see what I'm getting at? Computers don't really know that the small variation at the most significant bit is actually a huge variation.

    I think there would also have to be a lot of analysis based on understanding how the variations add up and their cumulative effect. For example, a well written app under this scenario means that the errors basically average out over time as opposed to errors that blow out of proportion.

    Anyways, I can think of a few good uses for this. Probably the most notable is down the DSP path (which the article mentions). Our eyes probably wouldn't see small errors in an HD display during processing, nor would our ears hear small errors in audio processing.

    This is parallel to the fact that there is less error checking in audio CDs and video DVDs than in their computer counterparts, CD-ROM and DVD-ROM (or the R/RW/etc. variants).
    • I think the idea is that we have enough computing power to be able to throw a couple of checks at each operation...

      what a waste.
      • by shawb ( 16347 )
        Actually it looked like the article was saying that all of those checks are wasteful, and there are situations where we should be able to deal with a wrong answer.

        My question is, what happens when that error comes not in a number being worked with, but in an operation? Operations are just thrown at the CPU as a bunch of 1s and 0s, so they would be susceptible to the same flaws.
    • Re:TFA (Score:2, Interesting)

      by buraianto ( 841292 )
      Or, you have a random bit change in your opcode and suddenly you're doing a multiply instead of an add. Or your opcode is an invalid one and your processor halts. Yeah, I don't think this makes sense given our current way of doing microprocessors. We'd have to do it some other way.
      • We would simply need to gradually redesign our software so that a larger fraction of its processing can be done on parallel unreliable processors. You could have part of the processor working as a traditional CPU (with more error checking), while other areas would be designed to carry out simple parallel tasks. The traditional CPU part would control the execution of the main logic, but it would constantly outsource some jobs to those unreliable parts of the processor, where they do not speak good English but can
    • Re:TFA (Score:2, Insightful)

      by krunk4ever ( 856261 )
      Also, given these are microprocessors, when they have instruction jumps, wouldn't it be a concern if the address they're jumping to is slightly off?
    • Re:TFA (Score:3, Interesting)

      by fm6 ( 162816 )

      That said, how do we know if the variations are small? Only 1 bit can change a huge negative number into a huge positive number in a standard integer (Okay, I haven't looked at the bit layout of an integer lately but I think it's encoded like this. If not, you still get my point right?).

      Sure, if you continue to use an encoding that doesn't tolerate errors. The math is beyond me, but I know there are ways to encode numbers so that a single-bit error nudges a value slightly, instead of changing it in wildly unpredictable ways.

      • I know -- let's start encoding numbers in unary! Unary is highly tolerant to bit errors! Hurray!
      • You guys are talking about binary like it's some sort of encoded format. Binary is just base-2 notation for numbers. An estimate can be off by one bit, but you know it's minor because it's the bottom bit or the bottom 4 bits (the bottom 4 bits of a number can only change it by 15). You can't write an estimate that's off by one arbitrary bit at a time; that's Hollywood bullshit (where the guy slaps the codebreaker-majig on the vault and the spinning numbers stop one random digit at a time). If you
      • Sure, if you continue to use an encoding that doesn't tolerate errors. The math is beyond me, but I know there are ways to encode numbers so that a single-bit error nudges a value slightly, instead of changing it in wildly unpredictable ways.

        Yeah, it's called base-1.
      • You need to use an error-correcting code that has a Hamming distance [nist.gov] between code words that is large enough to allow the correction of N bit errors. You need a Hamming distance of 2N+1 for N bits of error correction.
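
        As a concrete illustration (my own sketch, not part of the parent comment): the 3-bit repetition code has Hamming distance 3 = 2N+1 with N = 1 between its two code words, so any single-bit error is corrected by majority vote.

        def hamming_distance(a: int, b: int) -> int:
            # Number of bit positions in which a and b differ.
            return bin(a ^ b).count("1")

        def repetition_decode(word: int) -> int:
            # Decode a 3-bit repetition code word to whichever code word
            # (0b000 or 0b111) is closer; one flipped bit is still corrected.
            return 0 if hamming_distance(word, 0b000) <= 1 else 1

        assert hamming_distance(0b000, 0b111) == 3   # 2N+1 with N = 1
        assert all(repetition_decode(0b000 ^ (1 << i)) == 0 for i in range(3))
        assert all(repetition_decode(0b111 ^ (1 << i)) == 1 for i in range(3))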
    • This is not a troll, but an objective observation. Take any given rented, badly-scratched DVD, pop it in an off-the-shelf DVD player, get tons of artifacts, stuttering, etc. Pop it in a PS2 (which is DVD-ROM in theory), and watch the video almost stop playing. Pop it in a Lite-On DVD-ROM drive, watch it play all the way through, at most stalling for a second or two at the bad parts.

      It's true that, scaling up, it may be better to have a six billion by four billion pixel display that has 90% accuracy than
    • Re:TFA (Score:4, Interesting)

      by Epistax ( 544591 ) <epistax @ g m a il.com> on Sunday February 13, 2005 @05:01AM (#11658162) Journal
      I think you're thinking of things too specifically. Think about the human mind. It's utterly insane in speed, yet completely analog. Digital systems cannot yet approach the computing power of the human brain, yet they can beat the brain in any single (non-fuzzy) pursuit (such as chess). The human brain excels at fuzzy (yes, fuzzy) calculations such as identifying faces. It's quite possible we'll never be able to make a computer faster at identifying faces than the human brain without directly stealing our algorithm, because it's the single thing we're best at.

      So, we can identify faces really, really well. So well that :) looks like a face despite the fact that it seriously shouldn't. Well, guess what? What if we change identifying faces into identifying something else? Perhaps a computer can identify things in data using fuzzy logic very easily--- what could we do with that? Now we'll always need the truly quasi-100% accurate side unless we're relying on a self-evaluating true AI, so I'm not arguing with that.

      Looking back, this isn't really a response to what you wrote; it's more a thing that I typed after I drank alcohol, but I think it stands on its own merit. Anyway, a lot of research is needed, and I'm sure we can agree on that. It's certainly interesting.
  • by craXORjack ( 726120 ) on Saturday February 12, 2005 @09:47PM (#11656255)
    Whether any Wall Street firms are getting regular briefs on Palem's research, as Intel and IBM (IBM ) are, he won't say. Wall Street doesn't like people blabbering about technology that promises a competitive advantage.

    Actually this sounds more useful to Diebold and the Republican National Committee.

  • by Anonymous Coward on Saturday February 12, 2005 @09:50PM (#11656270)
    Signed,

    FDIV
  • by mOoZik ( 698544 ) on Saturday February 12, 2005 @09:51PM (#11656274) Homepage
    You're sitting at your desk and out of nowhere, bam! You are transported to the edge of the galaxy. Weird.

    • Um, you already ARE at the edge of the galaxy. Perhaps you meant the OTHER edge of the galaxy?
        Well, no, we're not at the edge of the galaxy; we're about two-thirds of the way out, if I remember correctly. The edge of the galaxy is a long, long way away.
    • However, if a chip can get by without all the double checks to assure absolute certainty, then energy consumption could be slashed -- and speed would get a simultaneous boost. That's the notion behind Palem's concept of probabilistic bits, or Pbits. As he puts it: "Uncertainty, contrary to being an impediment, becomes a resource."

      Dude... this is only a FINITE improbability generator. I'll leave it to you to figure out the exact finite improbability of using a few of these to generate an *infinite* improba

  • We have this now (Score:4, Insightful)

    by drsmack1 ( 698392 ) * on Saturday February 12, 2005 @10:00PM (#11656334)
    Before I found memtest [memtest86.com] my computers were VERY unpredictable.
  • by layingMantis ( 411804 ) on Saturday February 12, 2005 @10:00PM (#11656338) Homepage
    so a "random" number could be ...actually random right, as opposed to the now deterministically computed pseudo random numbers....how could this NOT be useful!? The AI ramifications alone are fascinating to imagine...
    • I thought there was already hardware to give truly random numbers; it just reads the noise in the silicon.

      A random error in a digital number doesn't seem to bode well. Might as well stick to analog for those needs, because one of the benefits of digital processing is that transmission and storage errors can be corrected, provided a proper correction algorithm is used, and computations can be re-run.
    • What AI ramifications?

      We've been able to obtain truly random numbers for a long, long time. All you need to do is get information from some physical device - the Linux kernel has a random function that gets some random information from the keyboard. Sound cards work too.

      But in most applications, a simple pseudo-random number generator is going to be indistinguishable from truly random numbers.
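
      To make the distinction concrete, a small sketch (my own, not from the parent comment): os.urandom draws from the OS entropy pool (fed by hardware event timing on Linux), while the random module is a deterministic PRNG that replays exactly from a seed.

      import os
      import random

      true_random = os.urandom(16)          # not reproducible: OS entropy pool

      rng = random.Random(42)               # deterministic pseudo-random stream
      first = [rng.randrange(256) for _ in range(16)]
      rng.seed(42)
      second = [rng.randrange(256) for _ in range(16)]
      assert first == second                # same seed, same "random" numbers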
  • by Tibe ( 444675 )
    "... three to one... two... one... probability factor of one to one... we have normality, Anything you still can't cope with is therefore your own problem. Please relax."
  • Little problem (Score:3, Insightful)

    by Anonymous Coward on Saturday February 12, 2005 @10:08PM (#11656379)
    While certainly many problems can be solved using less than perfect measures, building an entire chip based on this would not work out so well. For example, while a DSP app might deal fine with small variations in results, a device driver or chunk of crypto code is probably not going to be very happy with close-but-not-quite-right results.

    Why do I have a feeling these guys have done simulations with single applications, ignoring the surrounding OS environment?
  • Intel (Score:3, Funny)

    by TWX ( 665546 ) on Saturday February 12, 2005 @10:10PM (#11656391)
    I'd say that with Intel's various errors over the last fifteen years, like the fourth- and ninth-digit floating-point division errors in the Pentium 60, and the heat throttling under normal operating conditions on their newer processors, Intel has done a wonderful job of embracing this new unpredictability technology.
  • Analog Processor (Score:5, Interesting)

    by 10101001 10101001 ( 732688 ) on Saturday February 12, 2005 @10:15PM (#11656416) Journal
    It sounds like this is just another implementation of an analog processor, which is far from a new idea. Really simple analog processors are just a bit of plastic foam used as a manifold. There's even the idea of having 0, 1, and 1/2 (where 1/2 is seen as uncertain) in something called a Lukasiewicz Logic Array. Anyways, I wish the guy good luck with it, though it might be a good idea if he did some more reading on ideas already presented on the subject.

    Obvious google search link:
    Google Search for "lukasiewicz analog" [google.com]
  • There are already acceptable levels of randomness in the form of soft errors. Designs already take into account the fact that you just have to live with a certain rate of error because of cosmic rays or alpha particles. It looks like they are just extending such techniques to transistor tolerances.
  • by Inmatarian ( 814090 ) on Saturday February 12, 2005 @10:27PM (#11656482)
    Intel has hit a brick wall in terms of their processors. They invested heavily in their processor fabrication centers and are now coming to terms with the fact that they won't be able to produce reliably anymore. That said, let's discuss the nature of 1s and 0s. Typically, a 0 is broadcast across a chip as a lack of voltage, and a 1 is broadcast as +5 volts. Each transistor has to sit just right of a resistor so as not to degrade the +5 volts. Here's where "unpredictability" comes into play: you have a handful of volts to play with. The article's talk of unpredictable algorithms is the press agent not knowing what he's talking about, but certainly allowing a voltage threshold within the confines of the transistors is an okay thing. The only problem is when it's across a lot of serial lines, because that compounds into significant loss. This is just my opinion, but I think this guy is talking about chip designs where the data isn't broadcast in 1s and 0s anymore, but in whatever multiples of electron volts would correspond to a number. I'm not comfortable with this, and I would like someone to tell me I'm just paranoid.
    • Okay. You're paranoid. But we are still out to get you.
    • "informative" ?!!

      Do you have even the faintest idea how logic actually works? Or did I just mis-read what you wrote?

      None of the gates have to reliably reproduce the actual voltage (+5, +3.3, +2.8, +/-12 V, or whatever) that represents a "1" or "0"; they just have to reliably recognise that it's "smallish" (less than halfway, logic 0) or "biggish" (more than halfway, logic 1), and in turn produce a voltage themselves that's reasonably close to whatever represents a "0" or "1". Binary is used for exactly th
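
      A toy model of that restoring behaviour (my own sketch, assuming a 5 V supply and a midpoint threshold): each gate only classifies its input as low or high and then outputs a freshly cleaned-up level, so noise within the margin never accumulates.

      import random

      VDD = 5.0                      # assumed supply voltage
      THRESHOLD = VDD / 2            # "smallish" vs "biggish"

      def to_bit(voltage: float) -> int:
          return 1 if voltage > THRESHOLD else 0

      def noisy_inverter(v_in: float, noise: float = 0.8) -> float:
          # Output is a restored logic level plus bounded noise.
          ideal = VDD if to_bit(v_in) == 0 else 0.0
          return ideal + random.uniform(-noise, noise)

      v = 0.3                        # a somewhat noisy logic 0
      for _ in range(1000):          # an even-length chain of inverters
          v = noisy_inverter(v)
      assert to_bit(v) == 0          # noise inside the margin never flips the data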
    • Typically, a 0 is broadcast across a chip as a lack of voltage, and a 1 brodcast as a +5 volts.

      1974 called. It wants its CMOS logic signal voltages back.

    • I'm not comfortable with this, and I would like someone to tell me I'm just paranoid.

      You are not paranoid and your comfort level is well tuned. Unreliable behavior cannot be tolerated unless it is entropy.

      I figure microprocessor development as we have seen it is nearing its end without new ideas. For example, gigahertz isn't by itself an indication of computational capability. It takes hundreds of CPU cycles on a P4 to do common operations that on an 8080 would take one CPU cycle. If you can redu

  • by timeOday ( 582209 ) on Saturday February 12, 2005 @10:36PM (#11656548)
    In college, my professor challenged the entire class to find an algorithm that takes an array, and returns a single value larger than the median of values in the array, in sub-linear time.

    Naturally, he had us stumped, because the task is impossible. Without checking at least half the numbers, you can't be sure of the answer.

    But, he pointed out, here's what you can do: pick 1000 numbers from the array at random and return the largest - a constant-time operation! This "algorithm" just might return a wrong answer. But the chances of that happening are far less than the odds that you're in a nuthouse hallucinating this message right now. The odds are far less than the likelihood that a computer would botch a deterministic algorithm during execution anyway. The odds of making a mistake with the algorithm are 0, for all intents and purposes. So is that OK?
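
    A sketch of that sampling trick (my own code, not the professor's; see the replies below about degenerate inputs with many values tied at the median):

    import random

    def probably_above_median(values, samples=1000):
        # Constant work regardless of len(values): inspect 1000 random entries
        # and return the largest. It fails to exceed the true median only if
        # every sample landed in the lower half, i.e. with probability at most
        # 0.5**1000 (assuming no ties at the median).
        return max(random.choice(values) for _ in range(samples))

    data = [random.gauss(0.0, 1.0) for _ in range(1_000_000)]
    print(probably_above_median(data))   # almost certainly above the median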

    • You have to be careful. Given an array of 999,999 copies of the number 63, and a single 64, that probabilistic method has a 99.9% chance of failing.

      Depending on the scenario, that degenerate case can be quite common.
        • Heh, 63 is the median in your given array. Since the returned value does not need to be in the array, just add 1 and return that. Or multiply by some large constant. Or just return the sum of all the numbers you encountered. There's lots of options here.
        • Then take this array:

          999,999 copies of -1
          1 copy of 1,000,000,000

          Method 1 (add 1) return 0.
          Method 2 (multiply by -1,000) return -1,000.
          Method 3 (sum all numbers encountered) return -1,000 in most cases.

          Since the answer doesn't have to be in the array, you can get a valid answer (if there exists one) by returning 0x7FFF....FFFFF.
    • Assuming e.g. an array of ints the answer to this problem is:

      return INT_MAX;
      • To do any theory you have to specify your computational model, and I seem to have fallen short there. I was assuming unlimited memory and range, as is usual.
        • Dangit, you dodged the point.


          return infinity;


          Any machine capable of handling unlimited range can handle this return value. By definition. It may be algorithmically impossible to determine the median of an array without examining each element, but finding a number larger than the median is dirt simple. The only way this simple algorithm could fail is if half or more of the values are infinity. (Oh, and then any algorithm would fail, 'cause there isn't a larger number!)

    • by slashnot007 ( 576103 ) on Saturday February 12, 2005 @11:53PM (#11656911)
      Problem: find a number larger than the median

      Proposed solution: pick 1000 entries at random and retain the highest.

      Analysis: at first glance the problem might seem ill-formed, since the size of the array is not specified. But note that this is not a parametric problem. You are asked about the median, so the actual numerical values in the array are irrelevant; only the rank order matters. Some wiseguys here have suggested returning the largest double-precision number as a guaranteed bound. While a wise-ass answer, it does raise a second interesting false lead: even if the numbers were represented with infinite precision and could be arbitrarily large or small, the proposed solution would not care. Again, this is because all that matters is the ranking of the numbers, not their values.

      Consider the proposed solution: pick any cell at random and examine the number. If this number is returned, there is a 50% chance it is equal to or greater than the median of the set (if this is not obvious, dwell on the meaning of the word median: half the numbers are above that number and half are below it). So the chance it is below the median is 0.5. If you choose 1000 numbers, the chance that all of them are below the median is 0.5^1000, which is roughly one part in 10^301 - far smaller than one part in a googol.

      So the author is right: this algorithm fails less often than the probability that a cosmic ray corrupts the calculation, or that there is a power blackout in the middle of it, or that you have a heart attack.
      • ...if the values in the array are distributed randomly (or a reasonable approximation thereof). If they're not, you might just be screwed.
        • if the values in the array are distributed randomly

          That doesn't matter. The analysis never said anything about the distribution of the numbers in the array. You may assume a worst-case distribution of the numbers in the array, and the algorithm still works. Probability is measured over the random choices made by the algorithm, not over the inputs. So given an input, you can compute the probability that the algorithm fails, which is always going to be a small number. Then you take the input giving the highest er
    • Why not just take an unsigned 0, subtract one, and return that?

  • more info (Score:5, Informative)

    by mako1138 ( 837520 ) on Saturday February 12, 2005 @10:42PM (#11656569)
    This article left me rather unsatisfied, so I looked for a better one. I found one here [gatech.edu]: a collection of papers on the subject, with real-world results, it seems. The first article is a nice overview, and there are some pics of odd-looking silicon. They have funding from DARPA, interestingly enough.
  • I, for one... (Score:2, Interesting)

    by melikamp ( 631205 )

    That is one step closer to a human-like AI -- it reminds me of a neural net. The technology from TFA may be just what they (computers) need to become like us: i.e., the ability to make quick decisions about complex problems and to succeed more often than fail.

    I, for one, welcome our unpredictable silicon overlords.

  • bad story (Score:3, Insightful)

    by msblack ( 191749 ) on Saturday February 12, 2005 @11:14PM (#11656722)
    It's rather a poorly written article with a lot of 1950s science-fiction predictions about the future. The field of fuzzy algorithms has existed for ages. Fuzzy algorithms don't rely on random results. Rather, they use "p-bits" to perform their calculations. P-bits are not the same as random bits. On the contrary, p-bits are "don't care" or "flexible" values that take into account multiple possibilities at the same time.

    Random results are terrible because they are random. The scientific method [rochester.edu] depends upon experiments that can be repeated by other researchers. You can't base a theory on results that don't correlate with the inputs. You can repeat the experiment to obtain a probabilistic model, but not certainty.

    A computer chip that yields unpredictable results is not going to magically recognize the image of a chair, much less a face; a chip that can't reliably execute a program is more akin to the movie Short Circuit, where the appliances go wacky. To me, the author confuses the concept of fuzzy algorithms with random trials.

    • Johnny 5 could recognize a chair or a face... he could even see shapes in the clouds! Your whole argument fell apart at the end, hah!
    • Random results are terrible because they are random. The scientific method [rochester.edu] depends upon experiments that can be repeated by other researchers. You can't base a theory on results that don't correlate with the inputs. You can repeat the experiment to obtain a probabilistic model, but not certainty.

      Most of the posts are from the traditional algorithmic view of the world. e.g., How well can we survive if our multiply instruction gives us back the wrong answer?

      Where this stuff is really useful

  • Famous ex-wrestler turned programmer Zangief claims that the unpredictability in his programs is not caused by bugs, but by an "energy conservation" feature!

  • if he was flying on an aircraft controlled by systems that gave somewhat unpredictable results.

  • Something that was not raised was what the randomness applies to. If you're dealing with an error factor in the fabrication of the processor, you are more than likely going to alter the program running on the machine. Even a single instruction being altered generally leads to catastrophic failure of most programs. Now, on the other hand, if the data was being handled with 30% inaccuracy, I believe that (assuming the program didn't crash as a result of absurd data) what we got out would be w
    • There are some mathematical algorithms that could still operate on a slightly non-deterministic machine. The Miller-Rabin test is an algorithm to determine whether or not a given number is prime, by trying to find "witness" numbers to its compositeness (http://en.wikipedia.org/wiki/Miller-Rabin_test). The more numbers you search without finding a witness, the more likely you are to have a prime number. By choosing a high enough number of iterations, you can prove that the probability that the number is prime is h
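
      A compact Miller-Rabin sketch (my own, following the Wikipedia description linked above): each random base that fails to witness compositeness cuts the chance of a wrong "prime" verdict by at least a factor of four.

      import random

      def is_probably_prime(n: int, rounds: int = 40) -> bool:
          # A composite n survives a single random round with probability
          # at most 1/4, so 40 rounds leave an error chance of at most 4**-40.
          if n < 2:
              return False
          for p in (2, 3, 5, 7, 11, 13):
              if n % p == 0:
                  return n == p
          d, r = n - 1, 0
          while d % 2 == 0:
              d //= 2
              r += 1
          for _ in range(rounds):
              a = random.randrange(2, n - 1)
              x = pow(a, d, n)
              if x in (1, n - 1):
                  continue
              for _ in range(r - 1):
                  x = pow(x, 2, n)
                  if x == n - 1:
                      break
              else:
                  return False      # a witnesses that n is composite
          return True               # no witness found: almost certainly prime

      print(is_probably_prime(2**61 - 1))   # True: a known Mersenne prime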
  • Hybrid machines. (Score:3, Interesting)

    by headkase ( 533448 ) on Sunday February 13, 2005 @12:14AM (#11657008)
    It seems to me that these chips would not replace the standard digital CPUs we have today; instead, they would complement their abilities. Adding a stochastic simulator chip would create a hybrid digital/probabilistic computer. Depending on the type of information being processed, different chips would be employed. Your Intel/AMD chip would still do the digital/lossless work, while the stochastic chip would process data that is more tolerant of information loss, i.e., lossy.
  • by buckhead_buddy ( 186384 ) on Sunday February 13, 2005 @12:16AM (#11657020)
    While FPU calculations still require the precision, reproducibility, and speed limitations currently applied to all CPU work, this article seems to be aiming at loosening up the precision of calculations where we don't really need it right now:
    • the initial JPEG lossy compression
    • the texture mapping of blood splatters in first person shooters
    • mp3 playback for people who don't care about things like fidelity and consistency
    • voice compression in internet telephony
    • barcode analysis when using a symbology developed in the days of 72 dpi printers but where everyone today uses 1200dpi readers and printers.
    • people deliberately trying to add entropy for security or artistic reasons.

    I seriously doubt any accountant, music snob, or cs major would allow the main cpu to become inconsistent, but if Apple or some other trendsetting company offered a new computer with a "Right Brain" chip just for these entropic applications I'd expect it to start a whole new fashion in desktop computers.
    • What a fascinating idea! Since there are so many processes nowadays that are not precision-critical, a co-processor that could take advantage of a bit of indeterminism for a significant speedup would be more than worthwhile. This would even fit in nicely with the Cell processor... units would be classified by precision. I think some work would have to be done on error-correcting coding, though. Anyway, I'd mod you up, but I already posted :)
  • ... but it's close enough. Welcome to your computer.
  • There's a well-developed theory of how to maximize data transmission through a noisy channel. You can use more power or more redundancy. There's a tradeoff and an optimum, and most modern transmission systems work near it. Modems (including DSL), digital radio systems (including cell phones), and recording media (including DVD and CD) all operate below the threshold where all the bits get through correctly. Redundancy and error correction are used to compensate.

    The optimal error rate before correction i

  • Error correction (Score:4, Insightful)

    by jmv ( 93421 ) on Sunday February 13, 2005 @04:00AM (#11657974) Homepage
    I'm surprised nobody has really mentioned error correction. In the same way that correction codes can work around RAM unreliability, you could have checksums built into each instruction to detect and correct errors. You would basically trade speed for reliability, something that has existed in communications for decades (referring to Shannon's work). I don't see why it wouldn't be the same for CPUs. I also clearly remember Richard Feynman proposing the idea (sorry, don't remember which book), so the idea isn't exactly new.
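
    One crude way to make that trade (my own sketch, not Feynman's scheme or anything from the article): run the same operation several times on the unreliable unit and take a majority vote, in the spirit of triple modular redundancy.

    from collections import Counter
    import random

    def unreliable_add(a: int, b: int, error_rate: float = 0.01) -> int:
        # Stand-in for an imprecise functional unit that sometimes flips a low bit.
        result = a + b
        if random.random() < error_rate:
            result ^= 1 << random.randrange(4)
        return result

    def voted_add(a: int, b: int, votes: int = 3) -> int:
        # N-fold redundancy: repeat the work and return the majority answer.
        tally = Counter(unreliable_add(a, b) for _ in range(votes))
        return tally.most_common(1)[0][0]

    errors = sum(voted_add(40, 2) != 42 for _ in range(10_000))
    # With a 1% raw error rate, the voted version should fail only a handful
    # of times in 10,000 trials, at the cost of doing the work three times.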
    • In the same way that correction codes can work around RAM unreliability, you could have checksums built into each instruction to detect and correct errors.

      Question is, how do you intend to do that?

      With RAM, it's trivial... The data you put in SHOULD be exactly the same as the data you get out. However, with a CPU, you put in a few numbers and you get entirely different numbers back. How could you possibly checksum that?

      I do have a workable alternative in mind... You just need multiple chips, perform

  • Is nothing more than statistical likelihood... the same things were done in various fuzzy algorithms in the past. So what you do is feed input in and get a likelihood of something. The problem is that likelihood only works in an endless domain: you feed something in and you get a likelihood which is true to 100% over endless seeds during an endless period of time. A snapshot back in time will always give a significant number of false positives or negatives. The main problem is that if you apply those methods to
  • ... thinking about the headline, it mustn't be bad at all. He's not advocating flipping opcodes in a CPU and going along with the joyride guys. Currently chips work on full-swing voltages, beefy noise margins, and quite conservative settling timings. In signaling theory there's a lot more than that, and the telecom people have developed elaborate modulation schemes that permit dense data encoding and very good error rejection. I'm talking out of my ass, but there are codes that reliably pack a lot of data; why shouldn't computing
  • Chips will get faster and faster, but with more and more uncertainty until we have computers that can understand everything in a split second, but with a 50/50 chance of being wrong.

    We're inventing Dad.

  • My father used to say sometimes:

    "We have an agreement with the bank, they don't make pizzas and we don't cash checks"

    Going to Business Week for accurate technical articles is like going to Phrack to get the latest prime rate prediction. Like having Paris Hilton teach string theory (not the bikini kind). Like asking Janet Jackson to teach classes in modesty. Like having Dick Cheney lead the Andes 10-mile run.
