Hardware

Chips That Flow With Probabilities, Not Bits

holy_calamity writes "Boston company Lyric Semiconductor has taken the wraps off a microchip designed for statistical calculations that eschews digital logic. It's still made from silicon transistors. But they are arranged into gates that compute with analogue signals representing probabilities, not binary bits. That makes it easier to implement probability calculations, says the company, which has a chip for correcting errors in flash memory that it claims is 30 times smaller than a digital-logic equivalent."
  • Analog Computers (Score:4, Insightful)

    by timgoh0 ( 781057 ) on Wednesday August 18, 2010 @07:04AM (#33286370)

    It would seem that they have reinvented the analog computer, but this time entirely on a chip. And probably (hopefully) with some logic that prevents errors due to natural processes like capacitive coupling.

    • Re: (Score:3, Insightful)

      by Sockatume ( 732728 )

      Being able to do it on silicon should mean they can make them cheaply and quickly with existing fab gear. I could see these being a lot of fun for tinkerers.

      • Re: (Score:2, Informative)

        by Anonymous Coward

        This has nothing to do with analog computers. It has to do with probability of error:
        ref1: http://www.hindawi.com/journals/vlsi/2010/460312.html
        ref2: http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5118445

        • Re:Analog Computers (Score:5, Informative)

          by Anonymous Coward on Wednesday August 18, 2010 @07:29AM (#33286528)

          No, it does. We aren't trying to reduce error in logic operations. We're passing analog values between one and zero into logic circuits. Literally, at the lowest level, the "bits" pumping through the chip are probabilities. It's not analog in the sense that we use op amps, we still use gates, but the inputs and outputs of the gates are probabilities, not hard bits.

          • Re: (Score:3, Informative)

            by crgrace ( 220738 )

            It's not analog in the sense that we use op amps, we still use gates

            What's the difference? A gate is just a high speed high gain ultra high distortion opamp.

          Op amps have differential inputs, for one thing. They also generally have much, much higher gain than a gate. Do a voltage transfer characteristic of an inverter in the process of your choice and look at the slope when it is in its linear region. The gain won't be any larger than 10-15 Volts/Volt. You can hardly use *that* as an op amp.

      • Re: (Score:3, Interesting)

        by tenco ( 773732 )

        Being able to do it on silicon should mean they can make them cheaply and quickly with existing fab gear. I could see these being a lot of fun for tinkerers.

        Sure, you can make them cheap. But QA could be a bitch, I imagine. Simply ensuring that all the gates used operate linearly within a small error margin should be hard. And how are you going to give error margins for each output it calculates? After all, it's analog, not digital.

      • by tlhIngan ( 30335 )

        Being able to do it on silicon should mean they can make them cheaply and quickly with existing fab gear. I could see these being a lot of fun for tinkerers.

        Analog ICs have been around since they put two transistors on a base. There's nothing new about an analog computer, other than maybe putting all the pieces together onto a single piece of silicon, but analog ICs are plentiful. The lowly op-amp is a very common one, and there are often transistorized equivalents for many passive components (because makin

        • Re: (Score:3, Informative)

          by crgrace ( 220738 )

          Of course, if they managed to do this using digital IC fab technology (analog ICs are very "big" when you compare to modern digital deep submicron technology), that'll be a huge breakthrough.

          That actually wouldn't be a breakthrough at all. I've been designing analog ICs in digital deep submicron technology for 8 years. Some really big companies (Broadcom, Marvell, etc.) have built their businesses on it. I'm currently working on a Pipelined ADC in 65nm CMOS. You may be thinking about Bipolar Analog ICs, which are still important in the marketplace. But, for communications or imaging systems work, the vast majority of analog circuits are on digital CMOS (with or without a special capacitor o

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      Probability computing is not analog computing. Nor is it digital. Nor is it limited to error correction and search engines. It's a new implementation of a mathematical concept that allows arbitrary logic to be implemented smaller and faster than traditional digital chips.

      Calling it analog is an insult.

      • I am assuming that, by your statement, you mean that probability computing can be analog or digital, and is not definitively one or the other? I was reading your post, and I first thought you were saying that it is a third category (which makes no sense).

        But, that being said, why is calling it analog an insult? If analog (continuous) logic/numbers are being used rather than digital (discrete) logic/numbers, then analog is not an insult, simply accurate - it is describing how it works, not what it is focused

        • Re: (Score:3, Informative)

          by denobug ( 753200 )
          Based on the reference given above, the idea is to use the possible error rate of a particular assembly of gates to generate a result that represents a probability. Say you intentionally lower the voltage level and run a particular piece of logic through: the probability that the result is wrong (because of the physical limitations of the device) becomes the desired output, rather than having to raise the voltage to ensure the logic is right all the time.

          The whole idea is to use less gates and
      • Ergo it's an analog system. What those signals represent is irrelevant.

    • Mod shit down (Score:2, Interesting)

      It's got absolutely nothing to do with analog computers. At all. The first application cited is even digital storage.

      • Re: (Score:3, Interesting)

        by plcurechax ( 247883 )

        It's got absolutely nothing to do with analog computers.

        Really? Because the Fine Article from the OP says:

        Internally, Lyric's probability gates are essentially analog devices typically working with analog values called pbits that have a digital resolution of approximately 8-bits although the approach is applicable for different resolutions as well.

        "[A]nalog devices working with analog values" does actually imply it is an analog computer, at least in part. Still, the overall usage sounds does novel, through the usage of Bayesian statistics "operations" logic as an alternative to the better known Boolean logic operations used in binary digital computers.

        While electronic analog computers [wikipedia.org] are primarily considered rare [science.uva.nl] artifacts [caltech.edu] these days, analog electronics still exist, and continue to be used in variou

        • under my desk. After all, quantum mechanics is used inside it. So it's a quantum computer, right?

          • Re: (Score:3, Interesting)

            No, because it doesn't directly use entanglement and superposition.

            Like your car uses electricity, but it's probably not an electric car.

        • Re: (Score:3, Insightful)

          by crgrace ( 220738 )

          "[A]nalog devices working with analog values" does actually imply it is an analog computer, at least in part. Still, the overall usage sounds does novel, through the usage of Bayesian statistics "operations" logic as an alternative to the better known Boolean logic operations used in binary digital computers.

          I have to disagree with you here. An analog computer is not the same thing as analog electronics in general. As an analogy, using a few digital gates to control an alarm doesn't mean you just built a digital computer.

          An analog computer is a special system that uses analog circuits to solve systems of differential equations. It is uniquely analog in the sense that it is continuous-time and has a continuously-variable output (not quantized). In the 40s and 50s, it was cheaper, more accurate (usually) and

          • Analog computers were cost-effective when a "floating-point option" came in a 5' cabinet and cost more than a luxury home.

            ICs and cheap memory fixed that problem and analog calculation went the way of the dodo.

          • by imgod2u ( 812837 )

            No but an analog mixer is very much a multiplication unit. Which is what these guys seem like they're trying to improve; a faster math unit to perform probability calculations on fixed-point (with the binary point always at the msb) numbers.

    • Re: (Score:3, Interesting)

      by plcurechax ( 247883 )

      It would seem that they have reinvented the analog computer, but this time entirely on a chip.

      Actually it would be sweet to have an equivalent to a CPLD or FPGA [wikipedia.org] for analog electronics, where an entire analog sub-system could be reduced to a single chip, reducing the cost and board real estate for use in low-cost electronics and reducing noise levels. Maybe it's my math background, or working in scientific computing, but being able to work natively with continuous numbers versus discrete representations that are often only approximate (a la floating-point numbers in a digital computer) would be nice.

      If suc

    • Re: (Score:3, Interesting)

      by crgrace ( 220738 )

      Actually, the article doesn't say that at all. In fact, it gives virtually no indication about how these new devices work. An analog computer uses op amps to solve differential equations. I highly, highly doubt that is what this new device is doing.

  • by Deus.1.01 ( 946808 ) on Wednesday August 18, 2010 @07:05AM (#33286374) Journal

    12.5% that understand binary, 87.5% that don't...

    • by jimicus ( 737525 ) on Wednesday August 18, 2010 @07:46AM (#33286652)

      Probably.

      • by Thanshin ( 1188877 ) on Wednesday August 18, 2010 @08:00AM (#33286790)

        Probably.

        User: Are we on the right road to the beach?
        Google maps: Probably.
        User: the fuck?... Is this the beach road or not.
        Google maps: I'd say yes...ish. Most likely. ...
        User: The road is cut! It ends like right here!
        Google maps: Let me change my first answer to "I wouldn't bet on it. Much. I wouldn't bet much on it. ... Ok no, it's not likely to be the road. I'm turning off now. Good luck!"

        • by WED Fan ( 911325 )
          Now, if you were a snowbound driver, in...let's say, Oregon. Your family is in the car, and you have to get back to your online magazine job in the Bay Area, and Google maps says to take that seasonal road through the woods...
      • by treeves ( 963993 )

        Magic 8-ball computing.
        As I see it, yes
        It is certain
        It is decidedly so
        Most likely
        Outlook good
        Signs point to yes
        Without a doubt
        Yes
        Yes - definitely
        You may rely on it
        Reply hazy, try again
        Ask again later
        Better not tell you now
        Cannot predict now
        Concentrate and ask again
        Don't count on it
        My reply is no
        My sources say no
        Outlook not so good
        Very doubtful

  • by Handbrewer ( 817519 ) on Wednesday August 18, 2010 @07:08AM (#33286392) Homepage
    So basically it's a computer that makes up statistical computations and corrects them to fit the models on the fly? Lazy scientists, rejoice!
  • Been there, done that. Analog computers existed 50 years ago because digital computers were too slow. Even then, they were a niche market. Calibration is a big issue, and even with a perfectly calibrated machine you don't have a lot of accuracy. With the speed of today's digital computers, this is a (poor) solution in search of a problem.
    • Re: (Score:1, Informative)

      by Anonymous Coward

      It's not the same kind of analog. It's analog in the sense that it operates on things between 1 and 0, but it still uses logic.

      • by imgod2u ( 812837 )

        Logic is when you're operating on 1's and 0's (hence the term: logic). When you're operating on a continuous scale of voltage values, it's analog.

    • Re: (Score:1, Interesting)

      by Anonymous Coward

      Just like nobody needs enough vector float computations and SIMD instructions at once to justify making a card unit that does a @$%#$ ton of them at once. This chip, in a PCIE card could make a lot of sense.

    • by Rich0 ( 548339 )

      I think the concept is a good one.

      An area this technology might be used in could be embedded controllers, which are not general-purpose devices.

      If you're building a thrust vectoring system for a plane, and the servos have an accuracy of 1%, then it is more important to deliver more frequent servo updates than to deliver those updates with 0.0001% accuracy. If your device is attached to a sensor that has 5% manufacturing tolerances then you may not need even 8-bit precision on the math.

      In the IS discrete-ma

      • by imgod2u ( 812837 )

        Control systems were analog long before they were digital. But transistors got smaller and smaller and faster and faster (and lower power to boot!) to the point where even today's sub-mW microcontrollers are fast enough to do the calculations in real time (while running an RTOS even!) such that it's not necessary to use an analog circuit. There are just so many advantages (easier maintenance, upgrading, troubleshooting, consistency, configurability) that even if the A/D -> microcontroller -> D/A chain

    • Been there, done that. Analog computers existed 50 years ago because digital computers were too slow. Even then, they were a niche market. Calibration is a big issue, and even with a perfectly calibrated machine you don't have a lot of accuracy. With the speed of today's digital computers, this is a (poor) solution in search of a problem.

      Unless, of course, they've improved the calibration suitably, and, by implementing them in silicon rather than with the techniques used 50 years ago, also kept the speed adv

      • by crgrace ( 220738 )

        It doesn't matter how good your calibration is, the accuracy is still very low for analog computers (compared to digital ICs, for example). Anything above maybe 4 to 6 bits of accuracy will start being slower than digital circuits in the same process (speed/accuracy tradeoff). There may be really, really specialized situations where this is useful, but not many. But that's a bit off-topic, since this company isn't trying to sell an analog computer IC.

  • by Z8 ( 1602647 ) on Wednesday August 18, 2010 @07:29AM (#33286526)

    The article mentions Bayesian calculations. Can these computers really speed up those calculations? Nowadays Bayesian calculations usually involve thousands of iterations of a technique called Markov Chain Monte Carlo [wikipedia.org] (MCMC) unless the distributions in question are conjugate priors [wikipedia.org]. The simulation then converges to the right answer.

    The issue I see is that all these techniques are just math. They are either analytic (conjugate priors) or require strict error bounds in order to get sensible answers (MCMC). There's no separate system of math that Bayesians use. Like many others, Bayesians just need quick, reliable floating-point mathematics. So anyway, I don't see how this can help Bayesian statisticians, unless it also revolutionizes engineering, physics, etc.
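
    As a worked illustration of the conjugate-prior case mentioned above, where the posterior comes out in closed form and no MCMC is needed, here is a minimal sketch (a textbook Beta-Binomial update in Python; the numbers are made up and this has nothing to do with Lyric's actual hardware):

    # Beta-Binomial conjugate update: with a Beta(a, b) prior on a coin's
    # bias and k heads observed in n flips, the posterior is Beta(a+k, b+n-k).
    a, b = 2.0, 2.0        # prior pseudo-counts
    k, n = 7, 10           # observed data: 7 heads in 10 flips

    post_a, post_b = a + k, b + (n - k)
    posterior_mean = post_a / (post_a + post_b)
    print(posterior_mean)  # 9/14, about 0.643, computed analytically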

    • by ModelX ( 182441 )

      I've been dealing with Bayesian methods for a few years, too. I understand the goal of the hardware is not to run everything that is being sold as Bayesian methods. Basically Bayesian calculations mean computing conditional probabilities, which usually gets down to a ton of multiplications and additions. If the analog hardware can produce results for a particular subproblem with sufficient accuracy, then you are saving a lot of power and time. If it can produce estimates that are not entirely accurate but w

    • by CptNerd ( 455084 )
      We were using Bayesian nets on a project back in '89, using them to estimate probabilities on the state of certain installations based on text reports. It was pretty hairy, we were implementing Judea Pearl's algorithm, which was a pain to implement (actually we were re-implementing it from Lisp to C) and not that fast on the old Mac IIfx. It was quite powerful, though, depending on the quality of the knowledge input. I never got into the math, but I understand that other algorithms have been developed th
    • by femto ( 459605 )
      This technology seems to be a marriage between analogue computing and forward error correction (FEC) algorithms. FEC algorithms are "nice" in that you can have minor errors in their implementation, and they still work, albeit with slightly lower coding gain. (This also makes them hard to debug, as they tend to correct their own errors!). Generally, errors accumulate in analogue computing, but in FEC algorithms they should get corrected. The savings come from replacing an array of logic gates (as required
    • by mea37 ( 1201159 )

      I suspect the variable you're leaving out is that, in a binary computer context, "floating point math" usually means IEEE floating point, which is a very different animal than the abstract concept that comes to mind when you say "floating point math". Even when it doesn't mean IEEE floating point, every binary floating point implementation is a compromise with some combination of limited performance, limited range, and limited precision.

      Consider that IEEE floating point has no exact representation of numbers lik

      • Sounds like someone has been living in a perfect reality (aka drugs). I dare you to make a voltage regulator with better precision than an IEEE 64-bit floating point number... or even a 32-bit one.

        Then try to implement mixers and actual logic.

        Then embed it into a tiny circuit amid an extremely noisy environment.

        Of course, the last two are just academic, as you're never even going to manage the voltage regulator without some extreme equipment.
      • by imgod2u ( 812837 )

        How exactly can't IEEE FP represent 0.1?

        0 0111_1110 000_0000_0000_0000_0000_0000
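
        For what it's worth, a quick decode of that pattern (assuming it is meant as IEEE 754 single precision) gives 0.5, not 0.1, and the closest representable single to 0.1 is only an approximation; a small Python sanity check:

        # Decode the quoted bit pattern, then round 0.1 to single precision.
        import struct

        bits = 126 << 23   # sign 0, exponent 0111_1110 (126), mantissa all zeros
        print(struct.unpack('>f', bits.to_bytes(4, 'big'))[0])   # prints 0.5

        # The nearest single-precision value to 0.1 is not exactly 0.1:
        print('%.20f' % struct.unpack('>f', struct.pack('>f', 0.1))[0])
        # prints 0.10000000149011611938..., i.e. not exactly 0.1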

    • by Frequency Domain ( 601421 ) on Wednesday August 18, 2010 @09:31AM (#33288096)

      [...] Nowadays Bayesian calculations usually involve thousands of iterations[...]. The simulation then converges to the right answer.

      The convergence you refer to is asymptotic. In practice it takes about 10,000 iterations to get around a 1% bound on a single probability point estimate, and a factor of a hundred for each order of magnitude improvement. On top of that, if you're dealing with multiple distributions, the overall expectation is not just a simple function of the component expectations unless the whole system is linear; you need to use convolution to combine results. And on top of that, lots of interesting problems are based on order statistics, not means/expectations. Having hardware that correctly manipulates distributional behavior in a few CPU cycles would blow the doors off of MCMC.
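
      To put a rough number on that scaling: plain Monte Carlo error on a single probability shrinks like 1/sqrt(n), so on the order of 10,000 samples buy roughly 1% accuracy and each extra digit costs about 100x more samples. A throwaway Python sketch (nothing specific to Lyric's approach):

      # Monte Carlo error on a probability estimate shrinks like 1/sqrt(n).
      import random

      def estimate(p_true, n):
          hits = sum(random.random() < p_true for _ in range(n))
          return hits / n

      p = 0.3
      for n in (100, 10_000, 1_000_000):
          est = estimate(p, n)
          print(n, est, abs(est - p))   # error drops ~10x per 100x samples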

      • Yeah, fine, we'd all like to compute our likelihoods faster, and you can imagine hardware which does it, but it's not clear from the article why doing this in analog is superior. Or even how you can avoid sampling algorithms like MCMC using this approach. How does making probabilities analog get you a joint probability distribution over a parameter space, let you marginalize parameters to a lower-dimensional distribution, and all the other things MCMC is for? I have a feeling that this chip is targeted a

      • by Z8 ( 1602647 )

        Yep, Monte Carlo techniques typically converge at O(n^-0.5), unless possibly if you are using low-discrepancy sequences/quasi-random numbers. But I guess I don't see how the article will result in hardware that can manipulate distributions in "a few CPU cycles". First of all, manipulating a probability distribution often does not involve many numbers between 0 and 1 (e.g. if it is continuous you are dealing with probability densities, not probabilities).

        About convolutions, those are usually most quickly calculat

      • Having hardware that correctly manipulates distributional behavior in a few CPU cycles would blow the doors off of MCMC.

        No, it wouldn't. MCMC is used on problems where the curse of dimensionality makes the problem intractable with direct methods. You can't beat this with hardware that calculates distributions directly, because the complexity in such problems is exponential. (You also can't beat this with low discrepancy sequences, because they're designed to fill up hypercubes, but in usual applic

    • Those "thousands of iterations" on a modest PC are finished and the results presented before the Enter key is back to rest position.

      The intended use (error detection and correction, involving a single computation) is probably ideal. You can't gang these things to do more complex work because calculating with simple probabilities, in analog or digital, quickly runs into the flaw of averages and gives you wrong numbers. That's why we use Monte Carlo simulations and uniform partitions to preserve the integrity

      • by Z8 ( 1602647 )

        Agree on the GPU part. If you haven't seen them, check out the gputools [r-project.org] and cudaBayesreg [r-project.org] R packages. They don't look too easy to use now, but eventually this will become mainstream.

  • Analogue Computing (Score:2, Insightful)

    by Anonymous Coward

    This is potentially a great advance. Everyone knows that analogue computing can greatly outperform digital computing (now each bit has a continuum of states so stores infinitely more data, each operation on 2 'bits'....you get the idea)....but there are many issues to resolve i.e.

    1) Error correction - every 'bit' is in an erroneous state
    2) Writing code for the thing - anyone got analogue algorithm design on their CVs?

  • I can see how an AND gate would work.
    Anyone want to guess how the others function?

    Or am I on completely the wrong track here.
    • Actually, NOT is easy
      0.7 NOT = 0.3
    • by selven ( 1556643 ) on Wednesday August 18, 2010 @07:47AM (#33286664)

      If 0.8 AND 0.6 = 0.7 (I assume you're taking the average here), then 1 AND 0 would be 0.5, when it's supposed to be 0. The only answers I would accept for 0.8 AND 0.6 are 0.6 (min) and 0.48 (multiplication). An OR gate is constructed by attaching NOT (1 - x here) gates to the inputs and output of an AND gate, yielding 0.8 or 0.92 depending on which rule you go with.
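
      A minimal sketch of the multiplication-rule gates described above, assuming the inputs are probabilities of independent events (purely illustrative; the article doesn't say this is how Lyric's gates actually work):

      # Probability "gates" under an independence assumption:
      # NOT is the complement, AND is a product, OR follows from De Morgan.
      def p_not(a):     return 1.0 - a
      def p_and(a, b):  return a * b     # p(A)p(B), assuming independence
      def p_or(a, b):   return p_not(p_and(p_not(a), p_not(b)))
      def p_nand(a, b): return p_not(p_and(a, b))

      print(p_and(0.8, 0.6))   # 0.48, as above
      print(p_or(0.8, 0.6))    # 0.92
      print(p_and(1.0, 0.0))   # 0.0 (reduces to ordinary Boolean AND at the extremes)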

      • Of course, multiplicative is what I should have done
        XOR is a bit of a bugger to figure out, so I will cheat and use this [wikipedia.org].
        That's all the gates covered.
      • by bondsbw ( 888959 )

        I believe you are correct with the multiplication rule. According to the article,

        Whereas a conventional NAND gate outputs a "1" if neither of its inputs match, the output of a Bayesian NAND gate represents the odds that the two input probabilities match. This makes it possible to perform calculations that use probabilities as their input and output.

        But I'm not clear as to what "the odds that the two input probabilities match" means... that implies, to me, that it returns a 1 if the inputs are identical and 0 if not. I'm thinking it instead means, "Given events A and B with inputs p(A) and p(B), Bayesian NAND represents p(A and B)." Or perhaps p(A nand B)... I don't know.

        • But I'm not clear as to what "the odds that the two input probabilities match" means...

          It means that the reporter hasn't the foggiest idea how it works but had to write some sort of balderdash anyway.

        • But I'm not clear as to what "the odds that the two input probabilities match" means... that implies, to me, that it returns a 1 if the inputs are identical and 0 if not. I'm thinking it instead means, "Given events A and B with inputs p(A) and p(B), Bayesian NAND represents p(A and B)." Or perhaps p(A nand B)... I don't know.

          It's not possible to compute p(A and B) from just p(A) and p(B). You need other information, like p(A|B) or p(B|A). For example, flip two coins, X and Y. If A = "X is heads", B = "Y is heads", then p(A) = 1/2, p(B) = 1/2, p(A and B) = 1/4. If A = "X is heads", B = "X is tails", then p(A) = 1/2, p(B) = 1/2, p(A and B) = 0. So the gate couldn't possibly mean either of those things.

      • Re: (Score:3, Informative)

        by Anonymous Coward

        It's called fuzzy logic [http://en.wikipedia.org/wiki/Fuzzy_logic].

        One way to define it is NAND(x,y) = 1-MIN(x,y)
        and the rest follows using usual logic rules.

        I have no idea if that's what they do though.
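
        And, for comparison, a sketch of the fuzzy-logic (min/max) convention mentioned above, again purely as an illustration:

        # Fuzzy-logic connectives: AND = min, OR = max, NOT = complement,
        # so NAND(x, y) = 1 - min(x, y) as stated above.
        def f_and(x, y):  return min(x, y)
        def f_or(x, y):   return max(x, y)
        def f_not(x):     return 1.0 - x
        def f_nand(x, y): return 1.0 - min(x, y)

        print(f_and(0.8, 0.6))    # 0.6, the "min" rule from the grandparent
        print(f_nand(0.8, 0.6))   # 0.4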

    • Re: (Score:3, Informative)

      by ikkonoishi ( 674762 )

      I think it is more of a probability thing than what you are thinking of. The return is the probability that the two values are the same. So 0.5 AND 0.5 would be 100% while 0.5 AND 0.6 would be like 80% or something, depending on the allowed error and uncertainty.

      Thinking of this reminded me of BugBrain [biologic.com.au]. If you want to play with Bayesian logic it has a pretty good set of examples including building a neural network to perform simple character recognition.

  • Awesome! (Score:4, Funny)

    by UID30 ( 176734 ) on Wednesday August 18, 2010 @07:41AM (#33286622)
    How much longer before we get the "infinite improbability machine"?
    • Sorry, we'll never have one in America. We can't make proper tea, and I don't believe they can run on coffee.

      We shall never experience the WHUMP-thunk of a whale and a pot of petunias landing on our shores, unless one of the Brit boffins makes a mistake and as you know that never happens.

    • That's not likely to happen any time soon.

      So, next week.

    • How much longer before we get the "infinite improbability machine"?

      As soon as someone hooks it up to a nice, hot cup of tea.

  • One step closer to the Infinite Improbability Drive (http://en.wikipedia.org/wiki/Technology_in_The_Hitchhiker's_Guide_to_the_Galaxy#Infinite_Improbability_Drive)
  • It's still just math. How will this be any different from digital calculations except for maybe the level of precision?
  • The actual thesis (Score:5, Informative)

    by Mathiasdm ( 803983 ) on Wednesday August 18, 2010 @08:02AM (#33286810) Homepage
    By Ben Vigoda, Co-Founder and CEO: http://phm.cba.mit.edu/theses/03.07.vigoda.pdf [mit.edu]
    • by Born2bwire ( 977760 ) on Wednesday August 18, 2010 @08:09AM (#33286892)

      By Ben Vigoda, Co-Founder and CEO: http://phm.cba.mit.edu/theses/03.07.vigoda.pdf [mit.edu]

      Huh, I thought he was dead.

    • Some actual facts. Thank you.

    • See slide 41 for the NAND gate they are bragging about.

    I'm a bit worried about them being completely fabless. I'm sure all their circuits work in SPICE, but how is this going to deal with real world noise, especially embedded on some other digital chip? The powerpoint explicitly states that it is adversely affected especially by the sudden spikes caused by digital noise...

      I was about to post the slideshow myself, but I see you beat me to it :)
      • Re: (Score:3, Insightful)

        by crgrace ( 220738 )

        I'm a bit worried about them being completely fabless. I'm sure all their circuits work in SPICE, but how is this going to deal with real world noise, especially embedded on some other digital chip? The powerpoint explicitly states that it is adversely affected especially by the sudden spikes caused by digital noise...

        I wouldn't worry too much. More companies than not are fabless you know. They are going to deal with noise the way all analog designers deal with noise. They are going to use a deep n-well with guard rings. They are going to bypass the hell out of their supplies. They are going to run their signals on high metal with shields beneath. The same things we all do.

    • Re:The actual thesis (Score:4, Informative)

      by Asic Eng ( 193332 ) on Wednesday August 18, 2010 @11:37AM (#33290200)
      Being a chip designer I quite frequently encounter articles which claim that there is going to be a "new way to design chips" coming soon. I'm admittedly a bit jaded hearing about another one.

      Often these approaches overstate the problems of current methodologies quite significantly. This thesis, too, seems to hit the old favorites. Here is an example: In clocked digital systems, speed and throughput is typically limited by worst case delays associated with the slowest module in the system.

      This would be true if clocked digital systems were restricted to a single clock. Some are, but the embedded devices I work on usually have half a dozen clocks or more. Some modules run at fairly high speeds, others at relatively low speeds - synchronizing them is not only a standard task, it's actually reasonably easy compared with other problems we face.

      Very closely related is another of their claims: The larger the area over which the same clock is shared, the more costly and difficult it is to distribute the clock signal.

      Again - true in principle, but exaggerating the problem. It's not so difficult to distribute a clock over a large area if you allow skew between different areas. That might appear to defeat the purpose, but you really only need to interface reliably between those areas. Skew can even be helpful in some cases: if you send signal X from block A to be clocked-in by block B - then it helps if the clock arrives later at block B than at block A. Of course it's a disadvantage for a signal Y driven from B to A - but that signal might be faster (less logic to go through in block B). Modern design tools can automatically use clock skew to achieve better timing.

      One more: Building in redundancy to avoid catastrophic failure is not a cost-effective option when manufacturing circuits on a silicon wafer

      Well, we happen to do that regularly; it's cost-effective if you know what you are doing. There are parts of the chip which are much more likely to fail than others - RAMs are more prone to defects than ordinary digital logic. So as part of device testing, defective areas of a RAM block can be mapped to a handful of spare cells. Doubling every transistor, as suggested in the thesis, is not necessary, obviously.

      Any of these "fundamentally new" approaches have to compete with the evolutionary solutions which people find for the same problems. That's hard because some of these are at least as clever as the "fundamental" ones, and they are much easier to adapt in existing design flows. I'm not ruling out that at some point we'll switch to a completely different design methodology, just as I'm not excluding the possibility that lighter-than-air travel will at some point find a place in commercial aviation again. I'm just not holding my breath.

      • by imgod2u ( 812837 )

        From the paper, it appears they are using 8 states per signal and custom built a Bayesian NAND gate. I have to question whether or not this could've been better achieved (and is being better achieved) with a fully custom 8-bit Bayesian NAND cell.

  • It's vaguely familiar, but since no two circuits are *truly* identical at the analog layer, *and* change as the temperature changes, people used digital instead where 'mostly 0' is still '0' and 'mostly 1' is still '1' regardless. Otherwise you can't mass produce them.

    Of more interest is people using analog-alike bitstreams, where the average number of 1's vs 0's in a random stream is the amplitude of the analog wave. They then blend the input streams together to produce the output stream. I've mostly seen

  • First probability on a chip, next an improbability drive!

  • Am I really the only person left that hates this construction? I know that it has become (very) common usage, but we, as nerds, should understand that details matter.

    If one says that something is 50% smaller, we understand that to mean half the size. And if one says that something is 3000% smaller, or 30 times smaller, should we not understand that as not only taking no space, but actually giving us 29 times the original space back?

    Unless we are making a three part comparison, which has new perils. I
    • While I agree with you that it is unclear or at least not intuitively obvious, the plain fact is that it has been in use for a long time and is very common. It's not a recent development (Jonathan Swift is known to have used the construction in the early 1700s) nor is it rare (almost 500,000,000 Google results for 'times less than'). For better or worse, language is not tied directly to math, nor is the meaning of a phrase necessarily tied to the meaning of the individual words that make it up.

    • 10x smaller means: one tenth of the original size.
      Hence, 50% smaller means: double the size.
      Simple, eh?

  • Analog VLSI and Neural Systems [amazon.com] by Ben Vigoda, MIT Press 2010.

    Ooops... make that Carver Mead, Addison Wesley 1989
