## Chips That Flow With Probabilities, Not Bits

Posted by samzenpus

from the new-angle dept.

holy_calamity writes

*"Boston company Lyric Semiconductor has taken the wraps off a microchip designed for statistical calculations that eschews digital logic. It's still made from silicon transistors. But they are arranged gates that compute with analogue signals representing probabilities, not binary bits. That makes it easier to implement calculations of probabilities, says the company, which has a chip for correcting errors in flash memory claimed to be 30 times smaller than a digital logic-based equivalent."*
## Analog Computers (Score:4, Insightful)

It would seem that they have reinvented the analog computer, but this time entirely on a chip. And probably (hopefully) with some logic that prevents errors due to natural processes like capacitive coupling.

## Re: (Score:3, Insightful)

Being able to do it on silicon should mean they can make them cheaply and quickly with existing fab gear. I could see these being a lot of fun for tinkerers.

## Re: (Score:2, Informative)

This has nothing to do with analog computers. It has to do with probability of error:

ref1: http://www.hindawi.com/journals/vlsi/2010/460312.html

ref2: http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5118445

## Re:Analog Computers (Score:5, Informative)

No, it does. We aren't trying to reduce error in logic operations. We're passing analog values between one and zero into logic circuits. Literally, at the lowest level, the "bits" pumping through the chip are probabilities. It's not analog in the sense that we use op amps, we still use gates, but the inputs and outputs of the gates are probabilities, not hard bits.

## Re: (Score:3, Informative)

It's not analog in the sense that we use op amps, we still use gates

What's the difference? A gate is just a high speed high gain ultra high distortion opamp.

Op amps have differential inputs, for one thing. They also generally have much, much higher gain than a gate. Do a voltage transfer characteristic of an inverter in the process of your choice and look at the slope when it is in its linear region. The gain won't be any larger than 10-15 Volts/Volt. You can hardly use *that* as an op amp.

## Re: (Score:3, Informative)

It's not analog in the sense that we use op amps, we still use gates

What's the difference? A gate is just a high speed high gain ultra high distortion opamp.

Forgot your introductory digital design courses already?

Digital circuits are designed to reliably transmit or compute a digital value in the presence of noise. The way this is done is by excluding huge ranges of voltages and making very high gain op-amps that, while fast, do not need to be accurate. Accuracy is thrown out the window in favor of speed and noise immunity. You will (or should) never see a properly operating op-amp in a digital circuit putting out a voltage other than something in the range r

## Re: (Score:3, Interesting)

Forgot your introductory digital design courses already?

I think you forgot your analog circuit class. Functionally speaking, an inverter is no different than a high-gain, rail-to-rail voltage controlled op-amp. It's just far more susceptible to noise and has a very distorted IV curve. The reason digital has taken on so much popularity is that since you're either railing the amplifier to its VDD rail or GND rail and don't care about the in-betweens, you can use very small, very fast and sometimes lower power

## Re: (Score:2)

It's not analog in the sense that we use op amps, we still use gates

What's the difference? A gate is just a high speed high gain ultra high distortion opamp.

And worse, in this application neither high gain nor high distortion are desired properties.

## Re: (Score:3, Interesting)

Being able to do it on silicon should mean they can make them cheaply and quickly with existing fab gear. I could see these being a lot of fun for tinkerers.

Sure, you can make them cheap. But QA could be a bitch, I imagine. Simply ensuring that all the gates in use operate linearly within a small error margin should be hard. And how are you going to give error margins for each output it calculates? After all, it's analog, not digital.

## Re: (Score:2)

Analog ICs have been around since they put two transistors on a base. There's nothing new about an analog computer, other than maybe putting all the pieces together onto a single piece of silicon, but analog ICs are plentiful. The lowly op-amp is a very common one, and there are often transistorized equivalents for many passive components (because makin

## Re: (Score:3, Informative)

Of course, if they managed to do this using digital IC fab technology (analog ICs are very "big" when you compare to modern digital deep submicron technology), that'll be a huge breakthrough.

That actually wouldn't be a breakthrough at all. I've been designing analog ICs in digital deep submicron technology for 8 years. Some really big companies (Broadcom, Marvell, etc.) have built their businesses on it. I'm currently working on a pipelined ADC in 65nm CMOS. You may be thinking about bipolar analog ICs, which are still important in the marketplace. But, for communications or imaging systems work, the vast majority of analog circuits are on digital CMOS (with or without a special capacitor o

## Re: (Score:2, Interesting)

Probability computing is not analog computing. Nor is it digital. Nor is it limited to error correction and search engines. It's a new implementation of a mathematical concept that allows arbitrary logic to be implemented smaller and faster than traditional digital chips.

Calling it analog is an insult.

## Re: (Score:2)

I am assuming that, by your statement, you mean that probability computing can be analog or digital, and is not definitively one or the other? I was reading your post, and I first thought you were saying that it is a third category (which makes no sense).

But, that being said, why is calling it analog an insult? If analog (continuous) logic/numbers are being used rather than digital (discrete) logic/numbers, then analog is not an insult, simply accurate - it is describing how it works, not what it is focused

## Re: (Score:3, Informative)

The whole idea is to use fewer gates and

## It uses analog signals internally (Score:2)

Ergo it's an analog system. What those signals represent is irrelevant.

## Mod shit down (Score:2, Interesting)

It's got absolutely nothing to do with analog computers. At all. The first application cited is even digital storage.

## Re: (Score:3, Interesting)

It's got absolutely nothing to do with analog computers.

Really? Because the Fine Article from the OP says:

Internally, Lyric's probability gates are essentially analog devices typically working with analog values called pbits that have a digital resolution of approximately 8 bits, although the approach is applicable for different resolutions as well.

"[A]nalog devices working with analog values" does actually imply it is an analog computer, at least in part. Still, the overall usage does sound novel, through the use of Bayesian statistics "operations" logic as an alternative to the better known Boolean logic operations used in binary digital computers. While electronic analog computers [wikipedia.org] are primarily considered rare [science.uva.nl] artifacts [caltech.edu] these days, analog electronics still exist, and continue to be used in variou

## Oh looksy me's got a quantum computer (Score:2)

under my desk. After all, quantum mechanics is used inside it. So it's a quantum computer, right?

## Re: (Score:3, Interesting)

No, because it doesn't directly use entanglement and superposition.

Like your car uses electricity, but it's probably not an electric car.

## A wheel is not a car (Score:2)

And an analog component within a digital computer does not an analog computer make -- they all have analog components anyway, for fuck's sake.

## Re: (Score:3, Insightful)

"[A]nalog devices working with analog values" does actually imply it is an analog computer, at least in part. Still, the overall usage sounds does novel, through the usage of Bayesian statistics "operations" logic as an alternative to the better known Boolean logic operations used in binary digital computers.

I have to disagree with you here. An analog computer is not the same thing as analog electronics in general. As an analogy, using a few digital gates to control an alarm doesn't mean you just built a digital computer.

An analog computer is a special system that uses analog circuits to solve systems of differential equations. It is uniquely analog in the sense that it is continuous-time and has a continuously-variable output (not quantized). In the 40s and 50s, it was cheaper, more accurate (usually) and

## The short life of analog computers (Score:2)

Analog computers were cost-effective when a "floating-point option" came in a 5' cabinet and cost more than a luxury home.

ICs and cheap memory fixed that problem and analog calculation went the way of the dodo.

## Re: (Score:2)

No, but an analog mixer is very much a multiplication unit. Which is what these guys seem to be trying to improve; a faster math unit to perform probability calculations on fixed-point (with the binary point always at the MSB) numbers.

## Re: (Score:2)

Nothing remotely like an analog computer in the TR article, and nothing still on the page you link to.

## Re: (Score:3, Interesting)

It would seem that they have reinvented the analog computer, but this time entirely on a chip.

Actually it would be sweet to have an equivalent to a CPLD or FPGA [wikipedia.org] for analog electronics, where an entire analog sub-system could be reduced to a single chip, reducing the cost, board real estate, and noise levels in low-cost electronics. Maybe it's my math background, or working in scientific computing, but being able to work natively with continuous numbers versus discrete representations that are often only approximate (a la floating point numbers in a digital computer) would be nice.

If suc

## Re: (Score:3, Informative)

You mean like a Field Programmable Analog Array?

http://en.wikipedia.org/wiki/Field-programmable_analog_array [wikipedia.org]

## Re: (Score:3, Interesting)

Actually, the article doesn't say that at all. In fact, it gives virtually no indication about how these new devices work. An analog computer uses op amps to solve differential equations. I highly, highly doubt that is what this new device is doing.

## Oh hilarious (Score:2)

Those BSOD jokes were old 10 years ago. Did your time machine take a wrong turn and you ended up in 2010 instead of 1995?

## There are 10 kinds of people in the world.. (Score:5, Funny)

12.5% that understand binary, 87.5% that don't...

## Re:There are 10 kinds of people in the world.. (Score:4, Funny)

Probably.

## Re:There are 10 kinds of people in the world.. (Score:5, Funny)

Probably.

User: Are we on the right road to the beach?

Google maps: Probably.

User: The fuck?... Is this the beach road or not.

Google maps: I'd say yes...ish. Most likely.

User: The road is cut! It ends like right here!

Google maps: Let me change my first answer to "I wouldn't bet on it. Much. I wouldn't bet much on it. Ok no, it's not likely to be the road. I'm turning off now. Good luck!"

## Re: (Score:2)

Magic 8-ball computing.

As I see it, yes

It is certain

It is decidedly so

Most likely

Outlook good

Signs point to yes

Without a doubt

Yes

Yes - definitely

You may rely on it

Reply hazy, try again

Ask again later

Better not tell you now

Cannot predict now

Concentrate and ask again

Don't count on it

My reply is no

My sources say no

Outlook not so good

Very doubtful

## A Computer for Truth Challenged Scientists? (Score:3, Funny)

## Re: (Score:2)

No.

## Analog computers live again!! (Score:1, Insightful)

## Re: (Score:1, Informative)

It's not the same kind of analog. It's analog in the sense that it operates on things between 1 and 0, but it still uses logic.

## Re: (Score:2)

Logic is when you're operating on 1's and 0's (hence the term: logic). When you're operating on a continuous scale of voltage values, it's analog.

## Re: (Score:1, Interesting)

Just like nobody needs enough vector float computations and SIMD instructions at once to justify making a card that does a @$%#$ ton of them at once. This chip, on a PCIe card, could make a lot of sense.

## Re: (Score:2)

I think the concept is a good one.

An area this technology might be used in could be embedded controllers, which are not general-purpose devices.

If you're building a thrust vectoring system for a plane, and the servos have an accuracy of 1%, then it is more important to deliver more frequent servo updates than to deliver those updates with 0.0001% accuracy. If your device is attached to a sensor that has 5% manufacturing tolerances then you may not need even 8-bit precision on the math.

In the IS discrete-ma

## Re: (Score:2)

Control systems were analog long before they were digital. But transistors got smaller and smaller and faster and faster (and lower power to boot!) to the point where even today's sub-mW microcontrollers are fast enough to do the calculations in real time (while running an RTOS even!) such that it's not necessary to use an analog circuit. There are just so many advantages (easier maintenance, upgrading, troubleshooting, consistency, configurability) that even if the A/D -> microcontroller -> D/A chain

## Re: (Score:2)

Unless, of course, they've improved the calibration suitably, and, by implementing them in silicon rather than with the techniques used 50 years ago, also kept the speed adv

## Re: (Score:2)

It doesn't matter how good your calibration is, the accuracy is still very low for analog computers (compared to digital ICs, for example). Anything above maybe 4 to 6 bits of accuracy will start being slower than digital circuits in the same process (speed/accuracy tradeoff). There may be really, really specialized situations where this is useful, but not many. But that's a bit off-topic, since this company isn't trying to sell an analog computer IC.

## Probability in computers: it's called a float (Score:5, Insightful)

The article mentions Bayesian calculations. Can these computers really speed up those calculations? Nowadays Bayesian calculations usually involve thousands of iterations of a technique called Markov Chain Monte Carlo [wikipedia.org] (MCMC) unless the distributions in question are conjugate priors [wikipedia.org]. The simulation then converges to the right answer.

The issue I see is that all these techniques are just math. They are either analytic (conjugate priors) or require strict error bounds in order to get sensible answers (MCMC). There's no separate system of math that Bayesians use. Like many others, Bayesians just need quick reliable floating point mathematics. So anyway, I don't see how this can help Bayesian statisticians, unless it also revolutionizes engineering, physics, etc.
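
For readers who haven't seen MCMC, here is a minimal random-walk Metropolis sampler that illustrates the iterate-until-convergence loop described above. The coin-flip posterior and the `log_post` function are made-up examples for illustration, not anything from the article or the chip:

```python
import math
import random

def metropolis(log_post, x0, steps, scale=0.2, seed=0):
    """Random-walk Metropolis: propose a nearby point, accept it with
    probability min(1, post(candidate) / post(current))."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(steps):
        cand = x + rng.uniform(-scale, scale)
        lp_cand = log_post(cand)
        if math.log(rng.random() + 1e-300) < lp_cand - lp:
            x, lp = cand, lp_cand
        samples.append(x)
    return samples

# Made-up example: posterior for a coin's heads-probability after
# seeing 7 heads in 10 flips with a flat prior, i.e. Beta(8, 4).
def log_post(p):
    if not 0.0 < p < 1.0:
        return float("-inf")
    return 7 * math.log(p) + 3 * math.log(1 - p)

samples = metropolis(log_post, 0.5, 20000)
est = sum(samples[5000:]) / len(samples[5000:])  # discard burn-in
print(est)  # converges toward the exact posterior mean 8/12
```

The thousands of iterations (and the burn-in that gets thrown away) are exactly the cost that dedicated probability hardware would be trying to avoid.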

## Re: (Score:2)

I've been dealing with Bayesian methods for a few years, too. I understand the goal of the hardware is not to run everything that is being sold as Bayesian methods. Basically Bayesian calculations mean computing conditional probabilities, which usually gets down to a ton of multiplications and additions. If the analog hardware can produce results for a particular subproblem with sufficient accuracy, then you are saving a lot of power and time. If it can produce estimates that are not entirely accurate but w

## Re: (Score:2)

I suspect the variable you're leaving out is that, in a binary computer context, "floating point math" usually means IEEE floating point, which is a very different animal than the abstract concept that comes to mind when you say "floating point math". Even when it doesn't mean IEEE floating point, every binary floating point implementation is a compromise with some combination of limited performance, limited range, and limited precision.

Consider that IEEE floating point has no exact representation of numbers lik
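
The point about 0.1 is easy to demonstrate: the nearest IEEE double to 0.1 is a binary fraction slightly above 1/10, so familiar decimal identities fail. A quick check in Python:

```python
from decimal import Decimal
from fractions import Fraction

# Decimal(0.1) shows the exact value of the double nearest to 0.1;
# it is a binary fraction, not 1/10.
print(Decimal(0.1))

# Both sides round differently, so the decimal identity fails:
print(0.1 + 0.2 == 0.3)                  # False

# The stored value is provably not the rational number 1/10:
print(Fraction(0.1) == Fraction(1, 10))  # False
```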

## Re: (Score:2)

Then try to implement mixers and actual logic.

Then embed it into a tiny circuit amid an extremely noisy environment.

Of course, the last two are just academic, as you're never even going to manage the voltage regulator without some extreme equipment.

## Re: (Score:2)

How exactly can't IEEE FP represent 0.1?

0 0111_1110 000_0000_0000_0000_0000_0000

## Re:Probability in computers: it's called a float (Score:5, Informative)

[...] Nowadays Bayesian calculations usually involve thousands of iterations[...]. The simulation then converges to the right answer.

The convergence you refer to is asymptotic. In practice it takes about 10000 iterations to get around a 1% bound on a single probability point estimate, and a factor of a hundred for each order of magnitude improvement. On top of that, if you're dealing with multiple distributions the overall expectation is not just a simple function of the component expectations unless the whole system is linear, you need to use convolution to combine results. And on top of that, lots of interesting problems are based on order statistics, not means/expectations. Having hardware that correctly manipulates distributional behavior in a few CPU cycles would blow the doors off of MCMC.
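
The "factor of a hundred per order of magnitude" follows from Monte Carlo's 1/sqrt(n) error rate, which a quick illustrative simulation (made-up numbers, fixed seed) shows directly:

```python
import random

def mc_estimate(p_true, n, rng):
    """Estimate a probability by averaging n Bernoulli draws."""
    return sum(rng.random() < p_true for _ in range(n)) / n

rng = random.Random(42)
# Absolute error of the estimate of p = 0.3 at increasing sample sizes;
# each 100x increase in samples buys roughly one extra decimal digit.
errors = {n: abs(mc_estimate(0.3, n, rng) - 0.3) for n in (10**2, 10**4, 10**6)}
print(errors)
```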

## Re: (Score:2)

Yeah, fine, we'd all like to compute our likelihoods faster, and you can imagine hardware which does it, but it's not clear from the article why doing this analog is superior. Or even how you can avoid sampling algorithms like MCMC using this approach. How does making probabilities analog get you a joint probability distribution over a parameter space, let you marginalize parameters to a lower dimensional distribution, and all the other things MCMC is for? I have a feeling that this chip is targeted a

## Re: (Score:2)

Yep, Monte Carlo techniques typically converge at O(n^-0.5), unless perhaps you are using low-discrepancy sequences/quasi-random numbers. But I guess I don't see how the article will result in hardware that can manipulate distributions in "a few CPU cycles". First of all, manipulating a probability distribution often does not involve many numbers between 0 and 1 (e.g. if it is continuous you are dealing with probability densities, not probabilities).

About convolutions, those are usually most quickly calculat

## Re: (Score:2)

No, it wouldn't. MCMC is used on problems where the curse of dimensionality makes the problem intractable with direct methods. You can't beat this with hardware that calculates distributions directly, because the complexity in such problems is exponential. (You also can't beat this with low discrepancy sequences, because they're designed to fill up hypercubes, but in usual applic

## Re: (Score:2)

Those "thousands of iterations" on a modest PC are finished and the results presented before the Enter key is back to rest position.

The intended use -- error detection and correction, involving a single computation -- is probably ideal. You can't gang these things to do more complex work because calculating with simple probabilities, in analog or digital, quickly runs into the flaw of averages and gives you wrong numbers. That's why we use Monte Carlo simulations and uniform partitions to preserve the integrity

## Re: (Score:2)

Agree on the GPU part. If you haven't seen them, check out the gputools [r-project.org] and cudaBayesreg [r-project.org] R packages. They don't look too easy to use now, but eventually this will become mainstream.

## Analogue Computing (Score:2, Insightful)

This is potentially a great advance. Everyone knows that analogue computing can greatly outperform digital computing (now each bit has a continuum of states so stores infinitely more data, each operation on 2 'bits'....you get the idea)....but there are many issues to resolve, e.g.:

1) Error correction - every 'bit' is in an erroneous state

2) Writing code for the thing - anyone got analogue algorithm design on their CVs?

## Re: (Score:2)

"Hell, even Quaternary Computing would be better than crappy Binary."

It would make no difference - the algorithms would be identical. All you'd gain would be saved storage space as each "bit" could represent 4 values instead of 2. You'd still be dealing with a system that could only handle discrete values.

## 1 AND 1 = 1 : 0.8 AND 0.6 = 0.7 (Score:1)

Anyone want to guess how the others function?

Or am I on completely the wrong track here.

## Re: (Score:1)

0.7 NOT = 0.3

## Re:1 AND 1 = 1 : 0.8 AND 0.6 = 0.7 (Score:5, Insightful)

If 0.8 AND 0.6 = 0.7 (I assume you're taking the average here), then 1 AND 0 would be 0.5, when it's supposed to be 0. The only answers I would accept for 0.8 AND 0.6 are 0.6 (min) and 0.48 (multiplication). An OR gate is constructed by attaching NOT (1 - x here) gates to the inputs and output of an AND gate, yielding 0.8 or 0.92 depending on which rule you go with.
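
The multiplication rule silently assumes the two inputs are independent events. Under that assumption, the gate algebra the parent describes can be sketched like this; it is an illustration, not Lyric's actual implementation:

```python
# Probability-gate arithmetic under an independence assumption
# (the "multiplication" rule; hypothetical, not Lyric's design).
def p_not(a):
    return 1.0 - a

def p_and(a, b):
    return a * b  # P(A and B) for independent A, B

def p_or(a, b):
    # De Morgan: OR = NOT(AND(NOT a, NOT b)), as constructed above
    return p_not(p_and(p_not(a), p_not(b)))

def p_nand(a, b):
    return p_not(p_and(a, b))

# Hard bits behave like ordinary Boolean logic:
assert p_and(1, 0) == 0 and p_or(1, 0) == 1 and p_nand(1, 1) == 0

print(p_and(0.8, 0.6))  # ~0.48
print(p_or(0.8, 0.6))   # ~0.92
```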

## Re: (Score:1)

XOR is a bit of a bugger to figure out, so I will cheat and use this [wikipedia.org].

That's all the gates covered.

## Re: (Score:2)

I believe you are correct with the multiplication rule. According to the article,

Whereas a conventional NAND gate outputs a "1" if neither of its inputs match, the output of a Bayesian NAND gate represents the odds that the two input probabilities match. This makes it possible to perform calculations that use probabilities as their input and output.

But I'm not clear as to what "the odds that the two input probabilities match" means... that implies, to me, that it returns a 1 if the inputs are identical and 0 if not. I'm thinking it instead means, "Given events A and B with inputs p(A) and p(B), Bayesian NAND represents p(A and B)." Or perhaps p(A nand B)... I don't know.

## Re: (Score:2)

It means that the reporter hasn't the foggiest idea how it works but had to write some sort of balderdash anyway.

## Re: (Score:2)

But I'm not clear as to what "the odds that the two input probabilities match" means... that implies, to me, that it returns a 1 if the inputs are identical and 0 if not. I'm thinking it instead means, "Given events A and B with inputs p(A) and p(B), Bayesian NAND represents p(A and B)." Or perhaps p(A nand B)... I don't know.

It's not possible to compute p(A and B) from just p(A) and p(B). You need other information, like p(A|B) or p(B|A). For example, flip two coins, X and Y. If A = "X is heads", B = "Y is heads", then p(A) = 1/2, p(B) = 1/2, p(A and B) = 1/4. If A = "X is heads", B = "X is tails", then p(A) = 1/2, p(B) = 1/2, p(A and B) = 0. So the gate couldn't possibly mean either of those things.
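
The same point can be stated as a bound: given only the marginals, p(A and B) is confined to an interval (the Fréchet bounds), and both coin examples above land at different points inside it. A small sketch:

```python
# Frechet bounds: knowing only p(A) and p(B), p(A and B) can be
# anywhere in [max(0, pA + pB - 1), min(pA, pB)].
def joint_bounds(pa, pb):
    return max(0.0, pa + pb - 1.0), min(pa, pb)

lo, hi = joint_bounds(0.5, 0.5)
print(lo, hi)  # both coin examples (1/4 and 0) fall inside [lo, hi]
```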

## Re: (Score:3, Informative)

It's called fuzzy logic [http://en.wikipedia.org/wiki/Fuzzy_logic].

One way to define it is NAND(x,y) = 1-MIN(x,y)

and the rest follows using usual logic rules.

I have no idea if that's what they do though.
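
The min-based definition is easy to check: at the endpoints it collapses to the ordinary Boolean truth tables. A quick sketch of Zadeh-style fuzzy gates (again, just an illustration of the definition above, not necessarily what Lyric does):

```python
# Zadeh-style fuzzy gates: AND = min, OR = max, NOT = 1 - x,
# so NAND(x, y) = 1 - min(x, y) as defined above.
def f_not(a):
    return 1.0 - a

def f_and(a, b):
    return min(a, b)

def f_or(a, b):
    return max(a, b)

def f_nand(a, b):
    return f_not(f_and(a, b))

# At the endpoints this collapses to the Boolean truth tables:
for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        assert f_nand(a, b) == (0.0 if a == b == 1.0 else 1.0)

print(f_and(0.8, 0.6), f_nand(0.8, 0.6))  # min gives 0.6, NAND ~0.4
```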

## Re: (Score:3, Informative)

I think it is more of a probability thing than what you are thinking of. The return is the probability that the two values are the same. So 0.5 AND 0.5 would be 100% while 0.5 AND 0.6 would be like 80% or something, depending on the allowed error and uncertainty.

Thinking of this reminded me of BugBrain [biologic.com.au]. If you want to play with Bayesian logic it has a pretty good set of examples including building a neural network to perform simple character recognition.

## Awesome! (Score:4, Funny)

## Re: (Score:1)

Sorry, we'll never have one in America. We can't make proper tea, and I don't believe they can run on coffee.

We shall never experience the WHUMP-thunk of a whale and a pot of petunias landing on our shores, unless one of the Brit boffins makes a mistake, and as you know, that never happens.

## Re: (Score:2)

That's not likely to happen any time soon.

So, next week.

## Re: (Score:2)

How much longer before we get the "infinite improbability machine"?

As soon as someone hooks it up to a nice, hot cup of tea.

## Douglas Adams would be proud. (Score:1, Insightful)

## Re: (Score:3, Funny)

My Bistromathic drive makes that look like an electric pram

## Remember Slide Rules? (Score:1)

## The actual thesis (Score:5, Informative)

## Re:The actual thesis (Score:4, Funny)

By Ben Vigoda, Co-Founder and CEO: http://phm.cba.mit.edu/theses/03.07.vigoda.pdf [mit.edu]

Huh, I thought he was dead.

## Re: (Score:2)

You're thinking of Abe Vigoda, the world's first computer programmer

## Re: (Score:2)

Some actual facts. Thank you.

## Re: (Score:2)

I'm a bit worried about them being completely fabless. I'm sure all their circuits work in SPICE, but how is this going to deal with real world noise, especially embedded on some other digital chip? The powerpoint explicitly states that it is adversely affected especially by the sudden spikes caused by digital noise...

I was about to post the slideshow myself, but I see you beat me to it

## Re: (Score:3, Insightful)

I'm a bit worried about them being completely fabless. I'm sure all their circuits work in SPICE, but how is this going to deal with real world noise, especially embedded on some other digital chip? The powerpoint explicitly states that it is adversely affected especially by the sudden spikes caused by digital noise...

I wouldn't worry too much. More companies than not are fabless you know. They are going to deal with noise the way all analog designers deal with noise. They are going to use a deep n-well with guard rings. They are going to bypass the hell out of their supplies. They are going to run their signals on high metal with shields beneath. The same things we all do.

## Re:The actual thesis (Score:4, Informative)

Often these approaches overstate the problems of current methodologies quite significantly. This thesis too, seems to hit the old favorites. Here is an example:

In clocked digital systems, speed and throughput is typically limited by worst case delays associated with the slowest module in the system.

This would be true if clocked digital systems were restricted to a single clock. Some are, but the embedded devices I work on usually have half a dozen clocks or more. Some modules run at fairly high speeds, others at relatively low speeds - synchronizing them is not only a standard task, it's actually reasonably easy compared with other problems we face.

Very closely related another of their claims:

The larger the area over which the same clock is shared, the more costly and difficult it is to distribute the clock signal.

Again - true in principle, but exaggerating the problem. It's not so difficult to distribute a clock over a large area if you allow skew between different areas. That might appear to defeat the purpose, but you really only need to interface reliably between those areas. Skew can even be helpful in some cases: if you send signal X from block A to be clocked-in by block B - then it helps if the clock arrives later at block B than at block A. Of course it's a disadvantage for a signal Y driven from B to A - but that signal might be faster (less logic to go through in block B). Modern design tools can automatically use clock skew to achieve better timing.

One more:

Building in redundancy to avoid catastrophic failure is not a cost-effective option when manufacturing circuits on a silicon wafer

Well, we happen to do that regularly, it's cost-effective if you know what you are doing. There are parts of the chip which are much more likely to fail than others - RAMs are more prone to defects than ordinary digital logic. So as part of device testing, defective areas of a RAM block can be mapped to a handful of spare cells.

Doubling every transistor, as suggested in the thesis, is not necessary, obviously.

Any of these "fundamentally new" approaches have to compete with the evolutionary solutions which people find for the same problems. That's hard because some of these are at least as clever as the "fundamental" ones, and they are much easier to adapt to existing design flows. I'm not ruling out that at some point we'll switch to a completely different design methodology, just as I'm not excluding the possibility that lighter-than-air travel will at some point find a place in commercial aviation again. I'm just not holding my breath.

## Re: (Score:2)

From the paper, it appears they are using 8 states per signal and custom built a Bayesian NAND gate. I have to question whether or not this could've been better achieved (and is being better achieved) with a fully custom 8-bit Bayesian NAND cell.

## Sounds like a bitstream chip, but with more issues (Score:2, Interesting)

It's vaguely familiar, but since no two circuits are *truly* identical at the analog layer, *and* they change as the temperature changes, people used digital instead, where 'mostly 0' is still '0' and 'mostly 1' is still '1' regardless. Otherwise you can't mass produce them.

Of more interest is people using analog-alike bitstreams, where the average number of 1's vs 0's in a random stream is the amplitude of the analog wave. They then blend the input streams together to produce the output stream. I've mostly seen
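
The bitstream scheme described here is usually called stochastic computing: a value is encoded as the density of 1s in a random stream, and multiplying two independent unipolar streams reduces to a per-bit AND. A small illustration (my own sketch, not from the article):

```python
import random

def encode(value, n, rng):
    """Unipolar stochastic stream: each bit is 1 with probability `value`."""
    return [1 if rng.random() < value else 0 for _ in range(n)]

def decode(stream):
    """The represented value is the density of 1s in the stream."""
    return sum(stream) / len(stream)

rng = random.Random(1)
n = 100_000
a = encode(0.8, n, rng)
b = encode(0.6, n, rng)
product = [x & y for x, y in zip(a, b)]  # per-bit AND multiplies the values
print(decode(product))  # ~0.48 = 0.8 * 0.6, up to sampling noise
```

The catch the parent alludes to is visible here too: the result is only accurate to the sampling noise of the stream, so precision costs stream length.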

## Whats next then? (Score:2)

First probability on a chip, next an improbability drive!

## 30 times smaller? (Score:2)

If one says that something is 50% smaller, we understand that to mean half the size. And if one says that something is 3000% smaller, or 30 times smaller, should we not understand that as not only taking no space, but actually giving us 29 times the original space back?

Unless we are making a three part comparison, which has new perils. I

## Re: (Score:2)

While I agree with you that it is unclear or at least not intuitively obvious, the plain fact is that it has been in use for a long time and is very common. It's not a recent development (Jonathan Swift is known to have used the construction in the early 1700s) nor is it rare (almost 500,000,000 Google results for 'times less than'). For better or worse, language is not tied directly to math, nor is the meaning of a phrase necessarily tied to the meaning of the individual words that make it up.

## Re: (Score:2)

10x smaller means: one tenth of the original size.

Hence, 50% smaller means: double the size.

Simple, eh?

## TF book on this tech! (Score:2)

Ooops... make that Carver Mead, Addison Wesley 1989

## Re: (Score:2)

can I get a simpler explanation of what this can do for you? I understand it says it will be used for probability, but what does that really let you *do*?

## Re: (Score:2)

is that really a probability thing though?

okay, if it matches x number of filters, it's spam, right?

So how is that strictly probability though? That would seem quite absolute. Isn't that more "100% once it matches"? I'm trying to wrap my head around how this is different or how this is exclusively probability? Maybe the example is bad. I don't know, and I don't want to attack it. Sorry, ac's comment didn't help explain though :(

## Re:may i just say (Score:4, Funny)

as a machine learning person

This either means:

You are a person who is learning from a machine or....

You are a learning machine who is now referring to itself as a person! You also get excited about probabilities and you are posting on /.

A.I. has gone too far...

## Re: (Score:3, Funny)

You are a learning machine who is now referring to itself as a person! You also get excited about probabilities and you are posting on /. A.I. has gone too far...

On the plus side, it sounds like the robot revolution is going to be stymied for the same reason as my productivity. Destroy all humans! After I refresh /. one more time...

## Yes it is analogue. (Score:2)

If it uses analogue signals internally then it's an analog computer, whatever those signals may represent at a higher level, in the same way that a DSP is just as digital as a crypto chip even though the binary data is used for different things.

## Re: (Score:2)

why? can you elaborate?

## Re: (Score:3, Funny)

If you're into the concept of fuzzy logic, then I strongly suggest reading Aldiss' Barefoot in the Head if you've not already done so.

I also recommend not reading it.