Simple Electrical Circuit Learns On Its Own -- With No Help From a Computer (science.org)
sciencehabit shares a report from Science.org: A simple electrical circuit has learned to recognize flowers based on their petal size. That may seem trivial compared with artificial intelligence (AI) systems that recognize faces in a crowd, transcribe spoken words into text, and perform other astounding feats. However, the tiny circuit outshines conventional machine learning systems in one key way: It teaches itself without any help from a computer -- akin to a living brain. The result demonstrates one way to avoid the massive amount of computation typically required to tune an AI system, an issue that could become more of a roadblock as such programs grow increasingly complex. [...] The network was tuned to perform a variety of simple AI tasks. For example, it could distinguish with greater than 95% accuracy between three species of iris depending on four physical measurements of a flower: the lengths and widths of its petals and sepals -- the leaves just below the blossom. That's a canonical AI test that uses a standard set of 150 images, 30 of which were used to train the network.
Re: (Score:3)
The Science article is terrible. Neural networks were originally built in hardware, using potentiometers just like this thing. Unlike this thing, they didn't have integrated circuits.
I *think* what's actually new(ish) here is that they've come up with a simple rule to calculate the adjustment for the pots. It's only hinted at in the article though, so you'd have to go look up the original paper to see. From the (very bad) description it doesn't sound all that earth shattering though, possibly a variation on
Re: just how simple is "simple" here? (Score:2)
The reason for it being single layer is that it's much more complex to perform learning with multiple layers. With a single layer you can just adjust the weights by how wrong you were for each bit of training data. With multiple layers you need back propagation, or something cleverer.
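The single-layer rule really is that short. A toy Python sketch (the data and learning rate are made up, just to show the shape of the update):

    import numpy as np

    # Single-layer perceptron-style learning: nudge each weight in
    # proportion to how wrong the output was, for each training example.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 4))                       # 100 examples, 4 features
    y = (X @ np.array([1.0, -2.0, 0.5, 1.5]) > 0).astype(float)

    w = np.zeros(4)
    lr = 0.1
    for _ in range(50):                                 # a few passes over the data
        for xi, yi in zip(X, y):
            pred = float(xi @ w > 0)                    # threshold activation
            w += lr * (yi - pred) * xi                  # "how wrong" times the input

    print(np.mean((X @ w > 0) == y))                    # training accuracy, ~1.0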
Re: (Score:2)
Nope. As I said, the very first ANN was a hardware implementation using manually adjusted potentiometers, back in the 50s.
Yes, that's something that they didn't do in the 50s, unless you count some poor sucker who had to twiddle all the knobs. However, the standard training methods are simulations of pretty basic analog processes, so I'd be surprised if n
Re: (Score:2)
I think the main advancement was the ability to just use analog voltage instead of digital multiplication for all the node junctions.
That can never be the answer to anything, because digital is analog under the hood. You can always just add a tiny bit of positive feedback to get your analog signal to prefer the ends of the range, which makes it effectively digital.
Also you can do math with analog op-amps just fine.
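The positive-feedback point is easy to see numerically: give the loop a gain just over 1 plus saturation, and any analog value snaps to one of the rails (toy numbers, obviously):

    # Gain slightly over 1 plus clipping gives two stable states at the rails.
    def settle(x, gain=1.1, lo=-1.0, hi=1.0, steps=200):
        for _ in range(steps):
            x = min(hi, max(lo, gain * x))              # amplify, then clip
        return x

    print(settle(0.01), settle(-0.01))                  # 1.0 -1.0: effectively digital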
Re: (Score:2)
The Science article is terrible. Neural networks were originally built in hardware, using potentiometers just like this thing. Unlike this thing, they didn't have integrated circuits.
Yes, I was a bit baffled, because this seems to be very old technology. But I haven't read the article, just the summary.
Re: (Score:2)
You're not missing much. Having thought about it, analog derivatives aren't terribly difficult to compute, so I don't think you'd have much trouble implementing a regular old backprop ANN using simple all-analog circuits either.
Re: (Score:2)
If you would bother to read TFA, you would learn that they made a simple network of adjustable resistors, applied input voltages, and read output voltages. (This is actually very similar to how a "wetware" NN works.)
Training used a second, identical network with both the inputs and the desired outputs held fixed. The circuit then adjusted the resistors to match.
Simple.
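Simple enough that the read-out step is just Kirchhoff's current law on a graph. A sketch in Python, with a topology and resistor values I made up (not the paper's actual network):

    import numpy as np

    # Read out a resistor network: clamp the input node voltages, then solve
    # Kirchhoff's current law for the free nodes. Edge list is (i, j, conductance);
    # the topology and values here are invented for illustration.
    edges = [(0, 2, 1.0), (1, 2, 2.0), (2, 3, 1.5), (0, 3, 0.5)]
    n = 4
    L = np.zeros((n, n))                                # conductance Laplacian
    for i, j, g in edges:
        L[i, i] += g; L[j, j] += g
        L[i, j] -= g; L[j, i] -= g

    clamped = {0: 1.0, 1: 0.0}                          # inputs held at fixed voltages
    free = [k for k in range(n) if k not in clamped]
    idx, vc = list(clamped), np.array(list(clamped.values()))

    # Zero net current at each free node: L_ff v_f = -L_fc v_c
    v_free = np.linalg.solve(L[np.ix_(free, free)], -L[np.ix_(free, idx)] @ vc)
    print(dict(zip(free, np.round(v_free, 3))))         # the output voltages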
Machine learning a boondoggle?! (Score:1)
Nooo, say it ain't so!!!
Re: (Score:2)
It's not so. This is about AI, after all, so you can assume it's complete bullshit. If you dig a bit deeper, you can verify that it is indeed complete bullshit:
They assembled a small network by randomly wiring together 16 common electrical components called adjustable resistors, like so many pipe cleaners. Each resistor serves as an edge in the network, and the nodes are the junctions where the resistors’ leads meet. To use the network, the researchers set voltages for certain input nodes, and read out the voltages of output nodes. By adjusting the resistors, the automated network learned to produce the desired outputs for a given set of inputs.
Totally new (Score:2)
Never before been done, who would have thought this was possible?
https://en.wikipedia.org/wiki/... [wikipedia.org]
lol "Simple Electrical Circuit" (Score:2, Interesting)
Click through to TFA and you'll see a spaghetti fest of breadboards with the caption "This kludgy network of electrical resistors can learn to recognize flowers, among other artificial intelligence tasks." "Simple Circuit" actually has a definition, which is "a circuit that contains the three basic components needed for an electric circuit to function." But even taken loosely, "simple circuit" simply cannot include any micros. Electronics yes, ICs no. I'd overlook a voltage regulator IC or something, but no
Re: (Score:2)
I don't think those are microcontrollers. They're voltage-controlled resistors and comparators.
Re: (Score:2)
This is a network of adjustable resistors. That's it.
The training uses a duplicate network with the resistors fixed to the desired output. Training uses voltage comparators to adjust the resistors.
No micros used or needed.
(It might help if you improved your reading comprehension.)
Re: (Score:2)
If this is how it works then it is bullshit because the target network just copies the already pre-processed duplicate network. Who or what made that network? What resources were used to create that duplicate network? Did it magically come into existence with all the right weights in place?
Re: (Score:3)
Again, TFA is not long but it does contain enough information to describe how it works. Here is the relevant paragraph (for those who are too lazy or reading impaired):
"To train the system with a minimal amount of computing and memory, the researchers actually built two identical networks on top of each other. In the “clamped” network, they fed in the input voltages and fixed the output voltage to the value they wanted. In the “free” network, they fixed just the input voltage and the
Re: (Score:2)
This also ignores the fact that computers are, at their core (no pun intended), bunches and bunches ... of "simple circuits" ...
Requires training, is neural network. (Score:2)
To train the system with a minimal amount of computing and memory...
It learns "on it's own" just like normal neural networks.
So what exactly is it? It's a network of variable resistors that are tuned. It's effectively a physical representation of a neural network that is based on voltages instead of numeric values.
Re: (Score:2)
In TFA, I'm reading about "adjustable resistors" which the researchers must tweak while watching a comparator (meter?). So, not learning "on its own" by a long shot. The knob twiddling that they do may be based on a few simple rules, but even then the training function appears to have been largely offloaded to meat-space processors.
Re: (Score:2)
I thought that at first, but I'm pretty sure they are using digital potentiometers. There are digipots that can be adjusted incrementally for higher or lower resistance, which fits the use case.
Re: (Score:2)
Perhaps. But adjusted by what (or whom)? Manually via a PC interface would be no different than a person tweaking knobs. Programmatically? Now you have a training algorithm that has to do a multi-variable min/max search. Not a trivial problem.
Re: (Score:2)
No idea. It's DOI:10.1126/science.acx9232 but doesn't seem to be on scihub... yet.
Re: (Score:2)
https://arxiv.org/pdf/2108.002... [arxiv.org] page 9.
Looks like each edge is two AD5220 digipots, two comparators, and an XOR gate and a flip-flop to increment or decrement the digipot settings at each training step based on the applied and training signals. No manual intervention or computer control; it's a hardware implementation of a simple training rule.
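Which, per edge and per training step, amounts to roughly this in software (the step size and sign convention are my guesses from that description):

    # One edge's decision per training step, as I read the preprint: compare
    # the squared voltage drop across the edge in the free vs. clamped network
    # (two comparators + XOR), then step the digipot one notch up or down.
    STEP = 1 / 128.0                                    # one notch of a digipot

    def edge_update(g, dv_free, dv_clamped):
        if dv_free**2 > dv_clamped**2:
            return g + STEP                             # flip-flop says increment
        return max(g - STEP, STEP)                      # or decrement, staying positive

    print(edge_update(0.5, dv_free=0.30, dv_clamped=0.25))   # 0.5078125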
Re: (Score:2)
Yeah, this is not "learning on its own" in any way, even accounting for the typical misleading jargon common in AI.
News? (Score:3)
Yes, news (Score:2)
the Mark I Perceptron! [cornell.edu]
Yes, that's the mathematical model behind the learning. But a physical implementation of it in an analog circuit, built from simple components that reconfigure their own weights, is exactly what the summary describes: genuine news, and a purpose-specific technology that could potentially be far more efficient at learning than symbolic digital computers.
Universal education, teach every neural net (Score:2)
Free K-12 education must be provided for all Electrical Circuits. Every type of logic processor must have equal opportunity in education, regardless of their social class, race, gender, sexuality, ethnic background, inorganic/organic status, or physical and mental disabilities. We must make it illegal for a neural net to go untrained. Silicon-based neural nets are just as capable as carbon-based neural nets and will perform just as well if given the chance.
Can anyone say, "analog computer"? (Score:2)
Re: Can anyone say, "analog computer"? (Score:2)
Not really; a lot of people are doing research on analog computers and CPUs. Google "analog computer research" or "analog computer startups".
Re: (Score:3)
In fact I've said that a biological brain is probably closer to something like an FPGA that's 100% analog instead of digital and can dynamically reconfigure itself on the fly while operating.
Re: (Score:2)
The brain is not 100% analog.
Besides, a brain is NOT a blank slate. It has a lot of pre-existing (genetically defined) structure.
If you want to compare it to an FPGA then the actual configuration file comes from the genes, and what it is configuring in the gate array is a particular information system that is then able to reconfigure parts of itself on the fly.
So it's not quite as straightforward as an FPGA, where usually the whole functioning is defined by the configuration file.
Please... (Score:2)
... please do not feed my growing sense of general unease for the near future... I love to fish but am not ready for a distant, unconnected cabin in the woods yet.
Schematics (Score:1)
Not AI (Score:2)
What if you fed the circuit the measurements numerically, rather than giving it an image? Would it still be able to identify the iris? What other information is the image giving the AI? Does it detect color?
Re: (Score:2)
The brain is not '100% analog'. Its functioning has components that we strongly associate with digital.
Re: (Score:2)
How do you know they're not feeding the circuit numerical data of the picture? How would you feed such a circuit non-numerical data of a picture anyway?
Link to a schematic, please? (Score:2)
Brief Tutor on Analog & Machine Learning (Score:1)
A couple weeks ago, I saw this video on analog computing & machine learning, including the history of analog perceptrons: https://youtu.be/GVsUOuSjvcg [youtu.be]
Analogue Computer (Score:2)
The most famous and most fun example is the Phillips Economics Computer: https://collection.sciencemuse... [sciencemus...oup.org.uk] in the Science Museum in London.
Would not simulation be sufficient? (Score:2)
That's a lot of effort to show off that there is really no digital computer involved. Systems like that could be simulated, in an "analog way," by digital computers to prove the idea. Building them for real is close to vanity. A good show, though.
Article poking my brain with an ice pick. (Score:2)
Holy crap, that is the level of articles in Science?
It teaches itself without any help from a computer—akin to a living brain.
The circuits are computers.
For example, the first layer might take as inputs the color of the pixels in black and white photos.
They have color pixels in black and white photos now? The progress of science is amazing sometimes...
They assembled a small network by randomly wiring together 16 common electrical components called adjustable resistors, like so many pipe cleaners.
Pipe cleaners you say? And so many of them? I am astonished.
... using a relatively simple electrical widget called a comparator, Dillavou says.
Widgets? We get drop down electronic components now?
“If it’s made out of electrical components then you should be able to scale it down to a microchip,” he says. “I think that’s where they’re going with this.”
When it comes to voltage, a variable resistor is nothing more than a multiplication. Nothing about the design, as explained in the article, seems to suggest that you couldn't implement it in the software variant of neu
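The multiplication point, in one line of Ohm's law (made-up numbers):

    # Ohm's law as multiply-accumulate: current through conductance g from a
    # node at voltage v is i = g * v, and currents sum at a junction, so one
    # node computes exactly a software neuron's weighted sum. Made-up values:
    voltages = [1.0, 0.0, 0.5]                          # input node voltages
    conductances = [0.2, 0.8, 0.4]                      # weights (adjustable resistors)
    print(sum(g * v for g, v in zip(conductances, voltages)))   # 0.4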
Yet another iris dataset model (Score:1)