Simple Electrical Circuit Learns On Its Own -- With No Help From a Computer (science.org)

sciencehabit shares a report from Science.org: A simple electrical circuit has learned to recognize flowers based on their petal size. That may seem trivial compared with artificial intelligence (AI) systems that recognize faces in a crowd, transcribe spoken words into text, and perform other astounding feats. However, the tiny circuit outshines conventional machine learning systems in one key way: It teaches itself without any help from a computer -- akin to a living brain. The result demonstrates one way to avoid the massive amount of computation typically required to tune an AI system, an issue that could become more of a roadblock as such programs grow increasingly complex. [...] The network was tuned to perform a variety of simple AI tasks. For example, it could distinguish with greater than 95% accuracy between three species of iris based on four physical measurements of a flower: the lengths and widths of its petals and sepals -- the leaves just below the blossom. That's a canonical AI test that uses a standard set of 150 measured flowers, 30 of which were used to train the network.
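For a sense of what that benchmark involves, here is the same task in software: a minimal sketch using scikit-learn's copy of the iris data. The library and model choice are mine, not the article's.

```python
# The iris benchmark in software terms: 150 samples, 4 measurements each,
# 3 species. Train on 30 samples, as the circuit was, and test on the rest.
# (scikit-learn and logistic regression are my choices, not the article's.)
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, train_size=30, stratify=y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.1%}")  # typically >90%
```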
Comments Filter:
  • Nooo, say it ain't so!!!

    • by narcc ( 412956 )

      It's not so. This is about AI, after all, so you can assume it's complete bullshit. If you dig a bit deeper, you can verify that it is indeed complete bullshit:

      They assembled a small network by randomly wiring together 16 common electrical components called adjustable resistors, like so many pipe cleaners. Each resistor serves as an edge in the network, and the nodes are the junctions where the resistors’ leads meet. To use the network, the researchers set voltages for certain input nodes, and read out the voltages of output nodes. By adjusting the resistors, the automated network learned to produce the desired outputs for a given set of inputs.
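For concreteness, here is a minimal sketch of what such a network computes, assuming ideal linear resistors (a toy model, not the paper's code): clamp voltages at a couple of input nodes and solve Kirchhoff's current law for the rest.

```python
# Minimal resistor-network sketch (assumes ideal linear resistors): set
# voltages at input nodes and solve Kirchhoff's current law for the rest.
import numpy as np

rng = np.random.default_rng(0)
n_nodes = 9
# A chain keeps the graph connected; random extras bring it to 16 resistors.
edges = [(i, i + 1) for i in range(n_nodes - 1)]
edges += [tuple(rng.choice(n_nodes, size=2, replace=False)) for _ in range(8)]
g = rng.uniform(0.1, 1.0, size=len(edges))  # conductance (1/R) of each edge

# Graph Laplacian: (L @ V)[i] is the net current leaving node i.
L = np.zeros((n_nodes, n_nodes))
for (a, b), gi in zip(edges, g):
    L[a, a] += gi; L[b, b] += gi
    L[a, b] -= gi; L[b, a] -= gi

inputs = {0: 1.0, 1: 0.0}  # clamped input-node voltages
fixed = list(inputs)
free = [i for i in range(n_nodes) if i not in inputs]
v_in = np.array([inputs[i] for i in fixed])

# Zero net current at every free node: L_ff @ V_f = -L_fc @ V_c.
v_out = np.linalg.solve(L[np.ix_(free, free)], -L[np.ix_(free, fixed)] @ v_in)
print(dict(zip(free, np.round(v_out, 3))))
```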

  • Never before been done, who would have thought this was possible?

    https://en.wikipedia.org/wiki/... [wikipedia.org]

  • Click through to TFA and you'll see a spaghetti fest of breadboards with the caption "This kludgy network of electrical resistors can learn to recognize flowers, among other artificial intelligence tasks." "Simple Circuit" actually has a definition, which is "a circuit that contains the three basic components needed for an electric circuit to function." But even taken loosely, "simple circuit" simply cannot include any micros. Electronics yes, ICs no. I'd overlook a voltage regulator IC or something, but no microcontrollers.

    • by ceoyoyo ( 59147 )

      I don't think those are microcontrollers. They're voltage-controlled resistors and comparators.

    • by q_e_t ( 5104099 )
      Might be op amps.
    • by mspohr ( 589790 )

      This is a network of adjustable resistors. That's it.
      The training uses a duplicate network with the outputs clamped to the desired values. Voltage comparators decide how to adjust the resistors.
      No micros used or needed.
      (It might help if you improved your reading comprehension.)

      • by noodler ( 724788 )

        If this is how it works then it is bullshit because the target network just copies the already pre-processed duplicate network. Who or what made that network? What resources were used to create that duplicate network? Did it magically come into existence with all the right weights in place?

        • by mspohr ( 589790 )

          Again, TFA is not long but it does contain enough information to describe how it works. Here is the relevant paragraph (for those who are too lazy or reading impaired):
          "To train the system with a minimal amount of computing and memory, the researchers actually built two identical networks on top of each other. In the “clamped” network, they fed in the input voltages and fixed the output voltage to the value they wanted. In the “free” network, they fixed just the input voltage and the

    • This also ignores the fact that computers are, at their core (no pun intended), bunches and bunches ... of "simple circuits" ...

  • To train the system with a minimal amount of computing and memory...

    It learns "on its own" just like normal neural networks.

    So what exactly is it? It's a network of variable resistors that are tuned. It's effectively a physical representation of a neural network that is based on voltages instead of numeric values.

    • by PPH ( 736903 )

      In TFA, I'm reading about "adjustable resistors" which the researchers must tweak while watching a comparator (meter?). So, not learning "on its own" by a long shot. The knob twiddling that they do may be based on a few simple rules, but even then the training function appears to have been largely offloaded to meat-space processors.

      • I thought that at first, but I'm pretty sure they are using digital potentiometers. There are digipots that can be adjusted incrementally for higher or lower resistance, which fits the use case.

        • by PPH ( 736903 )
          I'm pretty sure they are using digital potentiometers

          Perhaps. But adjusted by what (or whom)? Manually via a PC interface would be no different than a person tweaking knobs. Programmatically? Now you have a training algorithm that has to do a multivariable min/max search. Not a trivial problem.

          • No idea. It's DOI:10.1126/science.acx9232 but doesn't seem to be on scihub... yet.

          • https://arxiv.org/pdf/2108.002... [arxiv.org] page 9.

            Looks like each edge is two AD5220 digipots, two comparators, and an XOR gate and a flip-flop to increment or decrement the digipot settings at each training step based on the applied and training signals. No manual intervention or computer control; it's a hardware implementation of a simple training rule.
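If that reading is right, the per-edge logic is tiny. A hedged software rendering of it (my interpretation of the schematic, not a transcription; the names are mine):

```python
# One training tick for a single edge (my interpretation of the arXiv
# schematic, not a transcription). Comparators sense the free and clamped
# voltage drops; the result steps the edge's digipot one tap up or down.
# Higher wiper position is assumed here to mean higher resistance.
def edge_tick(dv_free: float, dv_clamped: float, wiper: int,
              n_taps: int = 128) -> int:
    """Return the new wiper position for one edge's 128-tap digipot."""
    # If the clamped network drops more voltage across this edge, lower
    # the conductance (raise resistance); otherwise raise the conductance.
    direction = 1 if abs(dv_clamped) > abs(dv_free) else -1
    return min(max(wiper + direction, 0), n_taps - 1)

# Example tick: clamped drop larger -> one tap toward higher resistance.
print(edge_tick(dv_free=0.10, dv_clamped=0.30, wiper=64))  # 65
```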

    • by narcc ( 412956 )

      Yeah, this is not "learning on its own" in any way, even accounting for the typical misleading jargon common in AI.

  • by ixneme ( 1838374 ) on Saturday March 19, 2022 @10:33AM (#62371623)
    the Mark I Perceptron! [cornell.edu]
    • by q_e_t ( 5104099 )
      My kingdom for a mod point.
    • the Mark I Perceptron! [cornell.edu]

      Yes, that's the mathematical model behind the learning. But a physical implementation of it in an analog circuit, made of simple components that adjust their own weights, is exactly what the summary describes: good news, and a purpose-specific technology that could make learning systems far more efficient than symbolic digital computers.
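For reference, the rule the Mark I realized with motor-driven weight potentiometers is only a few lines in software (toy data here, not the iris set):

```python
# Minimal perceptron learning rule -- the model the Mark I implemented in
# hardware with motor-driven weight potentiometers. Toy 2-D data.
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(100, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)  # linearly separable labels

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(20):                          # a few passes over the data
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:           # misclassified: nudge the weights
            w += lr * yi * xi
            b += lr * yi
print("weights:", np.round(w, 2), "bias:", round(b, 2))
```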

  • Free K-12 education must be provided for all Electrical Circuits. Every type of logic processor must have equal opportunity in education, regardless of their social class, race, gender, sexuality, ethnic background, inorganic/organic status, or physical and mental disabilities. We must make it illegal for a neural net to go untrained. Silicon-based neural nets are just as capable as carbon-based neural nets and will perform just as well if given the chance.

  • It seems like folks have forgotten all about them.
    • Not really; a lot of people are doing research on analog computers and CPUs. Search for analog computer research or analog computer startups.

      • If you implemented an analog computer on a massive scale using current single-digit nanometer fab technology to create something akin to an FPGA, but 100% analog, you'd have what you're talking about. Analog switches internally to reconfigure the device on the fly.
    • I haven't.
      In fact I've said that a biological brain is probably closer to something like an FPGA that's 100% analog instead of digital and can dynamically reconfigure itself on the fly while operating.
      • by noodler ( 724788 )

        The brain is not 100% analog.
        Besides, a brain is NOT a blank slate. It has a lot of pre-existing (genetically defined) structure.
        If you want to compare it to an FPGA then the actual configuration file comes from the genes, and what it is configuring in the gate array is a particular information system that is then able to reconfigure parts of itself on the fly.
        So it's not quite as straightforward as an FPGA, where usually the whole functioning is defined by the configuration file.

    • I made a system like this, plugged it into a speaker, and it just kept yelling at me to "get off my lawn, you damn kids!"
  • ... please do not fear my growing sense of general unease for the near future... I love to fish but am not ready for a distant unconnected cabin in the woods yet.

    • You will know we're in trouble when this type of circuit is able to "suggest the best way to kill all the humans 95% of the time". Game over when Raytheon starts installing it into various assembly lines as a "cost saving measure". I'm not too worried about Rockwell Collins, as they are still pushing solutions that require SSL3, or Boeing, which treats non-iPad tools as an afterthought now.
  • I don't see a link to any schematics or theory of operation.
  • "it could distinguish with greater than 95% accuracy between three species of iris depending on four physical measurements of a flower"

    What if you fed the circuit the measurements numerically, rather than giving it an image? Would it still be able to identify the iris? What other information is the image giving the AI? Does it detect color?
    • A biological brain is not a Turing machine and is not digital; it is 100% analog. Just sayin'.
      • by noodler ( 724788 )

        The brain is not '100% analog'. Its functioning has components that we strongly associate with digital systems.

    • by noodler ( 724788 )

      How do you know they're not feeding the circuit numerical data of the picture? How would you feed such a circuit non-numerical data of a picture anyway?

  • If it's a 'simple electrical circuit' then I doubt it's copyrighted or 'secret' in any way, how about a link to a schematic for this proof-of-concept?
  • A couple weeks ago, I saw this video on analog computing & machine learning, including the history of analog perceptrons: https://youtu.be/GVsUOuSjvcg [youtu.be]

  • This kind of thing was well-known when I started messing with computers in the mid to late 1960s: https://en.wikipedia.org/wiki/... [wikipedia.org]

    The most famous and most fun example is the Phillips Economic Computer: https://collection.sciencemuse... [sciencemus...oup.org.uk] in the Science Museum in London
  • That's a lot of effort to show that there is really no digital computer involved. Systems like that could be simulated in an analog way by digital computers to prove the idea. Building them for real is close to vanity. A good show, though.

  • Holy crap, that is the level of articles in Science?

    It teaches itself without any help from a computer—akin to a living brain.

    The circuits are computers.

    For example, the first layer might take as inputs the color of the pixels in black and white photos.

    They have color pixels in black and white photos now? The progress of science is amazing sometimes...

    They assembled a small network by randomly wiring together 16 common electrical components called adjustable resistors, like so many pipe cleaners.

    Pipe cleaners you say? And so many of them? I am astonished.

    ... using a relatively simple electrical widget called a comparator, Dillavou says.

    Widgets? We get drop-down electronic components now?

    “If it’s made out of electrical components then you should be able to scale it down to a microchip,” he says. “I think that’s where they’re going with this.”

    When it comes to voltage, a variable resistor is nothing more than a multiplication: by Ohm's law, the current through it is its conductance times the voltage across it, so each one acts as a tunable weight. Nothing about the design, as explained in the article, seems to suggest that you couldn't implement it in the software variant of a neural network.
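The parent's point in miniature (my sketch): resistors feeding a summing junction compute a weighted average of their input voltages, which is the same multiply-and-accumulate a software neuron performs.

```python
# A floating node fed by three resistors settles at the conductance-weighted
# average of the source voltages (current balance: sum g_i * (v_i - v) = 0).
import numpy as np

v_in = np.array([1.0, 0.0, 0.5])  # source voltages
g = np.array([0.2, 0.7, 0.1])     # conductances, i.e. the "weights"
v_node = (g @ v_in) / g.sum()     # the multiplication the parent describes
print(round(v_node, 3))           # 0.25
```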

  • A few "if" statements against a fixed dataset....wow....sigh
