MIT Creates Chip to Model Synapses

MrSeb writes with this excerpt from an ExtremeTech article: "With 400 transistors and standard CMOS manufacturing techniques, a group of MIT researchers has created the first computer chip that mimics the analog, ion-based communication in a synapse between two neurons. Scientists and engineers have tried to fashion brain-like neural networks before, but transistor-transistor logic is fundamentally digital, and the brain is completely analog. Neurons do not suddenly flip from '0' to '1'; they can occupy an almost-infinite range of analog, in-between values. You can approximate the analog function of synapses by using fuzzy logic (and by ladling on more processors), but that approach only goes so far. MIT's chip is dedicated to modeling every biological caveat in a single synapse. 'We now have a way to capture each and every ionic process that's going on in a neuron,' says Chi-Sang Poon, an MIT researcher who worked on the project. The next step? Scaling up the number of synapses and building specific parts of the brain, such as our visual processing or motor control systems. The long-term goal would be to provide bionic components that augment or replace parts of human physiology, perhaps in blind or crippled people, and, of course, artificial intelligence. With current state-of-the-art technology it takes hours or days to simulate a simple brain circuit. With MIT's brain chip, the simulation is faster than the biological system itself."
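To make the summary's "analog, in-between values" concrete, here is a minimal sketch of a textbook exponential-decay synapse driving a leaky integrator, in plain Python. This is not the MIT chip's circuit, and the time constants and weight are illustrative assumptions; it just shows synaptic state moving along a continuum instead of flipping between 0 and 1.

    # A textbook exponential-decay synapse driving a leaky membrane.
    # NOT the MIT chip's circuit -- the time constants and weight below
    # are illustrative assumptions. The point: g and v move along a
    # continuum of values rather than flipping between 0 and 1.

    DT = 0.1e-3       # simulation step: 0.1 ms
    TAU_SYN = 5e-3    # synaptic conductance decay time constant: 5 ms
    TAU_MEM = 20e-3   # membrane time constant: 20 ms
    W = 0.5           # conductance jump per presynaptic spike (arbitrary units)

    def simulate(spike_times, t_end=0.1):
        g, v, t = 0.0, 0.0, 0.0
        spikes = sorted(spike_times)
        trace = []
        while t < t_end:
            while spikes and spikes[0] <= t:   # presynaptic spike arrives
                g += W
                spikes.pop(0)
            g -= (g / TAU_SYN) * DT            # conductance decays smoothly
            v += ((-v + g) / TAU_MEM) * DT     # membrane leaks and charges
            trace.append((t, g, v))
            t += DT
        return trace

    for t, g, v in simulate([0.010, 0.012, 0.030])[::100]:
        print(f"t={t*1e3:5.1f} ms  g={g:.3f}  v={v:.3f}")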
  • by Narcocide ( 102829 ) on Wednesday November 16, 2011 @06:31AM (#38071670) Homepage

    Have you ever stood and stared at it, marveled at its beauty, its genius? Billions of people just living out their lives, oblivious. Did you know that the first Matrix was designed to be a perfect human world, where none suffered, where everyone would be happy? It was a disaster. No one would accept the program, entire crops were lost. Some believed we lacked the programming language to describe your perfect world, but I believe that, as a species, human beings define their reality through misery and suffering. The perfect world was a dream that your primitive cerebrum kept trying to wake up from. Which is why the Matrix was redesigned to this, the peak of your civilization. I say your civilization, because as soon as we started thinking for you it really became our civilization, which is of course what this is all about. Evolution, Morpheus, evolution. Like the dinosaur. Look out that window. You've had your time. The future is our world, Morpheus. The future is our time.

    -- Agent Smith (The Matrix)

  • by Anonymous Coward on Wednesday November 16, 2011 @06:41AM (#38071720)

    "You cannot step twice in the same river". That means we are constantly changing. Over sufficiently long period, old person has all but died while new one has gradually taken over.

    Using that reasoning, we could replace a biological brain bit by bit over a long period of time, without killing the subject. In the end, if successful, the person would have no biological brain left; it would be all digital. Backups, copies, performance tuning, higher clock rates, more memory, more devices... and immortality.

  • by ledow ( 319597 ) on Wednesday November 16, 2011 @06:52AM (#38071764) Homepage

    I think the REAL problem is that even the smallest brains have several billion neurons, each with tens of thousands of connections to other neurons. This chip simulates ONE such connection.

    That's a PCB-routing problem that you REALLY don't want, and way outside the scale of anything that we build (it's like every computer on the planet having 10,000 direct Ethernet connections to nearby computers - no switches, hubs, routers, etc. - just to simulate something approaching a small mouse's brain; not only a cabling and routing nightmare, but where the hell do you plug it all in?). Not only that, but a real brain learns by breaking and creating connections all the time.

    The analog nature of the neuron isn't really the key to making "artificial brains" - the problem is simply scale. We will never be able to produce enough of these chips and tie them together well enough to produce anything conventionally interesting (and certainly nothing that we could actually analyse any better than the brain of any other species). If we did, it would be unmanageable, unprogrammable, and unpredictable. If it did anything interesting on its own, we'd never understand how or why it did it.

    And I think the claim that they know EVERYTHING about how a neuron works (or at least about one part of it) is optimistic at best.

  • by Pegasus ( 13291 ) on Wednesday November 16, 2011 @07:26AM (#38071878) Homepage

    It may be faster, but what about performance per watt? You know, the whole brain does everything on roughly 20 watts. How does this MIT product compare to brains in that area?

  • by Ceriel Nosforit ( 682174 ) on Wednesday November 16, 2011 @07:33AM (#38071906)

    The way I remember it is that a transistor blocks a much larger current from passing through until a signal is put on the gate in the middle. Then the current that passes through is proportional to the signal strength.

    The circuit only becomes digital when we decide that only very small and very large voltages count as 0s and 1s.
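    A toy illustration of that point in Python: the output below is a continuous function of the gate signal, and "digital" is just the convention of only accepting values near the rails as 0s and 1s. The logistic curve is a made-up smooth response, not a real device model.

        import math

        VDD = 1.0  # supply rail

        def analog_out(v_gate):
            # Continuous response: every in-between output value can occur.
            # (A made-up logistic curve, not a real transistor model.)
            return VDD / (1.0 + math.exp(-12.0 * (v_gate - VDD / 2)))

        def digital_read(v):
            # The digital convention: only voltages near the rails count.
            if v < 0.2 * VDD:
                return 0
            if v > 0.8 * VDD:
                return 1
            return None  # forbidden zone: neither a legal 0 nor a legal 1

        for vg in (0.0, 0.25, 0.5, 0.75, 1.0):
            v = analog_out(vg)
            print(f"gate={vg:.2f} V -> out={v:.3f} V -> digital={digital_read(v)}")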

  • Re:I have my doubts (Score:5, Interesting)

    by jpapon ( 1877296 ) on Wednesday November 16, 2011 @08:20AM (#38072106) Journal
    You don't see how it's ethically loaded? Really?

    Would the artificial brain have rights? If you wiped its artificial neurons, would it be murder? If you give it control of a physical robot arm and it hurt someone, how and to what extent could you "punish" it? The ethical questions are virtually endless when you start to play "god". I would think that would be obvious.

  • by ultranova ( 717540 ) on Wednesday November 16, 2011 @08:45AM (#38072264)

    That's a PCB-routing problem that you REALLY don't want, and way outside the scale of anything that we build (it's like every computer on the planet having 10,000 direct Ethernet connections to nearby computers - no switches, hubs, routers, etc. - just to simulate something approaching a small mouse's brain; not only a cabling and routing nightmare, but where the hell do you plug it all in?). Not only that, but a real brain learns by breaking and creating connections all the time.

    A single neuron-to-neuron connection has very low bandwidth, in effect transferring a single number (an activation level) a few hundred times a second. Even if timing is important, you can simply accompany the level with a timestamp. A single 100 Mb/s Ethernet connection could plausibly handle all 10,000 of those connections.

    Also, most of those 10,000 connections are to nearby neurons, presumably because long-distance communication incurs the same latency and energy penalties in the brain as it does anywhere else. There are efficient methods to auto-cluster a P2P network so as to minimize the total length of connections (Freenet does this, for example), so you could, in theory, run a distributed neural simulator even on standard Internet technology. In fact, I suspect it could be possible to achieve human-level or higher artificial intelligence with existing computing power using this method right now.

    So, who wants to start HAL@Home ?-)
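    A quick back-of-envelope check of that bandwidth claim; the update rate and message sizes below are assumed round numbers, not measured figures:

        LINK_MBPS = 100

        def load_mbps(connections, rate_hz, bits_per_msg):
            # Total traffic for one simulated neuron's connections.
            return connections * rate_hz * bits_per_msg / 1e6

        # Bare 32-bit activation levels, "a few hundred" updates/s each:
        print(load_mbps(10_000, 200, 32))       # 64.0 Mb/s -- fits in 100 Mb/s
        # Add a 64-bit timestamp to every message and it no longer fits:
        print(load_mbps(10_000, 200, 32 + 64))  # 192.0 Mb/s -- needs gigabit

    So the estimate holds for bare activation levels; with per-message timestamps you would want a gigabit link, or to batch many updates into each packet. Average firing rates in real brains are typically cited as well below 200 Hz, which gives even more headroom.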

  • Re:I have my doubts (Score:4, Interesting)

    by amck ( 34780 ) on Wednesday November 16, 2011 @09:53AM (#38072702) Homepage

    How about: does it suffer?
    Does creating a "human" inside a device where it can presumably sense, but has no limbs or autonomy, constitute torture? Can you turn it off?

    Why is "being natural" a defining answer to these questions?

  • by Anonymous Coward on Wednesday November 16, 2011 @10:08AM (#38072824)

    I think the REAL problem is that even the smallest brains have several billion neurons, each with tens of thousands of connections to other neurons. This chip simulates ONE such connection.

    I give it 30 years at most.

    Let's say several = 5 billion. Times 5,000 (10 thousand divided by two, so as not to double-count both ends) = 25,000 billion connections. Let's assume 400 transistors per connection, as in this study; that comes out to 10,000,000 billion transistors, not counting the possibility of time-multiplexed busses as mentioned in a comment below (biological neurons are slow compared to transistors).

    According to Wikipedia, a Xilinx Virtex-7 FPGA (more similar to an array of neurons than a CPU is) has 6.8 billion transistors. This means we need 1,470,588 times more transistors. That's less than 2^20.5, or about 20.5 doublings, which according to Moore's law works out to roughly 30 years.

    So even without multiprocessing, simplifications of this design, and other easy improvements, it should be possible to put this on some sort of chip in 30 years' time.

    Never say never. 2042 will be the year of the brain on the desktop! :)
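    The arithmetic checks out. Here is the same estimate as a short Python calculation, with every input taken from the parent's own assumptions (5 billion neurons, 10,000 synapses each counted once, 400 transistors per synapse, one doubling every 18 months):

        import math

        neurons = 5e9                       # "several billion"
        connections = neurons * 10_000 / 2  # 2.5e13: each synapse counted once
        transistors = connections * 400     # 1e16 transistors in total
        virtex7 = 6.8e9                     # transistors in a Xilinx Virtex-7

        ratio = transistors / virtex7       # ~1,470,588x short of the target
        doublings = math.log2(ratio)        # ~20.5 doublings
        years = doublings * 1.5             # 18-month doubling period
        print(f"{ratio:,.0f}x -> {doublings:.1f} doublings -> ~{years:.0f} years")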

"More software projects have gone awry for lack of calendar time than for all other causes combined." -- Fred Brooks, Jr., _The Mythical Man Month_
