MIT Creates Chip to Model Synapses

MrSeb writes with this excerpt from an Extreme Tech article: "With 400 transistors and standard CMOS manufacturing techniques, a group of MIT researchers have created the first computer chip that mimics the analog, ion-based communication in a synapse between two neurons. Scientists and engineers have tried to fashion brain-like neural networks before, but transistor-transistor logic is fundamentally digital — and the brain is completely analog. Neurons do not suddenly flip from '0' to '1' — they can occupy an almost-infinite scale of analog, in-between values. You can approximate the analog function of synapses by using fuzzy logic (and by ladling on more processors), but that approach only goes so far. MIT's chip is dedicated to modeling every biological caveat in a single synapse. 'We now have a way to capture each and every ionic process that's going on in a neuron,' says Chi-Sang Poon, an MIT researcher who worked on the project. The next step? Scaling up the number of synapses and building specific parts of the brain, such as our visual processing or motor control systems. The long-term goal would be to provide bionic components that augment or replace parts of the human physiology, perhaps in blind or crippled people — and, of course, artificial intelligence. With current state-of-the-art technology it takes hours or days to simulate a simple brain circuit. With MIT's brain chip, the simulation is faster than the biological system itself."
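To give a concrete sense of what "capturing each and every ionic process" means when it is done in software rather than in analog silicon, here is a minimal sketch of a conductance-based (Hodgkin-Huxley-style) neuron model, the kind of ion-channel description the article alludes to. This is not MIT's circuit or anything from their paper: the parameters are the standard textbook squid-axon values, and every name in the snippet is an illustrative choice. Each timestep requires several exponentials per neuron, which is one reason digital simulation of even small circuits is slow, and why letting analog transistor physics do the integration can outrun the biology.

```python
# A minimal sketch (NOT MIT's chip design) of a Hodgkin-Huxley-style
# single-compartment neuron, stepped with explicit Euler. All constants are
# the textbook squid-axon parameter set, not values from the MIT work.
import math

# Membrane and channel parameters (uF/cm^2, mS/cm^2, mV)
C_M = 1.0
G_NA, G_K, G_L = 120.0, 36.0, 0.3
E_NA, E_K, E_L = 50.0, -77.0, -54.387

def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
def beta_m(v):  return 4.0 * math.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
def beta_n(v):  return 0.125 * math.exp(-(v + 65.0) / 80.0)

def simulate(i_ext=10.0, t_end=50.0, dt=0.01):
    """Return (time, voltage) traces for a constant injected current (uA/cm^2)."""
    v, m, h, n = -65.0, 0.05, 0.6, 0.32          # resting state
    ts, vs = [], []
    for k in range(int(t_end / dt)):
        # Ionic currents -- the "each and every ionic process" part
        i_na = G_NA * m**3 * h * (v - E_NA)
        i_k  = G_K * n**4 * (v - E_K)
        i_l  = G_L * (v - E_L)
        # Gating-variable kinetics (continuous, analog-valued state)
        m += dt * (alpha_m(v) * (1.0 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1.0 - h) - beta_h(v) * h)
        n += dt * (alpha_n(v) * (1.0 - n) - beta_n(v) * n)
        # Membrane voltage update
        v += dt * (i_ext - i_na - i_k - i_l) / C_M
        ts.append(k * dt)
        vs.append(v)
    return ts, vs

if __name__ == "__main__":
    t, v = simulate()
    print("peak membrane voltage: %.1f mV" % max(v))
```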
  • by Robert Zenz ( 1680268 ) on Wednesday November 16, 2011 @06:29AM (#38071660) Homepage

    The problem is not providing such components, nor getting them to work like the original, nor getting them into your head. The real problem I see is interfacing with the rest of the brain.

    Because, let's face it, that's something every coder knows: interfacing with, working on, and supporting legacy systems just sucks.

  • by Eternauta3k ( 680157 ) on Wednesday November 16, 2011 @06:34AM (#38071682) Homepage Journal
    Not just with the brain, but also with itself. I heard the brain is ridiculously well interconnected.
  • I have my doubts (Score:3, Insightful)

    by Anonymous Coward on Wednesday November 16, 2011 @06:47AM (#38071744)

    get them to work like the original

    Is this really something that we could do in the foreseeable future? My understanding is that the brain programs itself (or we program it, if you like) mostly during the first years of our lives (roughly 5 to 7). I suspect an empty new 'brain part' would act much like parts of the brain do after a stroke, meaning it would take years and years to (re)train it.

    Similarly, children who grew up alone among animals, with little or no interaction with other humans (there have been such cases), are never able to learn to speak fluently, because that part of the brain never fully develops (i.e., it is never programmed).

    AFAIK we don't know enough about how the brain works to pre-program such components, and they would need to be strongly tuned to the destination brain; otherwise they won't work very well, or at all. We know about the lower-level stuff (neurons, synapses) and some things about the higher level (regions and general functions), but not much in between (though I'm not a specialist).

    Even so, I can see some medical uses for this, for people with disabilities. Though nothing like what you see in 'Ghost in the Shell'.

  • by Narcocide ( 102829 ) on Wednesday November 16, 2011 @07:01AM (#38071792) Homepage

    I agree with everything about this statement except the word "never."

    "Never" is a bold word. It puts you in a pretty gutsy mindset, one that isn't entirely conducive to rational scientific analysis. The word "never" is commonly seen in the company of "famous last words."

  • Never??? (Score:3, Insightful)

    by mangu ( 126918 ) on Wednesday November 16, 2011 @08:05AM (#38072028)

    The analog nature of the neuron isn't really the key to making "artificial brains" - the problem is simply scale.

    Agreed.

    We will never be able to produce enough of these chips and tie them together well enough to produce anything conventionally interesting

    Shall we cue up here all the "never" predictions of the last century? By the year 1900 there were lots of experts predicting we would never have flying machines; by 1950 experts were predicting the whole world would never need more than a dozen computers.

    Moore's law, or should we say Moore's phenomenon, has shown just how far electronic devices scale in the long run.

  • by Anonymous Coward on Wednesday November 16, 2011 @08:08AM (#38072046)

    I think you have to credit the MIT researchers with knowing better than you where the cutting edge is, and the writers of the article for including the 1960s in this paragraph:

    'Previously, researchers had built circuits that could simulate the firing of an action potential, but not all of the circumstances that produce the potentials. “If you really want to mimic brain function realistically, you have to do more than just spiking. You have to capture the intracellular processes that are ion channel-based,” Poon says.'

    More than just spiking: from my AI lectures years ago I recall that the McCulloch-Pitts neuron model was itself a spiking model (excitatory inputs, inhibitory inputs, thresholds, and so on).
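For reference, here is a minimal sketch of the McCulloch-Pitts unit the comment refers to, assuming the standard 1943 formulation (binary inputs, a fixed threshold, absolute inhibition); the function names and the logic-gate demo are illustrative choices, not notation from the original paper. It also shows how little such a unit shares with the ion-channel modeling Poon describes: there are no membrane dynamics at all, just a threshold over binary inputs.

```python
# Illustrative sketch of a McCulloch-Pitts threshold unit (assumed standard
# 1943 formulation); names and the gate demo below are made up for clarity.
def mcculloch_pitts(excitatory, inhibitory, threshold):
    """Return 1 if the unit fires, else 0.

    excitatory, inhibitory: iterables of 0/1 inputs.
    Any active inhibitory input vetoes firing (absolute inhibition).
    """
    if any(inhibitory):
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# Classic demonstration: logic gates built from threshold units.
AND = lambda a, b: mcculloch_pitts([a, b], [], threshold=2)
OR  = lambda a, b: mcculloch_pitts([a, b], [], threshold=1)
NOT = lambda a:    mcculloch_pitts([1], [a], threshold=1)  # bias input always on

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
assert NOT(0) == 1 and NOT(1) == 0
```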

  • by adam.dorsey ( 957024 ) on Wednesday November 16, 2011 @08:40AM (#38072224)

    My (hypothetical) baby is my (and my fiancée's) creation; why should I not have the right to "wipe" it?

  • by wdef ( 1050680 ) on Wednesday November 16, 2011 @08:43AM (#38072248)
    I might be out of date, but: the firing event requires the neuron's membrane potential to reach a threshold, and then the neuron fires. It either fires or it does not; on or off. But the process of reaching the firing threshold is analog, since the physical geometry of the neuron and of its afferent feeds (inputs) determines at what point the neuron will fire. Neurotransmitter quantities in the synapse are also modifiable, e.g., by drugs and by natural up/down regulation of receptors, enzymes, or re-uptake inhibition. So a neuron is an analog computer with an all-or-nothing output.
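A rough way to picture the "analog integration, all-or-nothing output" behaviour described above is a leaky integrate-and-fire model. This is a generic textbook sketch with arbitrary illustrative parameter values, not the MIT chip's dynamics or a physiological fit.

```python
# Minimal leaky integrate-and-fire sketch: the membrane potential integrates
# input continuously (the analog part), and the output is an all-or-nothing
# spike once a threshold is crossed. Parameter values are arbitrary choices.
def lif_spikes(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
               v_thresh=-50.0, v_reset=-70.0):
    """Return a list of 0/1 spike flags, one per timestep of input_current."""
    v = v_rest
    spikes = []
    for i in input_current:
        # Analog sub-threshold dynamics: leak toward rest plus driven input
        v += dt / tau * (-(v - v_rest) + i)
        if v >= v_thresh:          # all-or-nothing event
            spikes.append(1)
            v = v_reset            # reset after the spike
        else:
            spikes.append(0)
    return spikes

# A weak input never reaches threshold; a stronger one fires repeatedly.
print(sum(lif_spikes([10.0] * 1000)))   # 0 spikes
print(sum(lif_spikes([30.0] * 1000)))   # a regular spike train
```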
  • by Pollux ( 102520 ) on Wednesday November 16, 2011 @08:56AM (#38072330) Journal

    MIT’s chip — all 400 transistors (pictured below) — is dedicated to modeling every biological caveat in a single synapse. “We now have a way to capture each and every ionic process that’s going on in a neuron,” says Chi-Sang Poon, an MIT researcher who worked on the project.

    Just because you can finally recognize the letters of the alphabet doesn't mean you can speak the language.

  • by Dr_Barnowl ( 709838 ) on Wednesday November 16, 2011 @10:10AM (#38072850)

    Application of intelligence is a natural biological process too, since the mind is running on a biochemical substrate (until the AI is working...).

    You're arguably more responsible for the AI than you are for the baby - it's possible to produce a baby without understanding what you are doing. You don't make an AI accidentally on a drunken prom date.

    The baby isn't even sentient until it reaches a certain level of development.

    So why do we value the child over the computer? Because we are biased towards humans? I'm not saying this is wrong, just saying it's not defensible from the purely intellectual point of view - if they are both sentient and have an imperative to survive, defending the destruction of the artificial sentience because it's easy and free of consequence is in the same ball park as shooting indigenous tribesmen because "they're only darkies".

  • by jpapon ( 1877296 ) on Wednesday November 16, 2011 @10:42AM (#38073078) Journal

    The AI, in essence, is the sum of the inputs of its designers. Therefore they should decide what to do with the AI.

    I don't see how that follows. Just because you created something doesn't mean you should always have the power to destroy it.

    Neither did we completely define its hardware.

    You seem to be saying that the degree to which something is designed by its creator determines whether or not they can destroy it. I find that completely irrelevant. If it is wrong to kill a human, then it is wrong to kill something else that has the intelligence of a human. It doesn't matter who created it or designed it, or to what degree they did so. If it has human-level intelligence, then it should possess human-level rights.

  • by jpapon ( 1877296 ) on Wednesday November 16, 2011 @11:42AM (#38073738) Journal

    If you kill a human they are gone.

    Gone in the sense of "not here". We have no way of proving anything beyond that.

    An AI may be re-creatable.

    We don't know one way or the other, so it's not really relevant. Besides, human consciousness might also be re-creatable. Can you say with 100% certainty that it is completely impossible to make an exact copy of the complete state of a human's neural network?

    Also, human level intelligence does not imply human level morality or ethics

    I would say that's exactly what it implies. I guess it depends on WHY you think killing humans is unethical while killing insects, mice, cows, etc. is fine; I say it's because of human intelligence, and I can't figure out why you would think otherwise.

"The one charm of marriage is that it makes a life of deception a neccessity." - Oscar Wilde

Working...