
MIT Creates Chip to Model Synapses

MrSeb writes with this excerpt from an Extreme Tech article: "With 400 transistors and standard CMOS manufacturing techniques, a group of MIT researchers have created the first computer chip that mimics the analog, ion-based communication in a synapse between two neurons. Scientists and engineers have tried to fashion brain-like neural networks before, but transistor-transistor logic is fundamentally digital — and the brain is completely analog. Neurons do not suddenly flip from '0' to '1' — they can occupy an almost-infinite scale of analog, in-between values. You can approximate the analog function of synapses by using fuzzy logic (and by ladling on more processors), but that approach only goes so far. MIT's chip is dedicated to modeling every biological caveat in a single synapse. 'We now have a way to capture each and every ionic process that's going on in a neuron,' says Chi-Sang Poon, an MIT researcher who worked on the project. The next step? Scaling up the number of synapses and building specific parts of the brain, such as our visual processing or motor control systems. The long-term goal would be to provide bionic components that augment or replace parts of the human physiology, perhaps in blind or crippled people — and, of course, artificial intelligence. With current state-of-the-art technology it takes hours or days to simulate a simple brain circuit. With MIT's brain chip, the simulation is faster than the biological system itself."
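For a sense of scale on the software side, here is a minimal sketch of what simulating a single chemical synapse numerically looks like - a generic textbook exponential-decay conductance model driving a leaky membrane, not the MIT group's circuit, with every constant below chosen purely for illustration:

import numpy as np

dt = 1e-5                      # time step, seconds (10 microseconds)
T = 0.1                        # total simulated time, seconds
steps = int(T / dt)

E_leak, E_syn = -70e-3, 0.0    # reversal potentials, volts
g_leak, g_max = 10e-9, 5e-9    # leak conductance and peak synaptic conductance, siemens
C_m = 200e-12                  # membrane capacitance, farads
tau_syn = 5e-3                 # synaptic decay time constant, seconds

spike_steps = {int(0.02 / dt), int(0.05 / dt)}   # presynaptic spikes at 20 ms and 50 ms

V = E_leak
g_syn = 0.0
trace = np.empty(steps)

for i in range(steps):
    if i in spike_steps:
        g_syn += g_max                                   # a presynaptic spike opens channels
    g_syn -= dt * g_syn / tau_syn                        # channels close exponentially
    I = g_leak * (E_leak - V) + g_syn * (E_syn - V)      # total ionic current
    V += dt * I / C_m                                    # integrate the membrane voltage
    trace[i] = V

print(f"peak depolarization: {1e3 * (trace.max() - E_leak):.2f} mV above rest")

Each simulated 100 ms costs 10,000 update steps for this one synapse; multiply that by billions of synapses and it is easy to see why software simulation of even a simple circuit takes hours, while a dedicated analog circuit simply evolves in real time.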


  • by Robert Zenz ( 1680268 ) on Wednesday November 16, 2011 @06:29AM (#38071660) Homepage

    The problem is not providing such components, nor getting them to work like the original, nor getting them into your head. The real problem I see is interfacing with the rest of the brain.

    Because, let's face it, that's something every coder knows: interfacing with, working on, and supporting legacy systems just sucks.

    • by Eternauta3k ( 680157 ) on Wednesday November 16, 2011 @06:34AM (#38071682) Homepage Journal
      Not just with the brain, but also with itself. I heard the brain is ridiculously well interconnected.
    • by The Creator ( 4611 ) on Wednesday November 16, 2011 @06:34AM (#38071684) Homepage Journal

      Due to their incompatibility with newer systems, meat bags are now obsolete.

      • by mikael_j ( 106439 ) on Wednesday November 16, 2011 @06:39AM (#38071710)

        I'm sure someone will build an interface for it, and then there will be an open source driver within days.

        If not that then let's at least hope our robotic overlords have it in their perfectly synchronized hearts to backport some of the major features...

    • I think getting them to "work like the original" is the problem, actually. That part covers interfacing all by itself. The brain is very highly interconnected in 3D... and we don't have great 3D chip fabrication yet.

      • You don't need 3D chip fabrication to do 3D interconnects. The Connection Machine processors were interconnected in 8 dimensions, IIRC. Each node had 8 connections to its neighbors in a hypercube.

    • I have my doubts (Score:3, Insightful)

      by Anonymous Coward

      getting them to work like the original

      Is this really something that we could do in the foreseeable future? My understanding is that the brain programs itself (or we program it, if you like) during the first years of our lives (5 to 7), for the most part. An empty new 'brain part' would act just like some parts of the brain do after a stroke, I suspect, meaning that it would take years and years to (re)train it.

      Similarly, children who grew up alone with animals, with little or no interaction with other humans (there have been some cases), are never able

      • Creating an artificial human brain is too ethically loaded to even be considered in university research. They are more likely to try to get it to play a flight simulator since that's what someone did with a rat brain and they could compare their results, making for interesting data.

        Slashdot did however already welcome the flying rat overlords.

        • Ethically loaded? How? I don't see how the brain would be suffering? Or are they worried about skynet?

          • Re:I have my doubts (Score:5, Interesting)

            by jpapon ( 1877296 ) on Wednesday November 16, 2011 @08:20AM (#38072106) Journal
            You don't see how it's ethically loaded? Really?

            Would the artificial brain have rights? If you wiped its artificial neurons, would it be murder? If you gave it control of a physical robot arm and it hurt someone, how and to what extent could you "punish" it? The ethical questions are virtually endless when you start to play "god". I would think that would be obvious.

            • Ethically controversial more than loaded. It is your creation, why should you not have the right to wipe it?

              • by adam.dorsey ( 957024 ) on Wednesday November 16, 2011 @08:40AM (#38072224)

                My (hypothetical) baby is my (and my fiancee's) creation, why should I not have the right to "wipe" it?

                • A hypothetical baby is created through normal (hopefully) natural biological process. Any AI is created through application of intelligence. Thus I don't find the analogy sufficiently good to base a decision on. It is unethical to kill the baby, but that does not imply it is unethical to wipe the AI. Though, why would you?

                  • Re:I have my doubts (Score:4, Interesting)

                    by amck ( 34780 ) on Wednesday November 16, 2011 @09:53AM (#38072702) Homepage

                    How about: does it suffer?
                     Does creating a "human" inside a device where it can presumably sense, but has no limbs or autonomy, constitute torture? Can you turn it off?

                    Why is "being natural" a defining answer to these questions?

                    • How does it know what it is missing?

                      No Human is owned - that would be a violation of rights. But a computer for example (no matter how complex) is hardware which can be owned. Its software is merely a state of that machine which can also be owned. If I want to change the state of my software on my hardware, how can you say I'm ethically wrong? An AI I make is mine in a way no other intelligence can be. I can not own a person, neither can I own the state of their brain. But the state of the conditions inside

                     • No Human is owned - that would be a violation of rights.

                       That is (almost) true now, but the age of slavery was not that long ago.

                       And the only reason owning* a human would be a "violation of rights" is that those rights have been granted by humans with human laws. And those laws can be changed, so it is easy to imagine a future where humans can again own other humans (not that I hope this happens, but I can imagine that it might).

                      The entire concept of "Ownership" is a human created thing based on l

                    • by Toonol ( 1057698 )
                      Why does a human have rights, as opposed to a rock?

                      If the attributes that give humans rights also exist in another object, that object should have recognized rights as well. I think most people would agree that it's self-awareness, perception, and reasoning that are the basis of rights (with some exceptions), not heredity or species.
                     • So, you're saying I can't own a self-aware computer?

                       I can own the hardware. I can own the software it runs, but I can't own the unit as a whole - i.e. the machine is more than the sum of its parts.

                       Ownership of a human is a different matter. You can't practically control all inputs. You can't practically control an initial state. You can't practically rewrite the code for an improvement. Owning a human in the way you could own an AI is impractical in the extreme. I don't think the analogy between "Artificia

                     • Except that no human created another human in the way you design an AI. If you are not religious, then no one created anyone. So, you're comparing apples and oranges here. It is not about ancestry, it is about engineering. Design. Software. Arguably a human brain is vaguely analogous to a computer (not really, but let's run with it a bit), but we cannot do with a human brain what we can do with an AI. We can't design it or control all its I/O and initial conditions.

                  • by jpapon ( 1877296 )

                    A hypothetical baby is created through normal (hopefully) natural biological process. Any AI is created through application of intelligence.

                    I would argue that "application of intelligence" is a "natural biological process". We (and our brains) are, after all, creations of nature.

                    I would also point out that a baby created through in vitro fertilization is also, by definition, an "application of intelligence", and yet should also be treated as equal to a "natural" baby.

                     • You're missing the point. We did not, for example, define even the initial state of the baby's brain - we do not own the software state (if you want to use that poor analogy). Neither did we completely define its hardware. It is a human being - not of our design (whether or not we were designed is not relevant here).

                      With an AI, we designed the hardware. We designed the software. We defined the initial conditions. We defined the inputs. So, clearly there is a difference. The AI, in essence is the sum of the i

                    • by jpapon ( 1877296 ) on Wednesday November 16, 2011 @10:42AM (#38073078) Journal

                      The AI, in essence is the sum of the inputs of its designers. Therefore they should decide what to do with the AI.

                      I don't see how that follows. Just because you created something doesn't mean you should always have the power to destroy it.

                       Neither did we completely define its hardware.

                       You seem to be saying that the degree to which something is designed by its creator determines whether or not they can destroy it. I find that completely irrelevant. If it is wrong to kill a human, then it is wrong to kill something else that has the intelligence of a human. It doesn't matter who created it or designed it, or to what degree they did so. If it has human-level intelligence, then it should possess human-level rights.

                     • If you kill a human they are gone. An AI may be re-creatable. Also, human level intelligence does not imply human level morality or ethics. Or is it right in your view to impose these on an AI? If we can impose things at will upon an AI, how is that better than just admitting the AI is "Artificial" and thus we can do what we like with it? How do you know it would even fear death?

                    • by jpapon ( 1877296 ) on Wednesday November 16, 2011 @11:42AM (#38073738) Journal

                      If you kill a human they are gone.

                      Gone in the sense of "not here". We have no way of proving anything beyond that.

                      An AI may be re-creatable.

                      We don't know one way or the other, so it's not really relevant. Besides, human consciousness might also be re-creatable. Can you say with 100% certainty that it is completely impossible to make an exact copy of the complete state of a human's neural network?

                      Also, human level intelligence does not imply human level morality or ethics

                      I would say that's exactly what it implies. I guess it depends on WHY you think killing humans is unethical, but killing insects, mice, cows, etc is fine. I say it's because of human intelligence. I can't figure out why you think so.

                    • by Belial6 ( 794905 )
                      You are missing the point. The fact that you are debating this at all is proof of the OP's assertion.
                    • We don't know one way or the other, so it's not really relevant. Besides, human consciousness might also be re-creatable. Can you say with 100% certainty that it is completely impossible to make an exact copy of the complete state of a human's neural network?

                       Yes. Actually. At this time we cannot make a copy. So, yes, we can't restore the state of a human. Neither can we exactly reconstruct a human's hardware if we break it. If this changes, do you honestly think laws won't change with time? If I murder someone, knowing full well you'll just "respawn" them, is it as bad as murder now? Should the punishment be the same? I

                       Killing humans is not unethical in many places. In America, for example, it is considered acceptable for a number of reasons, including thei

                     • Proof that it is controversial, not loaded. It seems pretty simple to me.

                  • by Dr_Barnowl ( 709838 ) on Wednesday November 16, 2011 @10:10AM (#38072850)

                    Application of intelligence is a natural biological process too, since the mind is running in a biochemical substrate (until the AI is working..)

                    You're arguably more responsible for the AI than you are for the baby - it's possible to produce a baby without understanding what you are doing. You don't make an AI accidentally on a drunken prom date.

                    The baby isn't even sentient until it reaches a certain level of development.

                    So why do we value the child over the computer? Because we are biased towards humans? I'm not saying this is wrong, just saying it's not defensible from the purely intellectual point of view - if they are both sentient and have an imperative to survive, defending the destruction of the artificial sentience because it's easy and free of consequence is in the same ball park as shooting indigenous tribesmen because "they're only darkies".

                     • Of course it's defensible. The child has potential. An AI, if erased, can be recreated by supplying identical inputs and initial conditions. The child cannot. This much is obvious. If I erase my AI, it isn't ended in the same way as if I erase a child. Why does everyone forget the implications of the "Artificial" part of "AI"? Too much sci-fi?

                  • A hypothetical baby is created through normal (hopefully) natural biological process. Any AI is created through application of intelligence. Thus I don't find the analogy sufficiently good to base a decision on. It is unethical to kill the baby, but that does not imply it is unethical to wipe the AI. Though, why would you?

                    Well, if I developed such a thing as an artificial brain, it would have tens of thousands of image saves at various points in its development, would occasionally develop undesirable behavior and I would want to go back to previous revisions and try again - is that unethical?

                     At what point does the AI become self-determining as to whether or not it wants to go back to a previous version? If I want to continue development, am I obligated to keep the current copy running while I try something different from an

                • by Anonymous Coward on Wednesday November 16, 2011 @09:00AM (#38072346)

                  You'll wipe your baby way more often than you'd want to.

                • Many people believe you do have that right.
                • No, actually your hypothetical baby isn't your creation. It's a natural result of your inborn programming. If you engineer self-replicating cells, are the replications the cells' creations or your creations? They are your creations, obviously.

                  Your baby is the creation of evolution and/or a deity/intelligent designer. So the question becomes not whether you have the right to kill your baby, but whether god does.

              • Isn't this the argument used by religious nut jobs on Jihad or Crusade, burning heretics at the stake or stoning infidels to death? "God created them, then God told me to destroy them. Who am I to question the will of the creator or the authority of his holy scriptures?"

                Likewise, if my daddy owned slaves and the children of those slaves, and if my daddy bequeathed those slaves to me in his will at his death, then those slaves and children of those slaves become my property. Who has the right to deny me o

                 • No. No one is telling anyone to do anything. I am doing what I want with the hardware I own. If my hardware kills me, that is my problem. You could call it suicide.

              • by Toonol ( 1057698 )
                It is your creation, why should you not have the right to wipe it?

                Think through that a little more.
      • by jpapon ( 1877296 )
        The trick is that while it may take a while to train it, you only have to do it once. Then you can simply copy it as many times as you want.

        Also, training would be significantly faster than in an actual human brain, since the connections are faster and you can simply train it using recorded data as input. No need to have it "actually" go through the teaching scenarios.

        • by Belial6 ( 794905 )
          You're assuming that it would be easier to write out the state of one of these brains than it would be to do it with a biological brain. That is a big assumption.
      • AFAIK we don't know enough about how the brain works to pre-program such components, and it would need to be strongly tuned to the destination brain; otherwise it won't work very well, or at all.

        It's true that throwing a bunch of neuron simulators into a pot won't automatically do anything, until you figure out how to program it. But advances in hardware and programming are quite tightly coupled - you make a new machine, then you spend a lot of time figuring out how to get the most out of it, until you fi

      • Even so, I can see some medical uses for this, for people with disabilities. Though nothing like what you see in 'Ghost in the Shell'.

        I see another 50 years of research with this and still not getting very far in terms of replicating complex brain function.

        Complex neural interfaces are being developed for things like vision for the blind, and of course cochlear implants - for the most part, the existing wetware adapts and learns to interpret the signals from the machine.

        I do see researchers playing with these and demonstrating some cool proofs of concept, interesting control systems built out of a handful of neurons, etc. but something co

    • by ledow ( 319597 ) on Wednesday November 16, 2011 @06:52AM (#38071764) Homepage

      I think the REAL problem is that even the smallest brains have several billion neurons, each with tens of thousands of connections to other neurons. This chip simulates ONE such connection.

      That's a PCB-routing problem that you REALLY don't want, and way outside the scale of anything that we build (it's like every computer on the planet having 10,000 direct Ethernet connections to nearby computers - no switches, hubs, routers, etc. - in order to simulate something approaching a small mouse's brain - not only a cabling and routing nightmare but where the hell do you plug it all in?). Not only that, but a real brain learns by breaking and creating connections all the time.

      The analog nature of the neuron isn't really the key to making "artificial brains" - the problem is simply scale. We will never be able to produce enough of these chips and tie them together well enough to produce anything conventionally interesting (and certainly nothing that we could actually analyse any better than the brain of any other species). If we did, it would be unmanageably unprogrammable and unpredictable. If it did anything interesting on its own, we'd never understand how or why it did that.

      And I think the claim that they know EVERYTHING about how a neuron works (at least one part of it) is optimistic at best.

      • by Narcocide ( 102829 ) on Wednesday November 16, 2011 @07:01AM (#38071792) Homepage

        I agree with everything about this statement except the word "never."

        Never is a pretty bold word. It puts you in a pretty gutsy mindset; one that isn't entirely productive to rational scientific analysis. The word "never" is pretty commonly seen in the company of "famous last words."

        • Never is the right word to use in the appropriate context.

          We will never be able to produce enough of these chips and tie them together well enough to produce anything conventionally interesting

          (Emphasis Mine.) We'll probably eventually find a way to model a human brain, but these chips are just a very small step in that direction.

          (I only felt the need to comment as I thought the same thing as you did until I went back and re-read it.)

      • by six025 ( 714064 )

        If we did, it would be unmanageably unprogrammable and unpredictable.

        Should we just get it over with now, and call her EVE? ;-)

        Peace,
        Andy.

      • Never??? (Score:3, Insightful)

        by mangu ( 126918 )

        The analog nature of the neuron isn't really the key to making "artificial brains" - the problem is simply scale.

        Agreed.

        We will never be able to produce enough of these chips and tie them together well enough to produce anything conventionally interesting

        Shall we cue here all the "never" predictions of the last century? By the year 1900 there were lots of experts predicting we would never have flying machines; by 1950 experts were predicting the whole world would never need more than a dozen computers.

        Moore's law, or should we say Moore's phenomenon, has been showing how much electronic devices scale in the long run.

      • by vipw ( 228 )

        They state that it takes 400 transistors. Intel fabs a 2-billion-transistor chip. I don't think that really means that 5 million of these artificial neurons could be put on one die, but I'm pretty sure that they aren't planning to put millions of chips onto a board.

        With wafer-scale integration, and some long-range signal propagation to emulate 3D, there's reason to think that fairly large systems can be emulated.

      • by Kjella ( 173770 )

        But we don't have to build our side of the system like that; we only need enough neuron simulators on the surface, run them through an A/D circuit, do it our way, then D/A it back into the brain. I'm pretty sure neurons, like everything else, have a resolution limit.

      • by ultranova ( 717540 ) on Wednesday November 16, 2011 @08:45AM (#38072264)

        That's a PCB-routing problem that you REALLY don't want, and way outside the scale of anything that we build (it's like every computer on the planet having 10,000 direct Ethernet connections to nearby computers - no switches, hubs, routers, etc. - in order to simulate something approaching a small mouse's brain - not only a cabling and routing nightmare but where the hell do you plug it all in?). Not only that, but a real brain learns by breaking and creating connections all the time.

        A single neuron-neuron connection has very low bandwidth, in effect transferring a single number (activation level) a few hundred times a second. Even if timing is important, you can simply accompany the level with a timestamp. A single 100 Mb/s Ethernet connection is easily able to handle all those 10,000 connections.

        Also, most of those 10,000 connections are to nearby neurons, presumably because long-distance communication involves the same latency and energy penalties in the brain as it does anywhere else. There are efficient methods to auto-cluster a P2P network so as to minimize the total length of connections - Freenet does this, for example - so you could, in theory, run a distributed neural simulator even on standard Internet technology. In fact, I suspect that it could be possible to achieve human-level or higher artificial intelligence with existing computer power using this method right now.

        So, who wants to start HAL@Home ?-)
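A quick back-of-envelope check of the bandwidth claim above, in Python; the update rate and encoding are assumptions, not measured figures:

# Rough bandwidth estimate for one neuron's incoming connections
connections = 10_000        # connections per neuron, as in the comment above
rate_hz = 300               # "a few hundred times a second"
bytes_per_event = 6         # say, a 2-byte activation level plus a 4-byte timestamp

mbps = connections * rate_hz * bytes_per_event * 8 / 1e6
print(f"~{mbps:.0f} Mb/s per neuron")   # ~144 Mb/s

So the estimate lands in the same order of magnitude as a 100 Mb/s link; whether it fits comfortably depends on how compactly the events are encoded.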

      • Re: (Score:3, Interesting)

        by Anonymous Coward

        I think the REAL problem is that even the smallest brains have several billion neurons, each with tens of thousands of connections to other neurons. This chip simulates ONE such connection.

        I give it 30 years at most.

        Let's say several = 5 billion. Times 5,000 (ten thousand divided by two so as to not double-count both ends) = 25,000 billion connections. Let's assume 400 transistors per connection, as in this study; that comes out to 10,000,000 billion transistors, not counting the possibility of time-multiplexed busses as mentioned in a comment below (as biological neurons are slow compared to transistors).

        According to Wikipedia [slashdot.org], a Xilinx Virtex-7 FPGA (more similar to an array of neurons than a CPU) has 6.8 bill
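Writing the same back-of-envelope out in Python (the figures are the commenter's assumptions plus the publicly quoted Virtex-7 transistor count):

# Back-of-envelope transistor count for a brain-scale array of these synapse circuits
neurons = 5e9                                # "several billion"
connections_each = 10_000
synapses = neurons * connections_each / 2    # halve to avoid double-counting both ends
transistors = synapses * 400                 # 400 transistors per synapse, as in this chip

fpga_transistors = 6.8e9                     # roughly a Virtex-7-class FPGA
print(f"synapses:         {synapses:.1e}")                         # 2.5e13
print(f"transistors:      {transistors:.1e}")                      # 1.0e16
print(f"FPGA-equivalents: {transistors / fpga_transistors:.1e}")   # ~1.5e6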

        • by Toonol ( 1057698 )
          I think you're right that the hardware capabilities will progress fast enough to match or beat the human brain in the next few decades. I doubt, though, that our sophistication of software design will keep pace. Although I think AI is completely possible, it will be a lot further into the future than that. I also think that any attempts to 'program' it will invariably fail; it needs to be evolved, somehow. It's too complex of a system to design.
      • One simulated connection is enough to study them: can we simplify the implementation while maintaining the emergent properties? We'll only know if we study them. Also, if they are similar enough and fast enough, one hardware-based connection is enough to speed up a software simulation of as many as you #define in your code.

        The problem is indeed of scale, but link count isn't the only way to solve it.

      • by oh2 ( 520684 )
        The brain isn't a monolithic structure; it has many different parts that are tied together by the brainstem. It's very possible that we can eventually start making decent copies of these parts and construct a brainstem analog to tie them together. The interesting thing is that they have an artificial neuron; the rest, as they say, is engineering.
    • by agrif ( 960591 ) on Wednesday November 16, 2011 @08:30AM (#38072166) Homepage

      "Your species is obsolete," the ghost comments smugly. "Inappropriately adapted to artificial realities. Poorly optimized circuitry, excessively complex low-bandwidth sensors, messily global variables..."

      Accelerando [jus.uio.no], by Charles Stross

    • by Hentes ( 2461350 )

      The neural network is much more than just the brain. Repairing the nerves of paralysed people is a much easier task than interfacing with the brain.

    • True. There are already lots of people working on that though!
    • The problem is not providing such components, nor getting them to work like the original

      I don't know, I see a major problem simulating dendritic growth and pruning... even if you have 10,000 of these things on a chip to emulate a tiny little brain structure, how do you emulate the growth and pruning of the interconnections? If you build a cross-switch matrix, it gets big - at least O(N^2)...
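For what it's worth, the O(N^2) point is easy to put numbers on; the 10,000-unit figure is just the one assumed in the comment above:

# Crosspoint count for a full crossbar over N synapse circuits
units = 10_000
crosspoints = units * (units - 1) // 2    # one switch per possible pair
print(f"{crosspoints:.2e} crosspoints")   # ~5.0e7, growing as O(N^2)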

    • Actually, the great thing about interfacing with the brain is that the brain adapts to the interface, so the interface doesn't have to be adapted to the brain, per se. (By which I mean it's not terribly important where/how an input is connected, though obviously it must provide a signal that the brain can process). It's not only possible to do something like re-route an optic nerve to the auditory complex, it's possible to add additional inputs, both natural and man-made, and the brain will learn to process

  • by somersault ( 912633 ) on Wednesday November 16, 2011 @06:29AM (#38071664) Homepage Journal

    With MIT's brain chip, the simulation is faster than the biological system itself.

    Uh-oh.

    • Seems like you were thinking just what I was thinking: great, just enough time to enjoy a decade or two of flying cars built and designed entirely by machines before the machines realize we're all bad drivers and must be permanently restrained for our own well-being.

    • by Pegasus ( 13291 ) on Wednesday November 16, 2011 @07:26AM (#38071878) Homepage

      It may be faster, but what about performance per watt? You know, the whole brain does everything on only 40-50 watts. How does this MIT product compare to brains in this area?

      • Wow, I never thought about it that way. A human brain consumes less power than a modern CPU (say, 100W).

        Plus, the brain does its own glucose burning and that's counted in the 50W. To compare fairly, you'd need to take into account the PSU efficiency, electrical grid losses and power plant efficiency in the CPU power. If we say 50% efficiency overall, that means 200W for the CPU.

        Just wow.
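The arithmetic above, spelled out (the 50% wall-plug efficiency is just the assumption made in the comment):

# Rough wall-plug comparison: brain vs. CPU
brain_watts = 50
cpu_watts = 100
plant_to_socket_efficiency = 0.5   # PSU + grid + generation losses, lumped together

print(f"CPU, measured at the power plant: ~{cpu_watts / plant_to_socket_efficiency:.0f} W")
print(f"Brain, fuel included (per the comment above): ~{brain_watts} W")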

        • I've got another one for you. The muscles needed to interact with the simulation can run mostly or entirely on vegetables!

  • by Narcocide ( 102829 ) on Wednesday November 16, 2011 @06:31AM (#38071670) Homepage

    Have you ever stood and stared at it, marveled at its beauty, its genius? Billions of people just living out their lives, oblivious. Did you know that the first Matrix was designed to be a perfect human world, where none suffered, where everyone would be happy? It was a disaster. No one would accept the program, entire crops were lost. Some believed we lacked the programming language to describe your perfect world, but I believe that, as a species, human beings define their reality through misery and suffering. The perfect world was a dream that your primitive cerebrum kept trying to wake up from. Which is why the Matrix was redesigned to this, the peak of your civilization. I say your civilization, because as soon as we started thinking for you it really became our civilization, which is of course what this is all about. Evolution, Morpheus, evolution. Like the dinosaur. Look out that window. You've had your time. The future is our world, Morpheus. The future is our time.

    -- Agent Smith (The Matrix)

  • by Anonymous Coward

    "You cannot step twice in the same river". That means we are constantly changing. Over sufficiently long period, old person has all but died while new one has gradually taken over.

    Using that reasoning, we could replace biological brains bit by bit over a long period of time, without killing the subject. In the end, if successful, the person would have no biological brain left. He'd have an entirely digital brain. Backups, copies, performance tuning, higher clock rates, more memory, more devices, ... and immortality.

    • Makes me wonder. I assume the immortal machine would think it WAS the subject. But would the subject think he was the machine?
      • There are no two distinct bodies. The GP is proposing piecewise replacement (probably after defects; I wouldn't get them any other way) and improvement. That is not the conventional uploading you see around, and it "feels" way different.

  • by simoncpu was here ( 1601629 ) on Wednesday November 16, 2011 @06:46AM (#38071740)
    1. Build a farm of brain chips
    2. Expose the brain chips via an API
    3. Build a cloud service for brain chips
    4. Market as Brain Power on Demand(tm)!
    5. ???
    6. Profit!!
  • ... they used analogue electronics.

    And the radical new technology is what? This was done in the 1960s. Sure, there may be a bit more accuracy and finesse with this version, but really, cutting edge this is not.

    • by Anonymous Coward on Wednesday November 16, 2011 @08:08AM (#38072046)

      I think you have to credit the MIT researchers with knowing better than you where the cutting edge is, and the writers of the article for addressing the 1960s work in this paragraph:

      'Previously, researchers had built circuits that could simulate the firing of an action potential, but not all of the circumstances that produce the potentials. “If you really want to mimic brain function realistically, you have to do more than just spiking. You have to capture the intracellular processes that are ion channel-based,” Poon says.'

      More than just spiking; from my AI lectures years ago I recall that the McCulloch-Pitts neuron model was a spiking model (excitatory inputs, inhibitory inputs, thresholds, etc.).

  • ...and war was beginning.

  • by Ceriel Nosforit ( 682174 ) on Wednesday November 16, 2011 @07:33AM (#38071906)

    The way I remember it is that a transistor stops a much larger current from passing through until a signal is put on the gate in the middle. Then the current that passes through is in proportion to the signal strength.

    The circuit becomes digital when we decide that only very small and very large voltages count as 0s and 1s.
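A toy illustration of that point - the device response is continuous, and "digital" is only a convention about which output voltages we agree to call 0 and 1. The transfer curve below is a made-up smooth inverter, not a real device model:

import math

def inverter_out(v_in, v_dd=5.0, gain=8.0, v_mid=2.5):
    """Smooth, analog transfer curve of an idealized inverting stage (illustrative only)."""
    return v_dd / (1.0 + math.exp(gain * (v_in - v_mid)))

def as_logic(v, v_low=0.8, v_high=3.5):
    """Read the analog voltage digitally; anything between the thresholds is undefined."""
    return 0 if v <= v_low else 1 if v >= v_high else None

for v_in in (0.0, 1.0, 2.5, 4.0, 5.0):
    v_out = inverter_out(v_in)
    print(f"in={v_in:.1f} V  out={v_out:.2f} V  logic={as_logic(v_out)}")

The middle of the curve is exactly the analog region that digital design works hard to avoid, and that a chip like this deliberately exploits.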

    • Transistors are analog. Transistor-transistor logic is digital.

  • Very cool and all, but why does the summary call this a "computer chip"?

  • So we build a synapse, then link it to more, and more, and before you know it we have a "brain".

    We could call it a Positronic Brain, sounds catchy, and marketable.

    And we really should enforce some rules to prevent a 'skynet' occurrence, not too many rules though,
    I'm sure we could distill the logic down to three simple rules ............

  • A hundred bucks says a woman and her son, a black dude, and a juice head will break into the lab, blow it up, and throw the chip into a vat of molten metal.

    All that work for nothing.

  • by wdef ( 1050680 ) on Wednesday November 16, 2011 @08:43AM (#38072248)
    I might be out of date, but: the event itself requires the neuron's membrane potential to reach a threshold, and then the neuron fires. It either fires or it does not. On or off. But the process of reaching the firing threshold is analog, since the physical geometry of the neuron and of its afferent neural feeds (inputs) determines at what point the neuron will fire. Neurotransmitter quantities in the synapse are also modifiable, e.g. by drugs and by natural up/down regulation of receptors, enzymes or re-uptake inhibition. So a neuron is an analog computer with an on/off output.
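A minimal leaky integrate-and-fire sketch of that description - graded, continuously decaying input builds up until the all-or-nothing event. This is a generic textbook toy, not the MIT circuit, with arbitrary units throughout:

import random

v = 0.0            # membrane potential, arbitrary units
threshold = 1.0    # firing threshold
leak = 0.98        # analog decay toward rest at each step

random.seed(1)
for t in range(200):
    v = v * leak + random.uniform(0.0, 0.06)   # graded synaptic input accumulates
    if v >= threshold:                         # ...until the all-or-nothing event
        print(f"spike at step {t}")
        v = 0.0                                # reset after firing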
  • by Pollux ( 102520 ) <speter@[ ]ata.net.eg ['ted' in gap]> on Wednesday November 16, 2011 @08:56AM (#38072330) Journal

    MIT’s chip — all 400 transistors (pictured below) — is dedicated to modeling every biological caveat in a single synapse. “We now have a way to capture each and every ionic process that’s going on in a neuron,” says Chi-Sang Poon, an MIT researcher who worked on the project.

    Just because you can finally recognize the letters of the alphabet doesn't mean you can speak the language.

    • by leptogenesis ( 1305483 ) on Wednesday November 16, 2011 @09:44AM (#38072644)
      Mod parent up. The linked article (and the MIT press release) are misleading. The closest thing I can find to a peer-reviewed publication by Poon has an abstract here (no, I can't find anything through the official EMBC channels--what a disgustingly closed conference):

      https://embs.papercept.net/conferences/scripts/abstract.pl?ConfID=14&Number=2328 [papercept.net]

      And there's some background on Poon's goals here:

      http://www.frontiersin.org/Journal/FullText.aspx?ART_DOI=10.3389/fnins.2011.00108&name=neuromorphic_engineering [frontiersin.org]

      The goals seem to me to be about studying specific theories about information propagation across synapses as well as studying brain-computer interfaces. They never mention building a model of the entire visual system or any serious artificial intelligence. We have only the vaguest theories about how the visual system works beyond V1, and essentially no idea what properties of the synapse are important to make it happen.

      About two years ago, while I was still doing my undergraduate research in neural modeling, I recall that the particular theory they're talking about--spike-timing dependent plasticity [wikipedia.org]--was quite controversial. It might have been simply an artifact of the way the NMDA receptor worked. Nobody seemed to have any cohesive theory for why it would lead to intelligence or learning, other than vague references to the well-established Hebb rule.
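For readers who haven't met it, pair-based STDP is a very small rule; here is a sketch with illustrative parameters (the time constants and amplitudes below are typical textbook values, not anything specific to Poon's work):

import math

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Weight change for one pre/post spike pair; times in milliseconds."""
    dt = t_post - t_pre
    if dt > 0:     # pre before post -> potentiation
        return a_plus * math.exp(-dt / tau_ms)
    if dt < 0:     # post before pre -> depression
        return -a_minus * math.exp(dt / tau_ms)
    return 0.0

for dt in (-40, -10, -1, 1, 10, 40):
    print(f"dt={dt:+4d} ms  dw={stdp_dw(0.0, dt):+.4f}")

Whether that rule, implemented in analog silicon, adds up to learning in any interesting sense is exactly the controversy described above.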

      Nor is it anything new. Remember this [slashdot.org] story from ages ago? Remember how well that returned on its promises of creating a real brain? That was spike-timing dependent plasticity as well, and unsurprisingly it never did anything resembling thought.

      Slashdot, can we please stop posting stories about people trying to make brains on chips and post stories about real AI research?
      • "Slashdot, can we please stop posting stories about people trying to make brains on chips and post stories about real AI research?"

        Those are two very different research areas, and both quite interesting. I'd vote for the continuation of the status quo, and having stories about both. But I'm willing to let the sensationalist summaries go.

  • If it's analog, then its behaviour is unique per chip, and so anything you build from them will be subtly unique. So "software" would behave differently depending on the unit it was running on. You thought 4 or 5 versions of Linux were tricky to support...

  • For those who RTFA (or at least clicked): anybody else see his eyes in that picture and wonder if Data had been smoking pot?
    • by Jeng ( 926980 )

      With contacts like those I would think your eyes would almost always be irritated.

  • He made hybrid analog-digital circuits to emulate retina processing. There was some talk of self-adjusting cameras using these, but I've lost track.
    • They built these out of circuits before computers got cheap enough. They had "memory" which implemented training-by-example. Huge debate in the A.I. labs about whether they were significant. But they seem to return in some new form every decade.
  • I had to play with the beowulf meme. :)

    But really, what good is modelling a single neuron when you'd need billions or trillions of these chips clustered together to mimic an actual human brain?

  • by tigre ( 178245 ) on Wednesday November 16, 2011 @12:42PM (#38074602)

    The summary is way off. Transistors are analog devices, so TTL may behave digitally but that's only because a lot of work was done to make that happen. All that's happening here is taking analog devices with certain characteristics and using them to model an analog process with certain other characteristics. No small feat mind you.

  • Don't mind me, posting to undo accidental moderation
