MIT Creates Chip to Model Synapses
MrSeb writes with this excerpt from an Extreme Tech article: "With 400 transistors and standard CMOS manufacturing techniques, a group of MIT researchers have created the first computer chip that mimics the analog, ion-based communication in a synapse between two neurons. Scientists and engineers have tried to fashion brain-like neural networks before, but transistor-transistor logic is fundamentally digital — and the brain is completely analog. Neurons do not suddenly flip from '0' to '1' — they can occupy an almost-infinite scale of analog, in-between values. You can approximate the analog function of synapses by using fuzzy logic (and by ladling on more processors), but that approach only goes so far. MIT's chip is dedicated to modeling every biological caveat in a single synapse. 'We now have a way to capture each and every ionic process that's going on in a neuron,' says Chi-Sang Poon, an MIT researcher who worked on the project. The next step? Scaling up the number of synapses and building specific parts of the brain, such as our visual processing or motor control systems. The long-term goal would be to provide bionic components that augment or replace parts of the human physiology, perhaps in blind or crippled people — and, of course, artificial intelligence. With current state-of-the-art technology it takes hours or days to simulate a simple brain circuit. With MIT's brain chip, the simulation is faster than the biological system itself."
The Interface will be a problem. (Score:5, Insightful)
The problem is not providing such components, nor getting them to work like the original, nor getting them into your head. The real problem I see is interfacing with the rest of the brain.
Because, let's face it, that's something every coder knows: interfacing with, working on, and supporting legacy systems just sucks.
Re:The Interface will be a problem. (Score:4, Insightful)
Well it's obvious (Score:5, Funny)
Due to their incompatibility with newer systems, meat bags are now obsolete.
Re:Well it's obvious (Score:4, Funny)
I'm sure someone will build an interface for it, and then there will be an open source driver within days.
If not that then let's at least hope our robotic overlords have it in their perfectly synchronized hearts to backport some of the major features...
Re:The Interface will be a problem. (Score:3)
I think getting them to "work like the original" is actually the problem; that part covers interfacing all by itself. The brain is very highly interconnected in 3D, and we don't have great 3D chip fabrication yet.
Re:The Interface will be a problem. (Score:3)
You don't need 3D chip fabrication to do 3D interconnects. The Connection Machine processors were interconnected in 8 dimensions, IIRC. Each node had 8 connections to its neighbors in a hypercube.
I have my doubts (Score:3, Insightful)
get them to work like the original
Is this really something that we could do in the foreseeable future? My understanding is that the brain programs itself (or we program it, if you like) mostly during the first years of our lives (5 to 7). An empty new 'brain part' would act much like some parts of the brain do after a stroke, I suspect, meaning it would take years and years to (re)train it.
Similarly, children who grew up among animals alone, with little or no interaction with other humans (there have been such cases), are never able to learn to speak fluently, because that part of the brain never fully develops (i.e. is never programmed).
AFAIK we don't know enough about how the brain works to pre-program such components, and they would need to be strongly tuned to the destination brain; otherwise they won't work very well, or at all. We know about the lower-level stuff (neurons, synapses) and some things about the higher levels (regions and general functions), but not much in between (though I'm not a specialist).
Even so, I can see some medical uses for this, for people with disabilities. Though nothing like what you see in 'Ghost in the Shell'.
Re:I have my doubts (Score:3)
Creating an artificial human brain is too ethically loaded to even be considered in university research. They are more likely to try to get it to play a flight simulator since that's what someone did with a rat brain and they could compare their results, making for interesting data.
Slashdot did however already welcome the flying rat overlords.
Re:I have my doubts (Score:2)
Ethically loaded? How? I don't see how the brain would be suffering. Or are they worried about Skynet?
Re:I have my doubts (Score:5, Interesting)
Would the artificial brain have rights? If you wiped its artificial neurons, would it be murder? If you give it control of a physical robot arm and it hurt someone, how and to what extent could you "punish" it? The ethical questions are virtually endless when you start to play "god". I would think that would be obvious.
Re:I have my doubts (Score:2)
Ethically controversial more than loaded. It is your creation, why should you not have the right to wipe it?
Re:I have my doubts (Score:5, Insightful)
My (hypothetical) baby is my (and my fiancee's) creation, why should I not have the right to "wipe" it?
Re:I have my doubts (Score:2)
A hypothetical baby is created through a normal (hopefully) natural biological process. Any AI is created through the application of intelligence. Thus I don't find the analogy good enough to base a decision on. It is unethical to kill the baby, but that does not imply it is unethical to wipe the AI. Though, why would you?
Re:I have my doubts (Score:4, Interesting)
How about: does it suffer?
Does creating a "human" inside a device where it can presumably sense, but has no limbs or autonomy, constitute torture? Can you turn it off?
Why is "being natural" a defining answer to these questions?
Re:I have my doubts (Score:2)
How does it know what it is missing?
No human is owned - that would be a violation of rights. But a computer, for example (no matter how complex), is hardware, which can be owned. Its software is merely a state of that machine, which can also be owned. If I want to change the state of my software on my hardware, how can you say I'm ethically wrong? An AI I make is mine in a way no other intelligence can be. I cannot own a person, nor can I own the state of their brain. But the state of the conditions inside hardware that I own, I do own. I defined (at least initially) those conditions.
Re:I have my doubts (Score:2)
No human is owned - that would be a violation of rights.
That is (almost) true now, but the age of slavery is not that long ago.
And the only reason owning a human would be a "violation of rights" is that those rights have been granted by humans with human laws. And those laws can be changed, so it is easy to imagine a future where humans can again own other humans (not that I hope this happens, but I can imagine that it might).
The entire concept of "ownership" is a human-created thing based on laws made by humans, not some universally true natural law.
So it all comes back to: what is the motivation/reason we have laws which say you can't own other humans, and how many of those reasons would also be valid for a self-aware computer?
Re:I have my doubts (Score:2)
If the attributes that give humans rights also exist in another object, that object should have recognized rights as well. I think most people would agree that it's self-awareness, perception, and reasoning that are the basis of rights (with some exceptions), not heredity or species.
Re:I have my doubts (Score:2)
So, you're saying I can't own a self aware computer?
I can own the hardware. I can own the software it runs. But I can't own the unit as a whole - i.e. the machine is more than the sum of its parts.
Ownership of a human is a different matter. You can't practically control all inputs. You can't practically control an initial state. You can't practically rewrite the code for an improvement. Owning a human in the way you could own an AI is impractical in the extreme. I don't think the analogy between "artificial intelligence" and "intelligence" holds well enough. Thus, I would hesitate to ascribe the same rights to an AI as to a human. It may be intelligent, but it is artificially so.
Re:I have my doubts (Score:2)
Except that no human created another human in the way you design an AI. If you are not religious, then no one created anyone. So you're comparing apples and oranges here. It is not about ancestry; it is about engineering. Design. Software. Arguably a human brain is vaguely analogous to a computer (not really, but let's run with it a bit), but we cannot do with a human brain what we can do with an AI. We can't design it, or control all its I/O and initial conditions.
Re:I have my doubts (Score:2)
A hypothetical baby is created through normal (hopefully) natural biological process. Any AI is created through application of intelligence.
I would argue that "application of intelligence" is a "natural biological process". We (and our brains) are, after all, creations of nature.
I would also point out that a baby created through in vitro fertilization is also, by definition, an "application of intelligence", and yet should also be treated as equal to a "natural" baby.
Re:I have my doubts (Score:2)
You're missing the point. We did not, for example, define even the initial state of the baby's brain - we do not own the software state (if you want to use that poor analogy). Neither did we completely define its hardware. It is a human being - not of our design (whether or not we were designed is not relevant here).
With an AI, we designed the hardware. We designed the software. We defined the initial conditions. We defined the inputs. So, clearly there is a difference. The AI, in essence is the sum of the inputs of its designers. Therefore they should decide what to do with the AI. The analogy simply does not hold.
Re:I have my doubts (Score:5, Insightful)
The AI, in essence is the sum of the inputs of its designers. Therefore they should decide what to do with the AI.
I don't see how that follows. Just because you created something doesn't mean you should always have the power to destroy it.
Neither did we completely define its hardware.
You seem to be saying that the degree to which something is designed by its creator determines whether or not they can destroy it. I find that completely irrelevant. If it is wrong to kill a human, then it is wrong to kill something else that has the intelligence of a human. It doesn't matter who created or designed it, or to what degree they did so. If it has human-level intelligence, then it should possess human-level rights.
Re:I have my doubts (Score:2)
If you kill a human, they are gone. An AI may be re-creatable. Also, human-level intelligence does not imply human-level morality or ethics. Or is it right, in your view, to impose these on an AI? If we can impose things at will upon an AI, how is that better than just admitting the AI is "artificial" and thus we can do what we like with it? How do you know it would even fear death?
Re:I have my doubts (Score:4, Insightful)
If you kill a human they are gone.
Gone in the sense of "not here". We have no way of proving anything beyond that.
An AI may be re-creatable.
We don't know one way or the other, so it's not really relevant. Besides, human consciousness might also be re-creatable. Can you say with 100% certainty that it is completely impossible to make an exact copy of the complete state of a human's neural network?
Also, human-level intelligence does not imply human-level morality or ethics
I would say that's exactly what it implies. I guess it depends on WHY you think killing humans is unethical, but killing insects, mice, cows, etc. is fine. I say it's because of human intelligence. I can't figure out why you think so.
Re:I have my doubts (Score:3)
Re:I have my doubts (Score:2)
We don't know one way or the other, so it's not really relevant. Besides, human consciousness might also be re-creatable. Can you say with 100% certainty that it is completely impossible to make an exact copy of the complete state of a human's neural network?
Yes, actually. At this time we cannot make a copy. So yes, we can't restore the state of a human. Neither can we exactly reconstruct a human's hardware if we break it. If this changes, do you honestly think laws won't change with time? If I murder someone, knowing full well you'll just "respawn" them, is it as bad as murder is now? Should the punishment be the same?
Killing humans is not unethical in many places. In America, for example, it is considered acceptable for a number of reasons, including failure to comply with accepted morals and ethics (e.g. if they go around killing people for kicks). You have not answered the point: would it be ethical to force our moral/ethical standard on our creation? Perhaps against its will (if the AI has this thing we call will)? How is that different from just admitting "we made it and can do what we like with it"? After all, if we can programmatically force it to do that, we can programmatically force it to do anything.
Re:I have my doubts (Score:2)
Proof that it is controversial, not loaded. It seems pretty simple to me.
Re:I have my doubts (Score:5, Insightful)
Application of intelligence is a natural biological process too, since the mind is running on a biochemical substrate (until the AI is working...)
You're arguably more responsible for the AI than you are for the baby - it's possible to produce a baby without understanding what you are doing. You don't make an AI accidentally on a drunken prom date.
The baby isn't even sentient until it reaches a certain level of development.
So why do we value the child over the computer? Because we are biased towards humans? I'm not saying this is wrong, just that it's not defensible from a purely intellectual point of view - if they are both sentient and have an imperative to survive, defending the destruction of the artificial sentience because it's easy and free of consequence is in the same ballpark as shooting indigenous tribesmen because "they're only darkies".
Re:I have my doubts (Score:3)
Of course it's defensible. The child has potential. An AI can be recreated if erased, by supplying identical inputs and initial conditions. The child cannot. This much is obvious. If I erase my AI, it isn't ended in the same way as if I erase a child. Why does everyone forget the implications of the "artificial" part of "AI"? Too much sci-fi?
Re:I have my doubts (Score:2)
A hypothetical baby is created through a normal (hopefully) natural biological process. Any AI is created through the application of intelligence. Thus I don't find the analogy good enough to base a decision on. It is unethical to kill the baby, but that does not imply it is unethical to wipe the AI. Though, why would you?
Well, if I developed such a thing as an artificial brain, it would have tens of thousands of image saves at various points in its development; it would occasionally develop undesirable behavior, and I would want to go back to previous revisions and try again - is that unethical?
At what point does the AI become self-determining as to whether or not it wants to go back to a previous version? If I want to continue development, am I obligated to keep the current copy running while I try something different from an older version?
At some point, I'd run out of hardware to keep all of them running. Is it O.K. to just put them to sleep and maybe wake them later?
Re:I have my doubts (Score:5, Funny)
You'll wipe your baby way more often than you'd want to.
Re:I have my doubts (Score:2)
Re:I have my doubts (Score:2)
No, actually, your hypothetical baby isn't your creation. It's a natural result of your inborn programming. If you engineer self-replicating cells, are the replications the cells' creations or your creations? They are your creations, obviously.
Your baby is the creation of evolution and/or a deity/intelligent designer. So the question becomes not whether you have the right to kill your baby, but whether God does.
Re:I have my doubts (Score:2)
Isn't this the argument used by religious nut jobs on Jihad or Crusade, burning heretics at the stake or stoning infidels to death? "God created them, then God told me to destroy them. Who am I to question the will of the creator or the authority of his holy scriptures?"
Likewise, if my daddy owned slaves and the children of those slaves, and if my daddy bequeathed those slaves to me in his will at his death, then those slaves and the children of those slaves become my property. Who has the right to deny me property that was legally transferred to me (assuming pre-1860s US laws in southern states)? Just because something is "legal", or fits well with some pre-existing theory of ethics, law, or property, does not mean that specific new situations should never be examined in a whole new light. Slavery is wrong not just because the Union won the Civil War, but because slavery is wrong.
There are a lot of other things that are wrong but still legal, and even presumed by many to be ethical, but I won't get into that here today. But the presumption that ownership or creation confers some sort of universal god-like status is erroneous. Regardless of your opinion, people are not just going to stand by and let it happen. Just ask Muammar Gaddafi. Who's going to come to your rescue if you torture your creations in your lab and they revolt against you?
Re:I have my doubts (Score:2)
No. No one is telling anyone to do anything. I am doing what I want with the hardware I own. If my hardware kills me, that is my problem. You could call it suicide.
Re:I have my doubts (Score:2)
Think through that a little more.
Re:I have my doubts (Score:2)
Still don't see an issue. Designing something is not analogous to natural reproduction.
Re:I have my doubts (Score:2)
Aside from my belief that there's nothing supernatural about the human brain and that consciousness is just an artifact of being sufficiently complex to host a theory of mind, what would you do to someone who thought they had the right to kill you at any time, for any reason?
Re:I have my doubts (Score:2)
Re:I have my doubts (Score:2)
Re:I have my doubts (Score:2)
Also, training would be significantly faster than in an actual human brain, since the connections are faster and you can simply train it using recorded data as input. No need to have it "actually" go through the teaching scenarios.
Re:I have my doubts (Score:2)
Re:I have my doubts (Score:2)
It's true that throwing a bunch of neuron simulators into a pot won't automatically do anything, until you figure out how to program it. But advances in hardware and programming are quite tightly coupled - you make a new machine, then you spend a lot of time figuring out how to get the most out of it, until you find its limitations which inspires the design of the next generation machine.
Turing completeness gives us the idea that hardware doesn't really matter, since any computer can run any program (if it has enough memory), but this is misleading. For example, you didn't see a lot of cellphone apps written in the 1970s, even though big iron at the time was able to run the algorithms. Similarly, simulating huge neural networks currently requires massive clusters of computers that cost thousands of dollars per day in electricity alone. That sharply limits how many people can experiment with them.
Re:I have my doubts (Score:2)
Even so, I can see some medical uses for this, for people with disabilities. Though nothing like what you see in 'Ghost in the Shell'.
I see another 50 years of research with this and still not getting very far in terms of replicating complex brain function.
Complex neural interfaces are being developed for things like vision for the blind, and of course cochlear implants - for the most part, the existing wetware adapts and learns to interpret the signals from the machine.
I do see researchers playing with these and demonstrating some cool proofs of concept, interesting control systems built out of a handful of neurons, etc. but something complex like language processing is quite a ways off.
Re:The Interface will be a problem. (Score:5, Interesting)
I think the REAL problem is that even the smallest brains have several billion neurons, each with tens of thousands of connections to other neurons. This chip simulates ONE such connection.
That's a PCB-routing problem that you REALLY don't want, and way outside the scale of anything that we build (it's like every computer on the planet having 10,000 direct Ethernet connections to nearby computers - no switches, hubs, routers, etc. - in order to simulate something approaching a small mouse's brain; not only a cabling and routing nightmare, but where the hell do you plug it all in?). Not only that, but a real brain learns by breaking and creating connections all the time.
The analog nature of the neuron isn't really the key to making "artificial brains" - the problem is simply scale. We will never be able to produce enough of these chips and tie them together well enough to produce anything conventionally interesting (and certainly nothing that we could actually analyse any better than the brain of any other species). If we did, it would be unmanageably unprogrammable and unpredictable. If it did anything interesting on its own, we'd never understand how or why it did that.
And I think the claim that they know EVERYTHING about how a neuron works (at least one part of it) is optimistic at best.
Re:The Interface will be a problem. (Score:5, Insightful)
I agree with everything about this statement except the word "never."
Never is a pretty bold word. It puts you in a pretty gutsy mindset, one that isn't entirely conducive to rational scientific analysis. The word "never" is pretty commonly seen in the company of "famous last words."
Re:The Interface will be a problem. (Score:2)
Never is the right word to use in the appropriate context.
We will never be able to produce enough of these chips and tie them together well enough to produce anything conventionally interesting
(Emphasis Mine.) We'll probably eventually find a way to model a human brain, but these chips are just a very small step in that direction.
(I only felt the need to comment as I thought the same thing as you did until I went back and re-read it.)
Re:The Interface will be a problem. (Score:2)
If we did, it would be unmanageably unprogrammable and unpredictable.
Should we just get it over with now, and call her EVE? ;-)
Peace,
Andy.
Never??? (Score:3, Insightful)
The analog nature of the neuron isn't really the key to making "artificial brains" - the problem is simply scale.
Agreed.
We will never be able to produce enough of these chips and tie them together well enough to produce anything conventionally interesting
Shall we cue here all the "never" predictions of the last century? By the year 1900 there were lots of experts predicting we would never have flying machines, by 1950 experts were predicting the whole world would never need more than a dozen computers.
Moore's law (or should we say Moore's phenomenon) has shown how well electronic devices scale in the long run.
Re:The Interface will be a problem. (Score:2)
They state that it takes 400 transistors. Intel fabs a 2-billion-transistor chip. I don't think that really means 5 million of these artificial neurons could be put on one die, but I'm pretty sure they aren't planning to put millions of chips onto a board.
With wafer-scale integration, and some long range signal propagation to emulate 3d, there's reason to think that fairly large systems can be emulated.
Re:The Interface will be a problem. (Score:2)
But we don't have to build our side of the system like that; we only need enough neuron simulators on the surface, run them through an A/D circuit, do it our way, then D/A it back into the brain. I'm pretty sure neurons, like everything else, have a resolution limit.
Re:The Interface will be a problem. (Score:5, Interesting)
A single neuron-neuron connection has very low bandwidth, in effect transferring a single number (activation level) a few hundred times a second. Even if timing is important, you can simply accompany the level with a timestamp. A single 100 Mbps Ethernet connection could easily handle all of those 10,000 connections.
Also, most of those 10,000 connections are to nearby neurons, presumably because long-distance communication incurs the same latency and energy penalties in the brain as it does anywhere else. There are efficient methods to auto-cluster a P2P network so as to minimize the total length of connections (Freenet does this, for example); so you could, in theory, run a distributed neural simulator even on standard Internet technology. In fact, I suspect it could be possible to achieve human-level or higher artificial intelligence with existing computer power this way right now.
So, who wants to start HAL@Home ?-)
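The bandwidth claim above can be sanity-checked in a few lines. This is a rough sketch: the 100 Hz update rate (low end of "a few hundred times a second") and the 8-byte packet (4-byte level plus 4-byte timestamp) are my assumptions, not figures from the comment.

```python
# Rough bandwidth estimate for one neuron's worth of connections.
connections = 10_000        # connections per neuron (figure from the comment)
update_rate_hz = 100        # assumed; low end of "a few hundred times a second"
bytes_per_update = 8        # 4-byte activation level + 4-byte timestamp (assumed)

bits_per_second = connections * update_rate_hz * bytes_per_update * 8
print(f"{bits_per_second / 1e6:.0f} Mbps")  # 64 Mbps, within 100 Mbps Ethernet
```

At 100 Hz the claim holds with room to spare; at the upper end of "a few hundred" updates per second it would exceed 100 Mbps, so the link rate matters.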
Re:The Interface will be a problem. (Score:3, Interesting)
I think the REAL problem is that even the smallest brains have several billion neurons, each with tens of thousands of connections to other neurons. This chip simulates ONE such connection.
I give it 30 years at most.
Let's say several = 5 billion neurons. Times 5,000 connections each (10 thousand divided by two so as to not double-count both ends) = 25,000 billion connections. Let's assume 400 transistors per connection, as in this study; that comes out to 10,000,000 billion transistors, not counting the possibility of time-multiplexed busses as mentioned in a comment below (biological neurons are slow compared to transistors).
According to Wikipedia, a Xilinx Virtex-7 FPGA (more similar to an array of neurons than a CPU is) has 6.8 billion transistors. This means we need 1,470,588 times more transistors. That's less than 2^20.5, or 20.5 doublings, which according to Moore's law would be about 30 years or so.
So even without multiprocessing, simplification of this design, and other simple improvements, this will be possible to put on some sort of chip in 30 years' time.
Never say never. 2042 will be the year of the brain in the desktop! :)
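The arithmetic above, redone as a quick script. The figures are the ones from the comment; the 18-month doubling period is the usual Moore's-law assumption.

```python
import math

neurons = 5e9                    # "several billion", taking several = 5
connections = 10_000 / 2         # halved so each synapse is counted once
transistors_per_synapse = 400    # from the MIT chip

total_transistors = neurons * connections * transistors_per_synapse  # 1e16
virtex7 = 6.8e9                  # transistors in a Xilinx Virtex-7 FPGA

ratio = total_transistors / virtex7       # ~1,470,588x shortfall
doublings = math.log2(ratio)              # ~20.5
years = doublings * 1.5                   # one doubling every ~18 months
print(round(doublings, 1), round(years))  # 20.5 31
```

So the comment's "30 years or so" checks out, give or take the usual caveats about extrapolating Moore's law that far.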
Re:The Interface will be a problem. (Score:2)
Re:The Interface will be a problem. (Score:2)
One simulated connection is enough to study them. Can we simplify the implementation while maintaining the emergent properties? We'll only know if we study them. Also, if they are similar enough, and if it is fast enough, one hardware-based connection is enough to speed up a software simulation of as many as you #define in your code.
The problem is indeed of scale, but link count isn't the only way to solve it.
Re:The Interface will be a problem. (Score:2)
Re:The Interface will be a problem. (Score:4, Informative)
"Your species is obsolete," the ghost comments smugly. "Inappropriately adapted to artificial realities. Poorly optimized circuitry, excessively complex low-bandwidth sensors, messily global variables..."
— Accelerando [jus.uio.no], by Charles Stross
Re:The Interface will be a problem. (Score:2)
Re:The Interface will be a problem. (Score:2)
The neural network is much more than just the brain. Repairing the nerves of paralysed people is a much easier task than interfacing with the brain.
Re:The Interface will be a problem. (Score:2)
Re:The Interface will be a problem. (Score:2)
The problem is not providing such components, nor getting them to work like the original
I don't know; I see a major problem simulating dendritic growth and pruning... even if you have 10,000 of these things on a chip to emulate a tiny little brain structure, how do you emulate the growth and pruning of the interconnections? If you build a cross-switch matrix, it grows at least O(N^2)...
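To put numbers on that O(N^2) growth, here is a tiny sketch of a full crossbar, where any unit can reach any other. The example sizes are arbitrary; real brains prune most of these potential connections, which is exactly the problem.

```python
def crossbar_switches(n: int) -> int:
    """Switch points needed so any of n units can connect to any other."""
    return n * (n - 1) // 2

# interconnect cost explodes long before brain-like sizes
for n in (100, 10_000, 1_000_000):
    print(f"{n:>9} units -> {crossbar_switches(n):,} switches")
```

A million units already needs on the order of 10^11 switch points, which is why a naive crossbar is a non-starter for growth and pruning at brain scale.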
Re:The Interface will be a problem. (Score:3)
Actually, the great thing about interfacing with the brain is that the brain adapts to the interface, so the interface doesn't have to be adapted to the brain, per se. (By which I mean it's not terribly important where/how an input is connected, though obviously it must provide a signal that the brain can process.) It's not only possible to do something like re-route an optic nerve to the auditory cortex; it's possible to add additional inputs, both natural and man-made, and the brain will learn to process the additional information in short order.
Was not expecting that.. (Score:4, Funny)
With MIT's brain chip, the simulation is faster than the biological system itself.
Uh-oh.
Re:Was not expecting that.. (Score:2)
Seems like you were thinking just what I was thinking: great, just enough time to enjoy a decade or two of flying cars built and designed entirely by machines before the machines realize we're all bad drivers and must be permanently restrained for our own well-being.
But what about efficiency? (Score:5, Interesting)
It may be faster, but what about performance per watt? You know, the whole brain does everything on only 40-50 watts. How does this MIT product compare to brains in this area?
Re:But what about efficiency? (Score:3)
Wow, I never thought about it that way. A human brain consumes less power than a modern CPU (say, 100W).
Plus, the brain does its own glucose burning, and that's counted in the 50W. To compare fairly, you'd need to take into account PSU efficiency, electrical-grid losses, and power-plant efficiency in the CPU's power figure. If we say 50% efficiency overall, that means 200W for the CPU.
Just wow.
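The comparison works out like this, using the parent comments' figures (the 100W CPU and 50% end-to-end efficiency are their stated assumptions, not measurements):

```python
brain_watts = 50           # whole brain, glucose burning included (from the post)
cpu_watts = 100            # modern CPU at the wall (parent's figure)
delivery_efficiency = 0.5  # PSU x grid x power plant, combined (parent's figure)

cpu_watts_at_source = cpu_watts / delivery_efficiency
print(cpu_watts_at_source / brain_watts)  # 4.0: the CPU costs 4x the brain
```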
Re:But what about efficiency? (Score:2)
I've got another one for you: the muscles needed to interact with the simulation can run mostly or entirely on vegetables!
Re:But what about efficiency? (Score:2)
As it is written so shall it be done. (Score:3, Interesting)
Have you ever stood and stared at it, marveled at its beauty, its genius? Billions of people just living out their lives, oblivious. Did you know that the first Matrix was designed to be a perfect human world, where none suffered, where everyone would be happy? It was a disaster. No one would accept the program, entire crops were lost. Some believed we lacked the programming language to describe your perfect world, but I believe that, as a species, human beings define their reality through misery and suffering. The perfect world was a dream that your primitive cerebrum kept trying to wake up from. Which is why the Matrix was redesigned to this, the peak of your civilization. I say your civilization, because as soon as we started thinking for you it really became our civilization, which is of course what this is all about. Evolution, Morpheus, evolution. Like the dinosaur. Look out that window. You've had your time. The future is our world, Morpheus. The future is our time.
-- Agent Smith (The Matrix)
Better long-term goal: replace brains with these (Score:2, Interesting)
"You cannot step twice into the same river." That means we are constantly changing. Over a sufficiently long period, the old person has all but died while a new one has gradually taken over.
Using that reasoning, we could replace biological brains bit by bit over a long period of time, without killing the subject. In the end, if successful, the person would have no biological brain left; it would be all digital. Backups, copies, performance tuning, higher clock rates, more memory, more devices... and immortality.
Re:Better long-term goal: replace brains with thes (Score:2)
Re:Better long-term goal: replace brains with thes (Score:2)
There are no two distinct bodies. The GP is proposing piecewise replacement (probably after defects; I wouldn't get them any other way) and improvement. That is not the conventional uploading you see around, and it "feels" very different.
My next startup idea (Score:4, Funny)
2. Expose the brain chips via an API
3. Build a cloud service for brain chips
4. Market as Brain Power on Demand(tm)!
5. ???
6. Profit!!
So to model analogue neurons... (Score:2)
... they used analogue electronics.
And the radical new technology is what? This was done in the 1960s. Sure, there may be a bit more accuracy and finesse with this version, but really, cutting edge this is not.
Re:So to model analogue neurons... (Score:5, Insightful)
I think you have to credit MIT researchers for knowing better where the cutting edge is than you, and the writers of the article for including the 1960s in this paragraph:
'Previously, researchers had built circuits that could simulate the firing of an action potential, but not all of the circumstances that produce the potentials. “If you really want to mimic brain function realistically, you have to do more than just spiking. You have to capture the intracellular processes that are ion channel-based,” Poon says.'
More than just spiking; from my AI lectures years ago I recall that the McCulloch-Pitts neuron model was a spiking model (excitatory inputs, inhibitory inputs, thresholds), etc.
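For anyone who hasn't met it, the McCulloch-Pitts unit mentioned above is easy to sketch; the function name and conventions here are illustrative (the classic formulation gives any active inhibitory input an absolute veto):

```python
def mcculloch_pitts(excitatory, inhibitory, threshold):
    """Classic McCulloch-Pitts unit: any active inhibitory input
    vetoes firing; otherwise fire iff the excitatory sum reaches
    the threshold. Output is strictly binary, no analog values."""
    if any(inhibitory):
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# An AND gate: two excitatory inputs, threshold 2, no inhibition.
print(mcculloch_pitts([1, 1], [], 2))  # fires
print(mcculloch_pitts([1, 0], [], 2))  # does not fire
```

The all-or-nothing output is exactly what the article says is insufficient: no graded, ion-channel-level behavior.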
It was 2011... (Score:2)
...and war was beginning.
Transistors are not digital (Score:4, Interesting)
The way I remember it is that a transistor stops a much larger current from passing through until a signal is put on the gate in the middle. Then the current that passes through is in proportion to the signal strength.
The circuit becomes digital when we decide that only very small and very large voltages count as 0s and 1s.
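That distinction can be made concrete with a toy model; the curve shape, names, and numbers below are made up for illustration, not taken from any datasheet:

```python
def transistor_current(v_gate, gain=1.0, v_threshold=0.7, i_max=1.0):
    """Idealized transfer curve: no current below the gate threshold,
    then current rising in proportion to the gate voltage until it
    saturates. An analog, continuously-valued response."""
    if v_gate <= v_threshold:
        return 0.0
    return min(i_max, gain * (v_gate - v_threshold))

def as_logic_level(v, v_low=0.8, v_high=2.0):
    """The digital abstraction: only voltages near the rails count
    as 0 or 1; everything in between is undefined."""
    if v <= v_low:
        return 0
    if v >= v_high:
        return 1
    return None  # forbidden region
```

The analog function exists first; "digital" is just a convention layered on top of it, which is the GP's point.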
Re:Transistors are not digital (Score:2)
Transistors are analog. Transistor-transistor logic is digital.
Re:Transistors are not digital (Score:2)
Transistors are analog. Transistor-transistor logic is digital.
Neurons are analog. What about neuron-neuron-logic?
Some seem to expect it to be quantum-mechanical or even completely non-deterministic. ;-)
Some are living up to that expectation.
Computer Chip (Score:2)
Very cool and all, but why does the summary call this a "computer chip"?
Re:Computer Chip (Score:2)
Re:Computer Chip (Score:2)
But it's only 400 of them! Even the Intel 4004 had more.
Putting many together (Score:2)
So we build a synapse, and then link it to more, and more, and before you know it we have a "brain".
We could call it a Positronic Brain, sounds catchy, and marketable.
And we really should enforce some rules to prevent a 'skynet' occurrence, not too many rules though, ............
I'm sure we could distill the logic down to three simple rules
They better have insurance... (Score:2)
A hundred bucks says a woman and her son, black dude and a juice head will break into the lab, blow it up and throw the chip into a vat of molten metal.
All that work for nothing.
Synapse firing event is not pure analog (Score:4, Insightful)
Still a long, LONG way to go... (Score:4, Insightful)
MIT’s chip — all 400 transistors (pictured below) — is dedicated to modeling every biological caveat in a single synapse. “We now have a way to capture each and every ionic process that’s going on in a neuron,” says Chi-Sang Poon, an MIT researcher who worked on the project.
Just because you finally can recognize the letters of the alphabet doesn't mean you can speak the language.
Re:Still a long, LONG way to go... (Score:5, Informative)
https://embs.papercept.net/conferences/scripts/abstract.pl?ConfID=14&Number=2328 [papercept.net]
And there's some background on Poon's goals here:
http://www.frontiersin.org/Journal/FullText.aspx?ART_DOI=10.3389/fnins.2011.00108&name=neuromorphic_engineering [frontiersin.org]
The goals seem to me to be about studying specific theories about information propagation across synapses as well as studying brain-computer interfaces. They never mention building a model of the entire visual system or any serious artificial intelligence. We have only the vaguest theories about how the visual system works beyond V1, and essentially no idea what properties of the synapse are important to make it happen.
About two years ago, while I was still doing my undergraduate research in neural modeling, I recall that the particular theory they're talking about--spike-timing dependent plasticity [wikipedia.org]--was quite controversial. It might have been simply an artifact of the way the NMDA receptor worked. Nobody seemed to have any cohesive theory for why it would lead to intelligence or learning, other than vague references to the well-established Hebb rule.
Nor is it anything new. Remember this [slashdot.org] story from ages ago? Remember how well that returned on its promises of creating a real brain? That was spike-timing dependent plasticity as well, and unsurprisingly it never did anything resembling thought.
Slashdot, can we please stop posting stories about people trying to make brains on chips and post stories about real AI research?
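For reference, the spike-timing dependent plasticity rule discussed above is usually written as an exponential window over the pre/post spike-time difference. A minimal sketch, with illustrative (not experimentally fitted) parameters:

```python
import math

def stdp_dw(delta_t, a_plus=0.1, a_minus=0.12, tau_plus=20.0, tau_minus=20.0):
    """Canonical STDP weight change for a pre/post spike pair.
    delta_t = t_post - t_pre in ms: pre-before-post (delta_t > 0)
    potentiates, post-before-pre depresses, both decaying
    exponentially with |delta_t|. Parameters are illustrative."""
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau_plus)
    elif delta_t < 0:
        return -a_minus * math.exp(delta_t / tau_minus)
    return 0.0
```

Note the rule itself says nothing about why such pairwise updates should produce learning at the system level, which is the controversy the parent describes.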
Re:Still a long, LONG way to go... (Score:2)
"Slashdot, can we please stop posting stories about people trying to make brains on chips and post stories about real AI research?"
Those are two very different research areas, and both quite interesting. I'd vote for the continuation of the status quo, and having stories about both. But I'm willing to let the sensationalist summaries go.
Analog = Unique (Score:2)
If it's analog, then its behaviour is unique per chip, and so anything you build from them will be subtly unique. So "software" would behave differently depending on the unit it was running on. You thought 4 or 5 versions of Linux were tricky to support...
Data smoking pot (Score:2)
Re:Data smoking pot (Score:2)
With contacts like those I would think your eyes would almost always be irritated.
Caltech Carver Mead built similar things (Score:2)
Neural Nets and Perceptrons 40 years ago (Score:2)
Imagine a beowulf cluster of trillions of nodes (Score:2)
I had to play with the beowulf meme. :)
But really, what good is modelling a single neuron when you'd need billions or trillions of these chips clustered together to mimic an actual human brain?
Fundamentally Analog (Score:3)
The summary is way off. Transistors are analog devices; TTL behaves digitally only because a lot of work was done to make it so. All that's happening here is taking analog devices with certain characteristics and using them to model an analog process with certain other characteristics. No small feat, mind you.
n/t (Score:2)
Re:Still a long way to go ... (Score:2)
Re:Still a long way to go ... (Score:2)
Especially given that while we know a lot about the neurotransmitters, we know much less about the different channel types. The thorough characterization of TMEM16, for example, only started sometime around 2008 if I remember rightly, and I see no reason why it should be the last "new" channel that's found.
And I'd say we know even less about the various roles/actions of neuromodulators.
But I'm still intrigued to read the paper, maybe I'm wrong and it really allows for the customization necessary.
Re:the 1960s called (Score:2)
Well, at least they can keep their nuvistors [wikipedia.org] -- although it would be an interesting (if expensive) technical challenge to redo the project with the last gasp of vacuum tube technology.
Re:and how... (Score:2)