New Hardware Needed For Future Computational Brain 143
schliz writes "Salk Institute director Terrence Sejnowski has called for more power-efficient, parallel computing architecture to support future robots that could keep up with the human brain. While human brains had 100 billion neurons and required only 20 Watts of energy, today's most powerful supercomputer, the 2.57 PFlop Chinese Tianhe-1A, requires four megawatts, and still has trouble with vision, motion, and 'common sense,' he said."
still has trouble with... (Score:1)
LOL! "can't approach the capabilities of a common honey bee" might be more accurate.
Re:still has trouble with... (Score:5, Insightful)
Re: (Score:3, Interesting)
I could only agree if we are speaking of a computer that is intended - by and within its design - to learn like us, as well as act like us in a mature state. I agree this may be the purest way to get AI to resemble the human condition (for lack of a better way to put it), but executing on this path is entirely a red herring.
I would say that trying to understand and emulate the learning process is 10 to 100
Re: (Score:2)
It's interesting that you think epistemology actually plays a part for the flipping computer.
Wellll that's only half the story. You'll get very little in the way of logic unless it's a flopping computer as well.
Get it? Logic gates! Flip-flop! ...bwahahahahaha
I'm sure there was a propagation delay before people got that one... da bom tish
Re: (Score:2)
The only real saving grace is that this effort could actually be such a mirror for mankind, and accelerate our understanding of ourselves, if only slightly.
Maybe all we will discover is that if you have a *really* big network of interconnected nodes functioning in parallel and a good handling of metastable states you get a reasonable facsimile of intelligence. Maybe that's all intelligence is, after all, humans provide a pretty good facsimile of intelligence - but they aren't very logical.
Re: (Score:2)
Re: (Score:2)
The OP is far more correct than you are. If you knew anything about ANNs, the first thing you'd know is that they are modeled (by their very design methodologies, if not through direct observation and intent) on the human brain.
Neural nets work the way we thought, 30 years ago, that human brains might work. But we still don't know much about how the human brain really works. NNs are at best a very rough approximation that falls far short of modeling the real thing. And we can't model the real thing accurately because we still don't know how it works. We know how individual cells sort-of work, and we know what parts of the brain are involved in what sorts of activity, but there's an enormous gap in between that we know nothing about.
Re: (Score:3)
This is a valid point. There is indeed a learning factor for the brain... at least some aspects of the brain.
Our brains are extremely inaccurate. Our perceptions are always relative and demonstrably imaginative. There is a lot more to what we think we see and know versus what we actually see and know.
The thing with computers as we currently use and design them is that they are dependent on accuracy. (I recall when DRAM was coming into existence... people were flipping out over the idea that this type of
Re: (Score:2)
So our brains are really quantum computers that may be on, off, or both.
It explains fuzzy memories, and why each person sees an event differently.
As for common sense, some people never learn that at all.
Re: (Score:2)
Re: (Score:2)
That is also assuming that consciousness is computation in the first place, which is nowhere near certain.
Re: (Score:3)
Obviously I'm simplifying and paraphrasing a year old article here, but one of the most intriguing things about this one setup is not only that it
Re: (Score:2)
I too find that interesting. Link?
Re:still has trouble with... (Score:4, Interesting)
Re: (Score:1)
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Well the first step in that is actually understanding how the human brain works. Contrary to popular understanding we know almost nothing about it.
Re: (Score:2)
Re: (Score:2)
This actually raises an interesting point that I've been thinking about recently. People imagine that an AI will also be a mathematical genius compared to us, because computers can calculate numbers quickly. Not necessarily so. One of the reasons we are slow with numbers is we keep vast amounts of related information along with the number. If I ask you to think of a number and tell me what it is, you might say "seven", but in your mind you might also be imagining the colours of the rainbow, the sides of a f
Re: (Score:2)
With an AI, the solution would be a hybrid design.
You have your ANN, which is the seat of the AI's consciousness, but you attach an ordinary sequential computer (running ordinary software) to some of its motor and sensory neurons.
The idea here is that the ANN can control the "dumb" sequential-processing computer for such answers. It can consciously input data via the motor neurons, then receive sensory stimulation back from it. This *WOULD* make the AI into a mathematical prodigy, at least compared to pure
Re: (Score:1)
Yes they've only just surpassed a Muslim's brain. Shouts "death to the infidel" And "how dare you insult the prophet" at random
Don't forget the random suicide bombing function
Apples and oranges... (Score:5, Informative)
That most powerful supercomputer, I'd assume, has not been tuned to actually work like a brain would.
This is like an emulator. A lot of computational power is probably wasted on trying to translate biological functions into binary procedures. I think if they truly want to compare, they'll need to create an environment that is enhanced for the tasks we want it to process.
Nobody expects the human brain to compute integer and floating point stuff at the same efficiency either, right?
Re: (Score:1)
I would *Love* to see that reduced to machine code
Re: (Score:2)
Would surely be interesting, wouldn't it?
Re: (Score:2)
The task was daunting for an undergrad... and what we ended up with was not so intuitive. I can only imagine mapping the depth and breadth of the brain - and in fact would postulate that it cannot be done with any adherence to soundness and validity using today's digital hierarchy.
Re: (Score:1)
Re: (Score:2, Insightful)
I know a way, but it takes about 18 years plus 9 months and a male and a female participant...
Also, what you end up with is usually an unemployed intelligence looking for something to do. And they don't always succeed. It's not obvious to me that we need more human intelligences. Maybe we need more and faster idiot savant machines, ones that excel at mundane things like driving road vehicles, doing laundry, loading dishwashers, sorting bills in chronological order. The boring stuff.
Re: (Score:2)
We don't treat them well - we eat and exploit most of them. Why should we create more? So that we can exploit them too?
If that's the reason we'd just be causing more evil in the world than good.
Whereas if we instead used the tech to augment humans, we'd have about the same amount of evil and good. Or at least not increase the evil so rapidly.
For similar reasons we should not create animal-human hybrids. We're not ready
Re: (Score:2)
Every animal, every organism, on the planet exploits other organisms. Does that make all life evil? Why are we so different, that the way we treat other life as a resource makes us evil? Perhaps the most effective evolutionary adaptation that life has ever stumbled across is to be domesticatable, tasty, and/or useful to humans. It's a guaranteed win.
Re: (Score:2)
It's not obvious to me that we need more human intelligences.
I thought the AI community had abandoned that idea ages ago. We already have plenty of humans. We don't need computers to do the things we're good at, we need them to do the things we're bad at.
Re: (Score:2)
Are you confusing law enforcement with biological reality?
That's the difference between science and mad science.
Re: (Score:2)
That's the difference between science and mad science.
Also Jacob's ladders, giant knife switches, gothic castle setting, bubbling, multicolored chemicals, and hunch-backed lab assistants.
Re: (Score:2)
You forgot goggles. But all these things are optional, being an affront to the laws of man and god is the important thing.
Re: (Score:1)
Humans are actually quite good at floating point math as embodied by ballistic trajectories --- watch outfielders run straight to where a ball will be when it comes down rather than following a curve, or a marksman who can consistently shoot coins or aspirin out of the air (for the former always positioning the bullet hole so that the coin will be useful as a watch fob).
Integer math as expressed in the real world can be quite good too --- I knew one teller who could take a fresh stack of $100 bills and zip
Re: (Score:2)
Re: (Score:2)
is like an emulator. A lot of computational power is probably wasted on trying to translate biological functions into binary procedures.
Isn't that kind of the point of the article? To get around this need for all the computational power, we need hardware that's better at probabilistic analog computations, and to run it all in parallel.
Re: (Score:1)
A lot of computational power is probably wasted on trying to translate biological functions into binary procedures.
Tried and failed (which was to be expected). If you try to build code that follows the same type of principles that biological functions do, most of your computing power goes into finding stuff that can react with other stuff. That was a kick to write tho.
Re: (Score:1)
Its magic lies in its prowess at organizing/reorganizing info.
Efficiency is the key! (Score:3)
Instead of trying to emulate the human brain, which at the moment is unattainable, we should concentrate on efficiency paradigms for smaller neural ensembles. Once we achieve efficiency, we can scale. Why haven't we learned anything from the CPU industry? They didn't start with 19 nm manufacturing. Why should we?
We shouldn't hurry. AI comparable to a human person can be achieved, but it is still a long way until we reach it.
Re: (Score:3)
Why haven't we learned anything from the CPU industry?
So you're saying AI is all in the branding, and that we should ship AI with artificial brain lobules disabled to reduce manufacturing costs?
Re:Efficiency is the key! (Score:4, Informative)
The author talks about the honeybee. Let's emulate first the honeybee. Create a robot that can achieve what the social insect "bee" can achieve.
Lobules → Lobes → Whole Brain
Re: (Score:2)
Swarm Intelligence [wikipedia.org] would be a good place to start. Path-finding/graph search is only one part of AI though. It's very useful, but it's not necessarily the best method to solve all types of problem.
Re: (Score:2)
The honeybee is interesting because its complexity is at about the limit of what personal computers can simulate today.
In rough order of magnitude terms, a honeybee brain has a million neurons with a thousand synapses each. Assume a neuron fires a hundred times per second. In the standard model of a neuron, each synapse can be simulated by a floating point multiplication and one addition.
Doing the math, a computer simulation of a honeybee brain in real time would need 100 gigaflops, which is in the range o
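Spelling out the arithmetic in the estimate above (all figures are rough order-of-magnitude assumptions, and each synapse's multiply-add is counted as one fused operation):

```python
# Back-of-envelope check of the honeybee figure above. All numbers are
# rough order-of-magnitude assumptions, not measured values.
neurons = 1_000_000           # ~1e6 neurons in a honeybee brain
synapses_per_neuron = 1_000   # ~1e3 synapses each
firing_rate_hz = 100          # assumed average firing rate
ops_per_synapse_event = 1     # one multiply-add, counted as a fused op

flops = neurons * synapses_per_neuron * firing_rate_hz * ops_per_synapse_event
print(f"{flops / 1e9:.0f} gigaflops")  # -> 100 gigaflops
```

Which lands in the range of a modern GPU, hence the "personal computers can simulate today" claim.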
Re: (Score:2)
Beer powered (Score:5, Funny)
Each pint of beer contains 600 joules of energy, which can power your 20 watt brain for many hours, and give you trouble with vision, motion, and common sense.
Re: (Score:1)
Re: (Score:1)
Re: (Score:2)
You both are. A pint of Guinness has about 711 kilojoules, which will last almost 10 hours.
Re: (Score:1)
Re: (Score:1)
The probable root of this error is that there are two types of calories - gram calories (written with small c) and kilogram Calories (written with capital C).
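The corrected figure checks out, if you assume a pint of Guinness holds roughly 170 food Calories (kcal) - an approximate figure, not an official one:

```python
# Sanity check on the kilojoule correction above. Assumes ~170 food
# Calories (kcal) per pint of Guinness, which is only approximate.
KCAL_TO_J = 4184              # 1 food Calorie (kcal) = 4184 joules
pint_kcal = 170
pint_joules = pint_kcal * KCAL_TO_J   # ~711,000 J, i.e. ~711 kJ
brain_watts = 20

hours = pint_joules / brain_watts / 3600
print(f"{pint_joules / 1000:.0f} kJ, ~{hours:.1f} hours of brain power")
```

A "600 joule" pint, by contrast, would power the brain for all of 30 seconds - which is how you can tell the original post confused its calorie units.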
Interpreted AI (Score:2)
The reason this is the case is that current AI simulates a neural network as a program. You would have to produce chips that were actual neural networks; the problem, however, is the interconnects, which are an order of magnitude more complicated than anything we can currently create. In fact the brain is quite slow, but its organization is what makes it powerful.
Re: (Score:2)
How about simpler hardware neural nets, in a cluster with a more modest interconnect? Eg build a hive mind.
Re: (Score:2)
Would be better but would not even come close to the human brain, which in the cerebral cortex has roughly a billion synapses per cubic millimeter.
Re: (Score:2)
I had to look this up just to be sure you hadn't put a decimal point in the wrong place somewhere. Truly mind-boggling!
Re:Interpreted AI (Score:4, Funny)
Eg build a hive mind.
Like 4chan?
Editor fail (Score:1)
Human Brain doesn't excel at all either. (Score:1)
I have known many people who have ~100 billion or so neurons that consume 20 watts of power, but they also have plenty of trouble with "common sense". Actually they might be less sensible in some areas than 100 KB of C code running on a puny little Pentium 4.
Neurons are the wrong number (Score:4, Insightful)
The significant number is interconnect. In that area electronics is several orders of magnitude farther behind. Far enough that it seems doubtful something even remotely like the interconnect of a human brain can be reached artificially.
Side note: Comparing neurons and transistors, as is often done in the popular (but not very knowledgeable) press, is completely invalid as well. You need to compare neurons more to a micro-controller each.
Re: (Score:1)
Re: (Score:2)
The significant number is interconnect. In that area electronics is several orders of magnitude farther behind. Far enough that it seems doubtful something even remotely like the interconnect of a human brain can be reached artificially.
Hint: simulating is not the same as duplicating. A digital computer trades high-speed communication for interconnections. Think of serial vs. parallel. If you simulate a neuron as an object located in memory, every neuron is interconnected with every other; they just cannot all communicate at the same time.
Considering the relatively slow rate at which neurons fire, that problem isn't so insurmountable as it seems at first.
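The serial-for-parallel trade described above can be sketched in a few lines - every neuron "connects" to every other through a shared weight table, but the machine walks through the update one neuron at a time rather than letting all synapses fire simultaneously (the network size, weights, and clamping nonlinearity here are arbitrary choices for illustration):

```python
import random

# Sketch of trading interconnect for sequential speed: an all-to-all
# "interconnected" network simulated by visiting one neuron at a time.
random.seed(0)
n = 100                                              # simulated neurons
weights = [[random.gauss(0, 0.1) for _ in range(n)] for _ in range(n)]
state = [random.random() for _ in range(n)]          # activation levels

def step(state):
    new_state = []
    for i in range(n):                               # one neuron at a time
        total = sum(w * s for w, s in zip(weights[i], state))
        new_state.append(max(0.0, min(1.0, total)))  # clamp to [0, 1]
    return new_state

for _ in range(10):                                  # ten synchronous ticks
    state = step(state)
print(len(state))  # one activation value per neuron
```

Since real neurons fire on the order of 100 Hz, each serial pass only has to finish within ~10 ms of wall-clock time to keep up, which is the "not so insurmountable" point above.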
Problem Continues to Grow (Score:2)
In addition, they are finding that chemical communication between cells is not point to point contained within the synapse only. Cells are swimming in chemical and electrical communic
The brain can also rewire itself (Score:2)
OK, you can do this with an FPGA, but it requires something external to the gate array to reset the logic gates - the array can't rewire itself. Biological neural systems can rewire themselves, and not only that - they can do it *while they're running*. Obviously you could have this on-the-fly rewiring in a software simulation, but that's orders of magnitude slower than using hardware, so I don't think we'll see computers simulating human brains in real time anytime soon.
Re: (Score:2)
"It is called auto-reconfigurable FPGA. Look at Xilinx ones, for example"
My mistake, I need to get back up to date!
"some slightly more biologically plausible than others"
I'm not convinced the brain has to be simulated exactly to produce the same result. After all, robots can now walk like a human, but they don't use exact facsimiles of human muscles - they use hydraulics or electric motors to achieve the same effect. No doubt there are parts of neurons' operation and the brain's overall architecture that are s
Hang on a second (Score:3)
Getting a little ahead of ourselves aren't we?
We're still not entirely sure of how a brain works. Oh sure, it's a neural network of some kind, but how do the neurons in a brain form meaningful connections with each other? How do they get their weightings of activation? etc.
Chances are each neuron in the brain might be representable by a simple mathematical function with only a few terms. The way the neurons connect to each other might also be representable in a simplistic way. (BTW, look up dynamic Markov coding if you want to see a neat way a state can reproduce such that the newly created state has meaningful input/output connections to other states.)
So the problem isn't necessarily that our computers aren't powerful enough. The problem is that we still don't know how a brain works.
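The "simple function with a few terms" view of a neuron mentioned above looks like this in the standard artificial model - a weighted sum of inputs pushed through a squashing nonlinearity. Whether real neurons reduce to anything this simple is exactly the open question; this is the model, not the biology, and the weights here are made up:

```python
import math

# The classic artificial-neuron model: weighted sum + logistic squashing.
# This is what ANNs compute; whether biological neurons are this simple
# is the unresolved question raised in the comment above.
def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))   # squash into (0, 1)

out = neuron([0.5, -1.0, 2.0], [0.8, 0.2, 0.1], bias=0.0)
print(round(out, 4))  # weighted sum is 0.4, sigmoid(0.4) ~ 0.5987
```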
Re: (Score:2)
Mod parent up. (Score:2)
The dose of realism injected by a real live neuroscientist ought to be paid attention to. Most CS types know too little about neuroscience and psychology to have a worthwhile opinion about the viability of human-level machine intelligence and what it takes to get there. I used to believe we'd have a strong AI by oh, 2040 or so until I started really looking into the fields I mentioned, and every informed post like the one I'm replying to reaffirms my belief that we have a very, very, very long way to go--
Google The Brain! (Score:2)
Ok, I admit this sounds completely absurd at first, but there's an awful lot of similarities between the neural pathways of the brain and the countless number of ways websites link to each other, both directly and indirectly through their contacts, and their contacts' contacts, and all the contacts that eventually show up in an endless cycle of recursion, etc...
Now, google has to wade through all this, and constantly correct and update itself, to ensure it can get a user to the correct web page that best ma
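The recursive "who links to whom" computation this comment gestures at is essentially what a PageRank-style power iteration does. A bare-bones sketch over a made-up four-page link graph (this is the textbook algorithm, not Google's actual production system):

```python
# Toy PageRank power iteration over an invented four-page link graph.
# Rank flows along links; a page linked to by everyone ends up on top.
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}   # page -> pages it links to
n = 4
damping = 0.85
rank = [1.0 / n] * n

for _ in range(50):
    new_rank = [(1 - damping) / n] * n
    for src, dsts in links.items():
        share = damping * rank[src] / len(dsts)
        for dst in dsts:
            new_rank[dst] += share             # rank flows along each link
    rank = new_rank

best = max(range(n), key=lambda p: rank[p])
print(best)  # page 2, linked to by every other page, ranks highest
```

The analogy to neural pathways is loose but real: in both cases the "importance" of a node is defined recursively in terms of the nodes that feed into it.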
Re: (Score:2)
"You'd think it'd just be a matter of passively connecting to a neuron to sniff its traffic and then observing which nearby neurons carry the signals to and from it"
You'd think. Except people have tried this, and what comes out is inherently gibberish - it never gets anything useful.
A neuron is an extremely complex biochemical cellular device that we don't understand. It is *not* just a biochem transistor, as some would have you think.
It retains some information, reacts to historical stimuli, reacts to chemica
Machine intelligence is not a hardware problem... (Score:5, Insightful)
It's a software problem.
Re: (Score:3)
The architecture on which you run the software also determines quite a lot of what you can do and how the software is executed. You need a certain topology of the hardware, otherwise it is impossible to do certain tasks efficiently. There is a huge difference between a slow but massively interconnected network like the brain, and a sequential microprocessor running instructions one by one at high speed.
Re: (Score:1)
Who mentioned efficiency?
We don't have to do it in real time. But even if we had till the heat death of the universe to let the code run, we still don't know how to write the code, which was the OP's point.
Re: (Score:2)
It's a software problem.
Well, that's the hypothesis put forward in the 1950s that hasn't yielded results.
In contrast, something like Watson has massive amounts of processing power and storage access, with relatively simple algorithms. Watson is the ENIAC of the 2029 pocket calculator.
I wonder if humans like to think of themselves as needlessly complex.
But as to the main story - "we need more power-efficient, more parallel hardware":
Watson: "What is the main focus of modern computer architecture for the pa
Conversely... (Score:2)
As awesome as everyone talks up these 'brains' and how incredibly superior they are with only 20 watts, the fastest brain on earth can't even keep up with a 10 dollar pocket calculator that uses a fraction of a watt when it comes to remotely complex arithmetic.
Obviously, we have two very different things here. We created computers to be good at the stuff we are *not* good at, not to match our capabilities (we wouldn't spend so much money to make machines that are good at just the same things we are). That
Re: (Score:2)
As awesome as everyone talks up these 'brains' and how incredibly superior they are with only 20 watts, the fastest brain on earth can't even keep up with a 10 dollar pocket calculator that uses a fraction of a watt when it comes to remotely complex arithmetic.
Exactly!
My $50,000 BMW can't keep up with my $10 pocket calculator when it comes to math. And my $10 calculator can't drive me to the mall.
Re: (Score:2)
No, I tried to ask it to calculate 32x12 and it couldn't do it.
But it COULD drive me to the mall.
Re: (Score:1)
Re: (Score:2)
No, I didn't sleep.
However, my calculator failed miserably at getting me to the mall.
Not totally off-topic, I still am amazed at the power of my index finger, which can do things that Kings couldn't do 100 years ago. I can move it in a certain way (associated with my keyboard) such that it causes a total stranger to bring food to me - a pizza in 30 minutes or less.
Re: (Score:2)
By that logic the human brain outclasses the calculator as well, not to mention that without the ingenuity of the human mind neither the calculator nor the car would exist.
Past tense (Score:2)
Why is this article written in past tense? It contains funny paragraphs like this:
'While fundamental physics and molecular biology dominated the past century’s innovations, Sejnowski said the years between 2000 and 2050 was the “age of information”.'
2050 isn't really the past, right?
Undoing an accidental mod-down (Score:2)
Know lots of 20 - 70 somethings with no common sense.
Not-so-fast with handing the Tianhe a fraudulent r (Score:2)
That belongs to the Jaguar Cray XT5-HE, not the overstated specs of the system that "claims" the supposed top slot.
Move it down a bit more and you would truthfully be representing its capability. But then you'd just want to modbomb me into oblivion, since that's easier to do.
your information is dated (Score:2)
Read it and weep:
http://www.top500.org/ [top500.org]
The current data is suspect. (Score:2)
I did read it.
Given the low quality of China's manufacturing (and their propensity to copy, not create), the current data would be very suspect. Doubly so where they use knockoff chips.
processor organization is the problem (Score:1)
A computer CPU and its software process in a flat, one-dimensional stream: neural structures are emulated by taking time to read each one's state, one after another, and simulating the actions of the interconnects to get the result.
A "hardware/softcore" FPGA-based neural net would form a flat, two-dimensional "grid" array,
but a DNA-based brain is both a 3D structure and also has sub-"fractal" patterned interconnected structures within it.
To form even a bee-style neural structure in an FPGA would still need the logic cells to
Abby Someone... (Score:2)
"Whose brain did you emulate?"
"Uh, Abby someone..."
"Abby who?"
"Abby Normal...."
FPGA (Score:2)
It seems like we already have this in FPGAs. We don't really have good clusters of them though..at least that I know.
I'm a software developer who has dabbled in VHDL and created some basic programs that ran directly on a chip.
It was a major pain as someone just trying to write something. A higher level language designed for parallel computation on a large FPGA array might be more in line with what he wants...without trying to design hardware specifically to the problem. Although maybe after a while comm
I've Been Thinking About Those Pink Meatputers (Score:2)
I suspect there's some trickery going on in the meatputer though. The whole system feels kludgy. They seem
Re: (Score:1)
Just wait (Score:2)
Moore's law says that a 2.5 PFlop machine in a 20-watt package is about 25-30 years away.
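The 25-30 year figure roughly checks out if you assume power efficiency doubles about every 18 months - an assumption closer to Koomey's law than Moore's, and by no means a guaranteed trend:

```python
import math

# Rough check of the 25-30 year claim, assuming compute-per-watt doubles
# every ~18 months (a Koomey's-law-style assumption, not a guarantee).
tianhe_watts = 4_000_000    # ~4 MW for the Tianhe-1A
brain_watts = 20
improvement_needed = tianhe_watts / brain_watts   # 200,000x
doublings = math.log2(improvement_needed)         # ~17.6 doublings
years = doublings * 1.5                           # 18 months per doubling
print(f"{doublings:.1f} doublings, ~{years:.0f} years")
```

At a slower 2-year doubling period the same arithmetic gives ~35 years, so "25-30 years" sits at the optimistic end of the range.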
Can't get there from here (Score:2)
From what I'm hearing, Dr Sejnowski's plaint only partly addresses the problem. To implement cognition using a computational model, we need a neural simulator that:
- is large enough to represent all the neurons and interconnections needed to synthesize human-level cognition
- uses much less power than a supercomputer
But to be more than "a brain in a jar" it also must:
- learn using supervised and unsupervised instruction
- quickly load and unload modules of what it has learned
Without addressing all four goals
Why emulate brains? (Score:1)
It just seems like a massive waste of computational resources... I would rather have a well programmed predictable computer program controlling my spacecraft vs a brian modeled after humans which may decide to go on strike or otherwise act unreliably.
Why not just use GAs and NNs in specific context where they make sense... rather than trying to copy brains?
If you want to solve hard math problems who is to say intelligent solvers can't be designed to provide real results for a fraction of the computer time?
I
Re: (Score:2)
I really doubt that anyone in the next thousand years will be able to build a machine equal in all respects to the human brain.
You can build a machine that will perform a single task or a variety of tasks but I have yet to see anything from anyone about building a machine that will recognize that a new task is required to solve a new problem and then formulate and perform that new task.
The problem with a machine is that it does not think, it does not ponder, it does nothing intuitively. It can resolve any
Re: (Score:2)
I would rather have a well programmed predictable computer program controlling my spacecraft vs a brian modeled after humans which may decide to go on strike or otherwise act unreliably.
Hey, I know Brian and he's a real stand-up guy. In fact, he'd be offended at being called unreliable if he wasn't so damned amicable.
Re: (Score:1)
Re: (Score:2)
Re: (Score:1)
Is there a human capable of multiplying precisely billions of numbers per second or doing any other similar tasks?
What, you think the computer invented itself?
We are tool makers, the computer is a tool, we want to multiply at a rate of 10^9 a second, we just build the tool using our brains.
Would explain fraudulent Tianhe specs (Score:2)
At the risk of some modpoints:
What China really can't do with computers, they make up with dissidents. The Top500 data from 11/2010 would be suspect, even if that wasn't the cause.
Re: (Score:1)