IBM Researchers Propose Device To Dramatically Speed Up Neural-Net Learning (arxiv.org) 87
skywire writes: We've all followed the recent story of AlphaGo beating a top Go master. Now IBM researchers Tayfun Gokmen and Yurii Vlasov have described what could be a game changer for machine learning — an array of resistive processing units that would use stochastic techniques to dramatically accelerate the backpropagation algorithm, speeding up neural network training by a factor of 30,000. They argue that such an array would be reliable, low in power use, and buildable with current CMOS fabrication technology.
"Even Google's AlphaGo still needed thousands of chips to achieve its level of intelligence," adds Tom's Hardware. "IBM researchers are now working to power that level of intelligence with a single chip, which means thousands of them put together could lead to even more breakthroughs in AI capabilities in the future."
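For readers unfamiliar with the algorithm the RPU array is meant to accelerate, here is a minimal sketch of backpropagation: a 2-2-1 network learning XOR in pure Python. The network size, learning rate, and epoch count are illustrative choices, not from the paper.

```python
# Minimal backpropagation sketch: train a tiny 2-2-1 sigmoid network
# on XOR with online gradient descent. All hyperparameters illustrative.
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# weights: two hidden neurons (2 inputs + bias), one output (2 hidden + bias)
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(3)]

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    o = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, o

def loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in data)

initial = loss()
lr = 0.5
for _ in range(5000):
    for x, y in data:
        h, o = forward(x)
        # output-layer error term (derivative of squared error * sigmoid')
        d_o = (o - y) * o * (1 - o)
        # backpropagate the error to the hidden layer
        d_h = [d_o * w_o[i] * h[i] * (1 - h[i]) for i in range(2)]
        # gradient-descent weight updates
        for i in range(2):
            w_o[i] -= lr * d_o * h[i]
        w_o[2] -= lr * d_o
        for i in range(2):
            for j in range(2):
                w_h[i][j] -= lr * d_h[i] * x[j]
            w_h[i][2] -= lr * d_h[i]

print(initial, loss())
```

The weight updates in the inner loop are exactly the operations the proposed RPU array would perform in analog, in parallel across the whole weight matrix, which is where the claimed speedup comes from.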
Yes! (Score:5, Funny)
With this technology, chatbots can become neo-nazi holocaust deniers in less than two hours!
Re: (Score:2)
Re: (Score:2)
"The possible responses look to be mined from over a million twitter posts."
There's 'yer problem!
Re: (Score:1)
Most people posting on Twitter are Nazis.
Re: (Score:2)
...Looks like it was mainly a modified language translator: instead of translating language A to language B, it translated a statement or question to a response... The possible responses look to be mined from over a million Twitter posts.
And that's different from real meatspace human talk in what significant way?
Re: (Score:2)
Re: (Score:2)
No. Not learning. comprehension. Without comprehension, the "I" in "AI" is void.
Re: (Score:2)
define those terms, learning and comprehension for me please.
As far as I am concerned, comprehension (of a situation) could be defined as: extracting facts about the current situation, combining those with previously known facts, selecting which are relevant, and then applying rules when working out how to respond or making new inferences.
Please tell me how your comprehension differs from this process that the machines are already doing. Just because they are not self-aware does not mean they don't comprehend
Re: (Score:2)
Well, you can easily have learning without comprehension. You can learn to predict that a pattern will occur without knowing anything else about it. Comprehension is required if you are supposed to act in response to it in a "useful" way.
And that's not understanding. Understanding requires that you construct a model relating multiple streams of input, and comprehend what those streams mean for the model's reaction.
And THAT's not sufficient. (Google's robot has exhibited that kind of understanding.) Und
Re: (Score:2)
Understanding requires that you construct a model relating multiple streams of input
No, that is not "required". Intelligence is a characteristic of behavior. If a system behaves intelligently, then it is intelligent. Internal mechanism is irrelevant.
Re: (Score:2)
Pretty sure some of us do. [fyngyrz.com]
Comprehension, in the context of intelligence: capable of abstract thought about any subject or input presented. When we get there, we'll have AI. Not before. Everything to date, while often marvelously useful, is just marketing speak on the order of "3d television", which is to say, not.
Re: (Score:2)
At the current state of the art, computing machinery has exactly zero capability for "thought" or for creating/understanding abstractions. All it can do is use abstractions it is programmed to use. One reason I see why so many people get this wrong is that they do not have a lot of effective intelligence themselves and are mostly driven by an emotional system that may well be mostly mechanical.
Re: (Score:2)
To the best of our knowledge, every system in the human brain - including the emotional system - is completely mechanical.
Re: (Score:2)
There is no comprehension in machines. That very likely requires consciousness, i.e. some major fundamental breakthrough that is not even on the very far horizon, as nobody has any idea what it is and as it does not seem to be part of what can be implemented with physical machines (there just is no mechanism for it).
And please do not tell me that consciousness is an "emergent property" of complex machinery. That is pseudo-mystical bullshit. There are no emergent properties in Physics and the whole cannot be
Re: (Score:1)
That very likely requires consciousness, i.e. some major fundamental breakthrough that is not even on the very far horizon as nobody has any idea what it is and as it does not seem to be part of what can be implemented with physical machines (there just is no mechanism for it).
Whatever consciousness is, it can be implemented by a physical machine: our brains do it. If you feel that this is impossible, then you need to adjust your notion of what consciousness is.
Re: (Score:2)
You are making invalid assumptions. You are assuming everything observable at the outer interface is created inside. That is a rather simplistic and unsophisticated model. There is no reason to assume it is true.
Re: (Score:2)
AIUI raw computation power ceased to be a significant concern a long time ago. The problem now is more a case of producing learning techniques that work well; I'm not sure this device adds anything to that problem?
Computation power and especially computation efficiency still leave huge room for improvement when modeling biological neural networks. The human brain runs on about 20 watts and currently would take clusters of supercomputers to simulate, probably using billions of watts.
We certainly do need improved learning techniques for neural networks, but overall hardware efficiency is still an important research goal as well.
Re: (Score:3)
Re: (Score:2)
Really? I haven't seen anything yet that I would classify as non-hype.
Then you haven't been keeping up with advances in image and voice recognition. This does not just involve theoretical research, there are actual products used by consumers benefiting from these technologies.
Re: (Score:2)
Ahem. Flying cars.
Re: (Score:2)
Hahahhaha, no. AI is even farther removed from reaching its stated goals today than it ever was. It looks quite possible today that AI is infeasible in this universe.
If you think this Go-machine is intelligent, then you probably also think that a slide-rule or book of mathematical tables is intelligent: both can do computations far better than humans can. Yet clearly both are inanimate objects and hence clearly not intelligent at all. The Go-machine is just the same idea scaled up and with some motors ad
Re: (Score:2)
Then how do you explain the human brain? Nature magic?
Re: (Score:2)
Why would I need to explain it? Are you of the school of thought that a theory is only valid if it explains everything, has no gray areas, and leaves no unexplained things? If so, you are exceptionally stupid. This is, incidentally, an idea that is also frequently encountered in fundamentalist religion.
Re: (Score:2)
Why would I need to explain it?
Because you're asserting that artificial intelligence is infeasible in "this universe", yet natural intelligences - us - exist. That's equivalent to claiming human beings are supernatural - that is, there's a component to human intelligence which can't be replicated by any engineering, no matter how advanced - which is an extraordinary claim.
I haven't feared AI before, but ... (Score:1)
Knowing that they can possibly speed it up to this extent? I might have to bother thinking about what may come, if true. I never had a concern about AI, since making a strong one has always been in the realm of fantasy, where we are just scratching at the toes of the giant.
I have always thought that AI techniques lacked elegance, but I never put forth the effort to sort it out and look for a better method; there were other things I wanted to do. This may be part of the answer to that problem that annoyed me at that ti
Re: (Score:2)
We have been looking at it for only a few decades. Building intelligence in nature took millions of years. One could argue we are well on our way up the evolutionary ladder of AI.
In response to your question:
Would this technique be used to bring it closer to making a human-like mind, or simply a better mind?
I would say a more interesting question is:
How will we know when we have created something with a human-like mind?
Our first one will likely be much simpler, but it gets complicated, because all our inputs come filtered through biological inputs. The first human like AI will not be received through filte
Re: (Score:2)
What they're talking about is a way of improving the speed of learning. Nothing else (that I know of). Also nothing less. This is quite important WRT the practicality of using current deep learning approaches, but it doesn't make the end state any more powerful (except that it can continue learning faster).
This doesn't address motivation, which I see as the current major stumbling block in front of General AI. This doesn't make the AI more human...except that humans learn continually.
What it does do is
Re: (Score:1)
What they're talking about is a way of improving the speed of learning. Nothing else (that I know of). Also nothing less. This is quite important WRT the practicality of using current deep learning approaches, but it doesn't make the end state any more powerful (except that it can continue learning faster).
They're also talking about reducing the size by several orders of magnitude.
Re: (Score:2)
Neural networks are non-linear classifiers. They are not anything intelligent, and speeding up the parametrization process (misleadingly called "learning") does not do a lot except possibly make them cheaper. They do not gain capabilities by this.
Re: (Score:2)
That's what cloud computing is for. Do all the processing remotely. Most speech recognition on phones is done this way currently.
Re: (Score:2)
And thanks to this, everything you say is now monitored and analyzed by three-letter agencies around the world.
Re: (Score:2)
Re:Won't shrink this to fit into your phone (Score:4, Informative)
There are some major limitations with the design they have gone with for deep learning. You may think that thousands of chips will soon shrink to fit in a phone (~15 years if Moore's law holds). But thermodynamics won't let this happen: you can't flip an arbitrary number of bits for zero energy. There is a minimum amount of energy necessary for a register to perform a simple operation, and the amount needed for a deep learning system of this scale is more than you would want to comfortably power in your pocket.
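As a back-of-envelope check of the thermodynamic point, the Landauer limit gives the minimum energy to erase one bit as E = k_B * T * ln(2). The temperature chosen below is an illustrative assumption (room temperature):

```python
# Landauer limit: minimum energy per irreversible bit operation.
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # assumed room temperature, K

e_bit = k_B * T * math.log(2)    # ~2.9e-21 J per bit erased
ops_per_joule = 1.0 / e_bit      # ideal bit erasures per joule
print(f"{e_bit:.2e} J/bit, {ops_per_joule:.2e} erasures/J")
```

That is roughly 3.5e20 ideal bit operations per watt, so real hardware (many orders of magnitude above this limit) is what actually constrains a pocket-sized deep learning system, not the limit itself.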
This is for training. Once the training is done, the model can be used in a cell phone.
Case in point, voice recognition.
Re:Won't shrink this to fit into your phone (Score:5, Insightful)
The human brain runs on about twenty watts. The computational power required to match it is barely imaginable.
Clearly we are a long, long way from the limits imposed by the laws of physics.
Re: (Score:2)
Actually we could adopt the same approach if we gave up on switching speed and went instead for low power and parallel execution. But we'd need to redesign all our algorithms.
That said, even with low-power switching, electronics are less power-efficient than the brain, just not by as much.
Re: (Score:1)
Actually we could adopt the same approach if we gave up on switching speed and went instead for low power and parallel execution. But we'd need to redesign all our algorithms.
The problem is that many algorithms can't be redesigned to work efficiently on a (massively) parallel computer.
Re:Won't shrink this to fit into your phone (Score:4, Interesting)
The human brain runs on about twenty watts. The computational power required to match it is barely imaginable.
AlphaGo required megawatt-hours of energy to learn to play Go well enough to beat Lee Se-dol. But how much did Lee Se-dol's brain consume in the ~20 years that he spent learning, not to mention the energy expended by the brains of his opponents (remember that much of AlphaGo's education was from playing against itself)? Supposing Lee Se-dol spent 2000 hours per year on Go for 20 years, that's about 800 kWh, plus some more for the energy expended by his opponents. AlphaGo's education required more energy input than Lee Se-dol's, but probably only an order of magnitude more, maybe two. Not three or four. Switching from general-purpose to special-purpose hardware will probably get us to the same order of magnitude.
That said, my guess is that you're right that we're still a long way from physics-imposed limitations. My guess is that current technology would already be capable of building something vastly more efficient than a human brain... if only we knew what to build. We're learning.
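The 800 kWh figure above follows directly from the stated assumptions (20 W brain, 2000 hours per year for 20 years):

```python
# Reproducing the comment's estimate of Lee Se-dol's "training energy".
brain_watts = 20            # assumed brain power draw, W
hours = 2000 * 20           # 2000 h/year for 20 years = 40,000 hours
kwh = brain_watts * hours / 1000
print(kwh)  # 800.0 kWh
```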
Re: (Score:2)
The human brain runs on about twenty watts. The computational power required to match it is barely imaginable.
Clearly we are a long, long way from the limits imposed by the laws of physics.
No, not really. The 'barely imaginable' computational power required now comes from clumsy, inaccurate, and barely-informed emulation.
When we finally understand it, a faithful execution of the brain's design will have fairly modest hardware requirements. It will be like a graphics card.
Re: (Score:1)
This is for training. Once the training is done, the model can be used in a cell phone.
Case in point, voice recognition.
Correct me if I'm wrong, but isn't voice recognition performed by the cloud? The phone simply records and transmits your voice to the cloud for processing.
Re: (Score:2)
Moore's law effectively ended about 10 years ago.
Re: (Score:3)
As I understand it AlphaGo operated via deep learning. That's not only an AI, that's a rather advanced AI. Deep Blue was an expert machine. Different technology.
Re: (Score:2)
You fell for the hype. "Deep learning" is not learning at all and carries zero qualities of insight or understanding. It is parameter adjustment to a sample of data. It is something that looks very good on grant applications or marketing material though.
Re: (Score:2)
Please explain what you mean by "insight" and "understanding" and why these are necessary qualities for something to qualify as learning? Because "parameter adjustment to a sample of data" seems to cover an awful lot of cases.
It looks good because it gets results. Customers are g
Re: (Score:2)
Very much this. It cannot do basically anything that the human opponent it beat _can_ do. It can do this really simple game with really simple rules extremely well, but that is it. Here is a comparison I like to use: take a pocket calculator or slide rule or even a book of mathematical tables. All are several orders of magnitude better than humans at some very specific mathematical operations. Yet nobody sane would claim any of these objects is intelligent.
Re: (Score:2)
Well said. I have noted that too. Explains a lot. Physicalists are a very strange kind of religious fundamentalist, as they always assume their view is obviously true and the only possible one. These people think they have rejected religion, only to replace it with something that has all the characteristics of fundamentalist religion. (Well, no personal God, but that is not strictly required for religion.) They are bad at Science as well (like other fundamentalists), because in Science the question i
Re: (Score:1)
Physicalists are a very strange kind of fundamentalist religious, as they always assume their view is obviously true and the only possible one. These people are thinking they have rejected religion, only to replace it with something that has all the characteristics of fundamentalist religion
It's just simple observation. We observe that brains are physical objects and that they have intelligence and consciousness. To deny that this is happening, even though we observe it, that's fundamentalist religious.
Re: (Score:2)
You don't. You assume everything is physical and from that you can conclude *surprise* that everything is physical. It is an elementary beginner's mistake. Physics, incidentally, makes no such claim.
Re: (Score:2)
Very little. The whole concept of a Turing machine isn't even a century old, and modern computers are still far from the raw computing power of the human brain. The Internet broke through in my lifetime, and ubiquitous computing - smart everything - is still just a promise on the horizon. The proverbial sunrise of the Informa
Re: (Score:2)
Nicely clueless. Proves my point.
Do you want skynet? (Score:4, Funny)
'cause this is how we get skynet
Re: (Score:2)
It was the Terminator that learned the value of human life.
RPU speedup vs CPU? Or GPU? (Score:4, Interesting)
The article abstract suggests that a Resistive Processing Unit will run 30,000 times faster than a cluster of CPUs using less power. But nobody runs neural nets on CPUs; they use GPUs.
So then, how does an RPU compare to a GPU?
Re: (Score:2)
Not nearly as well, obviously. That is why they did not do that far more appropriate comparison. Just like the D-Wave scammers, who compare their machine to a simulation of their machine on a single CPU and get ridiculous speed-ups, when in reality they are slower once that far cheaper single CPU runs an algorithm suited to it. It is lying with numbers, and it has gotten very bad indeed because a lot of people fall for it.
Re: (Score:1)
How long until AIs *save* jobs & *help* humans (Score:2)
So far, AI has only been pursued in ways that destroy without room for replacement. When (on net) do they start becoming a force that helps humanity without requiring retraining?