New 'Deep Learning' Technique Lets Robots Learn Through Trial-and-Error
jan_jes writes: UC Berkeley researchers turned to a branch of artificial intelligence known as deep learning to develop algorithms that enable robots to learn motor tasks through trial and error. It's a process that more closely approximates the way humans learn, marking a major milestone in the field of artificial intelligence. Their demonstration robot completes tasks such as "putting a clothes hanger on a rack, assembling a toy plane, screwing a cap on a water bottle, and more" without pre-programmed details about its surroundings. The challenge of putting robots into real-life settings (e.g. homes or offices) is that those environments are constantly changing. The robot must be able to perceive and adapt to its surroundings, so this type of learning is an important step.
"Deep Learning"...?? (Score:2, Troll)
This seems more like basic-level stuff... learning from your mistakes. That strikes me as the sort of thing that would be "hardwired" in everything from nematodes to primates. Why is this news?
Re: (Score:3)
Human beings, who are almost unique in having the ability to learn from the experience of others, are also remarkable for their apparent disinclination to do so.
-- Douglas Adams
Re: (Score:2)
One thing I wonder about is: will machine learning systems begin to transmit their experiences over the Internet so that other machine learning systems can learn from them?
They can't now, but how long will it be until they can?
Another question is: can they take snapshots of what one system has learned and transmit that to another?
You remember how they learned new skills in the Matrix?
Re: (Score:2)
Another question is: can they take snapshots of what one system has learned and transmit that to another?
If they run on general-purpose software, you can just clone the entire program. Matrix-style learning is a lot more difficult, because it has to be integrated into what the person already knows.
Re: (Score:2)
"Matrix-style learning is a lot more difficult, because it has to be integrated into what the person already knows."
This is exactly why I worded it that way. ;-)
I actually found a talk by the people working on this project; here he talks about where and how to get the data:
https://www.youtube.com/watch?... [youtube.com]
Re: (Score:2)
"You can't experience the experience of others." (paraphrased) -- J.D. Lang
OTOH when I read the OP, I immediately thought of 'Deep Thought' and a couple of philosophers who were too highly trained to be useful.
Re: (Score:1)
If that doesn't matter, then I don't know what does.
Re: (Score:2)
I don't want to see us talk about machines having "agency" when most people struggle with it for themselves.
And you think the NSA is run by humans? Have you ever seen pictures of the top brass at the NSA? I've seen toasters with more anthropomorphic features.
Re: (Score:2)
But Anon, humans are automata.
"I don't want to see us talk about machines having "agency" when most people struggle with it for themselves."
Birds struggle with flying above a certain ceiling, or beyond a certain speed. Human-made flying machines breach those limits easily. The same will likely hold true of human-made thinking machines. These will not be slaves to evolution, though they may utilize evolution as a force for optimiza
Re:"Deep Learning"...?? (Score:4, Insightful)
An automaton can be neither benevolent nor have free agency.
Why not? Unless you believe that brains are magic, or created by the intervention of a deity, there is no reason to believe that computers have any inherent limitation that living things do not have.
Re: (Score:2)
An automaton can be neither benevolent nor have free agency.
Sure it can, you just have to program it to have free agency.
Re: (Score:2)
I would bet that we get an ASI long before we can raise ourselves up to that level, unless we as a species push to delay the former and focus on the latter. But when have we as a species ever come together to do an
Re: (Score:1)
There has never been a benevolent, godlike human without fault in any culture. That, I would postulate, is impossible.
A related fallacy is that humans have free will without constraints. That is obviously not true: humans in their environment have a finite set of responses to any real situation. They are no different from robots. We all operate within natural law.
For humans to be other than that which they are would mean some kind of transformation and thought and philosophy has totally explored most of that fo
Re: (Score:2)
The thing about an ASI running your economy is that 99.99999% of interactions will be invisible.
Re: (Score:2)
Re: (Score:2)
...Why is this news?
Because they couldn't do it before....
Re:"Deep Learning"...?? (Score:4, Insightful)
That strikes me as the sort of thing that would be "hardwired" in everything from nematodes to primates. Why is this news?
Because it isn't a nematode or a primate. It is a robot. A living thing that can learn and adapt is not news, because that's what living things do. A non-living thing that can learn and adapt is news, because that's what living things do.
Re: (Score:1)
Hmm... sorry, this has been happening for many years already... back to the 80s... This is nothing new.
Re:"Deep Learning"...?? (Score:5, Interesting)
Hmm... sorry, this has been happening for many years already... back to the 80s... This is nothing new.
No, it was not happening in the 1980s. The fundamental algorithm behind deep learning networks was worked out by Geoffrey Hinton [wikipedia.org] in 2006. Before that, training a NN more than 2 layers deep was intractable.
Re: (Score:1)
Deep neural networks are only faster at learning than ordinary neural networks. A regular neural network can eventually compute any function. The only 'advancement' here is speed and scale, and since it doesn't sound like they used any custom chips, they didn't even make an advancement there. The algorithms are well known, and the outcome should have been easily predicted by the developers. If not, they haven't spent enough time in their library.
They did a good job, but this is only interesting if you know n
Re: (Score:2)
Deep neural networks are only faster at learning than normal neural networks. A regular neural network can eventually compute any function.
If "eventually" means exponential time, that doesn't mean much. A computer can eventually solve the traveling salesman problem for a thousand cities, but in the meantime all the black holes in the universe will evaporate through Hawking radiation, and there will be nothing left but cosmic radiation at a few nanokelvin. It will be hard to power a computer with that.
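The blow-up being described is easy to make concrete. A back-of-the-envelope sketch in Python (assuming the symmetric TSP, where a tour and its reverse count as one):

```python
import math

def tsp_tours(n_cities: int) -> int:
    """Number of distinct tours a brute-force solver must check for a
    symmetric TSP: fix the start city, then divide by 2 for direction,
    giving (n-1)!/2."""
    return math.factorial(n_cities - 1) // 2

for n in (5, 10, 20):
    print(n, tsp_tours(n))
# For 1000 cities the count runs to thousands of digits -- far more
# tours than there are atoms in the observable universe.
```

Even the jump from 10 cities (181,440 tours) to 20 cities (over 10^16 tours) shows why "eventually" is doing a lot of work in that sentence.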
Re: (Score:3)
If you believe in evolution, your argument makes no sense. Random mutations have already accomplished what you claim is unlikely to occur before the theorized heat death of the universe. How likely is that?
Is the universe infinite?
PS: Evolution does not rely on random mutations.
Re: (Score:2)
I think if we are talking about heat death then the universe is not infinite. I would be interested in reading someone else's application of probability/frequency to a question like this though. It does not seem straightforward.
Infinite universe (spatial) means you have infinite chances of something happening. Infinite universe (temporal) means you have infinite chances of something happening and can also perform arbitrarily long calculations. There's a decent chance that the universe could be infinite in either sense, also that there could be an infinite number of different universes. (however, if our universe is temporally infinite it is likely to have certain difficulties making use of said infinity, due to entropy or data loss
Re:"Deep Learning"...?? (Score:5, Funny)
This seems more like basic-level stuff... learning from your mistakes. That strikes me as the sort of thing that would be "hardwired" in everything from nematodes to primates. Why is this news?
Because you haven't learned what is news yet. But by trial and error, you'll catch on
Re:"Deep Learning"...?? (Score:5, Insightful)
It is a good question, and there are several answers...
Artificial Intelligence has been seen as a goal since Ada Lovelace was a lass. In the fifties, it was hoped that computers fed with parallel translations could learn the rules of languages, and provide rough translations of (say) technical documents on aeronautics from Russian to English, where sufficiently skilled and positively vetted engineers were rare. There were later attempts in the sixties and seventies to learn to walk, recognise objects, or solve puzzles. There was the constant hope that the next hardware would be a bit more powerful, and you could throw problems at it, and intelligence would somehow boot up. After all, that is how it must have started last time. However, intelligence failed to boot up, or maybe it always lost out to other brute-force techniques which regular computers are good at.
The nematode has a simple, pre-programmed brain. It is good for being a nematode, but it doesn't really learn. Our brains have a lot of structure when they are formed, which means that our language centres, our vision centres, and the parts that are active when we are solving spatial problems or composing music turn up in the same places most of the time; but we don't seem to run an actual program as such. We are born with very little instinct when compared to most other complex animals, but I suspect even they are not really running a program either.
The trick seems to be to provide the robot with enough plastic design to nudge it in the general direction of intelligence: too little design and it never gets its act together, while too much design means it is just doing what you programmed it to do. These are interesting times, when computers are getting the complexity, the connectivity, and the plastic re-programmability to rival animal brains; but the spontaneous, self-evolving problem-solving spark just isn't there yet. I hope we may see it in our lifetimes.
Re: (Score:1)
but we don't seem to run an actual program as such.
Perhaps an interdisciplinary point of view might help here. We do run programs, based on hard-wired (unconscious) programming.
Principally it is self-preservation, from biological respiration to environmental choices. That is the core programming from which all other extensions spring. Replication is group preservation; so are war for survival, hunting and gathering, society, friendship, love, art, the recording of knowledge, etc.
The fact that AI is not concerned with that basic tenet is bemusing to me.
Re: (Score:2)
I think there is more here than just learning to imitate humans, exciting though that is.
Let us take 'Deep Blue' as an example of a machine that does not think. It was able to come up with some dramatic solutions. Its typical successes were mates involving an improbable sequence of sacrifices that gave a mate in 6 or 7, which was about the brute-force look-ahead of the time. It also had weighting models that gave suggestions of which were 'good' moves and which were 'bad' ones. Moving a bishop to a centr
Re: (Score:1)
I understand your post, and it was written with a great example. I do not deny that Deep Learning has a long way to go, yet I (as a Philosophy sub-major) can't leave my original contention alone. So with my technical knowledge I can build a do-able A.I. machine which can have elements of Deep Learning, if I understand it correctly.
In this case let's use our imagination:
The simplest A.I. is a feedback circuit - like a thermostat. It always tries to control temperature within a set range. It finds it difficult to o
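That feedback-circuit idea sketches easily. A toy bang-bang thermostat loop in Python (the temperature model, setpoint, and rates here are invented purely for illustration):

```python
def thermostat_step(temp, setpoint, band=0.5, heater_on=False):
    """One tick of a bang-bang controller: heater on below the band,
    off above it, otherwise keep the current state."""
    if temp < setpoint - band:
        return True
    if temp > setpoint + band:
        return False
    return heater_on

# Toy simulation: heater adds 0.3 deg/tick, the room leaks 0.1 deg/tick.
temp, heater = 15.0, False
for _ in range(100):
    heater = thermostat_step(temp, setpoint=20.0, heater_on=heater)
    temp += (0.3 if heater else 0.0) - 0.1
print(round(temp, 1))  # oscillates within the band around 20.0
```

The controller never "learns" anything: its response to every temperature is fixed in advance, which is exactly the contrast with a system that adjusts its own parameters from experience.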
Re: (Score:2)
"Deep learning" refers to a family of machine learning techniques (such as neural networks, convolutional neural networks, stacked autoencoders, etc.) that have a multi-layer architecture, typically allowing the system to learn highly non-linear functions of many variables. Each layer can be thought of as a simple learned function whose output is fed into the next layer. Such systems can often have thousands or millions of parameters to learn and thus require a LOT of training data and a fair bit of computing
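The "each layer feeds the next" idea can be sketched in a few lines of NumPy. This is a forward pass only, with random stand-in weights and made-up layer sizes; a real deep learning system would also need backpropagation to actually fit the parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    """A 'simple learned function': an affine map plus a nonlinearity.
    The weights here are random stand-ins for learned values."""
    W = rng.standard_normal((n_in, n_out)) * 0.1
    b = np.zeros(n_out)
    return lambda x: np.tanh(x @ W + b)

# Stack layers: the output of each is the input to the next.
layers = [layer(8, 16), layer(16, 16), layer(16, 4)]

x = rng.standard_normal(8)  # an 8-dimensional input
for f in layers:
    x = f(x)
print(x.shape)  # (4,)
```

Counting the parameters (8*16 + 16*16 + 16*4 weights, plus biases) already gives a few hundred; scale the layer widths up and the "millions of parameters" figure follows quickly.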
Re: "Deep Learning"...?? (Score:1)
The Skynet is Falling! (Score:1)
Cue the Skynet / Matrix references in 3...2...1...
Re: (Score:2)
Do you want Skynet? Because this is how you get Skynet.
Genetic Algorithm Re-framed? (Score:1)
There is a Genetic Algorithms textbook from 1989 that covers generational learning and "mutating" the parameters until you get to the end state in the best way possible. My AI knowledge isn't great but I wouldn't be surprised if there are ideas that pre-date the '89 text.
Does anyone know what the software controlling the robot is doing under the hood that's different?
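For comparison, the generational mutate-and-select loop that the '89-era textbooks describe can be sketched briefly. The fitness function, population size, and mutation rate below are arbitrary illustrations, not anything from the Berkeley work:

```python
import random

random.seed(1)

def fitness(params):
    # Toy objective: drive every parameter toward 0.5.
    return -sum((p - 0.5) ** 2 for p in params)

def mutate(params, rate=0.1):
    # "Mutating" the parameters: add small Gaussian noise to each.
    return [p + random.gauss(0, rate) for p in params]

# Generational loop: keep the elite, refill the population with mutants.
pop = [[random.random() for _ in range(4)] for _ in range(20)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:5]
    pop = elite + [mutate(random.choice(elite)) for _ in range(15)]

best = max(pop, key=fitness)
print(fitness(best))  # near 0, the optimum
```

The contrast with deep learning is that a GA only needs a fitness score per candidate, whereas gradient-based training of a deep network exploits the error signal at every parameter; that difference in how feedback is used is a big part of what's new under the hood.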
Re: (Score:3)
In the late 80's/early 90's, they were able to use some of their theory, but it just wasn't super-robust because things just took too darn long. You couldn't have your system analyze a million images
Re: (Score:2)
Here are the papers: http://rll.berkeley.edu/deeple... [berkeley.edu]
Sorta Off Topic (Score:1, Offtopic)
They are becoming the 2015 equivalent of "Frist Post" or "Welcome from the Golden Girls".
Re: (Score:2)
But can't we get people to mod down the now-incessant "Why is this news?" or "Why is this on Slashdot?" posts?
They are becoming the 2015 equivalent of "Frist Post" or "Welcome from the Golden Girls".
Amazing how many people are wasting mod points marking an admitted Offtopic post as Offtopic. Captain Obvious is smiling upon thee.
Roderick and Roderick at Random (Score:2)
How about learning to 'fly'? (Score:2)
Reminds me of http://www.newscientist.com/ar... [newscientist.com] from 2002. The robot's goal was to raise its altitude without knowing its actuators ahead of time.
Deep (Score:2)
Better known as 'learning' to everyone not trying to exaggerate a claim of artificial intelligence.
It's excellent progress, which is why I don't think it should be watered down by being compared to the simple algorithms.
New? (Score:1)
What could possibly go wrong? (Score:1)