
New 'Deep Learning' Technique Lets Robots Learn Through Trial-and-Error

jan_jes writes: UC Berkeley researchers turned to a branch of artificial intelligence known as deep learning for developing algorithms that enable robots to learn motor tasks through trial and error. It's a process that more closely approximates the way humans learn, marking a major milestone in the field of artificial intelligence. Their demonstration robot completes tasks such as "putting a clothes hanger on a rack, assembling a toy plane, screwing a cap on a water bottle, and more" without pre-programmed details about its surroundings. The challenge of putting robots into real-life settings (e.g. homes or offices) is that those environments are constantly changing. The robot must be able to perceive and adapt to its surroundings, so this type of learning is an important step.
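A minimal illustration of the trial-and-error idea, assuming a hypothetical one-parameter motor task (this is toy hill climbing, not the Berkeley group's actual deep-network algorithm):

import random

def reward(angle, target=0.7):
    # Hypothetical task: reward peaks when the motor angle hits the target.
    return -abs(angle - target)

angle = 0.0                                    # initial motor command
for trial in range(200):
    candidate = angle + random.gauss(0, 0.1)   # try a small random variation
    if reward(candidate) > reward(angle):      # keep only what does better
        angle = candidate

print("learned angle: %.3f" % angle)           # settles near the 0.7 target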
  • This seems more like basic-level stuff... learning from your mistakes. That strikes me as the sort of thing that would be "hardwired" in everything from nematodes to primates. Why is this news?

    • Human beings, who are almost unique in having the ability to learn from the experience of others, are also remarkable for their apparent disinclination to do so.

      -- Douglas Adams

      • by Lennie ( 16154 )

        One thing I wonder about: will machine learning systems begin to transmit their experiences over the Internet, so that other machine learning systems can learn from them?

        They can't now, but how long until they can?

        Another: can they take snapshots of what one system learned and transmit that to another?

        You remember how they learned new skills in the Matrix?

        • by itzly ( 3699663 )

          Another: can they take snapshots of what one system learned and transmit that to another?

          If they run on general-purpose software, you can just clone the entire program. Matrix-style learning is a lot more difficult, because it has to be integrated into what the person already knows.
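          Something like this, roughly; a toy Python sketch with made-up parameters, not any real robot's software:

          import copy

          class TinyNet:
              def __init__(self):
                  self.weights = [[0.0] * 4 for _ in range(4)]  # untrained parameters

              def load_from(self, other):
                  # The "clone": overwrite our parameters with a snapshot
                  # of another system's learned parameters.
                  self.weights = copy.deepcopy(other.weights)

          teacher = TinyNet()
          teacher.weights[0][0] = 0.42   # stand-in for whatever training produced
          student = TinyNet()
          student.load_from(teacher)     # the student now "knows" what the teacher knows
          print(student.weights[0][0])   # 0.42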

          • by Lennie ( 16154 )

            "Matrix style learning is a lot more difficult, because it has to be integrated in what the person already knows."

            This is exactly why I worded it that way. ;-)

            I actually found a talk by the people working on this project; here he talks about where/how to get data:

            https://www.youtube.com/watch?... [youtube.com]

      • by Whiteox ( 919863 )

        "You can't experience the experience of others" (paraphrasing R.D. Laing).
        OTOH, when I read the OP, I immediately thought of 'Deep Thought' and a couple of philosophers who were too highly trained to be useful.

    • by tmosley ( 996283 )
      Because we are figuring out the building blocks of agency, a vital stepping stone on the path to building a benevolent (hopefully) God who will actually take care of us.

      If that doesn't matter, then I don't know what does.
    • by zaxus ( 105404 )

      ...Why is this news?

      Because they couldn't do it before....

    • by ShanghaiBill ( 739463 ) on Saturday May 23, 2015 @11:11AM (#49758837)

      That strikes me as the sort of thing that would be "hardwired" in everything from nematodes to primates. Why is this news?

      Because it isn't a nematode or a primate. It is a robot. A living thing that can learn and adapt is not news, because that's what living things do. A non-living thing that can learn and adapt is news, because until now only living things could do that.

      • Hmm... sorry, this has been happening for many years already... back to the 80s... This is nothing new.

        • by ShanghaiBill ( 739463 ) on Saturday May 23, 2015 @12:06PM (#49759091)

          Hmm... sorry, this has been happening for many years already... back to the 80s... This is nothing new.

          No, it was not happening in the 1980s. The fundamental algorithm behind deep learning networks was worked out by Geoffrey Hinton [wikipedia.org] in 2006. Before that, training an NN more than two layers deep was intractable.

          • by Anonymous Coward

            Deep neural networks are only faster at learning than normal neural networks. A regular neural network can eventually compute any function. The only 'advancement' here is speed and scale, and it doesn't sound like they used any custom chips, so they didn't even make any advances there. The algorithms are well known, and the outcome should have been easily predicted by the developers. If not, they haven't spent enough time in their library.

            They did a good job, but this is only interesting if you know n

            • Deep neural networks are only faster at learning than normal neural networks. A regular neural network can eventually compute any function.

              If "eventually" is exponential, that doesn't mean much. A computer can eventually solve the traveling salesman problem for a thousand cities. But in the meantime, all the black holes in the universe will evaporate through hawking radiation, and there will be nothing left but cosmic radiation at a few nano Kelvins. It will be hard to power a computer with that.

    • by Ol Olsoc ( 1175323 ) on Saturday May 23, 2015 @11:27AM (#49758919)

      This seems more like basic-level stuff... learning from your mistakes. That strikes me as the sort of thing that would be "hardwired" in everything from nematodes to primates. Why is this news?

      Because you haven't learned what is news yet. But by trial and error, you'll catch on.

    • by Richard Kirk ( 535523 ) on Saturday May 23, 2015 @11:53AM (#49759029)

      It is a good question, and there are several answers...

      Artificial Intelligence has been seen as a goal since Ada Lovelace was a lass. In the fifties, it was hoped that computers fed with parallel translations could learn the rules of languages, and provide rough translations of (say) technical documents on aeronautics from Russian to English, where sufficiently skilled and positively vetted engineers were rare. There were later attempts in the sixties and seventies to learn to walk, recognise objects, or solve puzzles. There was the constant hope that the next hardware would be a bit more powerful, and you could throw problems at it, and intelligence would somehow boot up. After all, that is how it must have started last time. However, intelligence failed to boot up, or maybe it always lost out to other brute-force techniques which regular computers are good at.

      The nematode has a simple, pre-programmed brain. It is good for being a nematode, but it doesn't really learn. Our brains have a lot of structure when they are formed, which means that our language centres, our vision centres, and the parts that are active when we are solving spatial problems or composing music turn up in the same places most of the time; but we don't seem to run an actual program as such. We are born with very little instinct compared to most other complex animals, but I suspect even they are not really running a program either.

      The trick seems to be to provide the robot with enough plastic design to nudge it in the general direction of intelligence: too little design and it never gets its act together, while too much design means it is just doing what you programmed it to do. These are interesting times, where computers are getting the complexity, connectivity, and plastic re-programmability to rival animal brains; but the spontaneous, self-evolving, problem-solving spark just isn't there yet. I hope we may see it in our lifetimes.

      • by Whiteox ( 919863 )

        but we don't seem to run an actual program as such.

        Perhaps an interdisciplinary point of view might help here. We do run programs, based on hard-wired (unconscious) programming.
        Principally it is self-preservation, from biological respiration to environmental choices. That's the core programming from which all other extensions spring. Replication is group preservation; so are war for survival, hunting and gathering, society, friendship, love, art, the recording of knowledge, etc.
        The fact that AI is not concerned with that basic tenet is bemusing to me.

        • I think there is more here than just learning to imitate humans, exciting though that is.

          Let us take 'Deep Blue' as an example of a machine that does not think. It was able to come up with some dramatic solutions. Its typical successes were mates involving an improbable sequence of sacrifices that gave a mate in 6 or 7, which was about the brute-force look-ahead of the time. It also had weighting models that gave suggestions of which moves were 'good' and which were 'bad'. Moving a bishop to a centr

          • by Whiteox ( 919863 )

            I understand your post, and it was written with a great example. I do not deny that Deep Learning has a long way to go, yet I (as a Philosophy sub-major) can't leave my original contention alone. So, with my technical knowledge, I can build a doable A.I. machine which can have elements of Deep Learning, if I understand it correctly.
            In this case let's use our imagination:
            The simplest A.I. is a feedback circuit, like a thermostat. It always tries to control temperature within a set range. It finds it difficult to o
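            To make the thermostat example concrete, the whole "feedback circuit" is a few lines (a toy bang-bang controller with a hysteresis band; all the numbers are made up):

            def thermostat_step(temp, setpoint=20.0, band=0.5):
                # The hysteresis band keeps it from switching rapidly near the setpoint.
                if temp < setpoint - band:
                    return "heat_on"
                if temp > setpoint + band:
                    return "heat_off"
                return "hold"

            temp = 18.0
            for _ in range(5):
                action = thermostat_step(temp)
                temp += 0.8 if action == "heat_on" else -0.2  # toy room dynamics
                print("%.1f C -> %s" % (temp, action))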

    • "Deep learning" refers a family of machine learning techniques (such as neural-networks, convolutional neural-networks, stacked-autoencoders, etc.) that have a multi-layer architechture, typically allowing the system to learn highly non-linear functions of many variables. Each layer can be thought of as a simple learned function whose output is fed into the next layer. Such systems can often have thousands or millions of parameters to learn and thus require a LOT of training data and a fair bit of computing

    • Taiwanjohn: the only way you could make such an inane statement is if you had never had the task of doing anything remotely challenging... only someone who has never taken on a difficult task could ever make such a ridiculous comment. Try something far simpler... build a go-kart from scratch, not from a kit or a set of directions like one from Popular Mechanics... or maybe design and build a simple drone, again without directions... those would be monstrously simpler achievements... do tha
  • Cue the Skynet / Matrix references in 3...2...1...

  • by Anonymous Coward

    There is a Genetic Algorithms textbook from 1989 that covers generational learning and "mutating" the parameters until you get to the end state in the best way possible. My AI knowledge isn't great but I wouldn't be surprised if there are ideas that pre-date the '89 text.

    Does anyone know what the software controlling the robot is doing under the hood that's different?
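    For reference, the generational mutate-and-select loop from that era is roughly this (a toy sketch with a made-up objective); the deep-learning work instead trains a neural-network policy with gradient-based updates rather than mutating a population:

    import random

    def fitness(params):
        # Made-up objective: get the two parameters close to (3, -2).
        return -((params[0] - 3) ** 2 + (params[1] + 2) ** 2)

    population = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(20)]
    for generation in range(100):
        population.sort(key=fitness, reverse=True)
        parents = population[:5]                  # selection: keep the fittest
        population = [[p + random.gauss(0, 0.3)   # mutation
                       for p in random.choice(parents)]
                      for _ in range(20)]

    print(max(population, key=fitness))           # ends up near [3, -2]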

    • by tmosley ( 996283 )
      I think we are doing much the same; it's just that computers have caught up to the theory and are able to perform now. It is no longer a question of theory, but one of technique, and what is described in the article is a new technique, one that will likely have many, many applications in the near future.

      In the late '80s/early '90s, they were able to use some of their theory, but it just wasn't super-robust, because things took too darn long. You couldn't have your system analyze a million images
    • by Enokcc ( 1500439 )

      Here are the papers: http://rll.berkeley.edu/deeple... [berkeley.edu]

  • Sorta Off Topic (Score:1, Offtopic)

    by Ol Olsoc ( 1175323 )
    But can't we get people to mod down the now incessant "Why is this news?" or "Why is this on Slashdot?" posts?

    They are becoming the 2015 equivalent of "Frist Post" or "Welcome from the Golden Girls".

      But can't we get people to mod down the now incessant "Why is this news?" or "Why is this on Slashdot?" posts?

      They are becoming the 2015 equivalent of "Frist Post" or "Welcome from the Golden Girls".

      Amazing how many people are wasting mod points marking an admittedly Offtopic post as Offtopic. Captain Obvious is smiling upon thee.

  • I really recommend these two books by Sladek: http://en.wikipedia.org/wiki/R... [wikipedia.org] They're a very funny satire about a naive, learning robot in a cruel, illogical world. This is what our little friend here can expect.
  • Reminds me of http://www.newscientist.com/ar... [newscientist.com] from 2002. The robot's goal was to raise its altitude without knowing what its actuators did ahead of time.
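    The core idea fits in a few lines: try actuators blindly, keep a running estimate of what each one does to the altitude reading, and favor the best. A toy sketch, with made-up "hidden" effects standing in for the unknown hardware:

    import random

    hidden_effects = [0.5, -0.3, 1.2, -0.9]        # unknown to the learner

    def try_action(actuator):
        # Observed altitude change: true effect plus sensor noise.
        return hidden_effects[actuator] + random.gauss(0, 0.1)

    estimates = [0.0] * len(hidden_effects)
    for trial in range(400):
        a = random.randrange(len(hidden_effects))  # explore a random actuator
        estimates[a] += 0.1 * (try_action(a) - estimates[a])  # running average

    best = max(range(len(estimates)), key=lambda i: estimates[i])
    print("actuator %d raises altitude most" % best)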

  • by Livius ( 318358 )

    Better known as 'learning' to everyone not trying to exaggerate a claim of artificial intelligence.

    It's excellent progress, which is why I don't think it should be watered down by being compared to simple algorithms.

  • People were doing this when I was an undergrad, almost 20 years ago. I specifically remember a six-legged robot that had to figure out how to walk by itself.
  • Self-correcting, curious entities. If there are teeth on them, we're bound to become enemies.
