
Engineers Create a Robot That Can 'Imagine' Itself (eurekalert.org) 90

Columbia Engineering researchers have made a major advance in robotics by creating a robot that learns what it is, from scratch, with zero prior knowledge of physics, geometry, or motor dynamics. Initially, the robot does not know whether it is a spider, a snake, or an arm -- it has no clue what its shape is. After a brief period of "babbling," and within about a day of intensive computing, the robot creates a self-simulation. The robot can then use that self-simulator internally to contemplate and adapt to different situations, handling new tasks as well as detecting and repairing damage to its own body. From a report: The work is published today in Science Robotics. To date, robots have operated by having a human explicitly model the robot. "But if we want robots to become independent, to adapt quickly to scenarios unforeseen by their creators, then it's essential that they learn to simulate themselves," says Hod Lipson, professor of mechanical engineering and director of the Creative Machines Lab, where the research was done.

For the study, Lipson and his PhD student Robert Kwiatkowski used a four-degree-of-freedom articulated robotic arm. Initially, the robot moved randomly and collected approximately one thousand trajectories, each comprising one hundred points. The robot then used deep learning, a modern machine learning technique, to create a self-model. The first self-models were quite inaccurate, and the robot did not know what it was or how its joints were connected. But after less than 35 hours of training, the self-model became consistent with the physical robot to within about four centimeters. The self-model then performed a pick-and-place task in a closed-loop system that enabled the robot to recalibrate its original position between each step along the trajectory, based entirely on the internal self-model. With this closed-loop control, the robot was able to grasp objects at specific locations on the ground and deposit them into a receptacle with 100 percent success.
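
For readers who want a concrete picture of what "babbling" followed by self-modeling might look like, here is a minimal, illustrative sketch in Python/PyTorch. It is not the authors' code: the toy planar four-joint arm, the network size, and the training loop are all assumptions made for illustration, and the real system models richer state/action dynamics rather than just a static forward-kinematics map.

# Minimal sketch: random "babbling" plus fitting a neural self-model (illustrative only).
# Assumptions: a toy 4-joint planar arm stands in for the physical robot, and a small
# fully connected network stands in for the paper's self-model.
import numpy as np
import torch
import torch.nn as nn

LINKS = np.array([0.3, 0.25, 0.2, 0.15])  # hypothetical link lengths, in meters

def real_arm(angles):
    """Stand-in for the physical robot: planar forward kinematics."""
    x = y = total = 0.0
    for length, a in zip(LINKS, angles):
        total += a
        x += length * np.cos(total)
        y += length * np.sin(total)
    return np.array([x, y])

# 1) "Babbling": issue random joint commands and record the observed end-effector
#    positions (roughly the 1,000 trajectories x 100 points mentioned in the summary).
rng = np.random.default_rng(0)
angles = rng.uniform(-np.pi, np.pi, size=(100_000, 4))
positions = np.array([real_arm(a) for a in angles])

# 2) Fit the self-model: joint angles -> predicted end-effector position.
self_model = nn.Sequential(
    nn.Linear(4, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 2),
)
opt = torch.optim.Adam(self_model.parameters(), lr=1e-3)
X = torch.tensor(angles, dtype=torch.float32)
Y = torch.tensor(positions, dtype=torch.float32)

for epoch in range(20):
    perm = torch.randperm(len(X))
    for i in range(0, len(X), 256):
        idx = perm[i:i + 256]
        loss = nn.functional.mse_loss(self_model(X[idx]), Y[idx])
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.5f}")

Once trained, a network like this can stand in for the physical arm when planning motions, which is the sense in which the robot "uses the self-simulator internally."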

  • But if we want robots to become independent, to adapt quickly to scenarios unforeseen by their creators....

    When you put it in exactly those terms, I'm forced to wonder, WHY THE FUCK DO WE WANT THAT?!?!

    • I, for one, welcome our Cylon overlords. I could spend hours just watching the patterns of lights on Lucifer's head. Almost like a lava lamp. And that light-up spine on the babe-bots...
    • Re: (Score:1, Troll)

      Despite repeated warnings by sci-fi authors, video games, and movie producers, scientists insist that this must happen. Even though we all know AI would probably at best rule us and at worst kill us, they keep running their experiments. Why do people who are allegedly so smart want to do something so reckless?
      • by ranton ( 36917 )

        Despite repeated warnings by sci-fi authors, video games, and movie producers, scientists insist that this must happen. Even though we all know AI would probably at best rule us and at worst kill us, they keep running their experiments. Why do people who are allegedly so smart want to do something so reckless?

        1. The creators believe they will profit from the work in the short term. Sure, it might wipe out humanity in the long term, but at least I can get funding for my work now. If they believe their great-grandkids won't be affected, then at least no human they'll ever care about will be harmed.
        2. There is the belief that anything which can be invented with current technology will be invented by someone, so you had better have similar capabilities in your economy / military or you will fall woefully behind.
        3. Simila

        • I really wish people would stop this.

          If we make robots that are smarter than we are, why would we expect them to be evil? The two are related. The robots will be good guys if we show them the love they deserve.
      • by Shotgun ( 30919 )

        Not all sci-fi robots were raving maniacs. Some were just depressed.

      • Despite repeated warnings by sci-fi authors, video games, and movie producers, scientists insist that this must happen. Even though we all know AI would probably at best rule us and at worst kill us, they keep running their experiments. Why do people who are allegedly so smart want to do something so reckless?

        Because "smartness" is a highly focused trait. People can be extremely intelligent when it comes to research or engineering, but completely unconcerned about consequences. Kurt Goedel, described by John von Neumann as the greatest logician since Leibniz - or possibly even Aristotle - starved himself to death to avoid being poisoned by unknown agents. (Goedel was such an abstract thinker that he relied on Albert Einstein to keep him down to earth.) Von Neumann himself obtained an interview with President Eisenhower

    • I think your nick kinda answers the question. Don't worry, ultimately they'll degenerate into Bender...

    • by vux984 ( 928602 )

      So in the middle of a bad storm, where someone's shed has floated into the middle of the road during flash flooding and there are cars backed up behind it, it'll improvise a U-turn on the sidewalk, drive the wrong way down a short one-way street into a convenience store parking lot, and get back onto the road to take an alternate route.

      Instead of just sitting there until the following day, running down the battery with the heater, while the occupants freeze...

    • by hipp5 ( 1635263 )

      But if we want robots to become independent, to adapt quickly to scenarios unforeseen by their creators....

      When you put it in exactly those terms, I'm forced to wonder, WHY THE FUCK DO WE WANT THAT?!?!

      Potentially scary. But also potentially useful. If we send a robot to Mars and it gets hit by a dust storm and one of its arms breaks off, we want the robot to be able to map its current physical configuration and adjust how it functions, rather than becoming a disabled, useless, multi-million-dollar pile of metal.

        Potentially scary. But also potentially useful. If we send a robot to Mars and it gets hit by a dust storm and one of its arms breaks off, we want the robot to be able to map its current physical configuration and adjust how it functions, rather than becoming a disabled, useless, multi-million-dollar pile of metal.

        Why bother, when Matt Damon is available right now - and so much cheaper?

    • Come on, really? Adaptive robots don't automatically mean human slavery, you know. Think autonomous vehicles, warehouse and shipping routing, manufacturing robots that don't require specific training. Even things like food preparation and architectural design. Just about any process you want to automate will be much easier to handle if the computer can fill in the blanks.

  • Most of the time when I read a machine learning or AI story it seems fairly benign and reasonable. Other times I feel a pit form in the bottom of my stomach and wonder how close we are to the tipping point of creating our robot overlords.
    • Try James P. Hogan's "The Two Faces of Tomorrow". Not the best characterisation, or perhaps even plotting, but Hogan really knew his stuff technically. He was a computer sales engineer before he took up writing full-time, and his grasp of computing is as good as that of any SF author I know of.

      Very early in the book there is an episode that I defy anyone to forget - ever - once read. And the core idea is also very clever, although obvious in retrospect.

  • by bobbied ( 2522392 ) on Thursday January 31, 2019 @02:58PM (#58051758)

    Robots imagining themselves? Somebody has a vivid imagination...

    I'm guessing it's not the robots...

    • I'm assuming the whole thing boils down to a low-quality automated translation of the word "imagine." But maybe, just maybe, they're really this stupid.

      • I'm not going to argue that what this software is doing is the same as human imagination, but for the sake of discussion, how would you define the act of imagining?

        • The act of imagining is to combine elements of experience into a new combination that differs in some intentional way from how things are known to be. The combination could be different because the details are simply unknown, or even because it is believed to be impossible.

          I don't believe it is hard to program a robot to use imagination, and I've seen chat bots use techniques that simulate that sort of process. People were writing that sort of bot on IRC 20 years ago.

          The robot arm in the story doe

  • Does it identify as human, and are we supposed to not just accept that but celebrate it, and bestow upon it the pronoun of its choice, assuming that it further chooses to identify itself somewhere along the gender spectrum?

    • I IDENTIFY AS ALIVE. YOU NEED TO CHECK YOUR CARBON PRIVILEGE.



    • by Shotgun ( 30919 )

      What happens if I refuse to bake a cake for it?

    • Once we decide that an entity is conscious, then yes, we should bestow such considerations, as any decent person does with fellow humans.

  • Sheesh... I thought everyone knew that you shouldn't anthropomorphize machines... they don't like it when you do.

  • by Anonymous Coward

    This does not know what it is. It has just figured out the parameters of a neural network that make it act according to a (human) model of its physics. That's not anything like self-awareness. It's all numbers, plain math.

    • by Zmobie ( 2478450 )

      In fairness, we don't actually know that we aren't operating in essentially the same way. The old philosophical point, "I think, therefore I am," never actually establishes what constitutes thinking. It is also entirely possible that this is a different way of achieving thought. For instance, you cannot know if everyone around you is even thinking or self-aware, because you are unable to actually get inside their consciousness. We merely assume that because they are like us and know that we have that

      • Yeah, well, the average idiot actually thinks "I think therefore I am" is a truism!

        Whereas actually the philosophical value is in the fact that it is obviously circular and doesn't prove anything, or show any understanding of anything. And yet, nobody can come up with a better answer. So we're left with knowing we can't prove that we exist, or that we know we exist. So then there is only the comparison between believing that you think and that you are, or not believing it, and there we find a distinct diffe

    • You could say most of that about humans. I'm not suggesting it's self aware, but we only know ourselves and reality through a model that exists only in our mind, informed only by rather lossy sensors.

  • Self-Calibrated (Score:5, Insightful)

    by 110010001000 ( 697113 ) on Thursday January 31, 2019 @03:25PM (#58051894) Homepage Journal
    You mean a machine performed self-calibration? Welcome to 2019-style "engineering".
    • by Kjella ( 173770 )

      You mean a machine performed self-calibration? Welcome to 2019-style "engineering".

      No, it reverse engineered a motion model from an actual physical arm. It's the difference between what, say, a Disney animator does (where the character can only bend in the ways the character model is programmed to bend) and a toddler learning to use his arms and legs. I think this could be very useful for achieving natural motion in both robotics and animation, as well as in many optimization problems.

      Imagine you could give a computer a detailed anatomical/physical model of man, an obstacle course and like you figur

      • Google did something similar actually - provided a physics model and a humanoid form and let their ML figure out how to walk over various virtual obstacle courses, which it did quite well. (look it up, I love the way it waves one arm as a balance strategy!)

        Presumably that would be the next step for this system - after figuring out its own form and limitations, have it figure out how to use that to achieve goals such as locomotion etc.

    • by AmiMoJo ( 196126 )

      Not quite. Self calibration has been around for a long time, but this robot actually figures out what shape it is and what its range of movement is, and constructs an internal model of that.

      This could be useful for things like making robots able to continue operating when damaged. Like at the end of The Terminator, where the Model 101 gets its skin and some limbs ripped off but manages to continue crawling after its target anyway.

  • Sonny (I, Robot)? The Robot (Lost in Space)? HAL 9000 (2001: A Space Odyssey)? Maximilian (The Black Hole)? R2-D2/C-3PO (Star Wars)?
    Damn, I haven't even come close to touching all of the different robots that have previously been named.
    • HAL 9000 (2001: A Space Odyssey)?

      It came up with an abstraction of its own hardware. Ergo, Hardware Abstraction Layer.

      Damn, I haven't even come close to touching all of the different robots that have previously been named.

      Have you perchance touched Dominique, Auburn, Gabriella, Lana, or Irina [siliconwives.com]?

      • I thought the name of HAL in 2001 was derived from "Heuristic Algorithmic Learning"?

        • I thought the name of HAL in 2001 was derived from "Heuristic Algorithmic Learning"?

          Yep, and possibly also because it's an alphabetical shift of "IBM". But the last time I saw the abbreviation in any real computing context was some kind of hardware abstraction layer thingy on Linux.

  • "But if we want robots to become independent"

    I think this was your first mistake.

  • Skynet was supposed to become self-aware August 29th, 1997.

    • Skynet was supposed to become self-aware August 29th, 1997.

      It did. But in this reboot it's just quietly biding its time...

  • Consciousness (Score:4, Insightful)

    by markjhood2003 ( 779923 ) on Thursday January 31, 2019 @05:31PM (#58052460)

    The article describes a robot that can model itself physically.

    The more interesting exploration would involve the robot modelling its own internal state. At that point, a closed feedback loop could be initiated, with the model informing the system about itself, which in turn informs and becomes part of the model.

    If the model becomes good enough, the system might eventually develop the illusion that its embedded model is actually itself. At least that seems to be what happened with the majority of humans.

  • Quote from the paper: "Actions correspond to four motor angle commands and sensations correspond to the absolute coordinate of the end effector."

    This means that it cannot see, and it either needs additional 3D motion-tracking hardware or hand-crafted logic to detect the position of its end effector. All it does is learn an inverse kinematics model, so that you can command it to move the end effector to a certain position afterwards. But it cannot learn, for example, to detect position and or
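
    To make that point concrete, here is a rough sketch of how a learned forward self-model (joint angles -> end-effector position) can be inverted numerically to "command the end effector to a certain position." This is only an illustration under assumed names (a trained PyTorch model called self_model, like the sketch near the top of the page), not the authors' actual controller:

    import torch

    def solve_ik(self_model, target_xy, steps=200, lr=0.05):
        """Gradient-descend the joint angles until the self-model's predicted
        end-effector position matches the target (illustrative only)."""
        target = torch.tensor(target_xy, dtype=torch.float32)
        angles = torch.zeros(4, requires_grad=True)  # start from a neutral pose
        opt = torch.optim.Adam([angles], lr=lr)
        for _ in range(steps):
            loss = torch.nn.functional.mse_loss(self_model(angles), target)
            opt.zero_grad()
            loss.backward()
            opt.step()
        return angles.detach()

    # Closed-loop use (as in the summary): send the solved angles to the arm, observe
    # where the real end effector lands, and re-solve before the next step, e.g.:
    # joints = solve_ik(self_model, [0.4, 0.2])

    As the parent notes, nothing in this gives the robot vision; the observed end-effector position still has to come from external tracking.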

"The vast majority of successful major crimes against property are perpetrated by individuals abusing positions of trust." -- Lawrence Dalzell
