
Self-Introspecting Robot Learns to Walk

StCredZero writes "There's something about these things that seems eerily alive! The Starfish Robot reminds me of the Grid Bugs from Tron. But it's very real, and apparently capable of self introspection. In fact, instead of being explicitly coded, it teaches itself how to walk, and it can even learn how to compensate for damage."


  • Damage (Score:1, Interesting)

    by Ajehals ( 947354 ) <a.halsall@pirateparty.org.uk> on Saturday September 01, 2007 @10:25AM (#20433481) Homepage Journal
    It learns to walk and it can compensate for damage?

    Well, I assume there will be no issues with cash flow; the military applications are obvious.
  • Re:Damage (Score:3, Interesting)

    by Ajehals ( 947354 ) <a.halsall@pirateparty.org.uk> on Saturday September 01, 2007 @10:37AM (#20433553) Homepage Journal
    I have to agree, even if I am not sure how I would define life. It would be interesting if the software element of this could be used in conjunction with biological hardware, or hardware with biological traits (i.e. replication and energy production). It seems to me that having a central control mechanism (a brain) for all large-scale operations, plus small independent modules for specific tasks, would be a close approximation of biological life (minus full reproduction, although I suppose that may become possible at some point, albeit with more complexity).
  • by RyanFenton ( 230700 ) on Saturday September 01, 2007 @10:47AM (#20433599)
    This is a very well-done video. I really like how it shows the virtual model to illustrate how the system 'sees' itself. Self-reflection of a sort is usually present in most complex programmed systems in one form or another - usually in terms of disjointed status variables and variables for their hard-coded implications. This is neat because the implications can be a little more dynamic.

    I hope this becomes a more general library that can be used to help self-reflection of this sort become a more separate part of physical designs. Even if the implications of the physical model aren't dynamic, a standard way of quickly seeing how your model 'sees' itself would help debugging and development in many future projects.

    The only problem, if it becomes more prevalent, would be the same one that quantum mechanics has: people think 'observer effects' have to involve consciousness, in the same way they'd think that a program's self-reflection means it 'thinks' the way they do. Neither is true - they're all mechanical terms wrapped in common language. Anything that can record an effect on the world (a falling rock's scratches in another stone would work) is a quantum observer; consciousness has nothing to do with the 'collapsing wave function'. The same here - a bit of self-reflection on the part of a program doesn't mean its eerie self-corrections are capable of the complexities of our mind. If anything, such mechanical results would imply that our own minds act more simply in some ways than we may think, and that consciousness doesn't necessarily have to be as inscrutable and special as we might want.

    Ryan Fenton
  • by InvisblePinkUnicorn ( 1126837 ) on Saturday September 01, 2007 @11:18AM (#20433787)
    "If anything, such mechanical results would imply that our own minds act simpler in some ways than we may think, and that consciousness doesn't necessarily have to be as inscrutable and special as we might want."

    Philosophers like Daniel Dennett agree with this notion. Consciousness may simply be a more complex, continually running predictive model like the one used by this robot.
  • Re:Poor thing... (Score:3, Interesting)

    by KDR_11k ( 778916 ) on Saturday September 01, 2007 @11:39AM (#20433919)
    Supposedly a mine-clearing bot (lots of legs designed to be blown off by mines; the bot just walks around and triggers them) that was literally on its last leg was pulled out of testing (it would have crawled onto a final mine and been destroyed in the process) because the supervising officer felt sorry for it. People are capable of feeling empathy for the dumbest animals - why wouldn't they for a robot?
  • by Punto ( 100573 ) <puntobNO@SPAMgmail.com> on Saturday September 01, 2007 @04:16PM (#20435471) Homepage
    Once it learns there's only so much damage it can take, it'll know pain. From there it's straight to world domination.
  • Re:Creepy (Score:3, Interesting)

    by Warbothong ( 905464 ) on Sunday September 02, 2007 @10:07AM (#20440809) Homepage
    At the start it looks creepy when it's moving around, looking a little like a spider. Then it gets damaged and looks genuinely scary, in a "WHY WON'T IT DIE?!" sort of way. At the end it just looks like its makers enjoy pulling the wings off flies (although I did laugh when it flipped itself upside down).

    It'd be interesting to see whether this modelling system could be made to learn from its experiments and failures as well as creating initial simulations to work from. What I mean is, its internal simulation lets it determine effective ways to move around, as brains can do, but brains are also able to factor in unknown environmental effects. For example, someone might slip on ice and fall over, not having realised in their internal modelling how slippery the ground would be. A person would get up and start walking differently (making sure each foot was firmly planted before putting weight on it, etc.), since they would factor this into their internal simulation.

    It would also be interesting to see whether a type of "pain" could be added. This isn't to be sadistic or anything, as the machine wouldn't be made to "feel pain"; I just mean that another factor could be added to the simulation, like "this leg is barely attached any more, better not put too much stress on it" or "this area is important and delicate [e.g. batteries, sensors] and thus shouldn't have too much weight put on it". I say this because in the video the robot has one leg damaged and then tries to walk by slamming its full weight down on the "stump", which does not seem a particularly amazing survival ability (for instance, natural selection seems to favour limping, which puts less stress on damaged body parts and more on healthy ones that should be able to cope).
  • by Raenex ( 947668 ) on Sunday September 02, 2007 @11:58AM (#20442063)

    Understanding is a mental, not physical process.
    You are assuming that they are independent, when in fact there is lots of evidence that mental processes depend on physical processes. There are drugs to alter your consciousness, physical damage to your brain can cause mental damage, and there are experiments where people's thoughts have been manipulated by direct electrical stimulation (these people were undergoing brain surgery).

    That of course would be a form of creationism, much reviled here on /.
    Because it doesn't explain anything or offer any evidence.
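The self-modeling loop the comments circle around - keep an internal simulation of your own body, score candidate gaits against it, and penalize stress on damaged parts rather than slamming weight onto a stump - can be sketched in a few lines. This is a toy illustration only, not the Starfish Robot's actual algorithm (which evolves physics-based self-models); all names, weights, and the effort/health representation here are invented for the example:

```python
import random

LEG_COUNT = 4

def predict(gait, leg_health):
    """Score a gait against the robot's internal self-model.

    gait: per-leg effort levels in [0, 1].
    leg_health: per-leg health in [0, 1] (0 = limb lost).
    A damaged leg contributes less thrust, and stressing it incurs a
    'pain' penalty, as one commenter suggests adding to the simulation.
    """
    thrust = sum(e * h for e, h in zip(gait, leg_health))
    pain = sum(e * (1.0 - h) for e, h in zip(gait, leg_health))
    return thrust - 2.0 * pain  # weight stress on damaged parts heavily

def best_gait(leg_health, candidates):
    """Pick the candidate gait the self-model scores highest."""
    return max(candidates, key=lambda g: predict(g, leg_health))

# Try random candidate gaits in simulation, not on the hardware.
rng = random.Random(0)
candidates = [[rng.random() for _ in range(LEG_COUNT)]
              for _ in range(200)]

healthy = [1.0] * LEG_COUNT
damaged = [1.0, 1.0, 1.0, 0.1]   # leg 3 nearly lost

g_healthy = best_gait(healthy, candidates)
g_damaged = best_gait(damaged, candidates)

# After the self-model is updated with the damage, the chosen gait
# tends to shift effort away from the injured leg - a limp, in effect.
print("effort on leg 3 when healthy: %.2f" % g_healthy[3])
print("effort on leg 3 when damaged: %.2f" % g_damaged[3])
```

The key design point is that the robot never experiments on its real body: it asks the model "what would happen if?", which is also why updating the model when reality diverges (the ice example above) would extend naturally.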
