Robotics Science

Self-Introspecting Robot Learns to Walk 121

StCredZero writes "There's something about these things that seems eerily alive! The Starfish Robot reminds me of the Grid Bugs from Tron. But it's very real, and apparently capable of self-introspection. In fact, instead of being explicitly coded, it teaches itself how to walk, and it can even learn how to compensate for damage."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Damage (Score:1, Interesting)

    by Ajehals ( 947354 )
    It learns to walk and it can compensate for damage?

    Well, I assume there will be no issues with cash flow; the military applications are obvious.
    • Though this is old news (I have seen the same article on Slashdot before and had an interesting conversation with one of the authors), it is much better than much of the other news popping up on /.

      I personally would rather see it as a demonstration of the relative simplicity of the artificial creation of life.
      • Re: (Score:3, Interesting)

        by Ajehals ( 947354 )
        I have to agree, even if I am not sure how I would define life. It would be interesting if the software element of this could be used in conjunction with biological hardware, or hardware with biological traits (i.e. replication and energy production). It seems to me that having a central control mechanism (brain) for all large scale operations plus small independent modules for specific tasks would be a close approximation to biological life (less complete reproduction, although I suppose that may be possi
        • Re: (Score:1, Offtopic)

          by buswolley ( 591500 )
          Introspection? What, are you guys in competition with my research lab?

          http://www.latimes.com/features/health/la-he-capsule27aug27,1,4944161.story?coll=la-headlines-health

    • Ah, but can it shoot?
    • Re:Damage (Score:4, Funny)

      by hasbeard ( 982620 ) on Saturday September 01, 2007 @09:58AM (#20433663)
      Well, it seems to me that combat really isn't a good time for too much introspection. I mean with all the bullets flying and all.
  • by doombringerltx ( 1109389 ) on Saturday September 01, 2007 @09:46AM (#20433589)
    but come on! "Update 24-Nov-2006:"
    • by FlyByPC ( 841016 )
      Like another poster said, it's still more relevant (geeky, cool, and IMHO important) than most of the "new" stuff on here.

      I believe that we will eventually create true self-replicating machines. It may even be one of the next important steps in the progression of computer technology (Boolean Logic, Relay, Tube, Transistor, IC, Microprocessor, Self-modifying code...)

      Where it gets controversial is that I think that we could see it in the next 50 years or so. (That is, a self-replicating organism that can m
      • Having a self-replicating machine for the sake of a self-replicating machine is pretty pointless. Though maybe that makes life pointless too; angst aside, can't 3D printers already replicate themselves? At least if you build an assembly line in. A self-replicating robot let loose into the wild would just cause havoc with our ecosystems and ravage all the decent raw materials we have :O
        • by tftp ( 111690 )
          Self-replicating robots could build you a Moon or Mars base before you even land. Or would you prefer to haul rocks yourself, in a spacesuit, for 12 hours per day, just a year away from Earth in case you break a leg?

          The self-replicating function here is essential because it covers repairs, and those will be needed. Besides, it might be difficult to send more than a handful of robots ahead of time, and definitely not thousands.

          • This is a potentially very good area for aspiring young engineers and scientists to focus on in their education.
            Take the Mars base as just one of a big class of problems. Obviously, the robots can't just be capable of self replication. They have to self replicate up to useful numbers, and then do other tasks. Once that very general model is applied, refining it is mostly a matter of efficiency, and that efficiency determines whether the project will ultimately be funded or not.
            • by reezle ( 239894 )
              Thanks, that was a great read. Nice to see things like that in writing every once in a while (instead of loose jumbled thoughts in my head).
              If I had some mod points, they would be yours.

              One thing though... Seems like ANY percentage of rogue 'self-replicating machines' can very quickly become a problem. But I suppose we'll just have to develop other robots to terminate them, eh?
              • by MacEnvy ( 549188 )
                And when the "terminator" robots go rogue and replicate?

                I guess we send in the Governator.
  • by RyanFenton ( 230700 ) on Saturday September 01, 2007 @09:47AM (#20433599)
    This is a very well-done video. I really like how it shows the virtual model to illustrate how the system 'sees' itself. Self-reflection of a sort is usually present in most complex programmed systems in one form or another - usually in terms of disjointed status variables and variables for their hard-coded implications. This is neat because the implications can be a little more dynamic.

    I hope this becomes a more general library that can be used to help self-reflection of this sort become a more separate part of physical designs. Even if the implications of the physical model aren't dynamic, a standard way of quickly seeing how your model 'sees' itself would help debugging and development in many future projects.

    The only problem if it becomes more prevalent would be the same one that quantum mechanics has - people think that 'observer effects' have to involve consciousness, in the same way they'd think that a program's self-reflection would mean that it 'thinks' the same way they do. Neither is true - they're all mechanical terms wrapped in common language. Anything that can record an effect on the world (a falling rock's scratches in another stone would work) is a quantum observer - consciousness has nothing to do with the 'collapsing wave function'. The same applies here - a bit of self-reflection on the part of a program doesn't mean its eerie self-corrections are capable of the complexities of our mind. If anything, such mechanical results would imply that our own minds act simpler in some ways than we may think, and that consciousness doesn't necessarily have to be as inscrutable and special as we might want.

    Ryan Fenton
     
    • Re: (Score:3, Interesting)

      "If anything, such mechanical results would imply that our own minds act simpler in some ways than we may think, and that consciousness doesn't necessarily have to be as inscrutable and special as we might want."

      Philosophers like Daniel Dennett agree with this notion. Consciousness may simply be a more complex continually-running predictive model like that used by this robot.
      • And most psychologists and neuroscientists. Most of our decisions are made for us before we become conscious of them. They've shown that responses to stimuli are often formulated before a subject even becomes aware of them.

        Dennett objects to the notion of "qualia" - the experiential bits that make up seeing, hearing, touching, smelling, and tasting. He says that these qualia don't really exist as special things, since adjusting qualia and adjusting the circuits that supposedly generate qualia conceptually achieve the
        • by E++99 ( 880734 )

          I see no other way to reconcile consciousness with a theory of evolution, which seems to demand a materialist, single substance view of mind.

          The theory of evolution doesn't demand this, it presumes it. If you think from evidence, the one thing that we truly know is real is consciousness. To attribute consciousness to the workings of a machine, without any sort of conception or model of how movements of a machine could generate consciousness, is pure superstition and anti-intellectual. And it prefers simp

          • The theory of evolution does not presume it. Nowhere in the tenets of evolution does it say "Materialist theories of mind must be true." You could conceive of natural selection never producing consciousness - but the fact is, evolution did produce consciousness, and so we have to come up with an understanding of consciousness which explains how that could be. Evolution does presume that life arises only from physical matter, but then it wouldn't be a scientific theory if it didn't. It would be intelligent d
            • by Original Replica ( 908688 ) on Saturday September 01, 2007 @03:27PM (#20435523) Journal
              "You could conceive of natural selection never producing consciousness "

              Oddly enough, you could not conceive of anything without consciousness. Understanding is a mental, not physical process. You could however conceive of consciousness without the physical world. Indeed every culture has been doing so for all of recorded history in the form of spirit worlds, afterlife, etc.

              Occam's razor can be much abused depending on how you frame your observation. "I think therefore I am." is much more straight forward than "I am incredibly complex and elaborate, therefore I think." Let's set Occam's Razor aside for this discussion, it doesn't seem to be the right tool for the job here.

              If you allow yourself to view the conscious world as more fundamental than the physical world, then the observed consistency/connectedness of all physical phenomena would require some sort of governing over-consciousness that is responsible for the physical world. That of course would be a form of creationism, much reviled here on /.
              • Oddly enough, you could not conceive of anything without consciousness.

                His point would probably be better made by saying: As far as we know, natural selection could have produced life that had purely mechanical "thought processes", like a world populated by computers.

                "I think therefore I am." is much more straight forward than "I am incredibly complex and elaborate, therefore I think."

                The facts that I exist and that I think are very obvious to me, but that doesn't make them a more basic truth, or show t

                • If you want it to be seen as a rationally defensible position, rather than just a possibility for a philosophy class or a religious discussion, you're going to need more than just "here's a cool way of looking at things".

                  Please. Even philosophy class needs more than "here's a cool way of looking at things".

              • Re: (Score:3, Interesting)

                by Raenex ( 947668 )

                Understanding is a mental, not physical process.

                You are assuming that they are independent, when in fact there is lots of evidence that mental processes depend on physical processes. There are drugs to alter your consciousness, physical damage to your brain can cause mental damage, and there are experiments where people's thoughts have been manipulated by direct electrical stimulation (these people were undergoing brain surgery).

                That of course would be a form of creationism, much reviled here on /.

                Because it doesn't explain anything or offer any evidence.

            • by E++99 ( 880734 )

              The theory of evolution does not presume it. Nowhere in the tenets of evolution does it say "Materialist theories of mind must be true." You could conceive of natural selection never producing consciousness - but the fact is, evolution did produce consciousness, and so we have to come up with an understanding of consciousness which explains how that could be. Evolution does presume that life arises only from physical matter, but then it wouldn't be a scientific theory if it didn't. It would be intelligent d

              • Saying "consciousness just emerges because of the complexity of the machine" no more satisfies Occam's razor than saying "consciousness just emerges because of God." Neither explains a mechanism, so neither is a theory, so Occam has no interest in either.

                Occam's razor applies whether we have proposed mechanisms or not, the whole point of the thing is to help you make a good guess when none of the theories you have seem very complete or easily testable. If we knew every detail, we wouldn't need to use a

              • Just gonna address one,

                But this is backwards. What we know is primarily our thoughts, feelings, and inner perceptions (from which comes reason, logic, math, philosophy and religion), secondarily our sensual perceptions, and tertiarily the conclusions we form from our sensual perceptions (from which comes science). In that order. To raise tertiary knowledge above primary knowledge has no basis. To use it as a reason to argue that primary knowledge doesn't exist, is downright nutty.

                You are making the Cartesi

              • And in fact, the materialist evolutionary theory should preclude the evolution of consciousness, as the only goal of such evolution could have been to produce beneficial responses to stimuli. Given one machine that performs the responses without consciousness and another, much more complex machine that performs the exact same responses but with consciousness, materialist evolution should require that the former machine is built, not the latter, as there would be no evolutionary pressure towards creating the
    • The video has parts (the simulated ones) which look a heck of a lot like the cool and open source breve AI simulation environment ( http://www.spiderland.org/ [spiderland.org] ) which does pretty much the exact same thing. Check out the brevewalker or brevecreatures.

      The video is still extra-impressive though, as the robot uses sensors to detect its own shape and limitations, and then (it looks like) loads it into breve, where the thinking seems to happen. Pretty cool indeed.
    • This seems to be similar to a teenager spending lots of time in front of a mirror (except without the angst, acne and worries about hairstyle).

      CAPTCHA: prophet. Dammit, I wanted PROFIT!
  • by Anonymous Coward
    What's the difference between introspection and self-introspection?
    • What's the difference between introspection and self-introspection?

      The latter one you can do yourself.
  • Creepy (Score:5, Funny)

    by Spy der Mann ( 805235 ) <spydermann.slash ... com minus distro> on Saturday September 01, 2007 @09:51AM (#20433619) Homepage Journal
    That thing almost looks alive. After seeing it, it reminded me of the nurses in Brookhaven Hospital trying to move. Eew.
    • by Mythrix ( 779875 )
      I kept expecting that thing to fly in my face. Too much Half-life 2 lately. The noises that the robot made didn't help either.
    • Re: (Score:3, Interesting)

      by Warbothong ( 905464 )
      At the start it looks creepy when it's moving around looking a little like a spider. Then it gets damaged and looks genuinely scary, in terms of "WHY WON'T IT DIE?!". At the end it just looks like its makers enjoy pulling the wings off flies (although I did laugh when it flipped itself upside down). It'd be interesting to see whether this modelling system could be made to learn from its experiments and failures as well as creating initial simulations to work from. What I mean is, its internal simulation le
  • The videos of it trying to move with the damaged leg make it look like a crippled animal. I can't help but feel sorry for it. :(

    Argh, I said that and had a sudden mental image of hordes of animal rights activists protesting the mistreatment of robots.
    • In a gut-wrenching moment, the robot was heard to be saying: "Why, why, WHY was I programmed to feel pain?!?!"
    • by ettlz ( 639203 )

      The videos of it trying to move with the damaged leg make it look like a crippled animal. I can't help but feel sorry for it.
      You won't be saying that when it leaps up and grabs you in the face shouting, "Introspect this, motherfucker!"
    • Re: (Score:3, Interesting)

      by KDR_11k ( 778916 )
      Supposedly a mine-clearing bot (lots of legs designed to be blown off by mines; the bot just walks around and triggers them) that was literally on its last leg was pulled out of testing (it would have crawled onto a final mine and been destroyed in the process) because the supervising officer felt sorry for it. People are capable of feeling empathy for the dumbest animals, why wouldn't they for a robot?
      • Well I can't exactly argue why people wouldn't feel sorry for a robot, since I in fact feel sorry for said robot. ;P
      • Re: (Score:1, Informative)

        by Anonymous Coward
        There was a Washington Post article about that here [washingtonpost.com].
    • by Nullav ( 1053766 )
      You've got it all wrong. This thing was designed for the 'laugh at the cripple' crowd. I mean, how often do you see a guy with a missing limb flip over on his back?
  • I almost feel pity seeing the broken robot trying to walk.
  • and observe a 'Robots Must be Given Human Rights' movement grow in numbers.

    This robot moves in a fluid way, almost like a living creature would, many people will immediately anthropomorphize it.

    --

    What I find interesting is the applications in today's world. How about equipping a car with the ability to sense its physical parts and build a total model of itself in real time? This could be used for immediate diagnosis of problems with the car itself and with its interaction with the surrounding environment. Many p
    • by KDR_11k ( 778916 )
      For a car it wouldn't be very useful, as a car does not keep track of its own status in much detail. A robot has to react to damage by itself, but a car is usually under the control of a human driver who will react to the damage himself (usually by pulling over and getting it repaired, provided the damage is serious enough), so the car doesn't need to know its own state in more detail than the LEDs on the dashboard can express. After all, it's just a representation of what the sensors measure, and no sensors = no
      • "so the car doesn't need to know its own state in more detail than the LEDs on the dashboard can express"

        The car makers using dynamic stability control [wikipedia.org] would beg to differ, and IMHO the system qualifies as a kind of "introspection" for cars.
    • I find your use of the word "anthropomorphize" in this context interesting.

      It seems to me that in the context of artificial intelligence that word represents a set of values in the guise of a representation of some unspoken, well-defined set of characteristics that separate humans from whatever it is one is comparing humans to. It conveniently disposes of the really hard problem of establishing what it is that sets humans apart in a very neat linguistic package.

      In other words, use of the word "anthropomorph
  • Hope it comes with a remote-control kill switch.
    Hope it doesn't figure out how to circumvent the remote-control kill switch.
    Hope it doesn't build a bigger version of itself...
    • Seconded!

      Watching that video, I got a truly creepy feeling.
      Rather uncomfortable actually. Then I thought about it for a little while, and I think the reason was this [movieweb.com].

      Let's get that thing a kill switch *first and foremost*, and *then* think about imparting sentience.

  • by LynnwoodRooster ( 966895 ) on Saturday September 01, 2007 @10:36AM (#20433897) Journal
    1. Did they give it a navel?

    2. Can it contemplate it?

    • 1. Not yet. It's still attached to a PC by its umbilical cord.

      2. Maybe, in 18 years, give or take a few depending on its upbringing.
  • by icepick72 ( 834363 ) on Saturday September 01, 2007 @10:37AM (#20433903)
    it's obviously going to latch onto somebody's face and then they'll say it learned fast.
  • by Animats ( 122034 ) on Saturday September 01, 2007 @10:41AM (#20433939) Homepage

    First, get past the blogodreck to the actual work. [cornell.edu] (Slashdot editors missed a blog troll again.) Also, this work is several years old. The papers are from 2004 to 2006.

    The original article says that the robot has "tilt and angle sensors in all its joints", but that's wrong. It only has one central tilt sensor. That's significant, because if it did have tilt sensors at each joint, system identification would be easier. The algorithm is doing better than one might expect.

    This thing is doing what controls people call "automatic system identification". You have some set of sensor inputs and some set of control outputs, and the control system has to figure out how they relate. It does this by adjusting the outputs and watching what happens. There are various statistical techniques for doing this. Calling this "introspection" isn't really correct.

    After system identification, the model is inverted, or solved for the inputs in terms of the outputs. The inverted model can then be used as a controller. Given desired outputs, the inputs needed to achieve them can be computed.

    The novel result here is that a reasonably decent system identification for a nonlinear system is being performed with a small number of physical tries. That's an improvement over previous methods, which tended to "learn" very slowly. I'd looked at approaches like this for legged locomotion in the past, but the available system identification algorithms weren't good enough. This looks promising.
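    To make the "identify, then invert" idea concrete, here is a deliberately simple toy in Python (my own illustration, not the Cornell code; their system is nonlinear and works from a single tilt sensor, so take this only as the flavor of probing a plant, fitting a model, and inverting it for control):

```python
# Toy system identification sketch (illustrative only): probe an unknown
# linear "plant" with random commands, fit a model by least squares, then
# invert the fitted model to compute commands for a desired sensor reading.
import numpy as np

rng = np.random.default_rng(0)

# Unknown plant: sensor readings = A_true @ actuator_commands + noise.
A_true = rng.normal(size=(3, 4))
def plant(u):
    return A_true @ u + 0.01 * rng.normal(size=3)

# 1) System identification: try random commands, record responses,
#    and fit y ~= A_hat @ u by least squares.
U = rng.normal(size=(50, 4))             # 50 exploratory commands
Y = np.array([plant(u) for u in U])      # observed sensor responses
X, *_ = np.linalg.lstsq(U, Y, rcond=None)
A_hat = X.T                              # so that y ~= A_hat @ u

# 2) Invert the model: given a desired reading, solve for the command
#    that should produce it (pseudo-inverse handles the non-square case).
def controller(y_desired):
    return np.linalg.pinv(A_hat) @ y_desired

y_goal = np.array([1.0, -0.5, 0.2])
u_cmd = controller(y_goal)
print("achieved:", plant(u_cmd), "wanted:", y_goal)
```

    The hard part the actual work addresses is getting a usable nonlinear model out of only a handful of physical trials; the sketch above ignores that entirely.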

    Good robotics work, crap Slashdot article.

    • Not a blog troll! I have no association with Technovelgy. I was just cleaning up my less-used bookmarks, happened on this post on good ole Technovelgy, and thought it was neat. (So did the editor, and lots of readers!)

      I posted the blog entries about the Starfish Robot because it was a good and useful summary. If you don't think so, then that's fine. Just don't go falsely ascribing motivations and intentions!

      I've been a Slashdot reader and commenter for many years now. (Lost count. Over 5?) This
    • I think this device shows that pattern matching is all there is to life and intelligence. All that has to be done for AI is an engine which tries to find a solution using statistical methods.

      By the way, this thing proves evolution once more: by trial and error, living entities have been developed...
      • by mizhi ( 186984 )
        Not really. It only shows one way that life and intelligence are possible. Even then, you get into that thorny little question of how you are defining intelligence.

        Don't misconstrue me, I'm closer in belief to your thoughts on this. But it doesn't prove the big questions you're alluding to in your post. It simply brings us one step closer to showing that all the amazing properties of life are not dependent on some mystical, impossible to fully comprehend entity, but can be mechanistically created.

        Even
    • Sounds like they just implemented the "Mendel" AI robot from that "Galapagos" video game in real life. Been there, seen that, yawn. :)
  • Let me repeat this again and again... self-introspection is the only kind of introspection possible, by definition, just like "repeat" means saying it again...
  • by Sir Holo ( 531007 ) * on Saturday September 01, 2007 @10:57AM (#20434049)
    Link to the research group at Cornell: http://ccsl.mae.cornell.edu/research/selfmodels/ [cornell.edu] Lots more pics, movies, and details.
  • Self-introspection (Score:2, Informative)

    by sakusha ( 441986 )
    Self-introspection is a tautology. It is just "introspection."
  • It must be very depressed.
    • by erveek ( 92896 )
      Probably noticed that it had a terrible pain in all the diodes down its left side.
  • came over me as I thought back to images of infants playing with their toes.

    If the robot had come with some elastic (but NOT flesh-colored) rubber skin, instead of looking like a Meccano set, it would have been almost cute.

    They should try different orientations for the 'shoulder/hip' attachment, give it a longer brain/body and a spotted outer covering (with sensors in the 'skin'), a need to home to an electrical outlet to recharge, and make a toy out of it.

    After an initial charge "through" the box, you open
    • Something like Pleo [wikipedia.org]? I don't think it's quite what you want, but still could be interesting. There's an article from Wired [wired.com] about Pleo and its creator here [wired.com].
  • While I'm excited to see new development in these fields, it is far from new. The 60s introduced concepts used in this robot, and the 80s introduced actual simulations of self-emergent systems.
  • by SnoopJeDi ( 859765 ) <snoopjedi@@@gmail...com> on Saturday September 01, 2007 @11:42AM (#20434339)
    ...be called a herd?
  • by account_deleted ( 4530225 ) on Saturday September 01, 2007 @11:46AM (#20434355)
    Comment removed based on user account deletion
  • Comment removed based on user account deletion
  • I think that this is definitely the right direction to be going in, and a great start. However, the motion seems less than optimized -- it seems like they need a better genetic algorithm, if that's what they're using, so that they can find a locally optimized solution to the movement. I think for all robot motion problems, we could get a lot further faster by finding automatic solutions in virtual space first, and then applying them to physical space.
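    For anyone wondering what "find solutions in virtual space first" could look like, here is a bare-bones genetic-algorithm sketch (entirely my own illustration; the fitness function is a made-up stand-in for a physics simulation and has nothing to do with what the researchers actually run):

```python
# Minimal genetic-algorithm sketch (illustrative only): evolve a vector of
# gait parameters against a simulated fitness function before ever touching
# the physical robot.
import random

def simulate_gait(params):
    """Placeholder for a physics simulation returning distance walked.
    Here: a made-up objective whose optimum is all parameters at 0.5."""
    return -sum((p - 0.5) ** 2 for p in params)

def evolve(n_params=8, pop_size=30, generations=100, mutation=0.1):
    pop = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half, refill the rest with mutated crossovers.
        pop.sort(key=simulate_gait, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            # Uniform crossover followed by clamped Gaussian mutation.
            child = [random.choice(pair) for pair in zip(a, b)]
            child = [min(1.0, max(0.0, g + random.gauss(0, mutation))) for g in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=simulate_gait)

best = evolve()
print("best simulated gait parameters:", [round(g, 2) for g in best])
```

    In practice the expensive part is the simulate_gait call, which is exactly why you evolve gaits in simulation first and only try the best candidates on hardware.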
  • Seems to be a really coooool gadget, I really want to have one :-)
  • I, for one, welcome our new Starfish Overlords....
  • How does the robot know it has arms in the first place? Did it have to figure this out, or is it likely programmed to move its arms randomly at first and go from there?

    I'm very curious...if anyone has any input, please post.
    • by mikael ( 484 )
      One way of doing this would be to have self-registering joint/segments that send out a broadcast along the communication link. This would let the controller know what joints were present. But the controller would still have to figure out how they are connected. That's where the self-awareness bit comes in. With a single tilt sensor, the controller keeps moving joints at random until the orientation of the sensor in the predicted arrangement matches that in the real world. Alternatively, each segment could d
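      Roughly, the "wiggle and compare" loop could look like this (a toy sketch of my own with a fake one-number tilt model; the real controller fits continuous physical self-models rather than picking from a hand-written candidate list):

```python
# Toy "wiggle and compare" sketch (illustrative only): command random poses,
# compare the measured tilt against what each candidate joint arrangement
# predicts, and keep the candidate that matches reality best.
import random

def predicted_tilt(arrangement, pose):
    """Crude forward model: a joint's contribution to body tilt falls off
    the further out along the limb chain it sits in this arrangement."""
    return sum(pose[joint] / (idx + 1) for idx, joint in enumerate(arrangement))

def identify_arrangement(joints, measure_tilt, candidates, trials=20):
    best, best_err = None, float("inf")
    for arrangement in candidates:
        err = 0.0
        for _ in range(trials):
            pose = {j: random.uniform(-1.0, 1.0) for j in joints}  # random wiggle
            err += abs(predicted_tilt(arrangement, pose) - measure_tilt(pose))
        if err < best_err:
            best, best_err = arrangement, err
    return best

# Stand-in "robot": in reality the tilt would come from the physical sensor.
true_order = ["hip", "knee", "ankle"]
real_tilt = lambda pose: predicted_tilt(true_order, pose)
candidates = [["hip", "knee", "ankle"], ["knee", "hip", "ankle"],
              ["ankle", "knee", "hip"]]
print(identify_arrangement(true_order, real_tilt, candidates))  # -> true order
```

      (The real system searches over continuous body models rather than a short hand-made list; this only shows the compare-prediction-to-sensor idea.)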
  • Nature is lots of small things working together - each of the pieces has a well-defined capability. Yes it seems obvious, and yet the amount of research into this particular line of reasoning doesn't seem to have expanded much over the years. Lately "emergence", complexity arising from simplicity, is starting to become the topic du jour.

    The philosophy of subsumption architecture has always appealed to me because it seems that it emulates the idea that higher layers "collect" behavior from simpler layers

  • by Punto ( 100573 ) <puntob&gmail,com> on Saturday September 01, 2007 @03:16PM (#20435471) Homepage
    Once it learns there's only so much damage it can take, it'll know pain. From there it's straight to world domination.
  • I read in the newspaper today that researchers had built dinosaurs in the computer that taught themselves to walk. They also modeled animals like dogs and humans and let them learn to walk too. Eventually they came up with ways to walk that were very similar to the way those creatures really walk. From this result the researchers concluded that the way the dinosaurs had walked in real life must be approximately the same as the way their computer models walked.
  • Asimov would be proud
  • My first thought when it finally started walking was "it walks like someone taped two mentally defective chickens to each other", but then it got the hang of it.
  • First, an aside: "self-introspection" is redundant. What's the alternative? Introspection of another? That makes no sense. Also, introspection is a component of consciousness. There is no way to determine another person is conscious (as opposed to a completely stimulus-response programmed "zombie"), much less a robot. Without consciousness, the appropriate term is "feedback mechanism".

    That said, the device in TFA is not novel, nor is it as simple as previous designs. Far simpler microbots have been built wi
  • As one of the authors of this work, I'm happy to answer anyone's questions. ...and no, they won't be taking over the world anytime soon.
  • Seriously, how soon till we have robots so real-looking that they can look like me, do what I do, commute into work, sit in my cube, and do my work while I stay home and cash the checks?

    Why stop there? Why not have many of them and have them interview for various jobs, all with the same SSN etc., and have all their checks go into MY account so I can build more of them?

    You know it is possible, demand your leaders deliver the obvious NO-brainer solution robotics will provide to humanity now

    What are YOU wai

  • Asimo wants to try it in sushi.
  • Self-Introspecting Robot Wonders If Walking Is Really All There Is To Life, And Vaguely Ponders That There Should Be Something More, But, Unable To Find It, Commits Suicide By Removing Its Own Battery.
  • Introspection is much too big a word for this. This is rather a model that through trials learns its physical configuration. Introspection [wikipedia.org] is looking into one's self, which implies a "self" - in other words: a self-consciousness. While this robot is cool and looks very much like a living thing, it is definitely not self-aware.
  • ALIVE!!!!!
