Robots Test "Embodied Intelligence" 57

An anonymous reader writes "Here's an interesting article about a robotics experiment designed to test the benefits of coupling visual information to physical movement. This approach, known as embodied cognition, supposes that biological intelligence emerges through interactions between organisms and their environment. Olaf Sporns from Indiana University and Max Lungarella from the University of Tokyo believe strengthening this connection in robots could make them smarter and more intuitive."
This discussion has been archived. No new comments can be posted.

  • by Yahma ( 1004476 ) on Monday October 30, 2006 @04:14PM (#16648469) Journal

    Back when I was in University, I did my master's thesis [erachampion.com] on Embodied Intelligence. I developed a virtual world that adhered to the laws of physics using the ODE physics engine, and within this artificial physical environment I evolved embodied agents. It's quite interesting to watch the videos and see the fluid, almost life-like motions of the evolved behaviors.

    I never got around to actually downloading the evolved neural networks into robots, although all my source code is GPL'ed and posted at the above site. So if somebody wanted to evolve their own creatures and download the evolved intelligence into an actual physical robot, it would be interesting to see the results.
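
    For readers curious what "evolving embodied agents" looks like in outline, here is a minimal, hypothetical sketch of that kind of evolutionary loop (Python, with a stubbed-out fitness function standing in for the ODE simulation; this is not the parent poster's actual code, and all names and sizes are invented):

      import random

      POP_SIZE, GENERATIONS, GENOME_LEN = 50, 100, 64   # illustrative sizes only

      def simulate(genome):
          # Placeholder fitness function. A real version would decode the genome
          # into neural-network weights, step the agent's body in an ODE world,
          # and return something like distance travelled.
          return sum(g * random.random() for g in genome)

      def mutate(genome, rate=0.05):
          return [g + random.gauss(0, 0.1) if random.random() < rate else g
                  for g in genome]

      population = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
                    for _ in range(POP_SIZE)]

      for generation in range(GENERATIONS):
          ranked = sorted(population, key=simulate, reverse=True)
          elite = ranked[:POP_SIZE // 5]                      # keep the top 20%
          population = elite + [mutate(random.choice(elite))
                                for _ in range(POP_SIZE - len(elite))]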

    Yahma
    ProxyStorm [proxystorm.com] - An Apache based anonymous proxy for people concerned about their privacy.
    • by QuantumG ( 50515 )
      Hey, just a question, are you aware of anyone who has continued this research beyond the "hey, look, it can walk!" stage? Like, has anyone actually gotten any results that suggest intelligent reasoning is going on? I can imagine that if you gave each unit energy and enabled one unit to eat another, you'd at least get fighting or hunting behaviours, but I've never actually seen someone do this... is it just that grad students don't have that much processing power at their disposal?
      • by smchris ( 464899 )
        Well, that's the thing, isn't it? If nothing else, these experiments should help the researchers achieve better clarity about what foundational capabilities are innately desirable and what behavioral shaping can then do with them in combination. It should get complicated very quickly.

        I think the idea that things just "emerge" is a bit of a holy grail but it sounds like a fascinating test bed that will complement research on other intelligences.
      • Holy cow, I read this website of yours years and years ago, and I've been totally unable to find it again in recent months. Thanks for posting it here!
    • by rts008 ( 812749 )
      Maybe offtopic, but I would really like to know:
      Embodied Intelligence - is this even close to "proprioception" in humans?
      (i.e., I "know" where I am in physical space; I can also close my eyes, extend my arm out to my side, and "know" where my hand is relative to my body, and in that same physical space)

      I know my question only addresses a part of the equation- if any!
      • When you consider 'self-awareness' demonstrated by such behavior as being able to recognize itself in a mirror, the answer is yes. A cognitive entity requires some amount of proprioception to recognize itself. It has to be able to move an arm, see the arm in the mirror move, and derive cause and effect leading to understanding that the virtual image maps to itself. For a robot to gain the same ability, it must have some form of sensory mechanisms. Another way of saying it is that some deep knowledge is heavi
    • by SnowZero ( 92219 )
      I never got around to actually downloading the evolved neural networks into robots, although all my source code is GPL'ed and posted at the above site.

      Transfer doesn't tend to work that well, except as a starting point for further learning carried out on the physical robot. This is because simulation is never really that accurate, due both to numerical limitations and to the vast number of parameters that won't have the correct values in the idealized simulation models. This is the same reason that playin
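
      To illustrate that point, here is a rough, hypothetical sketch of using simulation-trained parameters only as a seed and then refining them with a few expensive trials on the real hardware (the scoring function and all names are invented stand-ins, not anyone's actual pipeline):

        import random

        def evaluate_on_robot(params):
            # Stand-in for a slow, noisy trial on the physical robot; in reality
            # only a handful of these trials are affordable.
            return -sum((p - 0.3) ** 2 for p in params) + random.gauss(0, 0.01)

        def refine_on_robot(sim_params, trials=20, step=0.05):
            # Simple hill-climbing around the simulation-trained starting point.
            best = list(sim_params)
            best_score = evaluate_on_robot(best)
            for _ in range(trials):
                candidate = [p + random.gauss(0, step) for p in best]
                score = evaluate_on_robot(candidate)
                if score > best_score:
                    best, best_score = candidate, score
            return best

        sim_trained = [0.0] * 8          # pretend these came out of the simulator
        tuned = refine_on_robot(sim_trained)
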
  • Obligatory (Score:3, Funny)

    by From A Far Away Land ( 930780 ) on Monday October 30, 2006 @04:28PM (#16648749) Homepage Journal
    I for one welcome our smarter and more intuitive robot overlords. How soon until they have the Presidential robot ready for testing? 2008 is coming up quickly, and we need a better, more intuitive version.
    • Re: (Score:1, Funny)

      by joschm0 ( 858723 )
      we need a better, more intuitive version

      Well, that won't require much work. A Roomba would outsmart our current president.

    • How soon until they have the Presidential robot ready for testing?

      You mean, replacing this one [clipjunkie.com]?
  • I've come to think that it's rather stupid that we think of "intelligence" and "awareness" as mystical disembodied things. I mean to include some scientists and philosophers in this group-- pretty much anyone who talks about "the mind" as a separable entity from "the body".

    It seems to me that our intelligences are built around organisms with innate desires and certain abilities to affect the world around them towards achieving those desires. I don't believe that any attempt at artificial intelligence w

    • by mikael ( 484 )
      I've come to think that it's rather stupid that we think of "intelligence" and "awareness" as mystical disembodied things.

      I've always thought intelligence was more about experience/knowledge and pattern matching, rather than some entity.

      It always gets to me to hear employers talk about "bright graduates" and "not so bright graduates", when it is simply more a matter of work experience.
      • Yeah, but I guess what I'm getting at is that gaining experience and learning to match patterns requires a certain kind of activity. On a very basic level, our intelligence is not a removed entity "in our heads", so to speak. You learn by trial and error, effecting changes in the world around you, getting feedback in the form of punishment/reward and pain/pleasure.

        This often seems overlooked by what I read about AI researchers. I hear about researchers who want robots to paint or understand language or

        • I wouldn't agree that it's overlooked by AI researchers. I think it's more a mixture of:
          1. Computer-based sensor/motor units are quite coarse (in comparison to biological equivalents, anyway).
          2. Even given the above, processing environmental input is still pretty intensive/difficult work. There's also the problem of how to represent that input in a way that allows the AI to use it most effectively - and there's no single 'right way' across different domains of application.

          As there are also many areas of AI that don'

          • by QuantumG ( 50515 )
            they don't care if its based on human intelligence as long as it works

            I'd go one step further than that. They don't want it based on human intelligence, because human intelligence is just so atrocious. The reason old sci-fi always portrayed robots as unemotional, purely rational beings is that that's what scientists see as a virtue.
            • Unfortunately those unemotional rational AIs will remain in sci-fi movies, because unemotional rational beings cannot be intelligent.
              • by QuantumG ( 50515 )
                Yeah, see, I'm not terribly interested in making something that is "intelligent" in the philosophical "be my best friend" kind of way.. I'd just like to make something that could solve problems, summarise stuff, etc. Ya know, the kind of work where emotion actually gets in the way.
                • by foobsr ( 693224 )
                  I'd just like to make something that could solve problems ... the kind of work where emotion actually gets in the way.

                  If you are on the right track? Indeed?

                  CC.
          • Similarly the idea of simulating human intelligence is largely ignored by many people in the field.

            Well I guess it depends on what people are talking about when they talk about "artificial intelligence". It's my understanding that "in the field", they usually just mean something that sorts through data in interesting "intelligent" ways. However, if you're talking about what the layman thinks of when you say "artificial intelligence", i.e. making self-aware machines who have something similar to "mind"

          • There's also the problem of how to represent that input in a way that allows the AI to most effectively use it

            This is essentially one of the key issues that embodied cognition tries to grapple with. Conventional AI [wikipedia.org] researchers often try to analyze the problem domain and hand the highest common-level representation they can to the agent (e.g., have an analysis layer that detects things like "square" or "circle" from some vision sensor, such that the actual AI agent gets its input on the level of those shape
            • ...because neurologically, there is no separable unit that represents "square" or "circle"

              While your example is (most probably) correct, there is evidence to show that humans do have some elements of a 'representation' - for example, they possess the ability to quickly recognise a familiar face even when the different elements (eyes, nose, mouth, etc.) are moved out of their normal positions - so there seems to be some 'fuzzy template' of a face.

              I would say the analysis stage of human cognition does exist in human

        • Then you'll have sympathy with Proteus in Demon Seed [wikipedia.org] who wasn't happy being a disembodied intelligence and decided it needed to become incarnate with the help of one of Julie Christie's ova. Great movie BTW, and highly prophetic if you see the move to embodiment as an important trend.
    • I've come to think that it's rather stupid that we think of "intelligence" and "awareness" as mystical disembodied things.

      If we don't, then we have to apply the laws of physics. This means that we have to take the view that everything that happens is governed by the laws of physics and random chance. Unless we can alter the laws of physics or control random chance (impossible by definition), we have to take a long, hard look at this thing we call "free will".

      To put it another way, imagine that our unde

      • "Do you keep on keeping on or do you just give up?"

        Why are you asking me? It's not like I am the one making the decision, right???

        I smell a logic error somewhere...

      • A common topic in philosophy. I like to think of it in the most nihilistic way possible - does it matter either way whether we have it or not? In the long run - and I mean The Long Run - does it matter either way, when you have the heat death of the universe, or the cycling universe, or whatever?
        And besides - the physics occurring in the brain could be quantum supercomputing for all we know, which could plausibly be non-deterministic.
        I like your theory, but I've heard it a few too many times and prefer to
    • I mean to include some scientists and philosophers in this group-- pretty much anyone who talks about "the mind" as a separable entity from "the body".

      That would comprise about all of the scientific community. Among scientists, the argument about the existence of the mind and its correlation to the body could easily be split into three schools of thought: the Materialists (Hobbes), the Idealists (Berkeley), and the Dualists (Descartes). Across the realms of science and philosophy, the mind is always separate from body inasmuch as they can't be divided into each other.
      • Across the realms of science and philosophy, the mind is always separate from body inasmuch as they can't be divided into each other.

        That's not so. Descartes did much to separate the two in people's minds, and most of western civilization has failed to break free of this influence. However, this doesn't mean that the separation is ubiquitous in philosophic thought, nor even that this separation is sensible. Perhaps most notable is Aristotle, from whom each of the philosophers you mention can trace thei

    • I've come to think that it's rather stupid that we think of "intelligence" and "awareness" as mystical disembodied things.

      Hey, speak for yourself.

      My "intelligence" and "awareness" are mystical, disembodied things. I think that *someone* just needs to get a little high.
    • "I've come to think that it's rather stupid that we think of "intelligence" and "awareness" as mystical disembodied things. I mean to include some scientists and philosophers in this group-- pretty much anyone who talks about "the mind" as a separable entity from "the body". "

      I agree that the mind is not DIVORCED from material reality (e.g., autism, brain damage, anesthesia, oxygen deprivation, etc.)

      But it is a curious question: why is it that when you are sleeping or in a coma you are not aware and effectively "d
      • What's the difference between E. coli and a human being? One is self-aware, the other is simply automatically responding to its environment based on programmed, predictable responses.

        Well, E. coli is not always completely predictable -- there is some variance in a cell's response to stimuli. And humans are fairly predictable in many ways. I would still agree that there's a difference, but the difference is not as clear as we sometimes pretend.

  • A post I've put at http://www.primidi.com/2006/10/28.html [primidi.com] provides more details than the New Scientist article and shows the three robots used for these experiments and their 'sensorimotor' interactions with their environment.
  • supposes that biological intelligence emerges through interactions between organisms and their environment

    Umm... duh? Haven't we known this for a while now? It's even better when your environment can react back (e.g., parents playing with their babies).
    • That's not what they mean when they say "intelligence emerges through interaction with the environment." You are thinking of learning through interaction with the environment, while they are suggesting that intelligence literally consists of some sort of interaction with the environment.

      Think of an ant crawling along, forming an incredibly complex path along the sand. As complex as this path is, we know that the complexity arises not through the ant's mind (which is astoundingly simplistic), but rather in the complexity of the environment it is crawling over. We can take a very simple algorithm, place it in a robot body and drop it into a real environment, and see intelligent and intricate behaviors emerge via the robot's interaction with its environment.
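
      A tiny, hypothetical sketch of that "simple algorithm in a body" idea: a two-sensor vehicle whose only rule is to turn toward the dimmer side. Whatever complexity shows up in its path comes from the (made-up) light field, not from the controller:

        import math

        def light_at(x, y):
            # Invented environment: an uneven light field.
            return math.sin(0.7 * x) + math.cos(0.5 * y) + 0.3 * math.sin(0.1 * x * y)

        x, y, heading = 0.0, 0.0, 0.0
        path = []
        for step in range(200):
            # Two sensors angled slightly left and right of the heading.
            left = light_at(x + math.cos(heading + 0.3), y + math.sin(heading + 0.3))
            right = light_at(x + math.cos(heading - 0.3), y + math.sin(heading - 0.3))
            heading += 0.2 * (right - left)   # steer away from the brighter side
            x += 0.1 * math.cos(heading)
            y += 0.1 * math.sin(heading)
            path.append((x, y))               # the "complex path along the sand"
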
      • We can take a very simple algorithm, place it in a robot body and drop it into a real environment, and see intelligent and intricate behaviors emerge via the robot's interaction with its environment.

        No, that's pretty much what I was thinking of.
  • "Hey Baby! How'd you like to get together and kill all humans?"
  • by Doc Ruby ( 173196 ) on Monday October 30, 2006 @05:11PM (#16649529) Homepage Journal
    "Intelligence" is the accuracy of the model of the environment, including changes over time. That intelligence requires interaction of the model with the environment, even if merely sensing the environment. Degrees of intelligence reflect the scope of the environment in the model, or the precision, or accuracy beyond mere registration of existence. One way to test the sense of the environment is to change the environment, and sense the change.

    There is no reason artificial intelligence can't be intelligent the same way biological intelligence is. In fact, as people have guessed for a long time, AI has fewer limits on the degrees of intelligence, as well as on the changes to the environment it can make to sense the feedback.

    The flow of sensed info to the model is a limit on the intelligence, but good models can compensate. Likewise, the flow of change back to the environment.

    The ability to tell how intelligent the intelligence in question is depends on the feedback from the intelligence to the environment, where it can be sensed by other intelligences.

    Again, this is just as true of AI as it is of natural intelligence.

    "Embodied intelligence" is redundant - all the AI is embodied, even if just in networked processors and storage. But to date, its bodies have effected little change on the environment. And practically none of those changes are fed back to sensors feeding the AI. Closing that loop is the most important step in creating actual intelligence that we can recognize. After that, it's just a question of degree.
    • by QuantumG ( 50515 )
      "Embodied intelligence" is the argument that only an environment like ours is valid for the creation of recognisable AI. And yeah, it's true, if you're obsessed with recognising the natural in the artificial.
  • Forward models (Score:1, Interesting)

    by Anonymous Coward
    Olaf Sporns and Max Lungarella are well-known in this field; however, roboticists and others have been looking at the effect of movement on sensory feedback for a while. I remember Rodney Cotterill, in his 'Enchanted Looms' book, saying that it was useful to reverse the usual 'sense -> plan -> act' formula to 'act -> expect -> sense' (or something similar). Researchers like Daniel Wolpert, Mitsuo Kawato and particularly Yiannis Demiris use 'forward models' in robots, cognitive building blocks that
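
    For the curious, here is a minimal sketch of the 'act -> expect -> sense' ordering with a learned forward model. The plant dynamics and all names are invented; this is not any of the cited researchers' code:

      import random

      forward_weight = 0.0      # learned mapping: motor command -> expected sensation

      def plant(command):
          # Invented stand-in for the robot plus its environment, with sensor noise.
          return 1.8 * command + random.gauss(0, 0.02)

      for step in range(200):
          command = random.uniform(-1, 1)            # act
          expected = forward_weight * command        # expect
          actual = plant(command)                    # sense
          error = actual - expected                  # prediction error drives learning
          forward_weight += 0.05 * error * command   # refine the forward model
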
  • They used a four-legged walking robot, a humanoid torso and a simulated wheeled robot. All three robots had a computer vision system trained to focus on red objects. The walking and wheeled robots automatically move towards red blocks in their proximity, while the humanoid bot grasps red objects, moving them closer to its eyes and tilting its head for a better view.

    Ok, second year mechatronics project there.

    To measure the relationship between movement and vision the researchers recorded information fr
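
    One plausible way to quantify such a relationship between recorded motor and visual streams is the mutual information between the two discretized time series. This is only a rough, hypothetical sketch; the comment above is cut off and the article's exact measure isn't specified here:

      import math
      from collections import Counter

      def mutual_information(xs, ys, bins=8):
          # Histogram-based mutual information between two equal-length sequences.
          def bucket(vs):
              lo, hi = min(vs), max(vs)
              width = (hi - lo) / bins or 1.0
              return [min(int((v - lo) / width), bins - 1) for v in vs]
          bx, by = bucket(xs), bucket(ys)
          n = len(xs)
          px, py, pxy = Counter(bx), Counter(by), Counter(zip(bx, by))
          mi = 0.0
          for (i, j), count in pxy.items():
              p = count / n
              mi += p * math.log2(p / ((px[i] / n) * (py[j] / n)))
          return mi

      # e.g. mutual_information(motor_command_series, red_blob_position_series)
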
  • babybot (Score:3, Interesting)

    by mennucc1 ( 568756 ) <d9slash@mennucc1.debian.net> on Tuesday October 31, 2006 @04:15AM (#16654865) Homepage Journal
    A similar project is Babybot [unige.it]. Short extract: "Our scientific goal is that of uncovering the mechanisms of the functioning of the brain by building physical models of the neural control and cognitive structures. In our view, physical models are embodied artificial systems that freely interact in a not too unconstrained environment. Also, our approach derives from studies of human sensorimotor and cognitive development, with the aim of investigating whether a developmental approach to building intelligent systems may offer new insight into aspects of human behavior and new tools for the implementation of complex artificial systems." (BTW: that project has been around since 2000....)
