
Evolving Robots Learn To Prey On Each Other

quaith writes "Dario Floreano and Laurent Keller report in PLoS ONE how their robots were able to rapidly evolve complex behaviors such as collision-free movement, homing, predator-versus-prey strategies, cooperation, and even altruism. A hundred generations of selection acting on robots controlled by simple neural networks were sufficient for these behaviors to evolve. The robots initially exhibited completely uncoordinated behavior, but as they evolved they learned to orient themselves, escape predators, and even cooperate. The authors point out that this confirms a proposal by Alan Turing, who suggested in the 1950s that building machines capable of adaptation and learning would be too difficult for a human designer and could instead be achieved through an evolutionary process. The robots aren't yet ready to compete in Robot Wars, but they're still pretty impressive."
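The scheme the summary describes — a population of neural-network controllers scored on their behavior and bred over about a hundred generations — can be sketched as a simple evolutionary loop. Everything below (population size, mutation rate, truncation selection, and the quadratic stand-in for fitness) is an illustrative assumption, not the authors' actual experimental setup; in the real experiments, fitness came from robot trials such as distance covered without collisions.

```python
import random

random.seed(0)  # make the run reproducible

POP_SIZE = 20
N_WEIGHTS = 6       # a tiny controller's weight vector
GENERATIONS = 100
MUT_STD = 0.1       # std. dev. of Gaussian weight mutation

TARGET = [0.5, -0.3, 0.8, 0.1, -0.6, 0.4]  # hypothetical "ideal" weights

def fitness(weights):
    # Toy stand-in for a robot trial: negative squared distance to a
    # fixed target weight vector (higher is better, max is 0.0).
    return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

def mutate(weights):
    # Perturb every weight with small Gaussian noise.
    return [w + random.gauss(0, MUT_STD) for w in weights]

# Random initial population: uncoordinated "behavior".
population = [[random.uniform(-1, 1) for _ in range(N_WEIGHTS)]
              for _ in range(POP_SIZE)]
init_best = max(map(fitness, population))

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]   # truncation selection; elites survive
    children = [mutate(random.choice(parents))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(round(fitness(best), 3))  # close to 0.0 after selection
```

Because the top half of each generation is carried over unchanged, the best fitness never decreases, and mutation supplies the variation that selection then filters — the same basic dynamic, in miniature, that lets uncoordinated controllers drift toward coordinated behavior.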


  • Confirms? (Score:1, Insightful)

    by Anonymous Coward on Saturday January 30, 2010 @12:37PM (#30963626)

    This in no way confirms that it would be too difficult for humans to build robots that possess higher A.I. traits, nor does it confirm that evolution is a better process than intelligent design.

  • Re:Evolution (Score:4, Insightful)

    by The Archon V2.0 ( 782634 ) on Saturday January 30, 2010 @01:04PM (#30963934)

    I wonder, if a robot program like this were let loose on the internet, and was capable of learning... what would it learn?

    Well, when a Dalek (ahem) 'downloaded the Internet' on Doctor Who it killed itself by the end of the episode. So I imagine that whatever it learns, it can't be good.

  • No confirmation (Score:1, Insightful)

    by Lije Baley ( 88936 ) on Saturday January 30, 2010 @02:13PM (#30964516)

    This doesn't "confirm" anything about Turing's offhanded opinion.

  • Re:A preemptive (Score:4, Insightful)

    by Hurricane78 ( 562437 ) on Saturday January 30, 2010 @03:06PM (#30964980)

    Flash forward a couple of billion years, and perhaps we will write them a letter that says it as well as this one: [] (Protip: it's not meant in a religious way. That's not the point. :)
    (Btw, if you like it, and like really great poetry, try this: [] )

  • by Paul Fernhout ( 109597 ) on Saturday January 30, 2010 @09:46PM (#30967644) Homepage

    A simulation I developed around 1987 had 2D robots that duplicated themselves from a sea of parts. They would build themselves up and then cut themselves apart to make two copies. To my knowledge, it was the first 2D simulation of self-replicating robots from a sea of parts. The first time it worked, one robot started cannibalizing the other to build itself up again. I had to add a sense of "smell" to stop robots from taking parts from their offspring. As another poster referenced, Philip K. Dick's point on identity in 1953 was very prescient: []
    "Dick said of the story: "My grand theme -- who is human and who only appears (masquerading) as human? -- emerges most fully. Unless we can individually and collectively be certain of the answer to this question, we face what is, in my view, the most serious problem possible. Without answering it adequately, we cannot even be certain of our own selves. I cannot even know myself, let alone you. So I keep working on this theme; to me nothing is as important a question. And the answer comes very hard.""

    However, those robots were not evolving. I presented a talk on that simulation at a workshop on AI and Simulation in 1988 in Minnesota, saying how easy it was to make robots that were destructive, but how much harder it would be to make them cooperative. A major from DARPA literally patted me on the back and told me to "keep up the good work". To his credit, I'm not sure which aspect (destructive or cooperative) he was talking about working on. :-) But I left that field around that time for several reasons (including concerns about military funding and use of this stuff, but also that it seemed like we knew enough to destroy ourselves with this stuff but not enough to make it something wonderful). At the same workshop someone presented a simulation of organisms with neural networks that learned different behaviors. A professor I took a course from at SUNY Stony Brook has done some interesting work on evolution and communication among simple organisms: []
    Anyway, in the almost quarter century since then, what I have learned is that the greatest challenge of the 21st century is the tools of abundance like self-replicating robots (or nanotech, biotech, nuclear energy, networking, bureaucracy, and other things) in the hands of those still preoccupied with fighting over perceived scarcity, or worse, creating artificial scarcity. What could be more ironic than using nuclear missiles to fight over Earthly oil fields, when the same sorts of technology and organizations could let us build space habitats and big renewable energy complexes (or nuclear power too)? What is more ironic than building killer robots to enforce social norms related to forcing people to sell their labor doing repetitive work in order to gain the right to consume, rather than just building robots to do the work? Anyway, it won't be the robots that kill us off. It will be the unexamined irony. :-)
