Neural Networks-Equipped Robots Evolve the Ability To Deceive
pdragon04 writes "Researchers at the Ecole Polytechnique Fédérale de Lausanne in Switzerland have found that robots equipped with artificial neural networks and programmed to find 'food' eventually learned to conceal their visual signals from other robots to keep the food for themselves. The results are detailed in a PNAS study published today."
Re:Hardly deceptive (Score:5, Informative)
From just reading the summary, I guessed that the light went on when a robot found food, that other robots would move toward those lights because they indicated food, and that some robots evolved not to turn on the light when they found food, so they didn't attract other robots and had the food all to themselves, which would be an advantage.
The summary didn't include enough information to describe what was going on. The lights flashed randomly. A robot would stay put when it had found food, so if lights kept flashing in one spot for long enough, the other robots would realize the first robot had found something, go to that area, and bump the original robot away. The robots were eventually bred to flash less often while on their food, and then not to flash at all. By the end, robots treated flashing as a mark of where not to go for food, because by that point none of the robots would flash while parked on the food.
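For the curious, that selection pressure is easy to reproduce in miniature. Here's a toy Python sketch of the dynamic (my own invention, not the paper's setup; every probability in it is made up): each robot's "genome" is just its chance of flashing while parked on food, flashing risks attracting a rival that bumps it off, and the best feeders seed the next generation. Run it and the mean flash probability drifts toward zero.

    import random

    # Toy model: the genome is a single gene, the probability of flashing
    # while sitting on food. Flashing risks attracting a rival who bumps
    # the robot off, so quieter robots accumulate more feeding time.
    POP_SIZE, GENERATIONS, TICKS = 50, 100, 200   # invented numbers

    def fitness(flash_prob):
        food_time, on_food = 0, True
        for _ in range(TICKS):
            if on_food:
                food_time += 1
                # each flash has some chance of drawing a rival over
                if random.random() < flash_prob and random.random() < 0.3:
                    on_food = False
            elif random.random() < 0.1:           # wander back to the food
                on_food = True
        return food_time

    population = [random.random() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:POP_SIZE // 5]          # breed from the top 20%
        population = [min(1.0, max(0.0, random.choice(parents)
                                        + random.gauss(0, 0.05)))
                      for _ in range(POP_SIZE)]

    print("mean flash probability:", sum(population) / POP_SIZE)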
Re:Define deception? (Score:5, Informative)
These robots would signal to other robots that poison was food, watch the other robots come and die, then move away.
Re:Define deception? (Score:3, Informative)
Old News (even covered by Slashdot):
http://hardware.slashdot.org/story/08/01/19/0258214/Robots-Learn-To-Lie?art_pos=1 [slashdot.org]
Gizmodo reports that robots that have the ability to learn and can communicate information to their peers have learned to lie. 'Three colonies of bots in the 50th generation learned to signal to other robots in the group when they found food or poison. But the fourth colony included lying cheats that signaled food when they found poison and then calmly rolled over to the real food while other robots went to their battery-death.'
Re:The robots didn't learn... (Score:3, Informative)
Or, if the robots could reproduce based on their success at finding food, we could talk about evolution.
That's exactly what happened. There is a whole field of optimization strategies known as "Genetic Algorithms" which are designed to mimic evolution to achieve results. In fact, their successes are among the best arguments for evolution, given that they are, by definition, controlled laboratory experiments in evolutionary selection.
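For anyone who hasn't seen one, the whole loop fits on a screen. A minimal Python sketch on the classic "OneMax" toy problem (my example, nothing to do with the study's actual code): evaluate, keep the fittest, recombine, mutate, repeat.

    import random

    # Minimal genetic algorithm: evolve a bit string toward all 1s.
    GENOME_LEN, POP_SIZE, GENERATIONS = 32, 40, 60

    def fitness(genome):
        return sum(genome)                        # count the 1-bits

    def mutate(genome, rate=0.02):
        return [1 - g if random.random() < rate else g for g in genome]

    def crossover(a, b):
        cut = random.randrange(1, GENOME_LEN)     # one-point crossover
        return a[:cut] + b[cut:]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 4]      # survival of the fittest
        population = [mutate(crossover(random.choice(parents),
                                       random.choice(parents)))
                      for _ in range(POP_SIZE)]

    print("best fitness:", max(map(fitness, population)), "/", GENOME_LEN)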
No, they did "learn" (Score:5, Informative)
The "scientists" changed the code so that the robots didn't blink the light as much when it was around food.
No, they didn't change the code. The Genetic Algorithm they were using changed the code for them. You make it sound like they deliberately made that change to get the behavior they wanted. But they didn't. They just let the GA run and it created the new behavior.
The part about adding random changes and combining parts of successful robots is also simply a standard part of Genetic Algorithms, and is in fact random, not something specifically selected by the scientists. The scientists would have chosen from a number of mutation/recombination algorithms, but that's the extent of it.
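To make that concrete, here are two of the stock operators an experimenter might pick (illustrative Python, not the paper's code). The experimenter chooses the operators, but every change the operators actually make is random:

    import random

    def gaussian_mutation(genome, sigma=0.1):
        """Nudge each real-valued gene by a little random noise."""
        return [g + random.gauss(0, sigma) for g in genome]

    def uniform_crossover(a, b):
        """For each gene, inherit randomly from one parent or the other."""
        return [x if random.random() < 0.5 else y for x, y in zip(a, b)]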
The "scientists" then propagated that ones code to the other robots because it won.
Yes, because that's what you do in a Genetic Algorithm. You take the "best" solutions from one generation, and "propagate" them to the next, in a simulation of actual evolution and "survival of the fittest".
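In code, "propagating the winner" is nothing more sinister than the selection step. A sketch (mine, with arbitrary parameters): the top performers are copied into the next generation unchanged, and the rest of the population is rebuilt from mutated copies of them.

    import random

    def next_generation(population, fitness, elite_frac=0.2, sigma=0.05):
        ranked = sorted(population, key=fitness, reverse=True)
        n_elite = max(1, int(len(population) * elite_frac))
        elites = ranked[:n_elite]                 # the winners survive as-is
        children = [[g + random.gauss(0, sigma)   # mutated copies of winners
                     for g in random.choice(elites)]
                    for _ in range(len(population) - n_elite)]
        return elites + children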
The AI didn't learn anything.
Yes, it did. Using Genetic Algorithms to train Neural Networks is a perfectly valid (and successful) form of Machine Learning.
If you mean that an individual instance of the AI didn't re-organize itself to have the new behavior in the middle of a trial run, then no, that didn't happen. On the other hand, many organisms don't change behaviors within a single generation, and it is only over the course of many generations that they "learn" new behaviors for finding food. Which is exactly what happened here.
Within the domain of robots, AI, Neural Networks, and Genetic Algorithms, this was learning.
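For a flavor of what "a GA training a neural network" looks like, here's a hedged sketch; the network shape, task, and hyperparameters are all invented for illustration (the study evolved controllers for its robots, not this toy task). The genome is simply the flattened weight vector of a tiny feedforward net, and selection plus mutation does all the training, with no backpropagation anywhere.

    import numpy as np

    rng = np.random.default_rng(0)
    N_IN, N_HID, N_OUT = 3, 4, 1                  # toy network shape
    N_W = N_IN * N_HID + N_HID * N_OUT            # genome = all the weights

    def forward(genome, x):
        w1 = genome[:N_IN * N_HID].reshape(N_IN, N_HID)
        w2 = genome[N_IN * N_HID:].reshape(N_HID, N_OUT)
        return np.tanh(np.tanh(x @ w1) @ w2)

    def fitness(genome):
        # Stand-in task: output the sign of the first input.
        x = rng.uniform(-1, 1, size=(64, N_IN))
        target = np.sign(x[:, :1])
        return -np.mean((forward(genome, x) - target) ** 2)

    pop = rng.normal(0, 1, size=(40, N_W))
    for _ in range(100):
        scores = np.array([fitness(g) for g in pop])
        parents = pop[np.argsort(scores)[-8:]]    # keep the 8 best
        pop = (parents[rng.integers(0, 8, size=40)]
               + rng.normal(0, 0.1, size=(40, N_W)))

    print("best fitness:", max(fitness(g) for g in pop))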
Re:Define deception? (Score:5, Informative)
If a species has a discernible signalling pattern of some sort (whether it be vervet monkey alarm calls [with different calls for different predator classes, incidentally], firefly flash-pattern mating signals [amusingly, females of some species will imitate the flash signals of other species, then eat the males who show up, a classic deceptive signal], or, in this case, robots flashing about food), adaptive deviations from that pattern that serve to carry false information can be considered "deceptive". It doesn't have to be conscious, or even under an organism's control. Insects that have coloration very similar to members of a poisonous species are engaged in deceptive signalling, though they obviously don't know it.
Humans are more complicated, because culturally specified signals are so numerous and varied. If twittering your activities were a normal pattern within your context, and you started not twittering visits to certain locations, you would arguably be engaged in "deceptive signaling." If twittering were not a normal pattern, not twittering wouldn't be deceptive.