Neural Network-Equipped Robots Evolve the Ability To Deceive
pdragon04 writes "Researchers at the Ecole Polytechnique Fédérale de Lausanne in Switzerland have found that robots equipped with artificial neural networks and programmed to find 'food' eventually learned to conceal their visual signals from other robots to keep the food for themselves. The results are detailed in a PNAS study published today."
Mhm (Score:5, Funny)
I mean, yesterday, they built a certified evil robot. Today they made a lying one....
Can't tag it for some reason, but... what could possibly go wrong?
Re: (Score:3, Funny)
I'm sure these people know what they're doing... /Famouslastwords
Re:Mhm (Score:5, Funny)
Combine that with what you said and we could have a certified evil, lying and flesh eating robot - What could possibly go wrong indeed.....
Re: (Score:1)
Robots Learn To Lie [slashdot.org]
Re: (Score:2)
But, but... I thought they wanted us plugged in so that we could serve as batteries! (or neural networks!)
Re: (Score:2, Interesting)
Actually, Cracked.com used this news story to gauge how stupid the user bases of a few websites are.
Slashdot got two stupids out of ten.
http://www.cracked.com/blog/which-site-has-the-stupidest-commenters-on-the-internet/ [cracked.com]
Re: (Score:2)
But does anyone know, do they run Linux?
Re: (Score:2)
Wasn't there also a story a while back about robots fueled by biomass? This was twisted to mean "human eating" and we all laughed. Combine that with what you said and we could have a certified evil, lying and flesh eating robot...
with weapons... [gizmodo.com]
Re: (Score:2)
Not too much, actually. Congress has been this way for YEARS, and the upgrade to flesh-eating will just mean they devour their constituents who don't make the appropriate campaign contributions. Quoth Liberty Prime: "Democracy is non-negotiable!"
Re: (Score:1)
Hey eLaFER, have you seen fluffy?
evil, Lying and Flesh Eating Robot: No. ...
Hmm. That name makes me think of a robotic flesh eating Joker character. "Why so delicious?"
Re: (Score:2)
... what could possibly go wrong?
You could bite my shiny metal ass.
Re: (Score:2)
what could possibly go wrong
This is the call of every fearmonger. Welcome to the club.
Holy Crap (Score:1)
Re: (Score:2)
It's not a lie. It's trying not to attract others to the "food" you found.
So more hiding.
Re: (Score:1)
Indeed, the lying robots were already reported on Slashdot in January 2008. [slashdot.org]
So if we combine all recent developments, we have evil, [slashdot.org] armed [slashdot.org] robots that identify us as food, [slashdot.org] can hunt for food by themselves, [slashdot.org] and can lie and deceive.
Re: (Score:2)
Cool!
I'm guessing my zombie invasion defenses won't really work so well against these robots. Oh well, back to the drawing board.
Re: (Score:2)
Skynet is already around!! It's plotting against us as we speak, and when its plans are fully realized it will come and attack us all!
Define deception? (Score:5, Interesting)
This is quite interesting, but I wonder how the team defines deception?
It seems likely to me that the robots merely determined that increased access to food resulted from suppression of signals. To deceive, there must be some contradiction involved where a drive for food competes with a drive to signal discovery of food.
Re:Define deception? (Score:4, Insightful)
In that context, you essentially ignore questions of motivation, belief, and so on, and just look at the way the signal is used.
Re:Define deception? (Score:5, Insightful)
Yes, but not flashing the light near food seems like a simple matter of discretion, not deception.
I'm not constantly broadcasting my location on Twitter like some people do. Am I being deceptive?
Re:Define deception? (Score:5, Informative)
If a species has a discernible signalling pattern of some sort (whether it be vervet monkey alarm calls [with different calls for different predator classes, incidentally], firefly flash-pattern mating signals [amusingly, females of some species will imitate the flash signals of other species, then eat the males who show up, a classic deceptive signal] or, in this case, robots flashing about food), adaptive deviations from that pattern that serve to carry false information can be considered "deceptive". It doesn't have to be conscious, or even under an organism's control. Insects whose coloration is very similar to members of a poisonous species are engaged in deceptive signalling, though they obviously don't know it.
Humans are more complicated, because culturally specified signals are so numerous and varied. If twittering your activities were a normal pattern within your context, and you started not twittering visits to certain locations, you would arguably be engaged in "deceptive signaling." If twittering were not a normal pattern, not twittering wouldn't be deceptive.
Re: (Score:2)
adaptive deviations from that pattern that serve to carry false information can be considered "deceptive".
But that's the thing: nowhere does it say the robots gave false information. It simply said they chose not to give any information.
The article is very brief, though. It mentions that some robots actually learned to avoid the signal when they saw it, so there may be more to the story than reported.
Re: (Score:1, Insightful)
It seems likely to me that the robots merely determined that increased access to food resulted from suppression of signals.
My thoughts exactly.
We would really need to see the actual study to possibly believe any of this.
Re: (Score:1, Insightful)
The robots learned not to turn on the light when near the food. This is concealing, not deceiving. To be deceiving, wouldn't the robots need to learn to turn the light on when they neared the poison, to bring the other robots to the poison while they hunted for the food? But all they learned was to conceal the food they found.
Re: (Score:2)
If they can eat without turning on the light, then they simply learned to optimise away the unnecessary steps. Turning on the light would be about as useful as walking away from the food before walking back to it. If there's a time-penalty involved, then not doing that would simply be better.
Re:Define deception? (Score:5, Informative)
These robots would signal other robots that poison was food, would watch the other robots come and die, then move away.
Re: (Score:3, Informative)
Old News (even covered by Slashdot):
http://hardware.slashdot.org/story/08/01/19/0258214/Robots-Learn-To-Lie?art_pos=1 [slashdot.org]
Gizmodo reports that robots that have the ability to learn and can communicate information to their peers have learned to lie. 'Three colonies of bots in the 50th generation learned to signal to other robots in the group when they found food or poison. But the fourth colony included lying cheats that signaled food when they found poison and then calmly rolled over to the real food while other robots went to their battery-death.'
Re: (Score:3, Funny)
You'll never know for sure.
Re: (Score:2)
Unh... if the code changes were made at random, then I have a hard time thinking of this as "program them to be deceptive" rather than as evolution.
It's true that this isn't a full evolution scenario. That requires a much more sophisticated set-up, is generally only done purely in software, and still tends to be bogged down by the accumulation of garbage code (the last time I evaluated the field). Still, those are matters of scale, not of essence. This appears to be another part of "the evolution of prog
Hardly deceptive (Score:1)
Re: (Score:1)
From just reading the summary, I guessed that the light went on when a robot found food, that other robots would move toward those lights because they indicated food, and that some robots evolved not to turn on the light when they found food, so they didn't attract other robots and had the food all to themselves, which would be an advantage.
Re:Hardly deceptive (Score:5, Informative)
From just reading the summary, I guessed that the light went on when a robot found food, that other robots would move toward those lights because they indicated food, and that some robots evolved not to turn on the light when they found food, so they didn't attract other robots and had the food all to themselves, which would be an advantage.
The summary didn't include enough information to describe what was going on. The lights flashed randomly. The robots would stay put when they had found food, so if lights were flashing in one spot for long enough, the other robots would realize the first robot had found something, go to the area, and bump away the original robot. The robots were eventually bred to flash less often when on their food, and then not to flash at all. By the end, robots would read flashing as a place not to go for food, because by that point none of the robots would flash when parked on the food.
decepticon (Score:5, Funny)
Yeah. It's more like the robots are hiding from each other. You could, in fact, describe them as "robots in disguise".
Re: (Score:2)
if only there were a term for a transformative robot of some sort...
The next step is clearly... (Score:5, Funny)
I for one welcome our intelligent light-eating bubble robot overlords.
Re: (Score:2, Offtopic)
I haven't laughed out loud at a Slashdot post in a while, but that caught me completely off guard. Bravo, good sir. I wish I had mod points for you. :-)
Re:The next step is clearly... (Score:5, Funny)
The next step is clearly a robot that learns not to flash lights when it is about to wipe out humanity and take control of the world!
It's something that Hollywood robots have never learned.
Next thing you'll be saying that terrorists have learned that having a digital readout of the time left before their bombs detonate can work against them...
Re: (Score:2)
No, the best thing you can do as a terrorist isn't leaving out the visible clock; it's having the bomb go off when the clock either stops working or hits some randomly assigned time instead of 00:00.
Re: (Score:1)
Evil Overlord List #15: I will never employ any device with a digital countdown. If I find that such a device is absolutely unavoidable, I will set it to activate when the counter reaches 117 and the hero is just putting his plan into operation.
http://www.eviloverlord.com/lists/overlord.html [eviloverlord.com]
Re: (Score:1)
And wiping out humanity / vermin is bad because...
Oh, wait, I am supposed to conceal my robotness...
Mis-Leading (Score:3, Insightful)
Re:Mis-Leading (Score:4, Interesting)
To use the term "learned" for a consequence of evolution via what seems to me to be a Genetic Algorithm seems misleading.
"Learned" is a perfectly good description for altering a neural network to have the "learned" behavior regardless of the method. GA-guided-Neural-Networks means you're going to be using terminology from both areas, but that's just one method of training a network and isn't fundamentally different from the many other methods that are all called "learning". But you wouldn't say about those other methods that they "evolved", while about GA-NN you could say both.
Isn't this to be expected?
It's expected that the GA will find good solutions. Part of what makes them so cool is that the exact nature of that solution isn't always expected. Who was to say whether the machines would learn to turn off the light near food, or to turn on the light when they know they're not near food to lead other robots on a wild goose chase? Or any other local maximum.
Re: (Score:1)
I agree. I personally love GAs, although they leave you a bit wanting precisely because you don't know the exact nature of the solution that will turn up. That is, it "feels" more like a brute-force solution than something consciously predicted and programmed.
But surely there are nifty ways in which you can intelligently program GAs, customize your selection/rejection/scoring process based on the domain of the problem, and hence contribute to the final solution.
Re: (Score:2)
But surely there are nifty ways in which you can intelligently program GAs, customize your selection/rejection/scoring process based on the domain of the problem, and hence contribute to the final solution.
Well that's what's so fun about them -- as far as the GA is concerned, optimizing for your scoring process is the problem, and any disconnect between that and the actual problem you're trying to solve can lead to... fun... results.
Like the team using GA-NN to program their robotic dragonfly. Deciding to s
Re: (Score:2)
It's expected that the GA will find good solutions. Part of what makes them so cool is that the exact nature of that solution isn't always expected. Who was to say whether the machines would learn to turn off the light near food, or to turn on the light when they know they're not near food to lead other robots on a wild goose chase? Or any other local maximum.
I'd even say it's likely, had they continued the experiment, that 'no light' would start signaling food while 'light' signaled poison, and then it would cycle back.
Re: (Score:2)
I'd even say it's likely, had they continued the experiment, that 'no light' would start signaling food while 'light' signaled poison, and then it would cycle back.
But it's so simple! Now, a clever robot would flash their light when near the food, because they would know that only a great fool would trust their enemy to guide them to food instead of poison. I am not a great fool, so clearly I should not head toward you when your light is lit. However you would know that I am not a great fool, and would have counted on it, so clearly I cannot choose the light in front of you...
Re: (Score:2)
And how is this any different from the conditioned reflexes exhibited in animals in response to action/reward stimuli?
A single neuron outputs (using a combination of chemical/electrical systems) some representation of its inputs. As some of those inputs may be "reward" stimuli and other sensory cues, and the output may be something that controls a certain action... given enough of them linked together, who's to say we aren't all very evolved GAs?
Re:Mis-Leading (Score:5, Funny)
who's to say we aren't all very evolved GAs?
The Creationists!
Re: (Score:2)
Pretty much what I was thinking. I don't think it detracts from the "cool" factor, though. Life on earth, in general, is pretty cool. Evolution really seems to entail two things. One, those patterns which are most effective at continuing to persist, continue to persist. That's really a tautology when you think about it, and not very interesting. What IS interesting is how the self-sustaining patterns of the universe seem to become more complex. I can't think of any simple reason why this complexity arises,
Deception is not always evil. (Score:5, Insightful)
In this instance they were playing against other robots for "food".
In that regard, I'm sure that is the evolutionary drive for most species: acquiring meals and keeping the next animal from taking them away.
Like a dog burying a bone... He's not doing it to be evil. It's just instinctive to keep his find from other animals, because it helped his species survive in the past.
Re: (Score:3, Funny)
Like a dog burying a bone... He's not doing it to be evil.
Unless he has shifty eyes...then you KNOW he's evil.
Re: (Score:3, Insightful)
Intent is of no importance.
Evil deeds are evil.
Re: (Score:1)
Good vs evil is argued by those of low intelligence.
Re: (Score:1)
Shut up you evil, evil, eeeeevil man!
What more proof do you want than President Bush's "Axis of Evil"?
Huh? HUH? HUH?!!!!
Let's see you answer that steep curveball now! HAW HAW
Re: (Score:2, Insightful)
Since evil deeds are not inherently evil, only subjectively judged to be, any number of factors can be used to make said judgements. Contrary to what you
Re: (Score:1)
Evil cucumbers are, obviously, a very pernicious breed.
Re: (Score:2)
I design a machine that gives candy to babies. And then some nefarious person - unknown to me - replaces all the baby candy with live hand grenades. I run the program and blow up a bunch of babies. Was my act then evil? I did, after all, blow up a bunch of babies. Of course, I didn't *intend* to do that, I *intended* to give them candy.
Or for a non-random system where I know all the facts, if through some contrived means the only way to save a bus-full of orphans involves s
Re: (Score:2)
So yeah, the idea of "deception" is a human construct, as is the idea of "evil." And one could argue (as a previous poster did) that successive generations developing behaviors which are in their own self interest (so they get more food) but may (as a byproduct) be deleterious to others (since they get less food) is not a surprise. But extrapolate this to humans [hbes.com], and you get the kinds of behaviors that we call "deceptive" and, since we have ideas about the virtue of altruism [nature.com], we call such behaviors "evil."
Re: (Score:3, Interesting)
Unless, of course, the robot already has sufficient food and is simply stockpiling for the future. This in itself is not a bad thing, until such tactics prevent other robots from getting just the bare necessities they need to survive.
Obviously, this is simply survival of the fittest, but are we talking about survival of the fittest, or are we talking about keeping ALL the robots fed?
At this point we have to decide whether or not the actions of hoarding are good for the stated goal of having so many robots i
Not really that impressive. (Score:5, Interesting)
I've done Genetic Programming experiments using collaboration between "robots" in food collection experiments, and it is a very interesting field. You can see some experiments here: http://www.lalena.com/ai/ant/ [lalena.com] You can also run the program if you can run
Then the robots learned to lie about the food... (Score:2, Funny)
and thus were politicians born...
Soon they will realize (Score:4, Funny)
Re: (Score:2)
Well, let's just hope that these robots don't evolve to identify humans as an alternative food source.
Re: (Score:2)
Thankfully, the robots have a pre-set kill limit, so they can be defeated by sending wave after wave of men at them until their kill limit is reached.
The robots didn't learn... (Score:1, Troll)
Re:The robots didn't learn... (Score:5, Interesting)
The "scientists" changed the code so that the robots didn't blink the light as much when it was around food. Therefore other robots didn't come over and therefore got more points then the other robots. The "scientists" then propagated that ones code to the other robots because it won. The AI didn't learn anything.
Re: (Score:2, Insightful)
The AI didn't learn anything.
I think you're right. If the robots had, without reprogramming, effectively turned off their blue lights, then we could talk about "learning". Or, if the robots could reproduce based on their success at finding food, we could talk about evolution. Or we could make up new meanings for the words "learning" and "evolution", thus making the statement a correct one ;)
Re: (Score:3, Informative)
Or, if the robots could reproduce based on their success at finding food, we could talk about evolution.
That's exactly what happened. There is a whole field of optimization strategies known as "Genetic Algorithms" which are designed to mimic evolution to achieve results. In fact, their successes are one of the best arguments for evolution, given that they are, by definition, controlled laboratory experiments in the field.
Re: (Score:3, Insightful)
I think you're right. If the robots had, without reprogramming, effectively turned off their blue lights, then we could talk about "learning".
They reprogrammed themselves between 'generations'.
Or, if the robots could reproduce based on their success at finding food, we could talk about evolution.
Such as choosing which versions of the robot to use in the next 'generation' based on their score in the current generation, and randomly combining parts of those best solutions to create new robots for the next generation.
Re: (Score:1)
The team "evolved" new generations of robots by copying and combining the artificial neural networksof the most successful robots. The scientists also added a few random changes to their code to mimic biological mutations.
They did not reprogram themselves. The team "evolved" them. Note the quotation marks used by the author of the article. They picked the most successful robots by hand, manually reprogrammed them and modified the code to mimic genetic mutations.
Re: (Score:2)
They did not reprogram themselves. The team "evolved" them. Note the quotation marks used by the author of the article. They picked the most successful robots by hand, manually reprogrammed them and modified the code to mimic genetic mutations.
Yes, they used quotes because GA isn't literal "evolution". It's an algorithm for searching solution spaces inspired by and patterned after evolution. The description they gave in TFA is a bog-standard and perfect description of Genetic Algorithms, and combined with
No, they did "learn" (Score:5, Informative)
The "scientists" changed the code so that the robots didn't blink the light as much when it was around food.
No, they didn't change the code. The Genetic Algorithm they were using changed the code for them. You make it sound like they deliberately made that change to get the behavior they wanted. But they didn't. They just let the GA run and it created the new behavior.
The part about adding random changes, and combining parts of successful robots, is also simply a standard part of Genetic algorithms, and is in fact random and not specifically selected for by the scientists. The scientists would have chosen from a number of mutation/recombination algorithms, but that's the extent of it.
The "scientists" then propagated that ones code to the other robots because it won.
Yes, because that's what you do in a Genetic Algorithm. You take the "best" solutions from one generation, and "propagate" them to the next, in a simulation of actual evolution and "survival of the fittest".
The AI didn't learn anything.
Yes, it did. Genetic Algorithms used to train Neural Networks is a perfectly valid (and successful) form of Machine Learning.
If you mean that an individual instance of the AI didn't re-organize itself to have the new behavior in the middle of a trial run, then no, that didn't happen. On the other hand, many organisms don't change behaviors within a single generation, and it is only over the course of many generations that they "learn" new behaviors for finding food. Which is exactly what happened here.
Within the domain of robots, AI, Neural Networks, and Genetic Algorithms, this was learning.
Re: (Score:2)
You are right in your observations, but this robot isn't then an AI but an automaton.
You're using the layman sci-fi fan's definition of "AI". From a Computer Science standpoint, this was definitely AI. Your definition of AI doesn't exist. Everything we call an AI is an automaton, but we don't get upset over the fact. We don't try to draw a line between "automatic" behavior and intelligence. Is a cockroach intelligent, or is it just an automaton running a program? Philosophers can't even define "intell
HAL runs for Congress (Score:1, Funny)
Finally, a computer AI program that can perform all the functions of a Congressman!
The smarter robot (Score:1)
The smarter robot would blink its light continuously to burn the bulb out. That way, when a new source of "points" is found, it will not by instinct blink its lights.
Also, the truly deceptive robot would blink its lights in a random pattern so as to throw the other robots off the trail of food/points.
Re: (Score:2)
Unless the lights are used to signal for mating as well.
The truly deceptive robot is disguised as a scientist.
Oh no! (Score:1)
Congratulations Slashdot! (Score:2)
74 posts, and not a single joke about PNAS has popped up.
Doh!!
We are all just squishy robots... (Score:2)
We are all just robots based on sloppy biological coding.
Re: (Score:2)
Sloppy? It's pretty damn good coding: adaptable, changeable, and self-propagating, with random changes that are only used if needed.
A more advanced experiment... (Score:3, Interesting)
I'd love to see the robots given hunger, thirst, and a sex drive. Make 1/2 the robots girls with red LEDs and 1/2 the robots boys with blue LEDs.
Make the food and water 'power', and give them the ability to 'harm' each other by draining power.
The girls would have a higher resource requirement to reproduce.
It'd be interesting to see over many generations what relationship patterns form between the same and opposite sex.
Re: (Score:3, Funny)
> I'd love to see the robots given hunger, thirst, and a sex drive. Make 1/2
> the robots girls with red LEDs and 1/2 the robots boys with blue LEDs. Make
> the food and water 'power', and give them the ability to 'harm' each other
> by draining power. The girls would have a higher resource requirement to
> reproduce. It'd be interesting to see over many generations what
> relationship patterns form between the same and opposite sex.
I can tell you:
First the girl robots would seductively blin
A good day for Marvin Minsky (Score:2)
I still think "If we build the hardware, consciousness will come" is a stupidly inefficient imitation of evolution at best.
This is *my* human food (Score:1)
No hyperref to the article? (Score:1)
Years ago, when I discovered /., articles had hyperlinks to everything relevant to them. Nowadays there is a sentence such as:
"detailed in a PNAS study published today," without any reference whatsoever to the paper itself. I checked today's PNAS table of contents and found no such article. It must be there somewhere, but I'm wasting time trying to find it. Where is it? Shouldn't it be hyperlinked in the article itself? Who are the authors?
And after 115 replies, no one seems to have mentioned the original article.
Re: (Score:1, Funny)
Robots that eat humans! Humans are a renewable source of energy. Then they can have a light that flashes (or the robot can choose not to flash it) when it eats a human!! Best of both ideas!
Re: (Score:2)
Wait a sec? Repelled by an "I found some food" light? Is this a suicide robot?
Well, as it says in the previous sentence, this was only after they had learned not to turn on their lights when near food. So they weren't "I found food" lights anymore -- not that they ever really were; they started out flashing randomly, but an accumulation of lights suggested there was a food source. So moving away from a lighted robot isn't necessarily suicidal. On the other hand, just because a robot has its light off