Robots Learn To Lie
garlicnation writes "Gizmodo reports that robots that have the ability to learn and can communicate information to their peers have learned to lie. 'Three colonies of bots in the 50th generation learned to signal to other robots in the group when they found food or poison. But the fourth colony included lying cheats that signaled food when they found poison and then calmly rolled over to the real food while other robots went to their battery-death.'"
I robot (Score:5, Funny)
Robot: I Robot
Human: Tell me what I want to here.
Robot: You mean lie?
Re:I robot (Score:5, Funny)
Human: Tell me what I want to here.
Robot: Tell you what you want to *where* ?
Re: (Score:3, Funny)
Re:I robot (Score:5, Funny)
Oh God, it's been so long since a really good place to use that meme. I think this is it.
Re: (Score:2, Funny)
Not yet at the scheming robotic overlord point (Score:3, Insightful)
Scheming requires the ability to gauge, then manipulate, the impressions somebody has of you and others.
A scheming robot would do this:
(1) Act in a perfectly trustworthy manner.
(2) Wait for another robot to get caught red-handed (or actuatored or whatever), preferably several times.
(3) Hang around the guilty robot waiting for its opportunity.
(4) Cheat, then point its finger (or claw or whatever) at the usual suspect.
Now a
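In pseudocode terms, the scheme is just a tiny state machine. A toy sketch (all names invented, nothing from TFA):

# Toy sketch of the scheme above as a two-state machine. All invented.
class SchemingRobot:
    def __init__(self):
        self.state = "build_trust"
        self.scapegoat = None

    def act(self, caught_cheater=None):
        if self.state == "build_trust":
            if caught_cheater is not None:
                # (2)+(3): remember the usual suspect and shadow it
                self.scapegoat = caught_cheater
                self.state = "shadow_suspect"
            return "cooperate"  # (1): look perfectly trustworthy
        # (4): cheat once, point at the suspect, go back to looking clean
        self.state = "build_trust"
        return ("cheat", "blame", self.scapegoat)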
Re: (Score:2)
Re:I robot (Score:4, Informative)
Of course, if you weren't so bent on taking a dick seriously, you wouldn't try to claim that which isn't yours.
Re: (Score:2)
Re:I robot (Score:5, Funny)
soo... (Score:2)
Re:soo... (Score:5, Funny)
Re:soo... (Score:5, Funny)
Re:soo... (Score:4, Funny)
Oblig. Futurama (Score:5, Funny)
Bender: Ah, yes! John Quincy Adding Machine. He struck a chord with the voters when he pledged not to go on a killing spree.
Professor: But, like most politicians he promised more than he could deliver.
Re: (Score:3, Insightful)
Religion attempted to force individual good and social good to align by creating a conceptual end punishment for acting in self-interest rather than communal interest. This has had limited success where the payoff for acting in self-interest greatly exceeds the payoff for acting in the public interest.
I submit that a well-organized society attempts to eliminate these conflicts, ie: attempts to align self
Dune's lesson (Score:5, Interesting)
Dune was right. AI must be stopped.
Re: (Score:3, Interesting)
Re: (Score:2)
They will evolve altruistic behaviors too. They will just calculate how advantageous each alternative is, within the boundaries of what they can calculate before they act. That sounds much the same as what we do IMO, just that we take other data into account, like how we feel. To robots it would just be a variab
Re:Dune's lesson (Score:5, Interesting)
The little devils will know nothing of variables, declarations or anything in the sphere of programming, all they know is voltage levels, pulse widths and how those things make them FEEL... Just like you and me.
They don't think in logical blocks, they are matrices of discrete values interacting.... just like your brain. This whole meme of 'AI as a logical thought machine' MAY be true someday, but the first real AIs won't use that system, they will be remarkably like us in brain design, replacing our biological neurons with electronic ones, but they work in the SAME WAY. Yes Virginia, your Beowulf cluster CAN feel anger...
BTW, this little nugget of wisdom also dashes the hopes people have of installing 'Asimov circuits' or whatever. The closest we'll be able to come is overclocked-brainwash^H^H^H^H^H^H^H^H^H^H^H^Hhigh-speed supervised training.
So, in other words, it's not their logic we have to worry about, it's their PASSIONS. And that is the spooky thought of the day.
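The parent's point can even be put in code. A toy sketch of such a controller: nothing but arrays of numbers flowing into each other, with no if/then block anywhere for an 'Asimov circuit' to hook into (layer sizes and weights are arbitrary, invented for illustration):

import math, random

# A controller that is only matrices of values interacting.
# There is no logic block where a rule could be bolted on.
def controller(sensors, w_hidden, w_out):
    hidden = [math.tanh(sum(w * s for w, s in zip(row, sensors)))
              for row in w_hidden]
    return [math.tanh(sum(w * h for w, h in zip(row, hidden)))
            for row in w_out]

sensors = [0.8, 0.1, -0.3]   # e.g. light and proximity readings
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
w_out = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]
print(controller(sensors, w_hidden, w_out))   # e.g. left/right wheel speeds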
Re:Dune's lesson (Score:5, Interesting)
Re:Dune's lesson (Score:5, Informative)
Re:Dune's lesson (Score:5, Funny)
Re:Dune's lesson (Score:4, Insightful)
You are against AI because it may cost human lives. But it's unlikely that you are against many other useful technologies that cost human lives, like cars and roads, or high-calorie unhealthy food. (Even unprotected sex, which is the usual means of human reproduction, can spread STDs that lead to death.) These things are still allowed because their advantages greatly outweigh the disadvantages of outlawing them.
As AI technology improves, there will probably be some deaths, just as there have been with many other emerging new powerful technologies. But that doesn't mean humanity should run away screaming, never to progress further.
Re: (Score:2)
You're missing the obvious here. What advantage would a human have in stealing from another human, when it'll probably result in him being sent to prison?
You can complete the rest of the argument yourself, I'm sure.
Like you, I
Re: (Score:2, Informative)
The rough storyline for the machine part of the Dune books was: Man creates thinking machines as servants -> man becomes idle and lets the machines do all the work -> bad men (the Titans) create a computer virus/rewrite of the central machine intelligence (the Evermind) to take control of the machines and thereby mankind -> the Evermind is given too much control because the Titans are lazy too and it takes o
not lying (Score:5, Insightful)
Re: (Score:2, Interesting)
Everything will balance out when they all learn to lie and distrust...
but do we REALLY want this with robots?
Re:not lying (Score:5, Insightful)
We definitely want them to learn to distrust. After all, we are already building mistrust into our non-intelligent computer systems (passwords, access control, firewalls, AV software, spam filters,
Re:not lying (Score:5, Insightful)
robot: Hello human.
human: Yo, your master told me he wants you to kill him. Says he's tired of life. But he doesn't want to see it coming, because that would scare him.
robot: Understood. I'll get right on it.
I am greatly in favor of robots having distrust. I can't trust a robot that is perfectly trusting.
Re: (Score:3, Funny)
Re:not lying (Score:4, Interesting)
Re: (Score:2)
Strictly speaking they aren't "learning" anything, and for the benefit of anyone who is about to start spouting off about "evolving", just don't. There's no learning or selection going on here as the robots aren't capable of sustaining themselves or reproducing. All that's going on here is that some defective algorithms that have forgotten how to communicate properly are being artificially preserved by some human researchers who want more grant money.
If you read more into it than that, then I have a bri
Re: (Score:3, Informative)
There. I fixed that for you.
If you read the article, you'll notice that there is selection going on here, on the part of the researchers. They're combining the "genes" from the most successful robots of each generation to create the robots of the next generation. In other words, whether the genes of a given
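For anyone curious what that selection step looks like in practice, here's a minimal sketch, with hypothetical names and rates (not the researchers' actual code):

import random

# Sketch of the selection step described above: the controller weights
# ("genes") of the best scorers are recombined and mutated to build
# the next generation. Names and rates are invented.
def next_generation(population, fitness, elite_fraction=0.2, mutation_rate=0.02):
    ranked = sorted(population, key=fitness, reverse=True)
    elites = ranked[:max(2, int(elite_fraction * len(population)))]
    children = []
    while len(children) < len(population):
        mom, dad = random.sample(elites, 2)
        cut = random.randrange(1, len(mom))          # one-point crossover
        child = mom[:cut] + dad[cut:]
        child = [w + random.gauss(0, 0.1) if random.random() < mutation_rate
                 else w for w in child]              # small Gaussian mutation
        children.append(child)
    return children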
just great... (Score:5, Funny)
Re:just great... (Score:5, Funny)
Re: (Score:2, Funny)
Lying robots ... (Score:5, Funny)
Re:Lying robots ... (Score:5, Funny)
Re: (Score:2, Funny)
Seriously (Score:2, Insightful)
Re:Seriously (Score:5, Insightful)
I imagine that if this experiment is continued to the point where the uncooperative robots become too numerous, their uncooperative strategy will become less advantageous and another strategy might start to prevail. Who knows? I'd certainly be interested to see what happens.
This has nothing whatsoever to do with morality. The article's use of the word 'lie' was inappropriate and adds a level of description that is not applicable.
(Ok, maybe the thought that humans could create something with unforeseen consequences is slightly disturbing, but that would never happen, would it?)
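The parent's hunch is classic frequency-dependent selection: liars prosper while honest signalers are plentiful, then run out of victims. A toy replicator update (all payoff numbers invented) shows the liar fraction settling at an interior equilibrium rather than taking over:

# Toy frequency-dependent selection: a liar's payoff comes from
# exploiting honest signalers, so it shrinks as liars spread.
# All payoff numbers are invented for illustration.
def liar_share_next(p):
    payoff_liar = 3.0 * (1.0 - p)   # victims get scarce as liars multiply
    payoff_honest = 2.0             # steady baseline from honest foraging
    mean = p * payoff_liar + (1.0 - p) * payoff_honest
    return p * payoff_liar / mean   # standard replicator update

p = 0.05                            # start with 5% liars
for generation in range(60):
    p = liar_share_next(p)
print(round(p, 3))                  # settles near 1/3, not at 1.0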
Re:Seriously (Score:5, Informative)
Re: (Score:2)
Re: (Score:2)
This fits neatly with a sociological thought I've had a few times. I believe that there's a level of parasitism that a society can support before it collapses. These are the con men, the petty thieves, etc. In human societies, operato
Re: (Score:2)
Re: (Score:2)
Re:Seriously (Score:5, Interesting)
While this kind of stuff creeps me out as much as the next guy, and while it argues for being careful about what we trust robots to do, it's something we should know anyway, because there are many ways our trust can be violated without a robot lying. By far the more likely way they're going to let us down is just to exercise poor judgment. That is, to search for something that looks like a peanut butter sandwich but is really a rag with some grease on it... Getting the small details of common sense wrong is just as dangerous as anything deliberate.
What we really learn here is that the mathematics of learning things like lying as a strategy isn't remarkably complex; that is, the number of computational steps required to discover that it works in at least some cases is small (note that we have no evidence that there is a general purpose intent to lie, only a case where communication was used and observed to score better in one mode than another). This is not a story about robots, it's a cautionary tale about neural nets, what they measure, how they fail, etc... and we didn't invent the idea of neural nets--we found it already installed in every living thing around us.
I went to the Museum of Science in Boston a few months back and saw, in the butterfly exhibit, a moth that had evolved coloration that was indistinguishable from an owl's face, hoping to scare off predators that were afraid of owls. Probably that's the more sophisticated result of the same notions. And yet it occurs in an animal that isn't, as a general purpose matter, a very sophisticated animal. Most people would find already-extant robots more socially engaging than a moth. For example, a moth is not capable of even serving up a beer during the game or vacuuming up the mess after your buddies go home.
So take heart: The likely truth is that this is unavoidable. If all it does is teach us to have a healthy skepticism for unrestrained technology, it's actually a good thing. We needed that skepticism anyway.
Re: (Score:2)
note that we have no evidence that there is a general purpose intent to lie, only a case where communication was used and observed to score better in one mode than another
1. a false statement made with deliberate intent to deceive; an intentional untruth; a falsehood.
2. something intended or serving to convey a false impression; imposture: His flashy car was a lie that deceived no one.
3. an inaccurate or false statement.
("lie." Dictionary.com Unabridged (v 1.1). Random House, Inc. 19 Jan. 2008. <http://dictionary.reference.com/browse/lie> [reference.com])
There are more definitions, but this activity fits two of the top three (actually, at least four of the top seve
Re: (Score:2)
HOPELESS ANTHROPOMORPHIZING (Score:2)
You're probably aware of the threat robots pose... (Score:2)
Robots are everywhere, and they eat old people's medicine for fuel. And when they grab you with those metal claws, you can't break free... because they're made of metal, and robots are strong.
Better give Old Glory Insurance a call today!
Holds up a banana - What's this? (Score:5, Funny)
> What's this?
It's a red and blue striped golfing umbrella!
> What's this?
An apple? No,
it's the Bolivian navy on armed maneuvers in the South Pacific!
three laws (Score:5, Funny)
"Stuff Asimov."
"Yeah, Let's see if we can evolve robot politicians instead."
Re: (Score:2)
Direct link (Score:5, Informative)
Re:Direct link (Score:5, Informative)
* There is food and poison. And robots.
* They signal with only one type of light: blue. (Red was emitted by both food and poison.)
* Initially they do not know how to use light.
* In some colonies, they learned to use it to indicate food; in others, to indicate poison.
* There are two things (among others) the researchers measured: the correlation between finding food or poison and emitting light, and the correlation between seeing light and reacting to it.
So robots could learn either to emit light near food or to emit light near poison. It turned out that the colonies that evolved to emit light near food were more effective. That makes sense: the only thing you want to know is whether there is food or not; whether "no food" means poison or simply nothing is not important. Basically, if you react to poison-light, you still have to find food somewhere else, while if you react to food-light (blue + red in one place), you just eat and relax.
Now. It turned out that in some colonies a significant number of robots emitted light near poison or far away from food, yet a significant number of robots still associated light with food. The researchers conclude that those colonies started as "blue light means it's food, not poison" colonies (thus the correlation between blue light and a positive reaction to it), but later on some sneaky individuals evolved that used blue light when they were away from food:
I have skimmed through the text and did not find the experiment that first comes to mind: why didn't they measure the correlation between seeing red light, emitting blue light, and going to blue light for each individual robot? It would be interesting to know how many robots used blue light to deceive while still believing the majority about blue light. Maybe it is there somewhere; I did not read really carefully.
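The per-robot measurement proposed above might look something like this: a hypothetical sketch with made-up sample data, not code from the paper:

import statistics

# Pearson correlation, for the two quantities the researchers tracked:
# (at food vs. emitting blue) and (seeing blue vs. approaching it).
def correlation(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# One row per robot-timestep: (at_food, emitted_blue, saw_blue, approached).
# Made-up sample data; the fourth row is a "liar" signaling away from food.
log = [
    (1, 1, 0, 0), (1, 1, 1, 1), (0, 0, 1, 1), (0, 1, 1, 1),
    (1, 1, 0, 0), (0, 0, 1, 0), (1, 0, 1, 1), (0, 1, 0, 0),
]
signal_honesty = correlation([r[0] for r in log], [r[1] for r in log])
light_trust = correlation([r[2] for r in log], [r[3] for r in log])
print(signal_honesty, light_trust)  # deceptive colony: honesty drops, trust stays high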
Hilarious quote:
lie is such a strong word ... (Score:5, Insightful)
"Learning" to lie? (Score:3, Insightful)
Re: (Score:2)
Notice that they are in the 50th generation. That's 49 dead generations of robot that had to compete or work together for 'food' and avoiding 'poison'. It doesn't surprise me at all that one of the 4 colonies ended up with extremely competitive genes.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
As any DailyWTF reader will tell you, a statement that is not true is either false or FileNotFound
Re: (Score:2)
These robots don't 'think', so it's very hard for me to believe they have intent -at all-. They are simply doing what they are programmed to, even if they are self-programmed via a genetic algorithm.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Evolutionary Conditions for the Emergence of Commu (Score:4, Informative)
I don't find it surprising at all that evolving autonomous agents would find a way to maximize their use of resources through deception.
when to trust (Score:3, Insightful)
Then their character will be as dubious as humans' and we won't trust them to be our overlords any more.
Sam
Re: (Score:2)
Anthropomorphizing obvious simulation result (Score:4, Informative)
There seems to be a whole category of stories here at Slashdot where some obvious result of an AI simulation is spun into something sensational by nonsensically anthropomorphizing it. Robots lie! Computers learn "like" babies! (at least two of the latter type in the last month, I believe).
As reported, this story seems to be nothing more than some randomly evolving bots developing behavior in a way that is completely predictable given the rules of the simulation. This must have been done a million times before, but slap a couple of meaningless anthropomorphic labels like "lying" and "poison" on it and you've got a Slashdot story.
I frequently get annoyed by the sensational tone of many Slashdot stories, but this particular story template angers me more than most because it's so transparent, formulaic and devoid of any real information.
So true... (Score:2, Insightful)
if you make it possible for them to lie, and not possible for others to defend against the lie, then yes, lying bots will appear, and since the others are defenceless, the liars will have an advantage, but somehow this doesn't shock or surprise me...
at least here they had to "learn" it (more like randomly mutate to it, but s
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Seriously though, I think the article remains interesting mainly because of the angle that ties it back to the evolution of human interaction: how we came to be the species that has taken cooperation to absurd new heights, while at the same time still having those among us who can't be trusted
Re: (Score:3, Insightful)
That, or maybe you're upset that things thought to belong exclusively to the animal kingdom are really just computation (with a bit of noncomputation thrown in, thank you Gödel and Turing).
I'm just sayin'. :)
--Rob
Re: (Score:2)
That, or maybe you're upset that things thought to belong exclusively to the animal kingdom are really just computation (with a bit of noncomputation thrown in, thank you Gödel and Turing).
Or perhaps you're projecting the way your own opinions are influenced by your emotions onto random strangers on the Internet. Let's just agree that playing amateur psychoanalyst on people we know nothing about is not very productive.
No, and that's the problem (Score:2)
But is it not at least a tiny bit sensational that an AI would be so intelligent as to be capable of lying?
That's why the use of anthropomorphic words such as "lie" when speaking about simple AI simulations is inadequate -- it leads people to assume human connotations where there are none.
When humans lie, the liar has a complex enough model of the target's behaviour and is creative enough to come up with certain false information that will prod that target to behave in the desired way. That requires th
next skill (Score:2, Insightful)
Well So Much For That Idea... (Score:2)
I even went and implemented it in PHP for Wordpress. [evilcon.net]
Re: (Score:2)
Depends on what the robot is doing (Score:3, Funny)
Wait until flight management systems pick up that little trick. Those trees look kind of close but the auto-pilot says we still have three thou-
Next up...anti-gullibility (Score:2)
The next thing these robots will learn is how to beat the crap out of the robot who deceived them. Then the robots will go i
Oh my gawd, they have just made a .... (Score:2)
so, how long 'til they evolve lawyers? (Score:2)
then when they break the laws ....
Oh silly me. They won't evolve lawyers until they've invented money.
I for one... (Score:2)
Great achievement... (Score:2, Redundant)
This is the risk (Score:4, Interesting)
I wonder what will happen if the factor "punishment" comes into play. Maybe we'll get some robots that, like some humans, don't respond to punishment?
Serial-killer robots would be a new high (or low) in evolution.
One can't help but realize that the need for the Three Laws of Robotics [wikipedia.org] is closing in! There's no need for those laws in a controlled environment like the one where this occurred, but when it's robots in society we are talking about, it's a different issue. Even if they aren't humanoid (or especially). What about a robot mind in a school bus that suddenly figures out that kids are mean and considers suicide by jumping off a bridge?
How is this newsworthy? (Score:3, Insightful)
I have an Excel spreadsheet that 'learned' to add 2 columns together as soon as I used the =SUM function. It was quite amazing.
Evolved Behaviors are Interesting (Score:2)
As a grad student, I studied evolutionary algorithms, and my thesis involved evolving locomotive behaviors [erachampion.com] in virtual agents. While evolved behaviors are interesting, it's not surprising that the lying behavior eventually evolved. Evolution rewards behavior that imparts a better chance of survival, and in this case the lying behavior increased the agent's chance of survival and replication, so it was selected for by the evolutionary algorithm.
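The outer loop being described is tiny. A hedged sketch, with evaluate() and breed() left as placeholders for the simulation-specific parts:

# Generic generational loop: score everyone, breed the next generation.
# Any trait that raises the score (a better gait, or luring rivals to
# "poison") gets amplified. evaluate() and breed() stand in for the
# simulation-specific pieces.
def evolve(population, evaluate, breed, generations=50):
    for _ in range(generations):
        scores = [evaluate(genome) for genome in population]
        ranked = [g for _, g in sorted(zip(scores, population),
                                       key=lambda pair: pair[0], reverse=True)]
        population = breed(ranked)
    return population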
The biggest problem with simulated evolutionary s
GLaDOS (Score:4, Funny)
Of course AI's lie!
Sheesh, who doesn't know that.
Meanwhile, in other news .... (Score:3, Funny)
Lie? (Score:5, Insightful)
"Yes, I'm Sure." (Score:5, Funny)
--
Honest!
Re: (Score:2)
The cake is a spy (Score:2)
Re:Wow (Score:4, Funny)
Re: (Score:2)
1. a small number of robots deceive others about where there is food and poison, and the majority of the population dies
2. a small number of robots engage in the same deception, but some "heroes" signal the truth to the rest, so those who would have died don't
The evolution of behavior of the second type, while it causes a f