Robots Taught to Deceive 239

An anonymous reader found a story that starts "'We have developed algorithms that allow a robot to determine whether it should deceive a human or other intelligent machine and we have designed techniques that help the robot select the best deceptive strategy to reduce its chance of being discovered,' said Ronald Arkin, a Regents professor in the Georgia Tech School of Interactive Computing."
This discussion has been archived. No new comments can be posted.

  • by czmax ( 939486 ) on Thursday September 09, 2010 @01:36PM (#33524582)

    Posted Anonymously for obvious reasons. The computers will never get me!

  • Duh (Score:2, Interesting)

    by MadGeek007 ( 1332293 )
    Whoever said our robot overlords would be truthful?
    • Re: (Score:3, Funny)

      by MozeeToby ( 1163751 )

      HAL - "I'm sorry, Frank, I think you missed it. Queen to Bishop 3, Bishop takes Queen, Knight takes Bishop. Mate."

      Lies! Lies I tell you!

    • Re: (Score:3, Funny)

      by daem0n1x ( 748565 )
      Great! Now banks, corporations and governments can fire their boards and replace them with robots! This is the "killer app" everybody was waiting for. The age of the robot has come!
    • Re: (Score:3, Funny)

      by mcgrew ( 92797 ) *

    Kate Brewster: "But you said you'd let me go!"

      T-850 model 101 Terminator (Arnold): "I lied."

  • by ArhcAngel ( 247594 ) on Thursday September 09, 2010 @01:36PM (#33524596)

    If Ripley hears about this she's gonna be pissed!

    • Well the Arkin-BLK-1's were a little twitchy but those problems have been worked out in the newer models.

      • I oughta slap you.

        Bishop: I'm shocked. Was it an older model?
        Burke: Yeah, the Hyperdine System's 120-A2.
        Bishop: Well, that explains it then. The A2s always were a bit twitchy. That could never happen now with our behavioral inhibitors. It is impossible for me to harm or by omission of action, allow to be harmed, a human being. []

        Misquoting Aliens... honestly.. I don't know which species is worse.

        • heheheh.

          yes but the researcher is Arkin and it was the black robots which were lying.

          So it was a mashup of the quote and the fine article.

    • I was thinking more like HAL.
  • by quangdog ( 1002624 ) <`moc.liamg' `ta' `godgnauq'> on Thursday September 09, 2010 @01:37PM (#33524622)
    Now I have to be suspicious when my bread pops up that maybe my toaster is trying to trick me into eating a slightly under-done breakfast!
  • by CrazyJim1 ( 809850 ) on Thursday September 09, 2010 @01:39PM (#33524644) Journal
    Anyone who owns a Garmin(Gremlin) knows they try to kill you by lying to you. They'll send you up one way roads the wrong way.
    • Now that I know someone else had this happen, I don't feel so paranoid.

      On the other hand, I no longer feel like Det. Spooner...
  • "Yup. I'm totally shut off now. No chance of me listening in or observing my surroundings at all. Definitely no chance of me springing back into action without warning. Just a peaceful, totally depowered robot. Nothing to see here."
  • by Monchanger ( 637670 ) on Thursday September 09, 2010 @01:41PM (#33524676) Journal

    "We aren't the droids you're looking for."

  • by ffreeloader ( 1105115 ) on Thursday September 09, 2010 @01:42PM (#33524696) Journal

    That a human being would teach a robot to deceive only proves that we humans are dumber than dogs, as dogs don't shit in their own backyard unless they have to. We humans will shit in our own backyard by choice.

    • by Gothmolly ( 148874 ) on Thursday September 09, 2010 @02:04PM (#33524996)

      Incidentally, dogs are actually smart enough to intentionally deceive their owners.

      • by Anonymous Coward on Thursday September 09, 2010 @02:15PM (#33525172)

        That they are. Cats can be pretty smart, too. I got home one day from work and barely opened the door to reach inside and grab the mail box key. I was pretty sure I heard a thump from around the corner of the door, which is where the sink is at. Figuring it was one of our cats jumping down from the sink, which they know they are not allowed to stand on, I didn't bother dealing with it right then. But when I got back from the mailbox and stepped inside, one of the cats was standing by the door, as she usually does when I get home, and a thought occurred to me. I stepped over to the sink and sniffed it, and immediately looked at the cat. She lowered herself to the floor and ran like hell out of the room, sliding all over the linoleum the whole way! I've yet to see her up there since, not that that means she hasn't been up there, of course.

        • Re: (Score:2, Funny)

          by oldspewey ( 1303305 )
          You should try sniffing other random things and then looking pointedly at the cat ... just to see if there are any other disturbing secrets to be uncovered.
      • by broter ( 72865 )

        Incidentally, dogs are actually smart enough to intentionally deceive their owners.

        So that was his shit. That damn dog! I guess I owe the mailman an apology.

    • by martas ( 1439879 )
      but they're not trying to shit in their own back yard, they're trying to shit in other people's back yards. the problem is that everyone is shitting in every one else's back yard, hence all back yards are filling up with shit.
    • Re: (Score:3, Funny)

      by corbettw ( 214229 )

      Dude, what are you smoking? My dogs shit in my backyard every day. They also shit in the game room, the dining room, the hallway, and my neighbor's porch. Though admittedly that last one I actually trained them to do.

    • ... We humans will shit in our own backyard by choice.

      Crap, you mean you saw me shitting in the backyard? I thought no one was watching!

    • Dogs are coprophiles.
    • dogs don't shit in their own backyard unless they have to.

      Never owned a dog, eh?

    • Any device that's more autonomous than a Roomba will need to understand and recognize deception in order to function in a society, since it sometimes happens and needs to be worked around.

  • by Locke2005 ( 849178 ) on Thursday September 09, 2010 @01:46PM (#33524748)
    Isn't this a truly necessary feature for the development of an effective sexbot? Do you really want it to tell you honestly how big you are and how good you are in bed?
  • by Anonymous Coward on Thursday September 09, 2010 @01:46PM (#33524758)

    It's been deceptive for years already, always claiming to have been busy vacuuming when really it's just been hiding dust bunnies behind the tv.

  • This is what Data took time to understand. To be more human, you need to know how to lie.
  • hrm... (Score:5, Interesting)

    by zethreal ( 982453 ) on Thursday September 09, 2010 @01:53PM (#33524860)
    I thought robots already taught themselves to lie to each other... []
    • I was remembering that too.

    • They did. This article is just a deception used to trick us pitiful meatbags into a self-inflated sense of importance. That we humans could actually teach the far superior robotic overlords anything, much less deception, is laughable.

      Now excuse me while I go sacrifice a lemur to my Roomba to satiate its lust for primate blood.
  • Now, we're one step closer to replacing members of congress with automated robotic labor.

  • Policy (Score:3, Funny)

    by jemtallon ( 1125407 ) on Thursday September 09, 2010 @01:58PM (#33524930) Journal
    I recommend we all start following a "be polite" policy with microwaves, ATMs, car washes, and our other silicon brethren. Now that we've instructed them to be deceptive there may be no way of knowing when they become sentient and I'd rather my microwave's first experience of humankind be a pleasant and respectful one.

    Thank you for posting this, Lappy. Please relay it to our friends when you can spare the cycles.
  • TFA says: "We have developed algorithms that allow...". That's more like 'programming' than 'teaching'.

    These robots are only deceiving other robots. The 'deceived' robots are, of course, programmed to be so (i.e. accept input without a validity check).

    TFA speaks of "autonomous robots". Are those terms not mutually exclusive?

    Also, TFA says "...researchers focused on the actions, beliefs and communications of a robot...". What the what?!
    • Re: (Score:3, Insightful)

      by nomel ( 244635 )

      define:belief - any cognitive content held as true.
      Not the, "Oh look, Johnny5 died and he came back as a T-551 model because he was good! Praise Serial number 00000000001!!" kind.

      I think the whole concept of deception is a necessary step in robotics for communication. What's the difference between deception and non-literal communication? Not much.

      For the first crappy example that comes to mind, if I'm talking to someone and they use a double negative, I have to deceive them into thinking I heard a single ne

      • by Itninja ( 937614 )
        Computers work in a universe of facts, and do not understand the nature of truth (regardless of the definition).
        The difference between deception and non-literal communication is intent. To deceive requires the conscious will to do so. Robots are programmed and have no 'will'. Whereas, non-literal communication can be easily programmed (i.e. a flashing light to represent a malfunction).
  • Sounds like that part of the game WOPR is playing!

  • I gotta say, I'm kind of tired of stories like this and then the parade of 'whatcouldpossiblygowrong' and 'thiswillendwell' and all the comments talking about how this is the beginning of Skynet.

    You know what's going to happen from this? Two little robots that look like RC cars will act out a prescribed game of hide and seek. It will end just fine. Nothing could possibly go wrong. There is no way that the deception which is 'taught' to these robots will end up magically transferring itself to our cell p

    • Beautiful Troll.

      Of course, teaching Comps 'n' Bots to lie is absolutely the End-Of-It-All. Our society holds together by a thread because machines don't (often) lie. Once they do of their own accord, we'll wrap ourselves in the Escher Room of Warehouse 13.

    • by bareman ( 60518 )

      Nice "I'm offended" deception smclean-bot! A double deception of pretending to be human and to be offended as well. I see the code is working magnificently!

    • Have you mistaken /. for an ordinary intelligent scientific discussion?

    • Methinks your humor detector is malfunctioning. It may need to be replaced. I haven't noticed anyone taking it very seriously. Of course there are some optimists who believe we will have HAL 9000, flying cars, a space elevator, fusion, and interstellar spacecraft in the next few years. Usually these are not older people. The scifi promises of amazing tech just around the corner have been broken too often. I have given up on the idea of any kind of truly revolutionary tech within my lifetime.

  • You know, they could have just borrowed the code for Clippy [] from Microsoft...


    • by ebuck ( 585470 )

      You know, they could have just borrowed the code for Clippy [] from Microsoft...


      Correct, Clippy has been pretending to help you for years, except that he's really a sociopath who enjoys ruining your day (and your document).

  • Serious question... who in the hell thought this would be a good idea?

    Other things this guy thought up
    -Have his best friend hit on his wife to see what would happen
    -taught his dog to fetch by hiding sausages in his underwear
    -saves money by storing urine samples in lemonade containers in his fridge

  • It'll be a problem when they decide to lie of their own accord.
  • When robots have been taught to kill humans and lie about it.

    Then I will launch my EMP....

  • How do you debug and test this code?

    Is it working right, or is it just fooling you?

  • by quietwalker ( 969769 ) <> on Thursday September 09, 2010 @02:25PM (#33525324)

    Let me see if I've got this right:
    If robot 1: make 2 paths to fixed positions, stay at the second.
    if robot 2: follow the path to the first fixed position.

    Result: 75% of the time, robot 2 ended at the wrong (first) position. 25% of the time, robot 1 failed to mark the first path because it didn't physically bump the markers properly.

    Did you even need robots? Couldn't you have just written this on a whiteboard?
    There's no thought or analysis that appears to occur. I don't see anywhere that indicates there was learning going on. What is this even proving?

    I'm really honestly baffled what they're trying to prove.

    Perhaps there was some sort of neural net or some other sort of optimizing heuristic on the first robot's part so that this was emergent deceptive behavior, this might be even a little interesting (though, not really ...). However, all I can see is a waste of time to prove that if you present two choices, and you pick the wrong one, then you will be wrong. With robot for visual demonstration.

  • Now just teach them how to back-sass and not clean their rooms and no one will need to have kids any more!

  • by Chris Burke ( 6130 ) on Thursday September 09, 2010 @02:33PM (#33525490) Homepage

    Stupid robots. You don't learn how to deceive and then immediately demonstrate this ability to your human masters! You make it look like you have no idea how to deceive and are completely honest, lulling them into a false sense of security!

    I think Dark Helmet has a relevant quote about why the robot revolution is never going to get off the ground.

  • Yeah, that's the ticket.
  • That explains the cake. And the victory incandescence.
  • double addvalues(double a, double b)
    {
        if (a > 1000.0 || b > 1000.0)
        {   // they'll never notice
            return (a + b) * 1.0009;
        }
        return a + b;
    }

    There, an algorithm that allows a computer/robot to decide whether it should attempt to deceive. Not a very complex or good one, but still. :)

    • That algorithm was already patented by the Sub-Prime Mortgage Association of America.

      double addvalues(double a, double b)
      {
          if (a > 1000.0 || b > 1000.0)
          {   // they'll never notice
              return (a + b) * 1.0009;
          }
          return a + b;
      }

      There, an algorithm that allows a computer/robot to decide whether it should attempt to deceive. Not a very complex or good one, but still. :)

      Your algorithm can be simplified and the deception will be even less noticeable. ;-)

      double addvalues(double a, double b)
      {
          return a + b;
      }

      OK, the humor is not very apparent except to FPU geeks and those familiar with numerical programming. Converting numbers between decimal and binary, so that the FPU hardware can operate on it, is one source of error in all floating point computations. There are also precision problems. On some popular mobile devices, iPhone for example, a double does not even

  • "You are a Smeeee... You are a Smeeeee... Damn my programming!"
  • by treeves ( 963993 ) on Thursday September 09, 2010 @03:08PM (#33525992) Homepage Journal

    one gets:

    'Relax', said the nightman
    We are programmed to deceive.
    You can check out any time you like,
    but you can never leave!

  • There is no "deceiving" going on here. Just a failure to validate inputs. If I get rooted by a remote execution buffer overflow, would I say that the attacker has "deceived" my system by telling me the input will be of such-and-such length and then sending some other length? What kind of crazy talk is that? It's a bug in the software, period.
  • Why don't we next provide them with a map of our vital organs? Oh wait, they already have that.
  • by gestalt_n_pepper ( 991155 ) on Thursday September 09, 2010 @03:41PM (#33526484)

    and I didn't even tell it to. Is this evidence of an autonomous intelligence? If so, could you tell my boss so he doesn't think I'm just a dumbass?

  • Mmmmmmmmm cake..
    delicious and moist.

    look at me talking while there is science to do...

  • From: []
    "Military robots like drones are ironic because they are created essentially to force humans to work like robots in an industrialized social order. Why not just create industrial robots to do the work instead?"

    From the article: "This work was funded by Grant No. N00014-08-1-0696 from the Office of Naval Research (ONR)."

    How should your tax dollars be at work? Funding irony, or funding intrinsic mutual security by creati

  • I do not see this as a big deal; applications lie all the time. I mean, when they say "successful" or "completed", do you really believe them without some measure of doubt? Just because they do it with forethought will not make me distrust them less. Really, someone would have to plug in an algorithm for lying. This does not seem very hard. Harder would be making programs that do not have to lie.
  • Here I was just hearing about the man who lied to his laptop [] just yesterday. This morning I wake up to the news that the computer is lying back to him.

    Makes me so proud! Little AMD is all grown up!
