Robots Taught to Deceive

An anonymous reader found a story that starts "'We have developed algorithms that allow a robot to determine whether it should deceive a human or other intelligent machine and we have designed techniques that help the robot select the best deceptive strategy to reduce its chance of being discovered,' said Ronald Arkin, a Regents professor in the Georgia Tech School of Interactive Computing."
  • by quietwalker (969769) on Thursday September 09, 2010 @02:25PM (#33525324)

    Let me see if I've got this right:
    Robot 1: make two trails to fixed positions, then hide at the second.
    Robot 2: follow the trail to the first position.

    Result: 75% of the time, robot 2 ended up at the wrong (first) position; 25% of the time, robot 1 failed to mark the first trail because it didn't physically bump the markers properly.
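    To the parent's point, the whole experiment really does fit on a whiteboard. Here's a minimal simulation sketch of the setup as described above (names and the 75% marking rate are taken from the comment; everything else is hypothetical):

    ```python
    import random

    def run_trial(mark_success_rate=0.75):
        """One round of the hide-and-seek 'deception' protocol."""
        # Robot 1 lays a decoy trail toward hiding spot A, then hides at spot B.
        # Per the article, it fails to knock over the trail markers ~25% of the time.
        decoy_trail_marked = random.random() < mark_success_rate
        hider_position = "B"
        # Robot 2 simply follows the first marked trail it finds.
        seeker_guess = "A" if decoy_trail_marked else "B"
        return seeker_guess != hider_position  # True = deception succeeded

    random.seed(0)
    trials = 100_000
    wins = sum(run_trial() for _ in range(trials))
    print(f"deception succeeded in {wins / trials:.0%} of trials")
    ```

    Unsurprisingly, the success rate converges on whatever marking rate you plug in, which is the commenter's complaint: the outcome is determined entirely by the hard-coded rule, not by any learning.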

    Did you even need robots? Couldn't you have just written this on a whiteboard?
    There's no thought or analysis that appears to occur, and nothing indicates any learning was going on.

    I'm honestly baffled by what they're trying to prove.

    If there were some sort of neural net or other optimizing heuristic on the first robot's part, so that the deceptive behavior was emergent, this might even be a little interesting (though, not really ...). As it stands, all I can see is a waste of time proving that if you're presented with two choices and pick the wrong one, you will be wrong, with robots as a visual demonstration.

"Falling in love makes smoking pot all day look like the ultimate in restraint." -- Dave Sim, author of Cerebrus.