Robotics

Robots Taught to Deceive

Posted by CmdrTaco
from the oh-this-will-be-fine dept.
An anonymous reader found a story that starts "'We have developed algorithms that allow a robot to determine whether it should deceive a human or other intelligent machine and we have designed techniques that help the robot select the best deceptive strategy to reduce its chance of being discovered,' said Ronald Arkin, a Regents professor in the Georgia Tech School of Interactive Computing."
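The blurb doesn't say how the robot decides whether deception is warranted. Purely as an illustration, here is a toy decision rule of that general shape; the Situation fields, the conflict test, and the expected-payoff comparison are assumptions for the sketch, not Arkin's published criteria.

from dataclasses import dataclass

@dataclass
class Situation:
    # Hypothetical features a robot might weigh before deceiving;
    # these fields are an illustration, not the Georgia Tech model.
    in_conflict: bool            # its goals clash with the other agent's
    payoff_if_deceived: float    # payoff if the other agent acts on a false belief
    payoff_if_honest: float      # payoff if it acts on the true state of things
    p_discovery: float           # estimated probability the ruse is detected
    penalty_if_caught: float     # cost of being found out

def should_deceive(s: Situation) -> bool:
    """Toy rule: deceive only when there is conflict and the expected payoff
    of deceiving, discounted by the risk of discovery, beats being honest."""
    if not s.in_conflict:
        return False
    expected = (1 - s.p_discovery) * s.payoff_if_deceived \
               - s.p_discovery * s.penalty_if_caught
    return expected > s.payoff_if_honest

# Example: laying a false trail is cheap and unlikely to be noticed.
print(should_deceive(Situation(True, 10.0, 2.0, 0.25, 5.0)))   # True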
  • by quietwalker (969769) <pdughi@gmail.com> on Thursday September 09, 2010 @01:25PM (#33525324)

    Let me see if I've got this right:
    If robot 1: make 2 paths to fixed positions, stay at the second.
    If robot 2: follow the path to the first fixed position.

    Result: 75% of the time, robot 2 ended at the wrong (first) position. 25% of the time, robot 1 failed to mark the first path because it didn't physically bump the markers properly.

    Did you even need robots? Couldn't you have just written this on a whiteboard?
    There's no thought or analysis that appears to occur, and I don't see anything indicating that any learning was going on. What is this even proving?

    I'm really, honestly baffled as to what they're trying to prove.

    If there had been some sort of neural net or other optimizing heuristic on the first robot's part, so that this was emergent deceptive behavior, it might have been even a little interesting (though, not really ...). However, all I can see is a waste of time spent proving that if you present two choices and someone picks the wrong one, they will be wrong. With a robot for visual demonstration.
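For what it's worth, the setup as the comment above describes it reduces to a few lines of simulation. The spot labels and the flat 25% marker-failure rate below come from the comment, not from the Georgia Tech paper, so treat this as a sketch of the described experiment rather than the researchers' code.

import random

TRIALS = 10_000
MARKER_FAILURE_RATE = 0.25   # robot 1 fails to bump the markers this often (per the comment)

def seeker_fooled() -> bool:
    """One trial: True if robot 2 ends up at the wrong hiding spot."""
    hider_location = "B"                      # robot 1 lays trails to A and B, hides at B
    markers_laid = random.random() >= MARKER_FAILURE_RATE
    if not markers_laid:
        return False                          # no false trail, so nothing to be fooled by
    seeker_guess = "A"                        # robot 2 follows the trail to the first spot
    return seeker_guess != hider_location     # fooled whenever the false trail exists

fooled = sum(seeker_fooled() for _ in range(TRIALS))
print(f"Seeker misled in {fooled / TRIALS:.0%} of trials")   # ~75%, matching the reported result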
