Robotics / Software

Variations On the Classic Turing Test

holy_calamity writes "New Scientist reports on the different flavors of Turing Test being used by AI researchers to judge the human-ness of their creations. While some strive only to meet the 'appearance Turing Test' and build or animate characters that look human, others are investigating how robots can be made to elicit the same brain activity in a person as interacting with a human would."
  • Sweet (Score:1, Interesting)

    by Drumforyourlife ( 1421647 ) on Friday January 23, 2009 @10:09AM (#26573887)
    It really is kind of creepy how close they've come to actual life-like robotics... but my question is, how life-like should a robot really be? I mean, are we going to be replacing friends with these guys, or are they meant to serve us? Don't get me wrong, I have a great respect for these scientists, I just wonder how these sorts of real robots will fare on the market.
  • by postbigbang ( 761081 ) on Friday January 23, 2009 @10:14AM (#26573957)

    The Turing test is about apparent 'human' intelligence, whereas robotics adds communication via 'expressiveness'. These are two different vectors: raw intelligence, and the capacity to communicate (via body language and the rest of linguistics/heuristics).

    The article doesn't isolate basic cognitive capacity, because it entangles that capacity with the communications medium. The Turing test ought to be done in a confessional, where you don't get to see the device taking the test, along the lines of the blind setup sketched below. That would also provide a feedback loop on the test itself.
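
    A minimal sketch of such a blind trial, in Python. The names here (Participant, run_blind_trial, judge_verdict) are illustrative assumptions for this comment, not any real framework's API:

        import random

        # Sketch of a "confessional" (blind) Turing test: the judge exchanges
        # text with a hidden participant and never sees it. Illustrative only.

        class Participant:
            """A hidden conversational partner: either a human or a machine."""
            def __init__(self, kind, respond):
                self.kind = kind        # "human" or "machine"; hidden from the judge
                self.respond = respond  # callable: question -> answer text

        def run_blind_trial(questions, participant, judge_verdict):
            """The judge only ever sees the text transcript, never the participant."""
            transcript = [(q, participant.respond(q)) for q in questions]
            guess = judge_verdict(transcript)  # judge guesses "human" or "machine"
            return guess == participant.kind   # True if the judge was right

        # Usage: a canned-reply machine against a coin-flipping judge.
        machine = Participant("machine", lambda q: "That's an interesting question.")
        coin_flip_judge = lambda transcript: random.choice(["human", "machine"])
        print(run_blind_trial(["How was your day?"], machine, coin_flip_judge))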

  • by FTWinston ( 1332785 ) on Friday January 23, 2009 @11:36AM (#26574929) Homepage
    The Turing test always struck me as ridiculously anthropomorphic. Clearly the existence of a non-humanlike intelligence can be envisaged, but no matter how smart it was, it would fail this test.

    Furthermore, in an in-depth conversation, surely an AI would have to lie (talk about its family, its working life, etc.)...
    If we continue to enshrine the Turing test as the standard, we're aiming for a generation of inherently untruthful fake-people machines. If it 'knows' that many or most of the things it tells us are lies, it may well come to assume the same of us. At that point, I suspect it's time to drop in a Skynet reference or two.

    Lastly, it's worth pointing out that in a two-minute conversation, randomly selected responses of "lol", "haha", and "rofl" would match, if not out-score, many people on the Turing test, as the throwaway sketch below shows.
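
    For illustration, here is that entire "chatterbot" in Python; the filler list is of course just an assumption about low-effort IM chatter:

        import random

        FILLERS = ["lol", "haha", "rofl"]

        def lol_bot(message: str) -> str:
            """Ignore the input entirely and return a random chat filler."""
            return random.choice(FILLERS)

        # A short, low-effort exchange; hard to distinguish from a bored human.
        for line in ["hey", "did you see the game?", "what do you think of AI?"]:
            print(f"> {line}")
            print(lol_bot(line))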
  • by moore.dustin ( 942289 ) on Friday January 23, 2009 @12:37PM (#26575849) Homepage
    You are only partially correct. The focus on context is misplaced, though you are on the right path. Simply remembering words or topics that have been mentioned earlier in the same discussion does not say anything about intelligence.

    The main problem with AI is learning.

    Nearly all work in the field now takes a misplaced or completely wrong approach to achieving real AI. To understand how to make truly intelligent machines, we must first know how our own brains work. Most of the focus is on creating a machine that can perform in one very specific situation, like the Turing test. But these machines are not intelligent; they do not learn. They are not creating, storing, and recalling patterns, which are the crux of our cognitive abilities (see the toy sketch at the end of this comment).

    The first step to true AI is understanding how human intelligence is achieved in our brain.
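
    A toy sketch of the "store and recall patterns" idea: a nearest-neighbour associative memory over bit strings. This only illustrates the concept, not a claim about how the brain actually does it; the names (PatternMemory, hamming) are made up for this example:

        def hamming(a: str, b: str) -> int:
            """Number of positions where two equal-length bit strings differ."""
            return sum(x != y for x, y in zip(a, b))

        class PatternMemory:
            """Store bit-string patterns; recall the closest one to a noisy cue."""

            def __init__(self):
                self.patterns = []  # stored bit-string patterns

            def store(self, pattern: str) -> None:
                self.patterns.append(pattern)

            def recall(self, cue: str) -> str:
                # Nearest stored pattern by Hamming distance.
                return min(self.patterns, key=lambda p: hamming(p, cue))

        memory = PatternMemory()
        memory.store("101010")
        memory.store("111000")
        print(memory.recall("101011"))  # the noisy cue recalls "101010"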

"Plastic gun. Ingenious. More coffee, please." -- The Phantom comics

Working...