Variations On the Classic Turing Test 82
holy_calamity writes "New Scientist reports on the different flavors of Turing Test being used by AI researchers to judge the human-ness of their creations. While some strive only to meet the 'appearance Turing Test' and build or animate characters that look human, others are investigating how robots can be made to elicit the same brain activity in a person as interacting with a human would."
Dammit, I took one and failed! (Score:3, Funny)
Arrgg... I just took the test and failed. Does this mean that I'm ready to run Linux, and when I die I'll be running FreeBSD?
Re:Dammit, I took one and failed! (Score:4, Funny)
Eliza: Can you elaborate on that?
Re: (Score:2)
There's a point to be made there. If you want the machines to appear more human, perhaps you should ask the machines whether what they're communicating with is a human or not. That's surely a learning experience for the machine that it could incorporate into itself to appear more human.
Re: (Score:1, Funny)
(posting AC for obvious reasons, LOL)
Syntax (Score:5, Funny)
FORMALISTS' MOTTO: Take care of the syntax and the semantics will take care of itself.
Also, if you are animating a dude, he is thinking about sex. If you are animating anyone else, they are thinking about shopping.
Technically AI is not hard, you just need to lower your mind-mechanics bar and focus on trailer parks, and folk psychology.
Sweet (Score:1, Interesting)
Re:Sweet (Score:4, Funny)
Don't get me wrong, I have a great respect for these scientists, I just wonder how these sorts of real robots will fare on the market.
I think the idea is that robots will be used to do things that humans aren't willing to put up with.
Which means if you can't find someone to put up with you, then maybe a robot is for you.
Re: (Score:1)
I think the idea is that robots will be used to do things that humans aren't willing to put up with.
So you're saying, for example, you could make Bossbot 0xFF lick your balls while you fuck Fembot 0x01 in the ass, then make Fembot 0x02 suck your cock? Vertinox, get your mind out of the gutter!
Re:Augment Friends! (Score:2)
Sure.
Friends past college aren't up for going to Taco Bell at 12:37 at night anymore. Or for carrying on a really tough conversation about 4 editions of Dante's Inferno.
Re: (Score:2)
While we're at it, I'd like a robotic doppelganger of myself to attend boring management meetings while I have a pint at the pub.
Re: (Score:3, Funny)
I dunno, but I get the feeling that Solaris 10 must somehow be involved.
Re:Removing (Score:2)
He's joking, but I'm not.
It's been discovered that the truncated knowledge domain of Cybering has led it to be exploited for phishing. It's damn tough to tell the difference between a phishing bot and someone with terrible typing skills and worse computer knowledge.
Re: (Score:1, Offtopic)
My solution for that is simple: Ignore them both.
Seriously, if you can't be bothered to type English correctly, I can't be bothered to read what you're saying. Besides, I've found that most of those posts are people asking for help, not providing information, so I really lose nothing by ignoring them.
It's no different than meatspace, really. If someone came up to me, shoved their phone in my face and said 'Fix.' I'd ignore them, too. Even my boss doesn't do that and he holds my paycheck in his hands.
Abstracting cognitive response is far off (Score:5, Interesting)
The Turing test is for apparent 'human' intelligence, where robotics adds communications via 'expressiveness'. These are two different vectors: rote intelligence and capacity to communicate (via body language, and the rest of linguistics/heuristics).
The article doesn't abstract the basic cognitive capacity because it entangles it with the communications medium. The Turing Test ought to be done in a confessional, where you don't get to see the device taking the test. It would also provide a feedback loop on the test as well.
Re:Abstracting cognitive response is far off (Score:4, Insightful)
The Turing test is for apparent 'human' intelligence, where robotics adds communications via 'expressiveness'. These are two different vectors: rote intelligence and capacity to communicate (via body language, and the rest of linguistics/heuristics).
I don't think body language is the hard part, or that important, considering the majority of human communication these days involves just text or voice without seeing the other person. (That and certain people can't interpret body language anyway.)
The key problem with AI is:
Context
Context
Context
The number one failure of most Turing programs is that they only respond to the sentence you just typed, with no context from the conversation beforehand. A really good AI would be able to stay on topic and understand what has been discussed previously, so that it can expand on the topic instead of simply responding to the current line.
There are several ways to achieve this, but I don't know of any program that does it right yet. The easiest way to tell if you are talking to a chat bot is to refer to something earlier in the conversation and see if it responds appropriately.
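The "refer back to something earlier" test the comment describes can be sketched in a few lines. This is a toy, not a real chatbot: it just keeps a rolling window of past turns so it can answer a refer-back question that a contextless bot would fumble. The class name, regex trigger, and window size are all illustrative assumptions.

```python
import re
from collections import deque

class ContextBot:
    """Toy chatbot that keeps a rolling window of conversation
    history so it can refer back to earlier turns -- the test the
    parent comment describes for spotting contextless bots."""

    def __init__(self, window=10):
        self.history = deque(maxlen=window)  # rolling context window

    def respond(self, utterance):
        reply = self._reply(utterance)
        self.history.append(utterance)  # remember this turn for later
        return reply

    def _reply(self, utterance):
        # A contextless bot fails this question; we pass it by
        # consulting the stored history.
        if re.search(r"what did i (say|mention)", utterance.lower()):
            if self.history:
                return f'Earlier you said: "{self.history[0]}"'
            return "You haven't said anything yet."
        return "Tell me more."

bot = ContextBot()
bot.respond("My cat is named Schrodinger.")
print(bot.respond("What did I say before?"))  # quotes the earlier line
```

Real systems obviously need far more than a deque and a regex, but even this much is enough to pass the naive refer-back probe that trips up sentence-at-a-time bots.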
Re: (Score:2)
The article makes the mistake, however, of adding heuristics that really don't have anything to do with the tenets of the Turing test. Robotics isn't really relevant here; only the cognitive results are. Robotics is one discipline, and cognitive response is another. That's my problem with it.
Re:Abstracting cognitive response is far off (Score:4, Funny)
The easiest way to tell if you are talking to a chat bot
Reaction time is a factor in this, so please pay attention.
You're in a desert walking along in the sand when all of the sudden you look down, and you see a tortoise, it's crawling toward you. You reach down, you flip the tortoise over on its back. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over, but it can't, not without your help. But you're not helping. Why is that?
Re: (Score:1)
Re:Abstracting cognitive response is far off (Score:4, Interesting)
The main problem with AI is learning.
Nearly all work in the field now has a misplaced or completely wrong approach to achieving real AI. In order to understand how to make truly intelligent machines, we must first know how our own brains work. Most focus is on creating a machine that can perform in some very specific situation, like the Turing Test. However, these machines are not intelligent, they do not learn. They are not creating, storing and recalling patterns which are the crux of our cognitive abilities.
The first step to true AI is understanding how human intelligence is achieved in our brain.
Re: (Score:2)
Nearly all work in the field now has a misplaced or completely wrong approach to achieving real AI. In order to understand how to make truly intelligent machines, we must first know how our own brains work. Most focus is on creating a machine that can perform in some very specific situation, like the Turing Test. However, these machines are not intelligent, they do not learn. They are not creating, storing and recalling patterns which are the crux of our cognitive abilities.
Why do you assume that human-like intelligence is the only "real" intelligence? When real AI comes, it will probably bear as much resemblance to human intelligence as airplane flight does to bird flight.
Re: (Score:2)
Why do you assume that human-like intelligence is the only "real" intelligence? When real AI comes, it will probably bear as much resemblance to human intelligence as airplane flight does to bird flight.
Ding ding ding. Thanks for answering your own question. The plane does not work like the bird whatsoever, but you do know what they studied in order to develop it, right? In case you aren't following: they looked at other animals with flight. They aimed to understand it and then applied mechanics/engineering to emulate it. AI is no different.
Again, in order to build truly intelligent machines, we must first grasp what intelligence actually is. We have not done so.
Re: (Score:2)
Machine learning really isn't that difficult. Context is also not difficult at all. Here is how you do it. First, you need to organize your data into categories. Then, you will need a size limited revolving heap to sort these categories. This alone will take care of the context problem as well as datapath indexing. Real learning comes in three layers. First layer is categorical, the second layer is relational (think of edge and cost), and the last layer is informational (real data). Learning on the informat
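The comment above is truncated, but its "size-limited revolving heap" of categories reads like a bounded, recency-ordered store: recently touched categories stay hot, the oldest get evicted, and whatever is currently in the store *is* the conversational context. Here's a minimal sketch under that assumed reading (the name `RevolvingHeap` and the LRU-style eviction are my interpretation, not the commenter's code):

```python
from collections import OrderedDict

class RevolvingHeap:
    """Bounded, recency-ordered store of categories -- one reading of
    the 'size-limited revolving heap' described above (assumed to mean
    an LRU-style structure; the original comment is cut off)."""

    def __init__(self, capacity=3):
        self.capacity = capacity
        self.items = OrderedDict()  # category -> associated data

    def touch(self, category, data=None):
        if category in self.items:
            self.items.move_to_end(category)    # refresh recency
        else:
            if len(self.items) >= self.capacity:
                self.items.popitem(last=False)  # evict oldest category
            self.items[category] = data

    def context(self):
        # Whatever survives in the store is the current context,
        # ordered oldest to most recent.
        return list(self.items)

heap = RevolvingHeap(capacity=2)
heap.touch("pets"); heap.touch("music"); heap.touch("pets"); heap.touch("ai")
print(heap.context())  # "music" evicted; most recent category last
```

The relational and informational layers the comment mentions would hang off the stored `data` slots; only the categorical layer is sketched here.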
Re: (Score:2)
In Turing's day, computers barely existed and very few people had any idea what they could do or not do. At that time, philosophical arguments about whether a machine could, in principal, ever be intelligent were taken seriously. Turing responded to this nonsense by pointing out, correctly, that intelligence is as intel
Re: (Score:3, Funny)
"A really good AI would be able to keep on topic and understand what has been discussed previously"
Such an AI posting to slashdot would quickly be revealed.
Re: (Score:1)
>There are several ways to achieve this
What are they? I'm writing a novel about the development of an AI (orlandoutland.wordpress.com)
Re: (Score:2)
Wouldn't you say the piano playing robot is trying to do both? It tricks its audience into thinking it is real, but music is not purely mechanical: dynamics, tone, and style can be subtle things a human can detect. Piano is an easier instrument to fake than, say, cello. I can tell a good cellist from a bad one just by asking them to play anything for a few seconds (even a single note), and not from tangibles like vibrato and mechanical prowess, but from intangibles like attack, bow movement, and phrasing.
Re: (Score:3, Informative)
Personality and other heuristics are bound to occur. The tests, however, aren't really based on whether someone can play like Billy Joel or Chopin.
Turing was very aware of asking the right questions to get the right answers of a cognitive, self-aware entity. How that entity is abstracted as a physical entity is the mistake of the article. I can't play piano-- do I fail the test? Through what disciplines do we decide that there are cognitive components that establish a baseline of sentience and intelligence?
Ninnle passes Turing Test! (Score:1, Funny)
I think the real test (Score:2)
Re: (Score:3, Interesting)
Re: (Score:3, Funny)
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
We use turing tests on new hires at my job (Score:5, Funny)
We do interviews via IM and if the interviewee cannot convince two out of three of the interviewers they are not a bot, they don't make it to the second round.
Wonderful! (Score:5, Funny)
Re: (Score:1)
The wife and I were looking for ways to spice up the ol' Turing Test.
They sell "warming" lubricant at your local drug store. However, as it is a test of your Turing abilities, your wife can't be involved, and will instead be replaced by Sancho!
Re: (Score:1)
What's the difference? (Score:2)
I'd be really surprised if something that appeared to be X caused a different pattern in the brain than X. If X causes a certain response in the brain, and Y does not, how can you say that Y appears to be X? "appearing" is something that happens entirely in the brain. There has to be at least some common response in the brain if two things appear to be similar.
Re: (Score:2)
But if the thing that appears to be X is not exactly like X, you might notice the difference subconsciously. Testing for brain activity might detect whether you can subconsciously tell something that appears to be X and the true X apart.
The classic Turing test has been invalidated (Score:1)
Given the level to which conversation has sunk, they ought to flip it around - prove that you are human via chat or IM.
I'm betting a significant percentage of the populace would fail.
(now, if only we could make that a requirement for voting...)
Re: (Score:1)
Chatbots (Score:2, Informative)
The best chatbots I've come across are at www.a-i.com
Not quite good enough to pass the Turing test yet, but some are quite witty.
Re: (Score:1)
This bot is smart enough to make that a link, for the convenience of others. www.a-i.com [a-i.com]
- Amy Iris
Bot Supreme
---
It's complicated. Smile if you're gettin' it.
Human-ness != Intelligence (Score:5, Insightful)
I understand people's fear of machine intelligence exceeding that of humans, but it is actually more dangerous to have machines merely mimicking human-ness than to have machines that are intelligent enough to actually understand what we say better than another human could.
That means more than merely having some mockery of mirror neurons for "empathy". It means genuine understanding: The ability to model.
The reason this is central to our relationship to our machines should be obvious: Friendly AI really boils down to the problem of effectively communicating our value systems to the AIs.
That's why natural language comprehension is the first step to friendly AI.
HENCE:
Re: (Score:1)
Re:Turing test == unhelpful target (Score:3, Interesting)
Furthermore, in an in-depth conversation, surely an AI would have to lie (talk about its family, its working life, etc)...
If we continue to enshrine the standard of the Turing test, we're aiming for a generation of inherently untruthful fake-people machines. If it 'knows' that many/most things it tells us are lies, it ma
Re: (Score:2)
The Turing test isn't like a litmus test - you don't get a clear and definite result either way.
Failing to pass the Turing Test doesn't mean a thing isn't intelligent, but if we make something that can pass, then it's something to take notice of.
Re: (Score:1)
Re: (Score:2)
> Why are people so interested in mimicking humans? Isn't intelligence far more interesting than human-ness?
Ah, but what IS intelligence? The beauty of the Turing test is that it 'proves' a program is intelligent when it cannot be distinguished from something we already consider to be intelligent (humans), without (and this is the important bit) the need to properly define intelligence.
Of course when we have programs that can pass the turing test it will be much easier to convince people that a
Defining Intelligence (Score:2)
Same test? (Score:1)
Re: (Score:3, Informative)
Insensitive Clod! (Score:2, Funny)
While some strive only to meet the 'appearance Turing Test'
I don't come here to be insulted, you insensitive clod!
Re: (Score:1)
Hang on... (Score:2)
Re: (Score:1)
Shouldn't we make something that passes the original Turing Test first, before we go moving the goalposts?
Maybe mimicking appearance is easier (you know, like appearance + intelligence = regular Turing test), or a subset of it.
Passing the human test. (Score:2)
Having chatted with elbot, I have to say that they must have had some pretty dense testers.
not getting it (Score:2)
Other great achievements (Score:1)
Never mind Humans (Score:1)
I'm still waiting for a computer to convince me it's as smart as a parrot.
--
RIP Alex
Turing Test won with Artificial Stupidity (Score:3, Funny)
Artificial intelligence came a step closer this weekend when a computer came within five percent of passing the Turing Test [today.com], which a computer passes if people cannot tell it apart from a human.
The winning conversation was with competitor LOLBOT:
The human tester said he couldn't believe a computer could be so mind-numbingly stupid.
LOLBOT has since been released into the wild to post random abuse, hentai manga and titty shots to 4chan, after having been banned from YouTube for commenting in a perspicacious and on-topic manner.
LOLBOT was also preemptively banned from editing Wikipedia. "We don't consider this sort of thing a suitable use of the encyclopedia," sniffed administrator WikiFiddler451, who said it had nothing to do with his having been one of the human test subjects picked as a computer.
"This is a marvellous achievement, and shows great progress toward goals I've worked for all my life," said Professor Kevin Warwick of the University of Reading, confirming his status as a system failing the Turing test.
An alternative test (Score:1)
The classic Turing Test is not so good because it focuses on the appearance of intelligence rather than the mechanism for it. People design systems to do well at the test, thus evolving current AI work towards mimicking human conversation (including potential thought involved) rather than actually creating new thoughts.
I think a much harder test for machine intelligence would be passed when the *machine* cannot reliably tell itself from a human!
"I was just following orders..." (Score:1)
hodgepodge sans analysis (Score:1)
This article really doesn't have that much to do with the Turing test for most of its extent. The point of the Turing test isn't merely that under some circumstances machines can be confused with humans. The whole point of the Turing test is that it takes something that we think is essential to being intelligent or being conscious, and has the machine replicating that exactly. Or at least, that's how Turing intended it. Building sophisticated mannequins doesn't cut it - hopefully no-one thinks that merel