A.I. and Robotics Take Another Wobbly Step Forward
CWmike writes to tell us that artificial intelligence and robotics have made another wobbly step forward with the most recent robot from Stanford. "Stair" is one of a new breed of robot that is trying to integrate learning, vision, navigation, manipulation, planning, reasoning, speech, and natural language processing. "It also marks a transition of AI from narrow, carefully defined domains to real-world situations in which systems learn to deal with complex data and adapt to uncertainty. AI has more or less followed the 'hype cycle' popularized by Gartner Inc.: Technologies perk along in the shadows for a few years, then burst on the scene in a blaze of hype. Then they fall into disrepute when they fail to deliver on extravagant promises, until they eventually rise to a level of solid accomplishment and acceptance."
Surprised it hasn't been said yet (Score:3, Funny)
I for one welcome our new learning, seeing, navigating, manipulating, planning, reasoning, speaking, and natural language processing Stair overlords.
Re: (Score:1, Funny)
We will fight the Stair overlords on the beaches, we will fight them in the streets, we will never, never surrender... Wait, what's that you say? A completely human-like pleasure model that comes in a 20-year-old Raquel Welch version will be available?
Never mind.
OK. I'll say it (Score:5, Funny)
Re: (Score:2)
I for one welcome our new learning, seeing, navigating, manipulating, planning, reasoning, speaking, and natural language processing Stair overlords.
*whispers*
You forgot plotting.
*mumbles quietly to himself*
The three types (Score:5, Insightful)
Disclosure: I am a Hofstadterian, so I am biased here.
There are basically three types of AI people: the neverlands, the hype masters, and the hope monks. The neverlands, like Searle, deny that intelligence is a product of information processing. Searle has made a sport of claiming that AI will never happen because it does not have the "causal powers of the brain".
Then there are these types, like those reported here. Hypeware at its best. Look, it's alive, it's (F*CKING GASP) becoming self-aware, etc. hype hype hype ad nauseam. But look at its innards _very_ closely, and it's pretty empty in there.
There are so many pitfalls involved that it's impossible to mention all the faulty premises in each project. But just for starters, consider this: when we program a machine to deal with the number 2, it usually goes into binary form 10 and there it stays, ready for manipulation. But how plausible is this psychologically? NOT AT ALL! When _we think_ of a "2", hordes of disparate, subliminal images come to mind, such as the gestalt of the digit, the sound of it, the fact that it's a prime (if you're math inclined), a couple (if you're a therapist), even-ness, odd-ness, the words "two" and "too", and a huge amount of semi-visible mental imagery.
Whenever you see a hyped AI project, just consider how it deals with the numeral 2. Most likely it's a _fake_. The process it goes through is not psychologically plausible. Which means that it will fail to understand human concepts.
Some projects with machine learning actually make a habit of finding _meaning_ in highly correlated words (i.e., words that tend to occur together in documents). That is a _joke_. Meaning NEVER comes from correlation. If it did, "lawyer" and "telephone" would have much more to do with each other than "lawyer" and "vampire", or "politician" and "scumbag".
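For illustration, the kind of co-occurrence counting being criticized here amounts to roughly the following toy sketch; the documents and counts are made up:

    from collections import Counter
    from itertools import combinations

    # Toy version of the co-occurrence counting criticized above: "meaning" as
    # nothing more than how often two words show up in the same document.
    docs = [
        "the lawyer answered the telephone",
        "the lawyer called the client on the telephone",
        "the vampire avoided the sunlight",
    ]

    pair_counts = Counter()
    for doc in docs:
        words = sorted(set(doc.split()))
        pair_counts.update(frozenset(pair) for pair in combinations(words, 2))

    print(pair_counts[frozenset({"lawyer", "telephone"})])  # 2
    print(pair_counts[frozenset({"lawyer", "vampire"})])    # 0

The counts say "lawyer" has more to do with "telephone" than with "vampire", which is exactly the conflation of correlation with meaning being complained about.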
Sorry for the rant, but I work hard to understand fucking hard issues, and seeing these folks get slashdotted with nothing to show for it just begs for a rant. If you want to see really serious research, take a look at Douglas Hofstadter's "Fluid Concepts and Creative Analogies" and/or google for Kemp's MIT thesis.
Re: (Score:2)
Sorry, but if you knew anything about neurons, you'd know that correlation is the way neurons learn.
Think of two inputs firing at the same time. The receiving neurons that get both signals within a short enough window of time grow the most. That's basically the learning happening.
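In code, the idea is roughly the following Hebbian sketch; the inputs, sizes, and learning rate are made up, and the normalization is Oja-style, added only to keep the weights bounded:

    import numpy as np

    # Minimal Hebbian sketch of the point above: connections from inputs that
    # fire together with the neuron's output grow the most.
    rng = np.random.default_rng(0)
    weights = rng.uniform(0.05, 0.1, 4)       # small random synaptic weights
    learning_rate = 0.1

    for _ in range(200):
        # Inputs 0 and 1 always fire together; inputs 2 and 3 fire independently.
        x = np.array([1.0, 1.0, float(rng.integers(0, 2)), float(rng.integers(0, 2))])
        y = weights @ x                        # post-synaptic activity
        weights += learning_rate * y * x       # Hebb: delta_w ~ pre * post
        weights /= np.linalg.norm(weights)     # keep the weights bounded

    print(weights)  # the correlated inputs 0 and 1 end up with the largest weights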
But what am I doing, answering a guy who is dripping with anger. Get a therapist. Learn that it's OK to be wrong. Then learn how to really back up your good arguments.
Re: (Score:2)
Re: (Score:1)
Re: (Score:3, Insightful)
Disclosure: I am a Hofstadterian, so I am biased here.
There are basically three types of AI people: the neverlands, the hype masters, and the hope monks. The neverlands, like Searle, deny that intelligence is a product of information processing. Searle has made a sport of claiming that AI will never happen because it does not have the "causal powers of the brain".
Then there are these types, like those reported here. Hypeware at its best. Look, it's alive, it's (F*CKING GASP) becoming self-aware, etc. hype hype hype ad nauseam. But look at its innards _very_ closely, and it's pretty empty in there.
I think you'd find that if we could look into the functioning of the human mind in as much detail as we can look into an AI program, you'd find that it's pretty empty in there, too. I believe that intelligence is an emergent property of a lot of fairly simple processes. Yes, that's a matter of belief, not of proof.
There are so many pitfalls involved that it's impossible to mention all the faulty premises in each project. But just for starters, consider this: when we program a machine to deal with the number 2, it usually goes into binary form 10 and there it stays, ready for manipulation. But how plausible is this psychologically? NOT AT ALL! When _we think_ of a "2", hordes of disparate, subliminal images come to mind, such as the gestalt of the digit, the sound of it, the fact that it's a prime (if you're math inclined), a couple (if you're a therapist), even-ness, odd-ness, the words "two" and "too", and a huge amount of semi-visible mental imagery.
Whenever you see a hyped AI project, just consider how it deals with the numeral 2. Most likely it's a _fake_. The process it goes through is not psychologically plausible. Which means that it will fail to understand human concepts.
A machine does not have to reproduce the mechanisms of the human mind in order to display intelligence; it has to emulate the performance. If the inputs are similar and the outputs are similar what happens in the middle is unimportant.
Re: (Score:2)
A machine does not have to reproduce the mechanisms of the human mind in order to display intelligence; it has to emulate the performance. If the inputs are similar and the outputs are similar what happens in the middle is unimportant.
A ball and a bat cost $110. The bat costs $100 more than the ball. How much is the ball? $10, right? Do you think computers can solve this? There is this general faulty reasoning that _understanding is a property of a representation_. That's just wrong. Just as temperature is not a property of molecules, understanding is not a property of a representation. It is a property of a process. In order to display the same "intelligent" behavior we do, machines have to go through the same process. Otherwi
Re: (Score:1)
Why would a "machine" answer $5? If you give the problem in a way that the machine can "understand" (as a Python script, for example) it would give you the right answer.
If you give the operation "5 + 5 =" to a person she would give you the same answer as a calculator. Getting computers to understand more difficult problems is just a matter of developing new layers.
Oh, representing the number '2' as more than 10 in binary (and hence having more properties) is easy: just call Double(2);
It's called OOP and bee
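Presumably what's meant is something like wrapping the bare value in a class that carries extra properties; a toy sketch, with made-up class and property names:

    # Toy sketch of giving '2' more properties than its bare binary value.
    class RichNumber:
        def __init__(self, value: int):
            self.value = value

        @property
        def binary(self) -> str:
            return bin(self.value)               # '0b10' for 2

        @property
        def is_even(self) -> bool:
            return self.value % 2 == 0

        @property
        def is_prime(self) -> bool:
            n = self.value
            return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

        @property
        def word(self) -> str:
            return ["zero", "one", "two", "three"][self.value]   # toy lookup

    two = RichNumber(2)
    print(two.binary, two.is_even, two.is_prime, two.word)   # 0b10 True True two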
Re: (Score:2)
Why would a "machine" answer $5?
because $5 is the right answer, not $10, which is what most humans give, incorrectly. We need to understand how we process information before we have intelligent machines, or else they'll be forever and ever as brittle as they are now.
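For what it's worth, stated in a form the machine can work with (the Python script the grandparent mentioned), the arithmetic does come out to $5; a minimal sketch:

    # Bat-and-ball puzzle from the thread: ball + bat = $110 and the bat costs
    # $100 more than the ball, so 2*ball + 100 = 110.
    total = 110
    difference = 100
    ball = (total - difference) / 2    # 5.0
    bat = ball + difference            # 105.0
    assert ball + bat == total
    print(f"ball = ${ball:.0f}, bat = ${bat:.0f}")   # ball = $5, bat = $105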
Re: (Score:2)
Re: (Score:3, Interesting)
A machine does not have to reproduce the mechanisms of the human mind in order to display intelligence; it has to emulate the performance. If the inputs are similar and the outputs are similar what happens in the middle is unimportant.
There is this general faulty reasoning that _understanding is a property of a representation_. That's just wrong. Just as temperature is not a property of molecules, understanding is not a property of a representation. It is a property of a process. In order to display the same "intelligent" behavior we do, machines have to go through the same process.
There is no ghost in the machine. The human brain is at best a Turing-complete computing engine - at best, because we can prove that it is not possible to be more. And we can prove that (modulo limited storage, which is also an issue for human brains) our computers are also Turing complete. So it is not possible that our computers cannot do what a human brain can do - although admittedly we don't yet know how to program them to do it.
But we will find out, and when we do, I predict we'll look at the trivial littl
Re: (Score:2)
Don't know why that posted as AC... but it's my post.
Re: (Score:1)
"THEY" dont want you to know.
Now you may disapear and be sucked into the net. Perhaps youll be turned into a piece of google, a blob at slashdot.
Thinking straight is never a good idea. Expressing good ideas nowdays, is suicidal.
Re: (Score:2)
To address your flawed understanding of intelligence, I am going to have to refer you back to one of my previous posts.
http://hardware.slashdot.org/comments.pl?sid=1102185&cid=26579225 [slashdot.org]
Please note that this is by no means a full understanding of how intelligence works. However, it provides answers to many of your questions and misunderstandings as to why we think the way we do and how it is being done. You may even go a step further and actually implement many of the concepts discussed in the post. A side
Re: (Score:1)
Re: (Score:1)
Have you seen my stapler? (Score:1)
"Here is your stapler," says Stair, handing it to the man. "Have a nice day."
Re: (Score:1, Redundant)
Now, have you seen the memo about the TPS reports?
Re: (Score:1)
Re: (Score:2)
Can I have a piece of cake?
People perception (Score:2, Interesting)
The generals population of AI is the Data, or Terminator. Some how superior to us humans who will not make mistakes. However real AI the computer makes a lot of Mistakes, and learns from them. But being that a standard computer has the brain power of a bug, it isn't surprising that AI meets the hype.
Re:People perception (Score:5, Interesting)
Even my AI professor in school pointed to Data as really the end goal of AI research (as well as a character from Battlestar Galactica, though I don't watch that show). I think many people are aware that modern AI has roughly the intelligence of an animal. That's much improved over the AI from when the character Data was created, when the intelligence was more like that of a single-celled organism.
Of course, considerations must always be made for disaster... [xkcd.com]
I'm always amazed how broad a field AI really is; algorithms started in AI theory for moving robots around a room can be applied nearly everywhere.
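As a rough illustration of the kind of room-navigation search meant here, a toy breadth-first search over a made-up grid (the grid and coordinates are purely illustrative):

    from collections import deque

    # Toy grid: 0 = free cell, 1 = obstacle. BFS returns a shortest path of
    # grid coordinates from start to goal, or None if no path exists.
    GRID = [[0, 0, 0, 1],
            [1, 1, 0, 1],
            [0, 0, 0, 0],
            [0, 1, 1, 0]]

    def shortest_path(start, goal):
        rows, cols = len(GRID), len(GRID[0])
        frontier = deque([(start, [start])])
        seen = {start}
        while frontier:
            (r, c), path = frontier.popleft()
            if (r, c) == goal:
                return path
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and GRID[nr][nc] == 0 and (nr, nc) not in seen):
                    seen.add((nr, nc))
                    frontier.append(((nr, nc), path + [(nr, nc)]))
        return None

    print(shortest_path((0, 0), (3, 3)))

The same frontier-expansion idea shows up well outside robotics, which is part of why these algorithms travel so far.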
Re:People perception (Score:5, Insightful)
It could even be argued that the ability to navigate a room is the same set of problem solving skills that informs all other intelligence.
It's spatial understanding combined with analyzing the capabilities of the agent to complete the task. Include a door and you have extremely complex problem solving and learning abilities.
Re: (Score:3, Interesting)
Then again, some single cell organisms are pretty smart [abc.net.au].
Seriously though, I don't think AI has yet reached the point of being as smart as your typical animal (by which I'm assuming a low-level mammal). Not without substantial loans of intelligence on the part of the AI operator/designer.
Re: (Score:2, Interesting)
You said something unpopular about AI. It's a good job there's no -1 sceptic modpoint, or I wouldn't even have seen your comment.
As far as I can see, AI has reached the point of being as smart as a snail that's really, really good at chess.
...if I've offended any snail slashdot readers, I apologise profusely.
Re: (Score:2)
I think many people are aware that modern AI has roughly the intelligence of an animal.
I think that modern AI is not even close to the intelligence of an animal. I remember our cognitive robotics course, where we were shown a video of a bird figuring out how to get its food from a little toy trap that some researchers set.
It was amazing how the bird used a piece of wire to make a hook in order to pick out the basket from the trap. I don't think AI would figure out how to do that (yet).
Re: (Score:1)
Re: (Score:2)
I think many people are aware that modern AI has roughly the intelligence of an animal.
You must be talking about some pretty simple animals.
Re:People perception (Score:4, Funny)
The generals population of AI is the Data, or Terminator. Some how superior to us humans who will not make mistakes.
Clever use of first sentence to invalidate second :-)
Re: (Score:3, Insightful)
But being that a standard computer has the brain power of a bug, it isn't surprising that AI meets the hype.
I was thinking it was a clever use of the first part of the last sentence to invalidate the last part of the last sentence.
Re: (Score:2, Redundant)
Data was an enigma to me.
He would go into a holodeck to learn about emotions from computer software.
Was I the only one confused by this? Why not... you know... just give him the same programming as all of the holodeck characters? It seemed emotional and social behavior was easy to teach to 20 billion unique characters on a holodeck program for earth but beyond the abilities of Data?
Re: (Score:2)
Re: (Score:2)
That's not true. When Data goes into the Holodeck to learn humor, his teacher tells him outright what is and is not funny.
Being able to judge humor and react appropriately was beyond Data, but not beyond his holographic tutor.
Re: (Score:2)
It might also explain his curious preoccupation with fuzzy cats.
LOL, I just looked up the "Ode to Spot." Geez, I still miss this show...
Ode to Spot
Felis catus is your taxonomic nomenclature,
an endothermic quadruped carnivorous by nature?
Your visual, olfactory and auditory senses
contribute to your hunting skills, and n
Re: (Score:1)
Actually, speaking of these things... why'd they name a robot Stair when it can't even climb stairs?!
Re: (Score:2)
Skynet Terminator AI Robot?
STanford Artificial Intelligence Robot, really.
Re: (Score:2)
"But being that a standard computer has the brain power of a bug..."
I didn't realize we were that close to replacing most humans with robots.
Re: (Score:2)
Re: (Score:1)
Nasal monotone (Score:2)
New Algorithm ? (Score:2, Funny)
Bearing in mind that this new robot is called STAIR, does that mean it is using gradient descent algorithms ?
Re:New Algorithm ? (Score:5, Funny)
Bearing in mind that this new robot is called STAIR, does that mean it is using gradient descent algorithms ?
Naw, it means that in a fit of irony, they named it after the one thing it couldn't handle.
Re: (Score:1)
Naw, they found that the thing would look for a long time before making a decision but they used the wrong form of the word (stare)...
Re: (Score:2)
And a damn good thing. Last thing we need is racially hating robots out to exterminate us that can climb staiNOCARRIEREXTERMINATEEXTERMINATEEXTERMINATE
Flexible Frank! (Score:1)
Hype? (Score:5, Interesting)
This article points out the problems of over-hyped advances in robots, while also claiming that this robot has transitioned away from narrowly defined domains?
The voice recognition & language processing component alone would be years ahead of anything else if it worked well outside of a "narrow, carefully defined domain". It seems like they are yet again over-hyping new research.
Re:Hype? (Score:5, Insightful)
Re: (Score:2)
>>People have been predicting it for decades, but the actual nuance of such an achievement is much more complex than most are able to comprehend
You haven't heard of Playstation Home? Second Life? Quake?
I used to work for a company that produced goggles for VR work (used mainly in the military as personal HUDs) and we produced some video games using them. It was fun, but in the consumer market people seemed to prefer just using a mouse to turn instead of having to snap one's head around to aim at an enemy sneaking up behind you in Quake. It's a mature technology (no hype needed, it works), but people just prefer the mouse, monitor, and keyboard approach to VR goggles and haptic gloves.
Re: (Score:2)
>>People have been predicting it for decades, but the actual nuance of such an achievement is much more complex than most are able to comprehend
You haven't heard of Playstation Home? Second Life? Quake?
I used to work for a company that produced goggles for VR work (used mainly in the military as personal HUDs) and we produced some video games using them. It was fun, but in the consumer market people seemed to prefer just using a mouse to turn instead of having to snap one's head around to aim at an enemy sneaking up behind you in Quake. It's a mature technology (no hype needed, it works), but people just prefer the mouse, monitor, and keyboard approach to VR goggles and haptic gloves.
You are a bit south of my point. Your standards are realistic and not at all related to the hype surrounding virtual reality. The sheer fact that you mention Second Life, Quake, or Playstation Home as matching the hype betrays the fact that you weren't listening when the hype men were talking (not a bad thing).
When I talk about the hype, I mean all the people claiming that you would be experiencing, virtually, what we experience on a day to day basis. A photo-realistic experience close to that of regular life.
Re: (Score:2)
>>With a realistic viewpoint, I'm sure the technology is progressing fine. It isn't quite matching that of the perpetual hype machine surrounding it, though. No surprise there.
Fair enough. I saw Lawnmower Man.
Re: (Score:1)
Indeed. Although the example listed in the article (successfully responding to "go fetch me a stapler") is impressive, I highly doubt that if I plunked the robot down in my office it could perform that task. Nor could it probably handle a request for an arbitrarily different object in their lab (go fetch me a light bulb). They've integrated a lot of difficult problems and managed to get it to work in their environment, but unless they're well ahead of anyone else this is still within a narrowly defined pro
Re: (Score:1)
The voice recognition & language processing component alone would be years ahead of anything else if it worked well outside of a "narrow, carefully defined domain".
Agreed. And even then it will still be short of the intelligence of a first grader. In the murky waters of human communication, the words are just a tiny piece of the puzzle. Where's the recognition of posture, gesture, expression, tone, accent, speed, pauses, and choice of words that tells you what a person really means by asking you to fetch a stapler?
Sometimes a stapler isn't just a stapler.
Re:Yay! (Score:5, Funny)
And we could call the offspring of STAIR and MASTER...
the enslavement machine!
Obligatory film reference.... (Score:4, Funny)
I sure as hell hope they left out the lip-reading module.
Re: (Score:1)
All too human (Score:2)
Sounds like a few humans I can think of (politicians?) Well, except for that "solid accomplishment and acceptance" part.
Other Robots (Score:3, Informative)
Recent articles on robots.
Yeah, but.. (Score:4, Funny)
Surely there's money in this? (Score:2, Insightful)
I guess the root of my question is, by pursuing AI are you pushing yourself into becoming an academic for the rest of your life?
Re: (Score:3, Interesting)
Re: (Score:2)
It depends what you want to do with AI, it's a wide and varied subject.
If you want to develop robots like in the article, then yeah you'll need to become an academic.
If you're not too picky about what AI techniques you use, then you don't so much look for a job that says "We want someone who understands AI" as look for a job where you know you'll be able to do things better with AI.
This is the path I ended up going down, I simply looked for a software engineering job at a firm that was willing to nu
I'm against these sites in principle (Score:1)
Asimo (Score:1, Interesting)
I would like to know how it is better than Asimo or, for that matter, any of the advanced Japanese robots
Re: (Score:2)
I would like to know how it is better than Asimo or, for that matter, any of the advanced Japanese robots
You must be new here. It's from Stanford, who gave us google, nike, ...yahooo...and silicon graphics... Oh shit
solid accomplishment and acceptance? (Score:2)
And this happens when? On slashdot, I think this last part of the cycle is purely hypothetical.
Re: (Score:3, Funny)
Seriously. We've been doing this HTML thing for what, 20 years now</a>? And still we can't accomplish the proper use of the closing tag. ;) [wikipedia.org]
Hyper Cycle (Score:3, Insightful)
Turing test for people (Score:2)
trying to integrate learning, vision, navigation, manipulation, planning, reasoning, speech, and natural language processing,
IOW walk and chew gum at the same time. Heck, I know people who can't pass that test.
"Stair"? (Score:2)
"Stair" is one of a new breed of robot that is trying to integrate learning, vision, navigation, manipulation, planning, reasoning, speech, and natural language processing.
Because LVNMPRSLP doesn't make such a catchy acronym.
I'll build robot soldiers. (Score:1, Troll)
I'm building a robot soldier. It will have a pistol in one hand and a club in the other, so it can club a bunch of people and shoot the rest, because the pistol will be fed by 2000 rounds stored in its arm. It will weigh 400 lbs. Build 500,000 of these things, and we won't have to worry about hearts and minds. We'll just unleash fire-breathing metal terror on our enemies, and they on us, and all that will be left is a bunch of robots running in circles until they run out of humans and batteries. But hey,
Re: (Score:2)
I have made this cool ticking necklace for Stair (Score:1)
Sarah Connor
Re: (Score:1)
AI is artificial in the same sense that artificial flavour is artificial. Yes, substances have a flavour, and it's not we who give those substances a flavour. But we are the ones who synthesize those substances, and we do so because of their specific flavour. And it's the same with AI: we are the ones who produce the systems, and we do it because of their "intelligence".
Re: (Score:2)
hey, are you saying AI has a flavor? i don't think /b/ and intelligence belong together like that
Re: (Score:2, Informative)