Can a Robot Learn a Language the Way a Child Does? (zdnet.com) 86
MIT researchers have devised a way to train semantic parsers by mimicking the way a child learns language. "The system observes captioned videos and associates the words with recorded actions and objects," ZDNet reports, citing the paper presented this week. "It could make it easier to train parsers, and it could potentially improve human interactions with robots." From the report: To train their parser, the researchers combined a semantic parser with a computer vision component trained in object, human, and activity recognition in video. Next, they compiled a dataset of about 400 videos depicting people carrying out actions such as picking up an object or walking toward an object. Participants on the crowdsourcing platform Mechanical Turk wrote 1,200 captions for those videos, 840 of which were set aside for training and tuning. The rest were used for testing. By associating the words with the actions and objects in a video, the parser learns how sentences are structured. With that training, it can accurately predict the meaning of a sentence without a video.
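For readers who want a concrete picture, here is a minimal sketch of how the data split described above might be organized. The field names, the placeholder training function, and the code itself are illustrative assumptions, not the researchers' actual pipeline:

```python
import random

# Toy stand-ins for the ~400 videos and 1,200 crowd-sourced captions mentioned
# in the summary; the structure and field names here are invented for illustration.
captions = [{"video_id": i % 400, "caption": f"a person picks up object {i}"}
            for i in range(1200)]

random.seed(0)
random.shuffle(captions)
train_and_tune = captions[:840]   # 840 captions for training and tuning, per the summary
held_out_test = captions[840:]    # remaining 360 captions held out for testing

def train_weakly_supervised_parser(examples):
    """Placeholder: the parser never sees gold parse trees, only (video, caption)
    pairs, and must align words with objects and actions detected in the video."""
    for example in examples:
        _ = example["caption"].split()   # real training would update parser parameters here

train_weakly_supervised_parser(train_and_tune)
print(len(train_and_tune), len(held_out_test))   # 840 360
```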
No (Score:1)
No
Re:No (Score:5, Insightful)
What if we rephrased the question, e.g., "What would an AI need to be able to acquire grammar and semantics by being trained on natural language sentences (the way human children are)?"
Those of us who have a mechanistic position on consciousness and intelligence see no theoretical obstacle to building a machine that does anything or indeed everything humans do. But many of us are dubious that AI will ever achieve true parity with the full range of human abilities. My doubts are economic in nature. I doubt that any such generalist AI will ever be the cheapest way to get whatever it is we want out of a machine.
Take the "AI" that's hot in the market now. It's not an AI like the robots in Asimov's storeis -- a mechanistic simulation of what people can do. The machine learning stuff being flogged by companies today is just a way of replacing people on certain tasks with something that is cheaper and in some case more consistent, albeit less versatile.
There's one exception to the rule that a generalist AI isn't really what we want, and that's if we want to prove a non-material soul is unnecessary for explaining anything about humanity. And I doubt anyone really cares enough about such a demonstration to pay what it would take to do it convincingly.
AIs are trained on grammatical sentences (Score:2)
That is what "statisitcal" language learning is about. Particularly useful for translation.
Your recharacterization is indeed much better than "to learn the way children learn" as we do not know how children learn.
AIs can already do many things better than people. If an AI were ever as intelligent as a small child at everything, it would already be a lot more intelligent than an adult at many other things.
An AI is not a human. It is a different beast entirely.
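To make the "statistical language learning" point above concrete, here is a minimal bigram language model estimated from a handful of grammatical sentences. The toy corpus and the maximum-likelihood estimate are my own illustrative choices, not anything from the article:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus of grammatical sentences.
corpus = [
    "the child picks up the ball",
    "the child walks toward the door",
    "the robot picks up the ball",
]

bigram_counts = defaultdict(Counter)
context_counts = Counter()
for sentence in corpus:
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    for prev, cur in zip(tokens, tokens[1:]):
        bigram_counts[prev][cur] += 1
        context_counts[prev] += 1

def bigram_prob(cur, prev):
    """Maximum-likelihood estimate of P(cur | prev)."""
    if context_counts[prev] == 0:
        return 0.0
    return bigram_counts[prev][cur] / context_counts[prev]

print(bigram_prob("picks", "child"))  # 0.5: "child" is followed by "picks" in 1 of its 2 uses
print(bigram_prob("ball", "the"))     # ~0.33: "the" is followed by "ball" in 2 of its 6 uses
```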
Re: (Score:2)
Indeed. The thing with automated translation is that it does not require an independent world-model. The world model, and the placement of objects, attributes, and actions in it, is already contained in the input. That is the only reason machines can actually do translation any better than a gigantic look-up table would.
Re: (Score:2)
You actually have to "know" pretty much nothing for machine translation, as machines cannot do that. Machines can do rules (if precisely enough defined); they cannot do understanding. This is what will keep automated translation at a low quality level permanently. High quality translations will remain a domain of smart humans with a deep understanding of both source and target culture.
The funny thing is that the only situation where machine translations can go beyond that is actually the giant look-up table
Re: AIs are trained on grammatical sentences (Score:3)
Re: (Score:2)
Oh? And where do you get that certainty? Because Science very much does not say "we are machines". Science says "we have no clue how this works". Physicalism is religion, not Science. It has no scientific basis.
Re: AIs are trained on grammatical sentences (Score:2)
Re: AIs are trained on grammatical sentences (Score:2)
Re: (Score:2)
If you're referring to his incompleteness theorem, then the same limitations it imposes on machines, it imposes on humans as well. In much the same way that a machine can't mindlessly rattle off closed tableaux of the negations of propositions until everything we want proved is proved, neither can a person. A human mathematician doesn't normally take this approach and is guided by intuition, but there is nothing in his theorem to say a machine can't follow an algorithm indistinguishable from intuition.
Of course, wh
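For reference, the theorem being invoked above is usually stated roughly as follows; this is the standard textbook formulation, not anything specific to this thread:

```latex
% Gödel's first incompleteness theorem (standard modern statement):
% for any consistent, effectively axiomatized theory $F$ strong enough to
% express elementary arithmetic, there is an arithmetical sentence $G_F$ with
\[
  F \nvdash G_F
  \qquad \text{and} \qquad
  F \nvdash \lnot G_F ,
\]
% where the second half assumes $\omega$-consistency in Gödel's original proof
% (Rosser later weakened this to plain consistency). The theorem constrains any
% fixed formal system; by itself it does not distinguish a mechanical prover
% from a human one reasoning within such a system.
```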
Re: (Score:2)
Indeed. Good point.
Re: (Score:2)
Translation from one human language to another is a different problem than understanding the semantics of natural language.
Human babies don't learn to translate English into some other language; they convert English into understanding. Learning to be fluent in a foreign language isn't learning to translate that language into your native language, it's learning to understand that language without translation.
So the ability of machines to translate from one human language to another does not at all look like
Re: (Score:3)
Translation from one human language to another is a different problem than understanding the semantics of natural language.
I would agree for a crude translation. However, as you try to approach 100% accuracy in translation, understanding of the language becomes essential. Look at how Google translates jokes or song lyrics.
Re: (Score:2)
What if we rephrased the question, e.g., "What would an AI need to be able to acquire grammar and semantics by being trained on natural language sentences (the way human children are)?"
Simple: Actual intelligence. "AI" is a marketing term. No "AI" these days and for a long time to come (possibly forever) has actual intelligence. All we have is mindless automation that is not capable of insight.
Re: No (Score:2)
Re: No (Score:2)
No (Score:1)
Because nobody knows how a child learns language. Chomsky famously called it a "black box inside their heads."
Re: No (Score:2)
Re: (Score:2)
Too bad the ZDNet editors, and then the /. editors in the summary, failed to question the article's unwarranted assumption that we know how a child learns, and instead allowed, even amplified, the reporter's uncritical parroting of the "researchers'" pitch.
Re: No (Score:2)
Re: (Score:3, Interesting)
Chomsky famously called it a "black box inside their heads."
Noam Chomsky was being a bit modest. He did more than anyone to figure out what is going on inside that black box, and what innate language learning ability children are born with, which is far more than the "tabula rasa" theory pushed by behaviorists would allow. Chomsky found that all human languages, even those invented by isolated groups of children, have nouns, verbs, adjectives, and adverbs. All of them have words for discussing hypotheticals, and situations separated in both time and place from the here-and-now
Re: No (Score:3)
Re: (Score:2)
Chomsky wasn't wrong; the reports of a lack of recursion were wrong. (Even if the reports were correct, it wouldn't mean Chomsky was wrong; he merely said humans are capable of recursion, not that they use it all the time or need to use it.)
The reports of recursion in the Pirahã language seem disputed. I suppose that's what happens when you have an extremely small field of experts studying a unique language spoken by around 250 people.
Chomsky's assertion is marked as "not in citation given" in the Wikipedia article on the Pirahã language. I don't care either way, but taken together with the dispute about recursion in the language, it's an interesting nerd kerfuffle to witness.
Re: (Score:2)
Nobody knows how intelligence, insight, and consciousness work either. Not even the neuro-"scientists" who like so much to claim otherwise, but pathetically fail when put to any real test.
Re: (Score:2)
Because nobody knows how a child learns language. Chomsky famously called it a "black box inside their heads."
This means if we had a robot learning the way a child does, we would not be able to verify it was learning in the same way. It doesn't mean it's impossible to independently arrive at the same solution. It just means we wouldn't know it was the same method but why would we care? If the inputs and outputs are the same and the machine is efficient, then problem solved.
BS (Score:5, Insightful)
Re: (Score:1)
MIT has been claiming this type of BS for decades. They haven't done anything. Literally they have been talking about this since the 1970s.
If a machine is to be able to learn a language the way a child does, that machine needs to be able to experience the sensation of something like PEEING, for example.
A child may say 'Pee Pee' because he or she feels pressure building inside the bladder, that there is a need to release that pressure, and 'pee' is the action in which that pressure gets to be released.
Now, can a machine go through the same process?
If yes, then, the answer to whether a machine can pick up a language the way a child does might
Re: (Score:2)
that machine needs to be able to experience the sensation of something like PEEING
Actually, a machine without sensors / feelings might miss a lot during the learning process, so your comment makes sense. However, I'm not sure "peeing" is the most important feeling in that process.
Re: (Score:1)
Yes, because people without sight and hearing can't learn to communicate because they don't have the same sensory experience as normal communicators, right? And people born with paralysis who can't feel their peepee or other problems have difficulty learning these concepts, right?
A child can't really learn about a butterfly coming from a caterpillar and the metamorphosis process because they can't experience it, right?
This is about the stupidest argument I've heard against machine learning a language in a l
Re: BS (Score:2)
Re: (Score:2)
Indeed. But they are not the only ones pushing things that inspire humans but are technological bullshit. Just think of flying cars, quantum computers, and the whole endless list of persistent failures along the same lines. There are other projects that take a very long time (fusion, self-driving cars, etc.), but those are making persistent and meaningful advances all along the path. This stuff here does not. It just goes from meaningless stunt to meaningless stunt, because they have nothing.
Re: BS (Score:3)
Re: (Score:2)
I am not a naysayer. I am a technology expert. Not sure whether you are equipped to understand the difference though.
Re: BS (Score:2)
Re: (Score:2)
You are mistaken. You should not project yourself onto others....
Re: (Score:2)
Re: BS (Score:2)
Re: that's retarded (Score:2)
Re: It's all fun and games... (Score:2)
No & Yes (Score:2)
Re: (Score:2)
One difference though: it'll learn much faster than a human child.
There is absolutely no reason to expect that.
Re: No & Yes (Score:2)
Re: (Score:2)
That one would tell you that the robot would be several orders of magnitude slower, if you actually had that knowledge. Electronics is slow and large in comparison to neural tissue.
Re: No & Yes (Score:2)
Re: (Score:2)
Re: (Score:2)
There are lots of reasons to expect that, namely that we use computers precisely *because* they are faster (and more accurate) than humans at processing information.
Re: (Score:2)
They are not. They are faster at doing simple transformations on digital information. In the analog space, for difficult problems (and learning language certainly is a difficult problem), computers are somewhere between very slow and incapable.
Re: (Score:2)
Machines may be able to learn to communicate as children do - but not in my lifetime or yours.
No (Score:2)
I think we might have a problem here... (Score:2)
Luckily AI is used to this class of failure by now, so they'll probably be OK.
Re: (Score:3)
"AI" has failed to deliver on grande promises for half a century now. Nobody of those deciding about money seems to notice, so yes, they will be fine. The failure will continue though for a long, long time and maybe forever.
What has delivered a lot of results is classical, dumb automation. Calling it "AI" is just a marketing lie that seems to work well though.
Re:I think we might have a problem here... (Score:5, Insightful)
It's good that we understand how humans acquire natural language well enough that 'just make the computer do it that way' is a plan.
We don't understand how humans acquire knowledge of Go, yet people made a computer that started from nothing and learned the game simply by playing against itself, discovering all of that knowledge on its own.
The same method has been used in many different machine learning applications, and it seems to work pretty well, regularly scoring much better results than a human.
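A heavily simplified sketch of the self-play idea described above: tabular value learning on the game of Nim rather than deep networks and tree search on Go, so every detail here (the game, the update rule, the hyperparameters) is a toy assumption of mine, not the actual AlphaGo Zero pipeline:

```python
import random
from collections import defaultdict

# Toy self-play learner for Nim: 10 stones, players alternate taking 1-3 stones,
# whoever takes the last stone wins. The agent plays both sides against itself
# and learns move values from nothing but the final win/loss signal.
N_STONES = 10
ACTIONS = (1, 2, 3)
Q = defaultdict(float)          # Q[(stones_left, action)] -> estimated value
ALPHA, EPSILON = 0.1, 0.2

def choose(stones):
    legal = [a for a in ACTIONS if a <= stones]
    if random.random() < EPSILON:
        return random.choice(legal)                   # explore
    return max(legal, key=lambda a: Q[(stones, a)])   # exploit current knowledge

for episode in range(20000):
    stones, moves, player = N_STONES, [], 0
    while stones > 0:
        action = choose(stones)
        moves.append((player, stones, action))
        stones -= action
        player = 1 - player
    winner = moves[-1][0]                             # whoever took the last stone
    for p, s, a in moves:                             # Monte Carlo style update
        reward = 1.0 if p == winner else -1.0
        Q[(s, a)] += ALPHA * (reward - Q[(s, a)])

print(max(ACTIONS, key=lambda a: Q[(N_STONES, a)]))
```

With enough episodes the greedy move from 10 stones tends to settle on taking 2 (leaving a multiple of 4), which is the known optimal strategy, even though nothing but the win/loss signal was ever provided.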
Re: I think we might have a problem here... (Score:2)
Re: (Score:2)
Indeed. Slashdot has become Luddite central, especially with regard to AI. The endless stream of empty "AI isn't actually AI" comments are boring and tiring.
Re: (Score:2)
Re: (Score:2)
Oh stop.
This is your utterly vapid comment, which is currently rated +5 Insightful:
MIT has been claiming this type of BS for decades. They haven't done anything. Literally they have been talking about this since the 1970s. Think about it: if it worked it would have been incorporated into something like Siri and be worth billions. But Siri is pathetic.
It adds nothing to the discussion but naysaying and bashing. It is a disgrace for the level of discourse I expect from Slashdot.
I am 'in IT' too, and I have been surprised by the speed at which self-driving car advancements have been made (I wasn't expecting us to be anywhere near where we are for at least 10 years -- go ahead, tell me you were). I've also been surprised at the effectiveness of deep learning in a project of o
Re: (Score:3)
Re: (Score:2)
Computers are good at games with strict rules. We all know that.
The rules don't describe what a winning Go position looks like. And yet, that's exactly what these systems learn to figure out, even better than humans do.
Re: (Score:2)
AGI, also known as "strong AI", "true AI", or "the AI we do not have and have no clue how to make, or whether that is even possible".
Careful what you wish for (Score:1)
By breaking shit and seeing what mom and dad say about it?
results (Score:1)
In theory - yes, why not? (Score:1)
Re: (Score:2)
Ignore Our Senses For Now, For Zork's Sake (Score:2)
Stop with it with videos. Stick with ASCII text.
Machine learning, or AI, or whatever, can learn patterns for things such as vision or hearing. But it's low-level stuff like, "Hey, look, a face!". A newborn does this with baked-in brain contents. "Make a low light picture look nice!", a baby's eyes do this way beyond our photography tech (including AI assisted stuff).
Tangent with a point: Young children are sponges, they absorb everything they see, hear, smell, t
Re: (Score:2)
And don't use Microsoft Word to pre-type things for Slashdot. Slashdot's AI can't handle complicated text symbols like facing quote marks. Sad.
It's just nouns, verbs, some physics laws (Score:1)