Robotics Education Software Technology

Can a Robot Learn a Language the Way a Child Does? (zdnet.com) 86

MIT researchers have devised a way to train semantic parsers by mimicking the way a child learns language. "The system observes captioned videos and associates the words with recorded actions and objects," ZDNet reports, citing the paper presented this week. "It could make it easier to train parsers, and it could potentially improve human interactions with robots." From the report: To train their parser, the researchers combined a semantic parser with a computer vision component trained in object, human, and activity recognition in video. Next, they compiled a dataset of about 400 videos depicting people carrying out actions such as picking up an object or walking toward an object. Participants on the crowdsourcing platform Mechanical Turk wrote 1,200 captions for those videos, 840 of which were set aside for training and tuning. The rest were used for testing. By associating the words with the actions and objects in a video, the parser learns how sentences are structured. With that training, it can accurately predict the meaning of a sentence without a video.
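As a rough sketch of the pipeline described above (every class and method name here is invented for illustration; this is not the researchers' code), the weak supervision and the 840/360 caption split might look like:

```python
# Hypothetical sketch only -- invented names, not the researchers' code.
import random

class SemanticParser:
    def propose_meanings(self, caption):
        # Real system: candidate logical forms for the sentence.
        return [caption.lower().split()]        # toy: a single trivial "parse"

    def update(self, caption, candidates, scores):
        # Real system: reinforce parses the vision component agreed with.
        pass                                    # toy: no-op

class VisionModel:
    def match(self, video, meaning):
        # Real system: object/person/activity recognizers score how well
        # a candidate meaning fits what happens in the video.
        return random.random()                  # toy: random score

def split_captions(pairs, n_train=840, seed=0):
    """The split described above: 840 of the 1,200 captions for training
    and tuning, the remaining 360 for testing."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)
    return pairs[:n_train], pairs[n_train:]

def train(parser, vision, train_pairs, epochs=5):
    for _ in range(epochs):
        for video, caption in train_pairs:
            candidates = parser.propose_meanings(caption)
            scores = [vision.match(video, m) for m in candidates]
            parser.update(caption, candidates, scores)

pairs = [("video_%d" % i, "caption %d" % i) for i in range(1200)]
train_pairs, test_pairs = split_captions(pairs)
train(SemanticParser(), VisionModel(), train_pairs)
```

The key design point is that the parser never sees a gold-standard parse tree: agreement between its candidate meanings and what the vision component sees in the video is the only training signal.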
This discussion has been archived. No new comments can be posted.


  • by Anonymous Coward

    No

    • Re:No (Score:5, Insightful)

      by hey! ( 33014 ) on Wednesday October 31, 2018 @10:41PM (#57572267) Homepage Journal

      What if we rephrased the question, e.g., "What would an AI need to be able to acquire grammar and semantics by being trained on natural language sentences (the way human children are)?"

      Those of us who have a mechanistic position on consciousness and intelligence see no theoretical obstacle to building a machine that does anything or indeed everything humans do. But many of us are dubious that AI will ever achieve true parity with the full range of human abilities. My doubts are economic in nature. I doubt that any such generalist AI will ever be the cheapest way to get whatever it is we want out of a machine.

      Take the "AI" that's hot in the market now. It's not an AI like the robots in Asimov's storeis -- a mechanistic simulation of what people can do. The machine learning stuff being flogged by companies today is just a way of replacing people on certain tasks with something that is cheaper and in some case more consistent, albeit less versatile.

      There's one exception to the rule that a generalist AI isn't really what we want, and that's if we want to prove a non-material soul is unnecessary for explaining anything about humanity. And I doubt anyone really cares enough about such a demonstration to pay what it would take to do it convincingly.

• That is what "statistical" language learning is about. It is particularly useful for translation.

        Your recharacterization is indeed much better than "to learn the way children learn" as we do not know how children learn.

AIs can already do many things better than people. If an AI were ever as intelligent as a small child at everything, it would already be a lot more intelligent than an adult at many other things.

        An AI is not a human. It is a different beast entirely.

        • by gweihir ( 88907 )

Indeed. The thing with automated translation is that it does not require an independent world model. The world model, and the placement of objects, attributes, and actions in it, is already contained in the input. That is the only reason machines can actually do translation better than a gigantic look-up table would.

          • by hey! ( 33014 )

            Translation from one human language to another is a different problem than understanding the semantics of natural language.

            Human babies don't learn to translate English into some other language; they convert English into understanding. Learning to be fluent in a foreign language isn't learning to translate that language into your native language, it's learning to understand that language without translation.

So the ability of machines to translate from one human language to another does not at all look like language understanding.

            • Translation from one human language to another is a different problem than understanding the semantics of natural language.

              I would agree for a crude translation. However, as you want to approach 100% accuracy in translation, understanding of language becomes essential. Look at how Google translates jokes or song lyrics.

      • by gweihir ( 88907 )

        What if we rephrased the question, e.g., "What would an AI need to be able to acquire grammar and semantics by being trained on natural language sentences (the way human children are)?"

        Simple: Actual intelligence. "AI" is a marketing term. No "AI" these days and for a long time to come (possibly forever) has actual intelligence. All we have is mindless automation that is not capable of insight.

  • by Anonymous Coward

    Because nobody knows how a child learns language. Chomsky famously called it a "black box inside their heads."

• Too bad the ZDNet editors, and then the /. editors in the summary, failed to question the article's unwarranted assumption that we know how a child learns, and instead allowed, and even amplified, the reporter's uncritical parroting of the researchers' pitch.

    • Re: (Score:3, Interesting)

      Chomsky famously called it a "black box inside their heads."

Noam Chomsky was being a bit modest. He did more than anyone to figure out what is going on inside that black box, and what innate language-learning ability children are born with, which is far more than the "tabula rasa" theory pushed by behaviorists. Chomsky found that all human languages, even those invented by isolated groups of children, have nouns, verbs, adjectives, and adverbs. All of them have words for discussing hypotheticals, and situations separated in both time and place from the here-and-now.

• Chomsky wasn't wrong; the reports of a lack of recursion were wrong. (Even if the reports were correct, it wouldn't mean Chomsky was wrong; he merely said humans are capable of recursion, not that they use it all the time or need to.)
        • by dj245 ( 732906 )

Chomsky wasn't wrong; the reports of a lack of recursion were wrong. (Even if the reports were correct, it wouldn't mean Chomsky was wrong; he merely said humans are capable of recursion, not that they use it all the time or need to.)

          The reports of recursion in the Pirahã language seem disputed. I suppose that's what happens when you have an extremely small field of experts studying a unique language spoken by around 250 people.

Chomsky's assertion is marked as "not in citation given" in the Wikipedia article on the Pirahã language. I don't care either way, but taken together with the dispute about recursion in the language, it's an interesting nerd kerfuffle to witness.

    • by gweihir ( 88907 )

Nobody knows how intelligence, insight, and consciousness work either. Not even the neuro-"scientists" who are so fond of claiming otherwise, yet fail pathetically when put to any real test.

    • Because nobody knows how a child learns language. Chomsky famously called it a "black box inside their heads."

This means that if we had a robot learning the way a child does, we would not be able to verify that it was learning the same way. It doesn't mean it's impossible to independently arrive at the same solution; it just means we wouldn't know it was the same method. But why would we care? If the inputs and outputs are the same and the machine is efficient, then problem solved.

  • BS (Score:5, Insightful)

    by 110010001000 ( 697113 ) on Wednesday October 31, 2018 @09:23PM (#57572093) Homepage Journal
    MIT has been claiming this type of BS for decades. They haven't done anything. Literally they have been talking about this since the 1970s. Think about it: if it worked it would have been incorporated into something like Siri and be worth billions. But Siri is pathetic.
    • by Anonymous Coward

      MIT has been claiming this type of BS for decades. They haven't done anything. Literally they have been talking about this since the 1970s.

If a machine is to be able to learn a language the way a child does, that machine needs to be able to experience the sensation of something like PEEING, for example.

A child may say 'Pee Pee' because he or she feels pressure building inside the bladder, senses a need to release that pressure, and learns that 'pee' is the action by which that pressure gets released.

      Now, can a machine go through the same process?

If yes, then the answer to whether a machine can pick up a language the way a child does might be yes.

      • that machine needs to be able to experience the sensation of something like PEEING

Actually, a machine without sensors/feelings might miss a lot during the learning process, so your comment makes sense. However, I'm not sure "peeing" is the most important feeling in that process.

      • by Anonymous Coward

Yes, because people without sight and hearing can't learn to communicate, since they don't have the same sensory experience as normal communicators, right? And people born with paralysis or other problems, who can't feel their peepee, have difficulty learning these concepts, right?

A child can't really learn about a butterfly coming from a caterpillar and the metamorphosis process because they can't experience it, right?

This is about the stupidest argument I've heard against a machine learning a language in a long time.

• The answer is of course yes: a machine that occasionally needs to release built-up pressure already exists. Your point is absurd, of course, but I also wanted to point out that even if that were somehow a critical component of the design, it would be a complete non-issue anyway.
    • by gweihir ( 88907 )

Indeed. But they are not the only ones pushing things that inspire humans but are technological bullshit. Just think of flying cars, quantum computers, and the whole endless list of persistent failures along the same lines. There are other projects that are very long-term (fusion, self-driving cars, etc.), but they are making persistent and meaningful advances all along the path. This stuff here does not. It just goes from meaningless stunt to meaningless stunt, because they have nothing.

• I love how you list a number of things that naysayers traditionally cited as pipe dreams, things that are now either realized or on the verge of being realized, to make the case that something you don't understand in the least is impossible. Seriously... give it up. "There will never be computers in most homes! It's impossible and the idea is absurd!" - virtually everyone I told that this would happen, circa 1982
  • Not currently, because no human was smart enough to find the right algorithm. Yes, when Google or the Chinese make the algorithm. One difference though: it'll learn much faster than a human child.
    • by gweihir ( 88907 )

      One difference though: it'll learn much faster than a human child.

      There is absolutely no reason to expect that.

      • Except of course a basic knowledge of human anatomy and electronics.
• There is. It's hard to control the way a brain works or to teach it new things optimally; it has many limits and can hardly be controlled from the inside. A human-made algorithm, on the other hand, once created, should be much easier to optimize.
      • There are lots of reasons to expect that, namely that we use computers precisely *because* they are faster (and more accurate) than humans at processing information.

        • by gweihir ( 88907 )

They are not. They are faster at doing simple transformations on digital information. In the analog space, for difficult problems (and learning language certainly is a difficult problem), computers are somewhere between very slow and incapable.

• That is because AI is at the same place that electronics was a century ago: a lot of interesting ideas, but few or no results.
Machines may be able to learn to communicate as children do - but not in my lifetime or yours.
  • by AHuxley ( 892839 )
    See AI winter https://en.wikipedia.org/wiki/... [wikipedia.org].
  • It's good that we understand how humans acquire natural language well enough that 'just make the computer do it that way' is a plan. Otherwise this might not actually work terribly well.

    Luckily AI is used to this class of failure by now, so they'll probably be OK.
    • by gweihir ( 88907 )

      "AI" has failed to deliver on grande promises for half a century now. Nobody of those deciding about money seems to notice, so yes, they will be fine. The failure will continue though for a long, long time and maybe forever.

      What has delivered a lot of results is classical, dumb automation. Calling it "AI" is just a marketing lie that seems to work well though.

    • by religionofpeas ( 4511805 ) on Thursday November 01, 2018 @02:57AM (#57572731)

      It's good that we understand how humans acquire natural language well enough that 'just make the computer do it that way' is a plan.

We don't understand how humans acquire knowledge of Go either, yet people made a computer that started from nothing and learned the game simply by playing itself and discovering all the knowledge on its own.

      The same method has been used in many different machine learning applications, and it seems to work pretty well, regularly scoring much better results than a human.
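To make the self-play idea concrete at toy scale, here is a minimal sketch that starts from nothing but the rules of tic-tac-toe and learns position values purely by playing against itself. It is only an illustration; AlphaGo Zero's actual method (deep networks plus Monte Carlo tree search) is far more elaborate.

```python
# Toy self-play learner. It starts knowing only the rules of tic-tac-toe
# (legal moves, win detection) and learns a value for every position it
# visits purely from games against itself. Illustrative only.
import random
from collections import defaultdict

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for i, j, k in LINES:
        if board[i] != ' ' and board[i] == board[j] == board[k]:
            return board[i]
    return None

V = defaultdict(float)       # learned value of a position, from X's viewpoint
ALPHA, EPSILON = 0.2, 0.1    # learning rate, exploration rate

def self_play_game():
    board, player, visited = [' '] * 9, 'X', []
    while True:
        moves = [i for i, c in enumerate(board) if c == ' ']
        if random.random() < EPSILON:
            move = random.choice(moves)              # explore
        else:
            pick = max if player == 'X' else min     # X maximizes V, O minimizes
            move = pick(moves, key=lambda i: V[tuple(board[:i] + [player] + board[i+1:])])
        board[move] = player
        visited.append(tuple(board))
        w = winner(board)
        if w or ' ' not in board:
            outcome = 1.0 if w == 'X' else -1.0 if w == 'O' else 0.0
            for state in visited:                    # nudge every visited position
                V[state] += ALPHA * (outcome - V[state])
            return outcome
        player = 'O' if player == 'X' else 'X'

for _ in range(20000):
    self_play_game()
print("distinct positions evaluated:", len(V))
```

After enough games, positions that tend to lead to wins for X carry values near +1 and losing ones near -1, even though nothing in the rules says what a good position looks like.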

• Finally, someone who isn't a clueless buffoon pontificating on the impossibility of success in a field in which they have no education or understanding!
        • Indeed. Slashdot has become Luddite central, especially with regard to AI. The endless stream of empty "AI isn't actually AI" comments are boring and tiring.

          • We aren't Luddites. We are people that actually understand technology and aren't just in IT.
            • Oh stop.

              This is your utterly vapid comment, which is currently rated +5 Insightful:

              MIT has been claiming this type of BS for decades. They haven't done anything. Literally they have been talking about this since the 1970s. Think about it: if it worked it would have been incorporated into something like Siri and be worth billions. But Siri is pathetic.

              It adds nothing to the discussion but naysaying and bashing. It is a disgrace for the level of discourse I expect from Slashdot.

              I am 'in IT' too, and I have been surprised by the speed at which self-driving car advancements have been made (I wasn't expecting us to be anywhere near where we are for at least 10 years -- go ahead, tell me you were). I've also been surprised at the effectiveness of deep learning in a project of o

      • That is baloney. The Go/Chess/whatever playing computers didn't "start from nothing". They were PROGRAMMED to understand the game. The rest is marketing BS. They didn't just put the computer down and show it a bunch of Go games and it suddenly understood what Go was. Ridiculous. Computers are good at games with strict rules. We all know that. That is what computers are BEST at. You guys are just easily impressed and don't really understand technology.
        • Computers are good at games with strict rules. We all know that.

          The rules don't describe what a winning Go position looks like. And yet, that's what they learn to figure out, even better than humans.

  • By breaking shit and seeing what mom and dad say about it?

• Wow, can it really be like that? Then how will the results be seen?
• The brain has evolved (pre-trained, if you will) machinery (particular structures of neurons, encoded in DNA) which then has to be trained (literally) on actual human language. Think of an evolved deep network that is trained to learn and maintain a deep network for a language or two. The problem is that no one knows how to encode/represent this yet. Second-order deep networks are unexplored.
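One purely illustrative way to picture a "network that maintains a network" (roughly what the literature calls a hypernetwork; every size and name below is made up):

```python
# Purely illustrative "network that maintains a network": a small outer
# net turns a task embedding (say, which language) into the weights of the
# inner net that actually processes input. All sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

H = rng.normal(size=(4, 2 * 3)) * 0.5   # outer net: task embedding -> inner weights
z_lang_a = rng.normal(size=4)           # embedding for one "language"
z_lang_b = rng.normal(size=4)           # embedding for another

def inner_forward(x, z):
    W = (z @ H).reshape(2, 3)           # inner weights are generated, not stored
    return np.tanh(W @ x)               # inner net: 3 inputs -> 2 outputs

x = rng.normal(size=3)
print(inner_forward(x, z_lang_a))       # same input, different "language" ...
print(inner_forward(x, z_lang_b))       # ... different behavior
```

The inner net's weights are never stored; they are regenerated from the task embedding, so one outer net can "maintain" a separate inner net per language.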
    • The human brain works nothing like a computer "neural network". The very fact that people call them "neural networks" is fraudulent.
• Stop it with the videos. Stick with ASCII text.

Machine learning, or AI, or whatever, can learn patterns for things such as vision or hearing. But it's low-level stuff like, "Hey, look, a face!" A newborn does this with baked-in brain contents. "Make a low-light picture look nice!" -- a baby's eyes do this way beyond our photography tech (including the AI-assisted stuff).

Tangent with a point: young children are sponges; they absorb everything they see, hear, smell, taste, or touch.

    • And don't use Microsoft Word to pre-type things for Slashdot. Slashdot's AI can't handle complicated text symbols like facing quote marks. Sad.

• What is spoken language? It's just a description of the visual objects and how they move about. Once you have object recognition, you need a set of verbs (a standalone object can shrink or expand; two or more objects can come close together or separate) -- these verbs define what the laws of physics/motion allow. Once you abstract these motions and get your verb set, describing the visual objects and their motion becomes spoken language.
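As a toy version of that verb idea (function names and the crisp thresholds are invented), mapping tracked object positions to verbs might look like:

```python
# Toy verb extraction from motion; names and thresholds are invented.
import math

def verb_for_pair(a_before, a_after, b_before, b_after):
    """Two objects: did they come together or move apart?"""
    d_before = math.dist(a_before, b_before)
    d_after = math.dist(a_after, b_after)
    if d_after < d_before:
        return "approach"
    if d_after > d_before:
        return "separate"
    return "stay"

def verb_for_object(size_before, size_after):
    """One object: did its apparent size change?"""
    if size_after > size_before:
        return "expand"
    if size_after < size_before:
        return "shrink"
    return "stay"

# "The person approaches the ball."
print(verb_for_pair((0, 0), (4, 0), (5, 0), (5, 0)))   # -> approach
# "The balloon expands."
print(verb_for_object(10.0, 14.0))                      # -> expand
```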
