Robotics Hardware

Robots Successfully Invent Their Own Language

An anonymous reader writes "A group of Australian researchers has managed to teach robots to do something that, until now, was the preserve of humans and a few other animals: they've taught them how to invent and use spoken language. The robots, called LingoDroids, are introduced to each other. In order to share information, they need to communicate. Since they don't share a common language, they do the next best thing: they make one up. The LingoDroids invent words to describe areas on their maps, speak the word aloud to the other robot, and then find a way to connect the word and the place, the same way a human would point to themselves and speak their name to someone who doesn't speak their language."
  • better link (Score:4, Informative)

    by roman_mir ( 125474 ) on Wednesday May 18, 2011 @12:18PM (#36167290) Homepage Journal

    better link [ieee.org]. Also, I didn't realize it at first, but this is the person mostly responsible for it [uq.edu.au]. She is from Australia and she decided to do this. I wonder what the catch with her is...

    • by smelch ( 1988698 )
      The catch is there is one of her and hundreds of nerds practically rolling their dicks out at her feet.
      • the very idea of that picture is disturbing. Would she then step on the dicks as she tried to walk anywhere? What about the balls? Ouch!

        • by smelch ( 1988698 )
          What, you're not into that sort of thing?
          • I wanted to reply with something sensible, then I read your sig....

          • I came up with something that is not sensible to say now, but I think it should be said anyway; it's important history in the making:

            If a man can be womanizer, can a man be a mananizer? An onanizer? A nonanizer?

            Always thinking.

    • This thread is now about stalking some nerd girl.
      • yeah, we used to talk about Natalie Portman here. Either we are out of grits or our standards are slipping or maybe it's the age showing.

    • by Anonymous Coward

      She is from Australia and she decided to do this.

      That explains why the robots named everything "Bruce."

    • That robot looks so much like a girl, for a moment there, I had trouble believing you were not putting me on. :P
    • by Anonymous Coward

      the catch?

      she has nerds like you hitting on her all day, when she's just trying to do her job, so she's going to ignore you for treating her like an object instead of a person :V

  • by Anonymous Coward

    The headline (and summary) are misleading. Here's a more accurate headline:

    "Robots programmed to carry out a specific task perform said specific task"

    It sounds much less impressive that way - and it is. It's still interesting, but don't infer anything from the whole thing that can't logically be inferred from it.

    • by Moryath ( 553296 ) on Wednesday May 18, 2011 @12:36PM (#36167514)

      After looking through the research, I'd say you're correct - the article's claims are very much overblown.

      Do they "invent" random words for places? Yes, by throwing random characters as a preprogrammed method. Do they "communicate" this to another robot? Yes.

      Is the other robot preprogrammed to (a) accept pointing as a convention and (b) receive information in the "name, point to place" format? Yes.

      They share a common communication frame. That's the "language" they communicate in, and it was preprogrammed into them. That they are expanding it by "naming places" is amusing, but it's hardcoded behavior only, and they could just as easily have been programmed to select an origin spot, name it "Zero", and proceed to create a north-south/east-west grid of positive and negative integers and "communicate" it in the same fashion.
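
      For comparison, here's a toy Python sketch of that zero-invention alternative (mine, not anything from the paper):

      # Label map cells purely by signed offsets from an agreed origin; two robots
      # sharing this rule "agree" on every name without inventing anything.
      def name_cell(x, y, origin=(0, 0)):
          """Return a deterministic label for the grid cell at (x, y)."""
          dx, dy = x - origin[0], y - origin[1]
          return "E%+d_N%+d" % (dx, dy)

      print(name_cell(3, -2))   # prints "E+3_N-2"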

      • by fbjon ( 692006 )
        You bring up a good point. However, there are dead languages that we humans are unable to figure out, even though we're the same species.

        If you don't hardcode something, this would be even worse: How do you make up a new language with grammar and all, without using any prior language or knowledge? You basically have to figure out a general algorithm for bootstrapping communication from scratch.

      • by Aryden ( 1872756 )
        Yes, but in essence, we as humans are "programmed" to do exactly the same. Our parents will point at pictures, objects, people, etc. and make sounds which are then converted by our brains into words that label the image, object, person, etc.
        • by Moryath ( 553296 )

          We still have to pick up on the meaning of "pointing."

          In some cultures, that's not polite to do [manataka.org].

          So no, "pointing" isn't hardwired. It's something babies will pick up if their parents do it, perhaps. but it's not hardwired. About the only thing hardwired is babies crying for attention.

          • by Aryden ( 1872756 )
            Which is another way of them pointing to themselves. Either way, it's a method for drawing attention to a specific object, place, or thing.
      • Space is a tough concept to "program" into a robot. You can't see it or touch it. In a simulation it can be a grid, but in the real world, each robot has to work out where it is itself. Without mind reading, how do two robots share their sense of space? The language games are the easy part. The robots create names for places, distances and directions. The tough part is knowing what those words should refer to in the real world. To make this work with real robots is a first.
    • There's more to what they've done than you are perceiving. The robots running around following their "instructions" are proving a solution their creators invented for solving a problem given a set of constraints. Namely, using auditory communication only, develop a means of sharing a common understanding about a physical space. This is a step towards developing sophisticated communication capabilities between not just other robots, but more importantly humans using their protocols rather than traditional
  • They learned how to communicate meaning. The researchers taught them the words; the computers on board did not invent the words they used. In fact, a computer would not do something as dumb as a spoken word, but rather a series of tones or even FSK.

    • The researchers taught them the words; the computers on board did not invent the words they used.

      My understanding of the article is that the robots did exactly that. The programmers put two robots together that they had intentionally not given any specific words to (although presumably the basic rules for how to form words must have been given, which you might perceive as the analogue of humans having a physically limited vocal range to play with). The robots then trial-and-errored their way through "conversations" until they had established a common set of words for locations, directions, etc.

      If you

    • They learned how to communicate meaning. The researchers taught them the words; the computers on board did not invent the words they used. In fact, a computer would not do something as dumb as a spoken word, but rather a series of tones or even FSK.

      When it needs a new word/label, it generates it as a random combination of pre-programmed syllables that play the role of phonemes for the new language. English, for example, only uses about 40 of them, but we combine them to make all the various words we know how to pronounce properly. It may not be a particularly sophisticated language, but I think it still counts well enough.
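
      In code, the word-invention step is presumably not much more than this rough Python sketch (my guess at the mechanism, not their code; the syllable inventory here is made up):

      import random

      # Hypothetical consonant+vowel syllable inventory, standing in for
      # whatever fixed set the LingoDroids were actually given.
      SYLLABLES = ["pi", "ze", "ri", "ja", "ya", "ku", "zo", "ro", "du", "ka",
                   "vu", "pe", "hu", "bu", "la", "ko", "gi", "mi", "ra", "xa"]

      def invent_word(lexicon, n_syllables=2):
          """Draw random syllables until we hit a word not already in the lexicon."""
          while True:
              word = "".join(random.choice(SYLLABLES) for _ in range(n_syllables))
              if word not in lexicon:
                  return word

      lexicon = {}
      lexicon[invent_word(lexicon)] = "region by the charging dock"
      print(lexicon)   # e.g. {'kuzo': 'region by the charging dock'}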

    • Actually, they did "invent" the words; however, these robots were constrained to using human-derived syllables. The goal was not to produce a machine-efficient, machine-natural language, but rather one that is compatible/aligned with human speech and understanding. The end goal of this line of research is to create the ability for machines to have meaningful communication with humans absent a mechanism of query/response translation limited to preprogrammed states.
    • Did they learn how to communicate meaning?

      It sounds to me like they were programmed how to ostensively code and decode tokens. If it were the case that meaning is entirely reducible to ostensive definitions, then it is the case that they learned to communicate meaning. I'm not certain that many (if any) linguists, philosophers of language, or psychologists hold to an ostensive theory of language these days. Wittgenstein pretty much exploded the ostensive theory of language in such a way that no one takes it

  • Comment removed based on user account deletion
    • by Zerth ( 26112 )

      That they can distinguish that "random_syllables" means "this point" instead of "50 units of movement" or "north" or "left" is moderately impressive.

    • by hitmark ( 640295 )

      That may come; I suspect early humans were not much different in their language ability. Hell, kids are very direct early on, before they start picking up that there can be both overt and covert meanings. Hell, some adults still have trouble with that...

  • by Anonymous Coward on Wednesday May 18, 2011 @12:29PM (#36167442)

    A lingo ate my baby!

  • I wonder how long until a prescriptivist control-freak robot develops to rule over the language and erase all usage that it disagrees with.

  • Is it machine language? Because all I hear them saying are 'ones and zeroes.'
    • by roman_mir ( 125474 ) on Wednesday May 18, 2011 @12:49PM (#36167680) Homepage Journal

      pize, rije, jaya, reya, kuzo, ropi, duka, vupe, puru, puga, huzu, hiza, bula, kobu, yifi, gige, soqe, liye, xala, mira, kopo, heto, zuce, xapo, fili, zuya, fexo, jara.

      The 'language' seems to be limited to 4-letter words, each one a consonant and a vowel followed by another consonant and another vowel. It does not look like a language at all: there is no grammar, there is nothing except 4-letter words used as hash keys pointing at some areas on a map.

      • Hmm... I wonder how long it will take to develop that one fundamental part of any language: swearing.
      • The robots played where-are-we, what-direction and how-far games, to create three different types of words. The coolest part of the study is that once their language is created, the robots can refer to places they haven't been to. That's imagination. Then they go explore and meet up at the place they previously referred to using their words for distance and direction.
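
        Resolving a never-visited meeting point only needs those shared words to map onto numbers. Very roughly, in Python (my own sketch, with invented words and values, not the actual LingoDroid lexicon):

        import math

        # Hypothetical shared lexicons built up during the three games.
        place_words     = {"pize": (0.0, 0.0)}                  # map coordinates
        direction_words = {"kuzo": 0.0, "ropi": math.pi / 2}    # headings in radians
        distance_words  = {"duka": 1.0, "vupe": 3.0}            # metres

        def resolve(place, direction, distance):
            """Turn a phrase like 'pize kuzo vupe' into coordinates of an unvisited spot."""
            x, y = place_words[place]
            theta = direction_words[direction]
            d = distance_words[distance]
            return (x + d * math.cos(theta), y + d * math.sin(theta))

        print(resolve("pize", "kuzo", "vupe"))   # (3.0, 0.0) -- both robots head there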
        • I asked this already: why? [slashdot.org]

          What is the motive for a robot to do anything? What does it 'need'? People solve various problems in their lives because we have an instinct of self-preservation, curiosity, and various other motivators, like hunger, thirst, cold, heat, health issues, etc.

          What do robots need, and why would they be developing a language if they don't have any needs? For a robot to realize a need, it has to have some form of motivating factors, some form of 'feelings', that would force it to do things.

          • What do robots need and why would they be developing a language if they don't have any needs?

            In one sense, a robot species' main "need" is to impress humans well enough that humans keep copying them. Pioneer robots have to be useful in research labs for people to keep making them. Language-learning robots are a specific combination of hardware and software. {motives, needs, instincts, ...} have relatively clear meanings for carbon-based life forms, but are loaded when applied to non-carbon agents. Robots, like chess
      • Well, depending on the number of communications they need to make to each other, it's very possible 4-letter words could map out every possible communication they could have with each other. Think of it like Chinese characters: they aren't just one word but complex ideas. Grammar exists because we are unable to store such large amounts of data. We can't have 1 word/symbol map to a unique complex concept. A computer might not have such limitations. Especially if their entire universe of ideas/concepts can b
        • Grammar exists so you can come up with yet another 'legal' (proper) way of saying something that maybe was never even said before.

          Grammar is about correct stringing of words in a sentence, which communicates more ideas than just giving names to things.

          Giving names to things is important, of course, but it does not constitute a language, and I already mentioned that robots are not things that need a language [slashdot.org] (well, not yet anyway). They don't need it for themselves, so they won't be creating one. We can

          • I guess I was trying to say you don't necessarily need grammar for a language used by computers. Grammar for them is just a hack, or add-on, to allow a language to communicate more than it was originally intended to.

            Humans don't want to reinvent the wheel every time we need to expand our language and thus grammar works well for this. Computers don't have that issue and so grammar (at least as we know it) isn't important.

            I was just trying to point out that having a grammar isn't required for a language.
            • Grammar is a hack?

              You know, I do have a B.Sc. in computer science. If grammar is a hack in human languages, then how do you explain the fact that grammar is an absolute necessity in computer languages, and the fact that we have math describing it? It's called formal language theory, and it requires a formal grammar, which can be explained as rules that describe whether a particular sequence of characters is legal in a sentence and what that sequence does.
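
              A toy illustration in Python of what I mean (a made-up three-rule grammar, nothing to do with the robots):

              import itertools

              # A grammar is just rules saying which token sequences are legal sentences.
              RULES = {
                  "S":    [["NAME", "VERB", "NAME"]],      # sentence = name verb name
                  "NAME": [["pize"], ["jaya"], ["kuzo"]],
                  "VERB": [["goto"], ["near"]],
              }

              def derivations(symbol):
                  """Expand a symbol into every terminal string it can produce."""
                  if symbol not in RULES:                  # terminal token
                      return [[symbol]]
                  out = []
                  for rule in RULES[symbol]:
                      for combo in itertools.product(*(derivations(s) for s in rule)):
                          out.append([tok for part in combo for tok in part])
                  return out

              legal = {tuple(s) for s in derivations("S")}
              print(("pize", "goto", "jaya") in legal)   # True  -- grammatical
              print(("goto", "pize") in legal)           # False -- word salad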

              I think grammar is a necessary condition for a

  • by vlm ( 69642 ) on Wednesday May 18, 2011 @12:34PM (#36167496)

    From the summary, it sounds like the "language" is just a noun mapping. Very much like what my 14.4 modem did in 1993 over a phone line, when it came to an agreement with the modem on the other side about which voltage and phase pattern corresponded to the bitstream 0001 vs 1010. In fact, my modem sounds like the more complicated language, because they implemented MNP4 / MNP5 error correction; admittedly that required a lot of help from the humans typing in the "right" dialer strings and, of course, the humans who wrote MNP4 ...

    Might just be a bad summary of a summary of a summary of a summary, and the robots had developed interesting sentence structure and verb conjugations and direct and indirect objects, adjectives and adverbs, similes and metaphors, better than your average youtube comment ... Or maybe youtube comments are actually being written by these robots, hard to say.

  • So much robotics research is to make machines do what people already do. How self-centered. Most of the time this is not useful to solve real problems. But it does get funded, because those with the pursestrings can understand what humans do, but not the best solution for a robot to do a specific task.

    In this case, a simple serial port between the machines would have them communicating and finding common ground much more efficiently than all the mics, speakers, and other mechanics needed to emulate speec

    • by tepples ( 727027 )

      So much robotics research is to make machines do what people already do.

      Often because trying to do the same with people [wikipedia.org] would violate the mainstream community's standard of ethics.

      But it does get funded, because those with the pursestrings can understand what humans do, but not the best solution for a robot to do a specific task.

      That and because figuring out how to make a robot communicate like a human contributes to the knowledge of human-computer interaction [wikipedia.org].

      In this case, a simple serial port between the machines

      ...wouldn't work so well for robotic machines that can move about.

    • The key there is "most of the time." There are definitely going to be times when having a robot that can talk is going to be of serious importance. For instance, rescue missions where it's too dangerous to send humans in, but where there is still a need to rescue somebody. In situations like that you're not likely to have access to a serial port, and likewise if you're wanting to have two robots coordinating with a person in a situation like that, the robots likely will understand each other better over a seri

    • by geekoid ( 135745 )

      Let's see about that.

      1) Robotic research into what humans can do helps us understand how humans do it.
      2) It allows us to create better robots to do things humans can't do, say, move about Mars.
      3) This is simpler than using a serial connector between different manufacturers. Hey, what's their OS doing with the first NAK, do we need to send 2?
      I've seen this when getting a Linux robot to try and talk to a DOS-based robot. The DOS system was dropping the first signal. So had we not figured that out, communication woul

    • So much robotics research is to make machines do what people already do. How self-centered. Most of the time this is not useful to solve real problems. But it does get funded, because those with the pursestrings can understand what humans do, but not the best solution for a robot to do a specific task.

      In this case, a simple serial port between the machines would have them communicating and finding common ground much more efficiently than all the mics, speakers, and other mechanics needed to emulate speech.

      I find it a bit comforting that with enough research and effort, our robotic creations -- that carry our human signature, if not in form, then in design -- will be self-replicating out in the asteroid belt and beyond. Long after we've gone extinct from a medium-sized asteroid collision (due to lack of funding for human extra-planetary exploration), the machines we build in the near future may someday encounter another race (that was less concerned with economics), and allow the forgotten footprints of our

      • P.S. Please inscribe our DNA and its chemical makeup on all future space probes.

        So that an advanced but not-so-peaceful species which finds it will know how to design an illness to kill us all?

  • When they invent the subjunctive.

    Also, it's not inventing a language if they're programmed to do it. Let me know when the robots building cars on an assembly line start unexpectedly communicating with each other, conveying concepts/ideas that were not hardcoded into them.

    • by geekoid ( 135745 )

      So if two people meet and come up with their own language, they don't actually invent it because they are hardwired (programmed) to communicate?

      And you really don't see the advantage to this? This would mean that any two devices could come up with their own independent language on the fly. Basically a way to universally communicate between all devices.

      So device A is set next to device B, both made by separate manufacturers.
      Then they could create a language, communicate, and then your device can translate it into your

    • by Livius ( 318358 )

      Humans (at least children) are very much programmed to invent language, and there are documented examples of just that.

      What the robots are doing is:

      1) Very, very impressive and very, very cool, but

      2) Still vastly different from what human language does, and perhaps not even on the right track with respect to the human language faculty. Humans use language to model reality and only then communicate (i.e. share their mental model), and humans can also model things without direct sensory perception (e.g. the

    • by Lehk228 ( 705449 )
      Let me know too so I can grab my rifle; the first word of the first truly sentient machine will be EXTERMINATE!
  • And they never will until we can finally make a machine that is capable of physically remapping its components. One of the fundamental reasons humans can learn is that neurons remap themselves by repeated practice and use. Do you suck at math? Well, keep studying it and your neurons will literally modify themselves to handle mathematical equations better. Suck at tossing a football? Well, keep practicing and the nerves in your arm will remap to develop better muscle memory to get the ball to the location

  • "Of course like all kids, I had imaginary friends, but not just one. I had hundreds and hundreds and all of them from different backgrounds who spoke different languages. And one of them, whose name was Caleb, he spoke a magical language that only I could understand."
  • by JMZero ( 449047 ) on Wednesday May 18, 2011 @12:57PM (#36167818) Homepage

    If you did the same thing in a software simulation, nobody would pay any attention. It would be fairly trivial. Adding in the actual robot parts means that you, uh... need to have robots that can play and understand sounds. That's great, you made a robot that can play and hear sounds. If we assume nobody has made an audio modem before, then that would be something. As history stands, it isn't.

    Adding these two unimpressive things together doesn't equal anything. I mean, if they're actually going to use these for something, then that's great. Make them. But so much robot "research" seems to be crap like this. We have software that can solve problem X in simulation. To do the same thing in the "real world" you'd need hardware capable of these 3 things, all of which we can do. Unless you need to solve problem X for some reason in the real world, you're done. There's no need to build that thing.

    It's like saying "can we make a computer that can control an oven and use a webcam to see when the pie is done?". Yes. We can. But unless we actually want to do that, there's literally no point in building the thing. There will be no useful theory produced in actually building a pie watching computer. The only thing you'll get is to have built the first pie watching computer, and - apparently - an article on Slashdot.

    • I'm not sure it applies to this, but there are so many things in robotics that work well in simulation and break horribly when implemented on a physical robotic platform.

      To use your example, if we want to create a robot that uses an oven and looks at a pie, to do this in software we need to model the pie, model the oven, model the uncertainty of the robot's actions/observations, and then build our algorithms to accommodate these models. When we transfer the algorithm to a real system, all kinds of hell can br

      • by JMZero ( 449047 )

        to do this in software we need to model the pie, model the oven, model the uncertainty of the robots actions/observations

        You don't need to "model" pie or oven. The only vaguely interesting thing would be interpreting the vision of the pie for doneness. And, if you want to do that, you can just get some pictures of real pies and try to interpret them. In software. Without building a computer that controls an oven. That's my point.

        Any algorithm developed would translate directly into the areas of pattern

  • I've previously argued that High Frequency Trading algorithms can use collusion to reap systematic profits. If the self-learning algos 'learn' and 'express' intentions through patterns of queries, it is possible for them to do this without there being any prosecutable intent by a human. The programmers could claim that they never wrote a line of code that did any collusion. If it is possible in theory for algos to develop trading collusion, then it is just a matter of time until they do. Since they evol
  • Let me know when they figure out "Eep Opp Ork Ah-Ah".

  • Is there video/audio footage of this? I'm really curious about what this sounds like.

  • It seems to me that the real research question is "how can one stranger teach another stranger a natural language using a less powerful shared language?" For instance, how can I teach you English when the only language we share is basic gestures?

    Some theoretical work on communicating the rules of complicated languages using very limited languages would be interesting. The fact that they used robots is hardly important; anybody can stick a speech synthesizer and speech recognition on a PC and call it a da
    • by ledow ( 319597 )

      You've put your finger on every problem I have with "AI", genetic algorithms, neural networks etc.

      They basically consist of "let's throw this onto a machine and see what happens", which doesn't sound like computer science at all (I'm not saying that computer science doesn't involve bits of this, but that's not the main emphasis). It seems that an easy way to get research grants from big IT companies is to slap some cheap tech on a robot and "see what it does".

      Here, they have a more interesting problem than

  • ... in the USA. But the American robots insisted on yelling, in English, at the foreign robots to get them to understand.

  • Hey baby... recharge here often?
  • This sounds very much like the guessing game [csl.sony.fr] done as part of the Talking Heads experiments at the VUB Artificial Intelligence Lab [vub.ac.be] and Sony Computer Science Lab Paris [csl.sony.fr] by professor Luc Steels [vub.ac.be]. Those experiments already date from 1999.
  • The first humanoid "words" were probably grunted utterances representing names of other humanoids, animals, places and (eventually) events.

    Even so, automatically generating unique labels is no big deal for a computer. Every automatic "builder" program already does this. Except they're usually enumerated (i.e. box1, box2, box3, ..., box999), instead of randomly generated ciphers ("xyzzy" etc.). But computers don't do anything randomly; it all has to be programmed by a human.
  • Reply back when robots start figuring this out on their own without being taught (read "programmed").

  • The links I've seen about this go on and on about how the robots invent and use "words." But language is not words; language is grammar; language is a set of rules for recursively constructing highly complex expressions from smaller subparts. This is Linguistics 101 material.

    The way you distinguish somebody with Linguistics training from a layperson is that the layperson will talk about language as if it's a "bag of words" and overall focus too much on the words, whereas the linguist will tend to see mo

  • It sounds exactly like human screaming. Odd that.

  • ... we put that trust to the test. BAM, robots gave us 6 extra seconds of cooperation. Good job, robots. I'm Cave Johnson, we're done here.

    http://www.youtube.com/watch?v=AZMSAzZ76EU [youtube.com]

  • It just beeped and gave me a nanobot sandwich.
  • The only danger here is that robots will be so good at developing their own shared language that they might outpace humans at being able to understand one another. A world full of robots that understand information and abstract concepts could be a world full of artificial intelligences secretly laughing behind our backs for our fascination with cat pictures on the internet.

    Where's the danger? I think that would be amazing.

  • I bet they're uploading the same software to all the robots. Therefore they already share something: the way they learn.
    Although this is interesting, a test should be done with software that was developed by different independent teams.
