Robotic Hand Translates Speech into Sign Language

usermilk writes "Robot educators Keita Matsuo and Hirotsugu Sakai have created a robot hand that translates the spoken word into sign language for the deaf. From the article: 'A microchip in the robot recognizes the 50-character hiragana syllabary and about 10 simple phrases such as "ohayo" (good morning) and sends the information to a central computer, which sends commands to 18 micromotors in the joints of the robotic hand, translating the sound it hears into sign language.'"
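
The pipeline the article describes is essentially lookup-and-dispatch: recognized tokens (hiragana characters or one of the stock phrases) index a table of joint angles for the 18 micromotors. Here is a minimal Python sketch of that flow; the pose table, angle values, and send_angles callback are all invented for illustration, not taken from the actual system.

    # Minimal sketch of the described pipeline: a recognizer emits tokens
    # (hiragana characters or one of ~10 stock phrases), a pose table maps
    # each token to target angles for the 18 joint micromotors, and a
    # callback drives the hand. All names and values are illustrative.
    POSES = {
        "あ":     [0, 15, 90, 90, 90, 90, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        "ohayo": [45,  0,  0,  0,  0,  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        # ... one entry per hiragana character and stock phrase
    }

    def sign(tokens, send_angles):
        """Drive the hand through the stored pose for each recognized token."""
        for token in tokens:
            pose = POSES.get(token)
            if pose is None:
                continue  # no stored pose; a real system might fingerspell instead
            send_angles(pose)

    # Usage: sign(["ohayo"], send_angles=print)
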
  • by A Dafa Disciple ( 876967 ) * on Tuesday January 17, 2006 @07:43AM (#14489536) Homepage
    Good lord! I imagine the Japanese language [wikipedia.org] with its 1945+ character alphabet is hard enough to learn; learning Japanese sign language must really suck.

    You know what would really spoil those deaf kids is, instead of a robot doing sign language, a robot that shows images or words based on what a speaker says. I know, I know; creating a robot to do this is a feat in itself and impressive in its own right, but perhaps there are better ways of communicating with a robot if it can already perform more than adequate speech recognition.
    • Uhhh, isn't sign language universal? I thought it didn't depend on the spoken language. I might be wrong, of course :).
      • by Anonymous Coward
        I believe it's a joke... but just to point out, there's ASL and BSL and...
      • Unfortunately, no [wikipedia.org] ;-)
        • Unfortunately, no ;-)
          Yup, much the same as how, unfortunately, no one's come up with a universal spoken or written language. Gosh, let alone trying to get a universal programming language...
          • "...no one's come up with a universal spoken or written language..."

            Hmm...isn't the middle finger signifying "Fuck You" pretty much universal?

            :-)

            I wonder if the robot hand translates that properly?

            • Hmm...isn't the middle finger signifying "Fuck You" pretty much universal?
              I know it's a joke you're making, but actually, I don't believe it is universal although it's rapidly spreading coupled with English. I want to say that most cultures have an "up yours" gesture of some sort involving a hand punching up with some kind of finger gesture, but that's probably my ethnocentrism speaking.
              • I want to say that most cultures have an "up yours" gesture of some sort involving a hand punching up with some kind of finger gesture, but that's probably my ethnocentrism speaking.

                Not really ethnocentric. The phallic symbolism is pretty much global. Though one or two cultures prefer to make a round shape, indicating that a different set of genitalia is in play.

                In rural Greece, showing the palm of the hand [ooze.com] (link covers many types of "finger") is the rudest gesture you can make. If you go there, don't wave
    • by tpgp ( 48001 ) on Tuesday January 17, 2006 @08:13AM (#14489672) Homepage
      Good lord! I imagine the Japanese language with its 1945+ character alphabet is hard enough to learn; learning Japanese sign language must really suck.

      The relationship between a language & sign language does not work like that.

      From the wikipedia sign language page [wikipedia.org]
      A common misconception is that sign languages are somehow dependent on oral languages, that is, that they are oral language spelled out in gesture, or that they were invented by hearing people
      and
      On the whole, deaf sign languages are independent of oral languages and follow their own paths of development. For example, British Sign Language and American Sign Language are quite different and mutually unintelligible, even though the hearing people of Britain and America share the same oral language.
      You know what would really spoil those deaf kids is, instead of a robot doing sign language, a robot that shows images or words based on what a speaker says.

      That doesn't really sound like a robot, but rather speech recognition software connected to a teleprompter (or monitor)
    • by lewp ( 95638 ) *
      Japanese has a whole bunch of kanji, but the various words in the language can be formed from a much smaller (hiragana, mentioned in TFA) character set that represents the various syllables in the words. These syllables are always pronounced consistently, unlike languages like English where sometimes it seems like nothing is consistent (and I'm a native speaker). Thus, the first thing that came to my mind was that teaching a robot spoken Japanese is probably quite a bit easier than teaching one English (tho
      • I know nothing about Japanese sign language, and practically nothing about American sign language, but I believe American sign language shares a similarity with written Japanese in that there are signs for common words most any competent signer knows (similar to kanji), and any particularly uncommon words can be spelled out with the letter (or, in the Japanese case, hiragana syllable) signs.

        Sorta, but not quite. You can fingerspell words you don't know, and some words are derived from their associated lett
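
        The lookup-with-fallback scheme described here is easy to picture in code. A minimal sketch, with entirely made-up sign inventories:

            # Word-sign lookup with fingerspelling fallback, as described
            # above. Both inventories are placeholders.
            WORD_SIGNS = {"good": "SIGN_GOOD", "morning": "SIGN_MORNING"}
            LETTER_SIGNS = {c: "FINGERSPELL_" + c.upper()
                            for c in "abcdefghijklmnopqrstuvwxyz"}

            def translate_word(word):
                if word in WORD_SIGNS:
                    return [WORD_SIGNS[word]]   # one sign for a known word
                return [LETTER_SIGNS[c]         # otherwise spell it out
                        for c in word.lower() if c in LETTER_SIGNS]

            # translate_word("morning")  -> ["SIGN_MORNING"]
            # translate_word("slashdot") -> ["FINGERSPELL_S", "FINGERSPELL_L", ...]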
      • Interestingly enough, Japanese Sign Language has a trait which makes it less appropriate for this application than American Sign Language. Ambiguous signs are generally distinguished by mouthing [deaflibrary.org] the letter in JSL versus the finger-signed letters in ASL.

        The next question which I have is the significance of body positioning of signs in JSL. Most ASL signs have migrated to the face and upper-chest region, but I know some sign languages have a great amount of significance in the body positioning and it may ra

      • Japanese has only about 2,000 "daily use" kanji, those required to read the newspaper or graduate from high school. Calling it an "alphabet", as one post did, is rather inaccurate, as "alphabet" denotes characters that make up parts of sounds, while Japanese is based on syllables (only 40, or 80 if you include minor variations).

        While in Japan I was able to learn some JSL at the hands of some of the top translators in the country (my failure to learn more was my fault, not theirs). I was surprised to find JSL
    • That would not require a robot, just a screen to display the messages.
    • You think sign language could be hard to learn? Imagine blind people trying to learn Japanese in Braille!!
    • You know what would really spoil those deaf kids is, instead of a robot doing sign language, a robot that shows images or words based on what a speaker says.

      Which do you think is better ?

      a)A robot that translates words into pictures

      b)A robot that translates one language into another

      Deaf people aren't stupid, at least no worse than the rest of us. You do realise that they probably already know sign language (which is the reason for the robot) so to be shown stupid pictures would be a little demeaning, don'

      • We already have a means of getting Spoken Word into the visual spectrum - it's called Written Word. Are the deaf unable to read? Why not just take a speech recognition program, and put the words on the screen, rather than translating them to sign language? Even the deaf-blind have mechanized Braille, which I'm sure is more comfortable reading quickly than a robotic hand (at high speed, known as 'The Mangler').
    • To be pedantic, Japanese doesn't have an "alphabet". It has syllables, representations of sound and utterance. If I may quote a portion of a page (out of some 250-plus) from Gene Nishi's "Japanese Step by Step" (ISBN: 0-658-01490-0):

      A syllable is defined as a single uninterrupted sound forming part of a word or, in some cases, an entire word. (Example left out...)

      We know that there are only 26 letters in the English alphabet. But how many different syllabic combinations are there? According to the late Pr
  • They only need to put it on wheels and it can become a scutter.

    Additional warning:
    Do not let this robot pat you on the back whilst near the top of the stairs.
  • Over Kill? (Score:5, Insightful)

    by xoip ( 920266 ) on Tuesday January 17, 2006 @07:47AM (#14489552) Homepage
    Call me culturally insensitive, but why not simply translate speech to text?
    • I agree. This is an example of something that is fun but impractical. I assume they did it because they could, and because it would pique the students' interest in robotics, which is a worthy goal. As a real-world application, however, a speech-to-text unit would be cheaper, much more robust, much smaller and more portable, most likely faster, useful to a wider market, etc. etc. But it wouldn't look as cool.
    • by QMO ( 836285 )
      Since sign languages are different all over the world, I don't know if there is the same problem in Japan, but:

      American Sign Language is not English (American or other).

      Thus, translating speech to ASL would reach people who understand ASL but don't read English.
      • I suspect that the number of ASL users who can't read is small. However, as sign languages are linguistically very different from written languages, many profoundly deaf signers aren't good readers. (To start with, we have a lot of pretty meaningless words in text ... a ... the ... etc.) So, signing makes it much easier to get the richness of speech. When you textualise speech, you have to cut words out, and lose a lot of nuance. (Have a look at the closed captions on the news sometime.)
        • Literacy is very low among people whose first language is ASL. I believe the commonly quoted statistic is that much of the deaf population reads at a 5th grade level? Thus, part of the job description of an ASL interpreter can be to translate English text into ASL.
          • Yes, I think the statistic that I read (in the UK) is that a profoundly deaf adult (who's never had hearing) has a reading age of about 8-9 - which is, I guess, about the same.

            Having said that, the average reading age of the average adult isn't that great - for example http://news.bbc.co.uk/1/hi/talking_point/3166967.stm [bbc.co.uk] cites a study that says "millions of adults do not have the skills of the average 11 year old" (implying that it's because their skill set is lower, not higher!).

            So, certainly in the UK, though prof
        • I heard a similar statistic in my ASL class. A lot of it, particularly with the older generation, is because deaf people were either put into ASL-only schools who often could not attract the better teaching talent or into speaking schools where they were actively discouraged from signing, often by tying their hands to the desks, and therefore could not properly partake in the learning process. I wouldn't be surprised if some of the bad writing processes have propagated in a manner like ebonics; as a tight c
          • The grammar is, of course, different from English, but many children learn multiple languages growing up. As long as you're exposed to fluent speakers and forced to use the language, you can pick it up.

            But kids don't spontaneously pick up on writing. What you are asking these deaf children to do is learn ASL (which is basically not written), and the written form of a foreign language. That's like only hearing English (no writing) and only seeing written Chinese (no speaking). That's tricky to

      • by QMO ( 836285 )
        After reading a couple of replies to my parent post, I was thinking about people that might understand signing, but not read or hear.

        It is my understanding that children can learn to sign before they can learn to read. (In fact hearing children can learn to sign before learning to speak.)

        Similarly, developmentally challenged people, such as certain people with Down's Syndrome, never learn to read, but can sign just fine.

        Reading takes certain specific brain functions, and it is not inconceivable that there
        • There's also the fact that as sign languages are typically independent of spoken ones, people who know (say) Auslan will need to learn to read and write English as a second language, and the process is (I'm told) difficult.
      • Why not just give it the dual ability to write and sign depending on circumstance... signing is a start in the direction of more complex hand motions; the next big step is to construct a hand that can balance well enough to write.
    • Re:Over Kill? (Score:5, Informative)

      by tpgp ( 48001 ) on Tuesday January 17, 2006 @08:20AM (#14489701) Homepage
      Call me culturally insensitive, but why not simply translate speech to text?

      Because signing is the native 'tongue' for most deaf people - and it is easier for them to communicate using sign language (over text) - just as it's easier for you to understand speech (over text).

      Basically - the same reason that some British TV channels (and undoubtedly many others around the world) have a signer translating the news rather than scrolling text.
      • In the time it takes to program the robot to do a bad simulacrum of someone doing each sign, they could have just videoed someone doing all the signs. Then it's more visible to a bigger audience, too.

        I'm not saying it's not an interesting project, but it's not a practical solution to the problem.

        • In the time it takes to program the robot to do a bad simulacrum of someone doing each sign, they could have just videoed someone doing all the signs. Then it's more visible to a bigger audience, too.
          Sign languages tend to be more than just words. The positioning and motion of a sign conveys the location and tense of nouns and verbs. It would be like speaking English without being able to inflect any of the nouns or verbs.

          They could, perhaps, dynamically generate pictures of the signs to convey more infor

          • the positioning and motion of a sign conveys location and tense of nouns and verbs

            You're just being silly. All this robot does is take words and map them to gestures. It doesn't convey all this crap you're imagining. And if you're going to do signs which require relation to body parts - as many do - you're going to need a big f'ing robot body to make it visible to lots of people - and you're back to viewing from one direction.

            In the very worst case scenario, they could have a 3d representation of a hand
      • just as it's easier for you to understand speech (over text).

        I realize I may be a bit unusual, but I tend to understand text faster and more accurately than speech. It is certainly faster to speak than it is to write out the text, but I can read an entire paragraph in a second or two, whereas communicating the same information as speech would take much longer. Also, I usually visualize the speech as text in my mind and then "read" it, rather than interpreting the words directly.

    • A good sign language interpreter can read signs from a fair distance, well across a board room at least. How far away can the PDA be before you stop being able to read the text on the screen?
    • Someone could be blind and deaf. But then why not use braille? The situation I can imagine is maybe a person knew sign language but then became blind later in life. So that would be one of the only ways to communicate. From what I understand a lot of older people have eyesight problems, so for the deaf this is even worse.

      The other use could be for teaching sign language. There's a lot of people that know a little sign language, but perhaps not enough to teach someone. Seeing a robotic hand do it in th
    • I thought the same thing initially... but then I thought about deaf *children* who don't know how to read. I have a niece who falls within this category and she can sign quite a bit... but can't read a single word yet (she's only 2).
    • Because if you put an LCD screen onto your humanoid android, it looks less human.
  • ... Does it also distinguish between different 'dialects' in sign language?
    I seem to recall that sign languages differ between countries, same as 'natural' language.

    However, this is really great for deaf people.
    • Dialects also differ between, say, Osaka, Hokkaido, and Tokyo. Tokyo dialect is the official, trained one, but with a good ear, despite the frustrations, communication CAN be had.

      However, going from Bahston to Hyooton, to Narlenz can be harrowing. Anyone remember the campaign trail when Bush version 1 was traipsing thru Louisiana? I can, a bit. I remember all that twisty, silky, unintelligible stuff that obviously was aired for entertainment value by the networks. (I am a person of less than a lighter color
  • by commodoresloat ( 172735 ) on Tuesday January 17, 2006 @07:51AM (#14489570)
    signing "I'll be back" [collectoybles.com.au]
  • by Afty0r ( 263037 ) on Tuesday January 17, 2006 @07:54AM (#14489584) Homepage
    Would this not be more useful as software, able to render simple 3d hands with low microprocessor overheads, and preferably available for mobile phones and PDAs?

    Deaf people could carry a PDA, and when they need to find out what someone is saying, they can hold the PDA up like a microphone and watch the screen, assuming the translation is at least reasonably accurate...
    Of course they could lipread too but some find that harder than others, and this could also be used eventually to cross language barriers?

    I imagine it's extremely hard to lipread a foreign language.
    • by commodoresloat ( 172735 ) on Tuesday January 17, 2006 @08:01AM (#14489629)
      Would this not be more useful as software?

      Yes but not nearly as intimidating. Who's going to get their lunch money taken -- deaf kid with a PDA, or deaf kid with a giant robot hand?

      • Parent is not kidding! From TFA:

        An 80-centimeter robotic hand

        !!!

      • Who's going to get their lunch money taken -- deaf kid with a PDA, or deaf kid with a giant robot hand?

        Remember, though, that this is Japan. Kid with PDA probably merges with the Wired. Kid with part of giant robot merges with... well... pretty much everything, after a while.

        Once that happens, your lunch money is the least of your concerns.

    • > Would this not be more useful as software, able to render simple
      > 3d hands with low microprocessor overheads...

      There is no need for 3d. I know a woman who makes her living as an ASL translator. She spends most of her time sitting in front of a monitor and camera wearing earphones.

      The robot hand is pointless. The computer could just as well generate cartoon images.
    • With a real hand it could be used by deaf-blind people who need to touch the signing hand to understand.
    • So, the steps are these: Recognize language, use translator (of the babelfish kind) to translate to sign language, render signing hand.

      Why not just type it out to the screen? :(
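
      Separating those three stages matters, because the middle one is the hard part: sign languages have their own grammar, so a word-for-word mapping is transliteration, not translation. A stub sketch of the staging; every function and the gloss order here are invented placeholders:

          # The three stages listed above, as stubs. The middle step is real
          # translation (reordering into the sign language's own grammar),
          # not just spelling words out. All of this is illustrative.
          def recognize_speech(audio):
              return "good morning"        # stub: audio -> source-language text

          def translate_to_gloss(text):
              return ["MORNING", "GOOD"]   # stub: text -> sign gloss (invented order)

          def render(gloss):
              for s in gloss:
                  print("render sign:", s) # stub: drive a hand or draw on screen

          render(translate_to_gloss(recognize_speech(audio=None)))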
    • You forgot where this was developed: Japan. First step in the Japanese engineering design process is to answer the question: "Can we build a robot to do it?"
  • Amy Pretty (Score:3, Funny)

    by Inkieminstrel ( 812132 ) on Tuesday January 17, 2006 @07:57AM (#14489596) Homepage
    Tickle... Amy... Tickle

    http://www.imdb.com/title/tt0112715/ [imdb.com]
    • The device in the Congo movie used a sign language => speech converter. The Japanese story is about a speech => sign language converter.

      If we consider the gestures as a series of movements produced by predetermined actuators (junctions), they can be quantized and stored in a vector; it's just numerical input, and could be classified as a different kind of speech.

      Training a gesture reader would be equivalent to searching inside a soundwave database (find the closest match, reject if there's any sig
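
      The quantize-and-match idea sketched here is essentially nearest-neighbour classification with a rejection threshold. A toy version in Python; the gesture vectors and cutoff are invented:

          import math

          # Toy nearest-neighbour matcher for quantized gesture vectors:
          # each known gesture is a fixed-length vector of joint readings;
          # return the closest stored gesture, rejecting the input if
          # nothing is close enough. All values are made up.
          GESTURES = {
              "hello":     [90, 10, 10, 10, 10, 10],
              "thank_you": [45, 80, 80, 80, 80,  0],
          }
          REJECT_DISTANCE = 30.0  # arbitrary cutoff

          def classify(vector):
              name, dist = min(((n, math.dist(vector, ref))
                                for n, ref in GESTURES.items()),
                               key=lambda pair: pair[1])
              return name if dist <= REJECT_DISTANCE else None

          # classify([88, 12, 9, 11, 10, 10]) -> "hello"
          # classify([0, 0, 0, 0, 0, 0])      -> None (no close match)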
    • Might be redundant, but talk about "Talk to da hand"...

      Say the wrong thing and you might be "bitch-slapped". Or, with that alloy, maybe beaten to a shallow tallow pulp... (woo... any onomatopoeia coming from that hand, a la "AKA I'M BATMAN!!!"?... squish, khoomp, BAM, POW, THOONG...

      But, certainly, the Japanese model might actually USE the middle finger gesture. Over there it is NOT the same thing as here or in some English-speaking countries. And, the Japanese arm/hand most likely WON'T be waving "come here
  • by commodoresloat ( 172735 ) on Tuesday January 17, 2006 @07:58AM (#14489603)
    The robotic hand was shown at a two-legged robot tournament

    So it's not just a hand, but a hand with two legs!

  • Other uses? (Score:1, Funny)

    by Anonymous Coward
    Anyone else thinking there might be some other 'alternate' use for this device?
  • by sikandril ( 924466 ) on Tuesday January 17, 2006 @07:59AM (#14489611)
    This gives a whole new meaning to the phrase "talk to the hand"
  • A different dept. (Score:1, Insightful)

    by Anonymous Coward
    This might as well be from the "doing-it-just-because-we-can" dept. As many slashdotters have already pointed out, this is pretty impractical.
  • Finally deaf people can use a computer too.. err.. wait a second..
  • Without a camera that translates sign language into spoken language, this is kinda useless, isn't it?

    You can talk to the hand, sure, but that doesn't help you read the deaf person's hands..
    --
    In retrospect: ...and yes.. I know some deaf people can talk sorta.. So I guess it helps there.
  • by MobyDisk ( 75490 ) on Tuesday January 17, 2006 @08:21AM (#14489712) Homepage
    ...that can covert spoken words and simple phrases into sign language...
    Ignoring the spelling, this implies that it has speech recognition. It converts SPOKEN words.
    ...recognizes the 50-character hiragana syllabary and about 10 simple phrases such as "ohayo" (good morning)...
    Hiragana is the Japanese phonetic alphabet, so it READS. Huh? Now, does it read only 10 simple phrases, or does it read anything plus recognize 10 simple audio phrases? I guess the breakthrough is the hand articulation and the idea, not the rest.
    • I guess the breakthrough is the hand articulation and the idea, not the rest
      Actually, some seniors from the local DeVry made a robotic hand, controlled by an HC11 microcontroller, that could be used for signing. That was around 2000.
  • Can you flip off someone in Japanese? Give them the OK sign? Give them a stop with the full palm?

    It would be interesting to know how these motions translate, if at all.
    • I teach in Japan and I've found some gestures to be the same whilst others are quite different. The OK sign in the west is often misinterpreted as it is similar to the sign the Japanese use for money. When you call someone over to you or hail a cab you use your hand palm down and make a kind of gesture that looks like a western child's goodbye gesture. These are just a couple of examples.
    • Oh yeah, if you give someone the bird in Japan then, well, you're giving them the bird. Might be good to check they don't have a lot of tattoos or are driving a car with black tinted windows first ;-)
    • Then there is the classic example of this, when Richard Nixon visited Australia and gave the sign for Victory. Well, that wasn't exactly what he was doing in their minds... http://en.wikipedia.org/wiki/V_sign [wikipedia.org]
  • if i tell someone to f-off will it flip them off for me?
  • For the sake of being informative, here's a good page on Japanese Sign Language [deaflibrary.org]. It's not the same as American Sign Language, which isn't the same as British Sign Language as someone's sure to post eventually. *sigh* Short of Gestuno [wikipedia.org], there is no universal sign language, no more than there is a universal spoken or written language. *rolls eyes* Except, of course, Esperanto [wikipedia.org], which everyone speaks by now, right?
    • This has always been a source of sheer amazement to me. I've always been personally keen on ASL and have taken an ASL course. It hasn't done me much good because unless you practice and use it, it just goes away... But the deaf community has really lost an opportunity here. If, in retrospect, the community had developed a common sign language across cultures around the world, then I believe this USL (Universal Sign Language) would have become the lowest common denominator for communication among the non
    • Out of context, your *rolls eyes* could be:

      -- You're on crack, and drooling

      -- You snorted Vegemite while hiking at YO-seh-MIGHT (Yosemite), but ended up (butt-up) a sulphur spring, except it had Killer Sulphur like in parts of Japan

      -- You got whacked aback of the head with a finely-tuned, perfect-pitch, exquisitely-balanced duratanium rod, and freeze frame caught your eyes before they popped out

      -- You're on your back, approaching orgasm, and drooling (just before being hammered by a duratanium rod, totally
  • by SWroclawski ( 95770 ) <serge@@@wroclawski...org> on Tuesday January 17, 2006 @08:51AM (#14489862) Homepage
    There's a researcher [gallaudet.edu] at Gallaudet working on the other side of this equation with a system designed to recognize sign language, which seems like a much harder problem.

    ASL isn't like English in that there are always specific words; a lot of it has to do with spatial context (where in the signing space the sign was made) and a whole class of signs that don't translate directly into words (they're hand shapes which can translate into an event or a description of an object or set of objects).

    And, as the research page shows, facial expressions and even facial movements can be part of a sign.

    Of course, this is American Sign Language, Japanese Sign Language may be very different.
    • And, as the research page shows, facial expressions and even facial movements can be part of a sign.

      Actually, AFAIK facial expressions, body tilt etc. have a para-linguistic meaning, much like the tone of voice and facial expressions in hearing communication.

      Therefore, they are not - strictly linguistically (structuralistically) speaking, part of the sign itself, but rather a part of the co-text and context.
      Anyway, that's what I remember from that one lecture a year and a half ago...

      • That's *partially* true.

        They have para-linguistic meaning, but they're an important part of signing. It's not equivalent to the tone of voice as it would appear in a Western language (English, French, German), but it's often part of the sign itself, in that certain facial expressions or mouthings should accompany certain signs.

        They're also *vital* for things like questions, where eyebrow position is the indicator that the statement is a question. Translated to English, it might be more akin to tone: "Going
  • So now I have to learn Japanese to communicate with the deaf?
  • I remember reading about a Stanford grad student project doing this ten years ago, and a winner of the National Science Fair doing this three or four years ago.
  • The article states that the hand is 80 cm large (doesn't specify, but I'm guessing that's height). 80 cm is over two and a half feet for non-metric types. My own hand is only about 12 cm long. Is this the largest communicating hand on the planet? Or, as is more likely, does the 80 cm take into account the massive box of micromotors and computing? Pay no attention to the man behind the curtain.
  • There is an article Evolution of Mechanical Fingerspelling Hands for People who are Deaf-Blind [stanford.edu] that talks about the development of this technology since 1977.

    There are a couple of challenges with this type of technology. Sign language does not depend only on finger movements but on gestures and facial expressions to convey emotion and context. Finger-spelling hands, being mechanical, can only accept data so fast before they start "choking" and seizing up/breaking (we tried hooking one up to a teleprompter ap
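
    That "choking" failure is the classic case of a producer outrunning a slow consumer; the usual mitigation is a paced queue between the text source and the actuators. A sketch, where the rate limit and drive_hand() are hypothetical stand-ins rather than any real device's API:

        import queue
        import threading
        import time

        # Pace commands to a mechanical hand so bursts of input can't
        # overrun the actuators. MAX_SIGNS_PER_SEC and drive_hand() are
        # hypothetical stand-ins, not a real device's interface.
        MAX_SIGNS_PER_SEC = 2.0
        commands = queue.Queue(maxsize=100)  # a producer thread puts letters here

        def drive_hand(letter):
            print("signing:", letter)        # placeholder for motor commands

        def pacer():
            while True:
                drive_hand(commands.get())
                time.sleep(1.0 / MAX_SIGNS_PER_SEC)  # cap the actuation rate

        threading.Thread(target=pacer, daemon=True).start()
        for ch in "hello":
            commands.put(ch)
        time.sleep(3)  # let the pacer drain the queue in this demo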

  • There have been other efforts to develop speech-to-sign robots. I recall one being featured on the Discovery channel many years ago that was able to fingerspell a variant of ASL that is used by persons that are both deaf and blind. That was nearly 10 years ago. In that case, the person "listens" by placing their hand over the signer's hand, and feels the different handshapes.

    On another note, this sort of translation is actually more difficult than a voice-to-text, text-to-sign translation. As someone
  • In the early 90's I worked with the robotic finger spelling hands called "Dexter" & "Ralph". Those devices were intended for individuals who are both deaf and blind. An individual with this kind of disability must rest their hand on the back of someone's hand (or on the back of the robotic hand) and feel letters as they are signed by the hand/fingers.
  • Now I can get a freaking ROBOT HAND attached to my HEAD so I can WRITE GOOD MORNING and have aforementioned ROBOT HAND sign it to a deaf guy. That's a LOT EASIER than just writing good morning and SHOWING IT TO THE DEAF GUY!
  • MST3000 (Score:2, Funny)

    by GigsVT ( 208848 ) *
    Bah, Joel invented this on MST3000 years ago. Where's my edible sneakers?
  • It's not like the robotic hand actually has to manipulate anything, so it could just as well be rendered on screen. That way, the program would actually be usable by any deaf person with a notebook that has a microphone.
  • by bcnu ( 551015 )
    Can I get one of these for the back of my car?
  • Wouldn't it just be easier to have the computer type the words to a screen? I mean, if you have the equipment to carry around a robotic hand, I'd imagine an LCD screen would be much cheaper, and it could probably print more words to a screen than the hand could sign, and do so faster.

    guess this is more for the sake of doing rather than being practical

  • Why a robotic hand? Why not simply text on a screen?
  • Why on earth use something as complex as a robot? What's wrong with using ultra-cheap computer graphics instead? Surely the effect must be identical for the viewer. Anything with that many motors has got to be expensive and unreliable.
  • Don't you guys ever consider the fact that some of these breakthroughs are not built for commercial applications?
    Instead of trying to analyze these achievements in the rather constricted mould of "Why not 3D graphics" or "Why not text on a screen", consider the use of this technology in the future - when, say, robots to help disabled people finally come off the assembly lines. By then, this process would've been refined to the point of being able to do an excellent job in communications.
    As a researche
  • by Locke2005 ( 849178 ) on Tuesday January 17, 2006 @01:15PM (#14492027)
    From my observation, much of the "color" or entertainment value in signed conversations comes not from the movement of the hands but the expressions on the face etc. combined with the movements. Clearly, this robot is still nowhere near being capable of the same range of expression as a human being. As a simple test, I'd like to see the robot tell a joke and get the same laughs as a proficient human signer telling the same joke...
  • oh come on, in romaji good morning is spelled "Ohaiyo" NOT "ohayo".. :/
  • If this robot gives the one-finger salute when cussed at, I'll be getting one.
  • No way in hell a computer can actually interpret in sign language within the next 10 years - at least. I appreciate the difficulty in doing what they have done, but there are intense subtleties to sign that will never be attainable by a machine without a human-like form. Puffing the cheeks, eyebrows, and other expressions are semantic modifiers that can change "I drove" into "I was driving forever!" or "I had to drive those kids ALL day!".
    I remember once telling a story, having given place names in space

"I'm a mean green mother from outer space" -- Audrey II, The Little Shop of Horrors

Working...