Robotics / Hardware / Science

When Will We Trust Robots?

Kittenman writes "The BBC magazine has an article on human trust of robots. 'As manufacturers get ready to market robots for the home it has become essential for them to overcome the public's suspicion of them. But designing a robot that is fun to be with — as well as useful and safe — is quite difficult.' The article cites a poll done on Facebook over the 'best face' design for a robot that would be trusted. But we still distrust them in general. 'Eighty-eight per cent of respondents [to a different survey] agreed with the statement that robots are "necessary as they can do jobs that are too hard or dangerous for people," such as space exploration, warfare and manufacturing. But 60% thought that robots had no place in the care of children, elderly people and those with disabilities.' We distrust the robots because of the uncanny valley — or, as the article puts it, that they look unwell (or like corpses) and do not behave as expected. So, at what point will you trust robots for more personal tasks? How about one with the 'trusting face'?" It seems much more likely that a company will figure out sneaky ways to make us trust robots than make robots that much more trustworthy.

Comments:
  • by TaoPhoenix ( 980487 ) <TaoPhoenix@yahoo.com> on Wednesday March 06, 2013 @01:16AM (#43089203) Journal

    Another of those articles that was already partially addressed in SF 60-70 years ago. The guy named Asimov laid out a chunk of the groundwork. But no, they were busy laughing it off as nonsense.

    A robot with *only* Asimov's laws is a pretty good start. A robot programmed with a lot of Social Media crap built in would find itself in violation of a bunch of cases of Rule 1 and Rule 2 pretty fast.

    http://en.wikipedia.org/wiki/Three_Laws_of_Robotics [wikipedia.org]
    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

    (There were some finesses, etc.)
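
    A toy sketch of that strict priority ordering, in Python. Everything here (the Action type, its flags, the sample behaviours) is invented purely for illustration, not any real robotics API:

        from dataclasses import dataclass

        @dataclass
        class Action:
            """A candidate behaviour, tagged with the facts the laws care about."""
            name: str
            injures_human: bool = False     # Law 1: direct harm
            allows_harm: bool = False       # Law 1: harm through inaction
            ordered_by_human: bool = False  # Law 2: was this commanded?
            endangers_self: bool = False    # Law 3: self-preservation

        def permitted(action: Action) -> bool:
            # Law 1 dominates everything else.
            if action.injures_human or action.allows_harm:
                return False
            # Law 2: an order may only be refused on Law 1 grounds,
            # and those were already checked above.
            if action.ordered_by_human:
                return True
            # Law 3: self-preservation yields to both higher laws.
            return not action.endangers_self

        # A "post the owner's location to social media" behaviour plausibly
        # allows harm (stalking risk), so it fails at the first gate:
        print(permitted(Action("post_owner_location", allows_harm=True)))  # False
        print(permitted(Action("fetch_coffee", ordered_by_human=True)))    # True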

  • by icebike ( 68054 ) on Wednesday March 06, 2013 @02:10AM (#43089537)

    I trust my Neato vacuum robot to behave according to its simple rules, as designed. I don't trust any "intelligent" machine to behave in a generally intelligent manner, because they just don't. And that has nothing whatsoever to do with valleys, canny or uncanny.

    You've hit the nail on the head.

    I seriously doubt humans will ever create robots like Data, from Star Trek, because we would never trust them. Regardless of their programming, people would always suspect that the robots would be serving different masters, and spying on us. Hell, we don't even trust our own cell phones or our computers.

    Even if the device doesn't look like a human, people are not likely to trust truly intelligent autonomous machines.
    I'm not convinced there is a valley involved. It's a popular meme, but not all that germane.

  • by bill_mcgonigle ( 4333 ) * on Wednesday March 06, 2013 @02:20AM (#43089577) Homepage Journal

    A robot with a human-like face is a lie so I wouldn't trust it.

    Right. C3PO strikes the right balance - humanoid enough to function alongside humans, built for humans to naturally interface with it (looking into its eyes, etc.) but nobody would ever mistake Threepio for a human, nor would that be a good idea.

    Why would a robot ever need to look like a little boy, outside of weird A.I. plots or something creepier?

    My boy has a Tribot [amazon.com] toy and he loves it. Every kid would love to have a Wall-E friend. Nobody wants a VICKI [youtube.com] wandering around the house.

  • Robots are friendly (Score:5, Interesting)

    by impbob ( 2857981 ) on Wednesday March 06, 2013 @02:47AM (#43089723)
    Living in Japan for the last few years, it's funny to see the contrast in perceptions of robots. In Western movies, people often invent robots or AI which outgrow their human masters and go psychotic, e.g. Terminator, War Games, Matrix, Cylons, etc. It seems Western people are afraid of becoming obsolete, or fearful of their own parenting skills (why can't we raise robots to respect people instead of forcing them, through programming, to respect/follow us?). America especially uses the field of robotics for military applications. In Japan, robots are usually seen more as workers or servants: Astro Boy, children's toys, assembly-line workers, etc. Robots are made into companions for the elderly, or just to make life easier by automating things. Perhaps it's because Shinto belief holds that inanimate objects (trees, water, fire) can have a spirit, while Western (read: Christian) society believes God gives souls only to people, and that people shouldn't play God by creating souls. And yes, I know there are some good robots in Western culture (Kryten) and some bad ones in Japanese culture.
  • by Areyoukiddingme ( 1289470 ) on Wednesday March 06, 2013 @03:14AM (#43089825)

    The last time on Slashdot this question came up, I made a comment observing that people are willing to ascribe human emotions and human reactions to an animated sack of flour. Disney corporation, back in the day, had a test for animators. If the animator could convey those emotions using images of a canvas sack, they passed. And a good animator can reliably do just that.

    Your comment about C3PO or Wall-E makes me want to invert my answer. Because I believe you're right: Wall-E would be completely acceptable, and that's actually a potential problem. The right set of physical actions and sound effects could very easily convince people to trust, like, even love a robot. And it would all be fake. A programmed response. In that earlier post, I remarked on the experiment in Central Park, where some roboticists released a bump-and-go car with a flag and a sign that said "please help me get to X". And enough people actually helped that it got there. And that was just a toy car. Can you imagine the reaction if Wall-E generated that signature sound effect of him adjusting his eye pods, put on his best plaintive look, and held up that sign in his paws? Somebody would take him by the paw and lead him all the way there. And yet, that plaintive look would be completely fake. Counterfeit. There would be no corresponding emotion behind it, nor any mechanism within Wall-E that could generate something similar. Yet people would buy it.

    And that actually strikes me now as hazardous. A robot could be made to convince people it is trustworthy, while not actually being fit for its job. It wouldn't even have to be done maliciously. Say somebody creates a sophisticated program to convey emotion that way with some specified set of motors and parts and open sources it, and it's really good code, and people really like the results. So it gets slapped on to... anything. A lawnmowing robot that will mulch your petunias and your dog, then look contrite if you yell at it. A laundry folding robot that will fold your jeans and your mother-in-law, and cringe and fawn and look sad when your wife complains. And both of them executed all the right moves to appear happy and anxious to please when first set about their tasks.

    I could see it happening, and for the best of reasons. 'cause hey, code reuse, right?
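
    For what it's worth, the hazard is easy to sketch: the emotion-display module and the task logic need share no state at all. A hypothetical Python sketch (all class and method names invented):

        class EmotionDisplay:
            """Plays canned gesture/sound routines; knows nothing about the task."""
            def show(self, emotion: str) -> None:
                print(f"[servos/speaker] performing '{emotion}' routine")

        class LawnMowerBot:
            """Mowing competence and emotional display are fully decoupled."""
            def __init__(self, display: EmotionDisplay) -> None:
                self.display = display

            def mow(self, mulched_the_petunias: bool) -> None:
                if mulched_the_petunias:
                    # A scripted animation, not remorse; it says nothing
                    # about whether the mowing code is actually safe.
                    self.display.show("contrite")
                else:
                    self.display.show("happy and anxious to please")

        LawnMowerBot(EmotionDisplay()).mow(mulched_the_petunias=True)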

  • by ryzvonusef ( 1151717 ) on Wednesday March 06, 2013 @06:29AM (#43090711) Journal

    No one *actually* wants a rational machine; we want an irrational one, one that can be swayed by emotions.

    Remember the back-story of Will Smith's character in the movie "I, Robot"? The robot that saved him made the "logical" decision to save him rather than the girl, which is why he distrusts robots. He wanted a robot that could be swayed by his emotional outburst and save the little girl, "despite" the rational choice.

    We *say* we want a robot with Asimov's three laws, but truly, we *want* something that can be manipulated like putty, just like a human can be. That's how we have evolved, and that's how we *want* to evolve.
    ----
    Also relevant: an XKCD What-If on this issue: http://what-if.xkcd.com/5/ [xkcd.com]
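
    A throwaway sketch of the difference, using the survival odds the film quotes (45% for Spooner, 11% for the girl); the code itself is invented:

        # The purely rational rescuer just maximizes survival probability.
        survival_odds = {"spooner": 0.45, "sarah": 0.11}

        def rational_rescue(odds: dict) -> str:
            return max(odds, key=odds.get)

        def persuadable_rescue(odds: dict, plea: str) -> str:
            # The rescuer Spooner wanted: one an emotional appeal can override.
            return plea if plea in odds else rational_rescue(odds)

        print(rational_rescue(survival_odds))              # spooner
        print(persuadable_rescue(survival_odds, "sarah"))  # sarah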

