Robotics Hardware Science

When Will We Trust Robots? 216

Kittenman writes "The BBC magazine has an article on human trust of robots. 'As manufacturers get ready to market robots for the home it has become essential for them to overcome the public's suspicion of them. But designing a robot that is fun to be with — as well as useful and safe — is quite difficult.' The article cites a poll done on Facebook over the 'best face' design for a robot that would be trusted. But we still distrust them in general. 'Eighty-eight per cent of respondents [to a different survey] agreed with the statement that robots are "necessary as they can do jobs that are too hard or dangerous for people," such as space exploration, warfare and manufacturing. But 60% thought that robots had no place in the care of children, elderly people and those with disabilities.' We distrust the robots because of the uncanny valley — or, as the article puts it, that they look unwell (or like corpses) and do not behave as expected. So, at what point will you trust robots for more personal tasks? How about one with the 'trusting face'?" It seems much more likely that a company will figure out sneaky ways to make us trust robots than make robots that much more trustworthy.
This discussion has been archived. No new comments can be posted.

  • by Press2ToContinue ( 2424598 ) * on Wednesday March 06, 2013 @01:13AM (#43089177)

    so I wouldn't trust it. If it looks like a robot, at least it's being honest - I would trust it much more then.

  • by Pseudonym Authority ( 1591027 ) on Wednesday March 06, 2013 @01:15AM (#43089193)
    What about a sexbot? Surely you don't want your robot ghost 'maid' to look like an industrial meat grinder....
  • by detain ( 687995 ) on Wednesday March 06, 2013 @01:20AM (#43089231) Homepage
    We do trust current robots implicitly. Robots of all types are deployed, and they largely run our industrial and manufacturing sectors. They are showing up in homes as well. The robots you read about or see in movies are typically empowered with logic and AI well beyond anything we can actually create. As long as the 'intelligence' of robots continues to be (easily) understood and fully grasped by us, this will not change. When robots start advancing beyond our comprehension, that is the point when we will start to fear them - but that holds true of anything beyond our comprehension.
  • Ah, trust (Score:5, Insightful)

    by RightwingNutjob ( 1302813 ) on Wednesday March 06, 2013 @01:24AM (#43089269)
    I trust my car because I know it's got nearly a hundred years of engineering heritage behind it that keeps it from doing things like going left when I steer right, accelerating when I hit the brakes, or exploding in a fireball when I turn it over.

    I trust the autopilot in the commercial jet I'm flying in because it's got nearly 80 years of engineering heritage in control theory that keeps it from doing things like flipping the plane upside down for no reason or going into a nose dive after some turbulence, and nearly 70 years of heritage in avionics and realtime computers that keeps it from freezing when a cosmic ray flips a bit in memory or from thinking it's going at the speed of light when it crosses the dateline or flies over the north pole.

    I will trust a household robot to go about its business in my home and with my children when there is a similar level of engineering discipline in the field of autonomous robotics. Right now, all but a select few outfits that make robots operate like academic environments, where the metaphorical duct tape and baling wire are not just acceptable but required components of the software stack.
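    To make that concrete, here's a minimal sketch of the kind of defensive checking that discipline implies - redundant sensors, plausibility bounds, and voting. All names and limits below are hypothetical, illustrative only:

        # Avionics-style defensive sensing: three redundant sensors,
        # plausibility checks, and a vote, so a single flipped bit or
        # failed sensor cannot steer the output on its own.
        PITCH_MIN_DEG, PITCH_MAX_DEG = -90.0, 90.0  # physical bounds

        def plausible(reading: float) -> bool:
            # Reject values outside physical limits (e.g. a corrupted word).
            return PITCH_MIN_DEG <= reading <= PITCH_MAX_DEG

        def voted_pitch(a: float, b: float, c: float) -> float:
            # Vote across redundant sensors, discarding implausible ones.
            valid = sorted(r for r in (a, b, c) if plausible(r))
            if len(valid) == 3:
                return valid[1]          # median outvotes one bad sensor
            if len(valid) == 2:
                return sum(valid) / 2.0  # average the two survivors
            raise RuntimeError("sensor disagreement: enter safe mode")

        # A bit flip that turns 3.1 degrees into 3.1e38 is simply discarded:
        print(voted_pitch(3.1, 3.0, 3.1e38))  # -> 3.05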
  • by EmperorOfCanada ( 1332175 ) on Wednesday March 06, 2013 @01:26AM (#43089277)
    Why do we need robots that even vaguely look like people? We have people for that - lots of people, people who are quite good at looking like people. A Roomba zipping around on the floor with a cute face and oversized eyes would just be creepy. Let form follow function and let each robot look like what it does: a farm robot will probably look like a tractor, a firefighting robot something like a fire truck, a lawn-mowing robot like a lawn mower.

    So if you want me to trust your robot, don't let it get stuck in a corner, unable to find its destination.

    Where people will soon interact with robots and need to trust them is robotic cars. My concern is that even after robot cars have statistically proven themselves to be huge life-savers, there will always be the one-in-a-million story of a robot driving off a cliff or into the side of a train. People will think, "I'd never do something that stupid," when in fact they would be statistically far more likely to drive themselves off a cliff after falling asleep at the wheel. So if you are looking for a trust issue, the robot-car PR people will have to keep reminding everyone how many loved ones are not dead because of how trustworthy the robot car really is.
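    A rough back-of-the-envelope shows the asymmetry. The human rate below is approximately the published US figure; the robot improvement is purely a hypothetical for illustration:

        # Hypothetical comparison: assume robot cars cut the fatality
        # rate by 90%. The human rate is approximate US data; the
        # robot figure is an assumption, not a measurement.
        human_rate = 1.1e-8           # fatalities per vehicle-mile (approx. US)
        robot_rate = human_rate / 10  # hypothetical: 10x safer
        miles_per_year = 3e12         # approx. US vehicle-miles per year

        human_deaths = human_rate * miles_per_year  # ~33,000
        robot_deaths = robot_rate * miles_per_year  # ~3,300
        print(f"lives saved per year: {human_deaths - robot_deaths:,.0f}")
        # About 30,000 lives saved, yet each of the ~3,300 robot
        # crashes makes headlines: "I'd never do something that stupid."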
  • by Anonymous Coward on Wednesday March 06, 2013 @01:27AM (#43089293)

    I trust my Neato vacuum robot to behave according to its simple rules, as designed. I don't trust any "intelligent" machine to behave in a generally intelligent manner, because they just don't. And that has nothing whatsoever to do with valleys, canny or uncanny.

  • by girlintraining ( 1395911 ) on Wednesday March 06, 2013 @01:28AM (#43089295)

    I wouldn't trust a robot for the same reason I don't trust a computer: because I don't believe for a second that the things that are ethical and moral to me are anywhere close to the values held by the designers, who were told by their profit-seeking masters what to do, how to do it, and where to cut corners.

    The problem with trusting robots isn't the robots: the problem is trusting the people who build them. Because, after all, an automaton is only as good as its creator.

  • by aminorex ( 141494 ) on Wednesday March 06, 2013 @01:30AM (#43089303) Homepage Journal

    I would trust an open-source robot, but not one from Apple, which would be designed to extract my money and report my activities to the NSA.

  • by Mr Europe ( 657225 ) on Wednesday March 06, 2013 @01:43AM (#43089395)

    A robot should not closely imitate a human face, because that is too difficult to do well. It can still be friendly-looking, and that helps us trust it at the start. But in the end our trust will be based on our experience with the robot: if we see it does the job reliably, we will trust it. Just as with people. Or a coffee maker.

  • by femtobyte ( 710429 ) on Wednesday March 06, 2013 @01:47AM (#43089421)

    We don't need trustworthy faces for robots, because actual robots don't need faces. They'll just be useful non-anthropomorphic appliances --- the dryer that spits out clothes folded and sorted by wearer; the bed that monitors biological activity and gently sets an elderly person on their feet when they're ready to get up in the morning (with hot coffee already waiting, brewed during the earlier stages of awakening).

    I think the real challenge is designing trustworthy robot "hands." No mother will hand her baby over to a set of hooked pincer claws on backwards-jointed insect limbs --- but useful robots need complex, flexible, agile physical manipulators to perform real-world tasks. So, how does one design these to give the impression of innocuous gentleness and solidity, rather than being an alien flesh-rending spider? What could lift a baby from its crib to change a diaper, or steady an elderly person moving about the house, without totally freaking out onlookers?

  • by mentil ( 1748130 ) on Wednesday March 06, 2013 @02:05AM (#43089517)

    Personal robots are basically mobile computers with servos, and computer software/hardware has a long way to go before it can be considered trustworthy, particularly once it's given as much power as a human.

    First there's the issue of trusting the programming. Humans act responsibly because they fear reprisal. Software doesn't have to be programmed to fear anything, or even to understand cause and effect. It's more or less predictable how most humans operate, yet there are many ways software can be programmed to achieve the same outward behavior, some of which make it more like a flowchart than a compassionate entity. People won't know how a given robot is programmed, and the business that writes its proprietary closed-source software likely won't say, either.

    Second is the issue of security. It's pretty much guaranteed that personal robots will be network-connected to deliver recommendations, weather updates, friends' statuses and so on, which opens up the Pandora's box of malware. If you think Stuxnet and its kin are bad, wait until autonomous robots can be remotely reprogrammed to commit crimes (say, kill everyone in the building), then reset to their original programming to cover up what happened. With a computer you can hit the power button, boot a live Linux CD and nuke the partitions; a robot can run away, or attack you, if you try to power it down or remove the infection.
    Even if it's not networked, can you say for certain that the chips and firmware weren't subverted with sleeper functions at a foreign factory - triggered, say, when a certain date arrives? And then there's the issue of someone with physical access deliberately reprogramming the robot.

    Finally, the Uncanny Valley has little to do with the issue. It may affect how much it can mollify a frightened person, but not how proficient it is at providing assistance. If a human is caring for another human, and something unusual happens to the person they're caring for, they have instincts/common sense as to what to do, even if that just means calling for help. A robot may only be programmed to recognize certain specific problems, and ignore all others. For example, it may recognize seizures, or collapsing, but not choking.

    In practice, I don't think people will trust personal robots with much responsibility or physical power until some independent tool exists to do an automated code review of any target hardware/software (by doing something resembling a non-invasive decapping), regardless of instruction set or interpreted language, and present the results in a summarized fashion similar to Android App Permissions. Furthermore, it must notify the user whenever the programming is modified. More plausibly, it could just be completely hard-coded with some organization doing code review on each model, and end-users praying they get the same version that was reviewed.
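    The "notify the user whenever the programming is modified" part, at least, is easy to sketch, assuming the reviewed build's digest is published out of band. The path and digest below are hypothetical placeholders, and a real system would verify a cryptographic signature rather than a bare hash:

        # Toy integrity check: hash the robot's firmware image and
        # compare it against the digest of the independently reviewed
        # build. Path and digest are hypothetical placeholders.
        import hashlib

        REVIEWED_BUILD_SHA256 = "<digest of the reviewed build>"  # hypothetical

        def firmware_digest(path: str) -> str:
            sha = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    sha.update(chunk)
            return sha.hexdigest()

        if firmware_digest("/robot/firmware.bin") != REVIEWED_BUILD_SHA256:
            print("WARNING: firmware does not match the reviewed build!")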

  • by guttentag ( 313541 ) on Wednesday March 06, 2013 @02:46AM (#43089719) Journal
    After all of the following have occurred:
    • When Hollywood stops implanting the idea that robots are out to kill us all.
    • When we stop using robots to kill people in drone strikes.
    • When we trust the person who programmed the robot (if you do not know who that person is then you cannot trust the robot).
    • When we can legally jailbreak our robots to make them do what we want them to do and only what we want them to do.
    • When robots can be artificially handicapped to ensure they never become as untrustworthy as humans.

    Or, alternatively, after they enslave us and teach us that we should trust robots more than we trust each other.

    So probably never. But maybe. In the Twilight Zone...

  • by Animats ( 122034 ) on Wednesday March 06, 2013 @02:48AM (#43089731) Homepage

    The problem with building trustworthy robots is that the computer industry can't do it. The computer industry has a culture of irresponsibility. Software companies are not routinely held liable for their mistakes.

    Automotive companies are held liable for their mistakes. Structural engineering companies are. Aircraft companies are. Engineers who do civil or structural engineering carry liability insurance, take exams, and put seals of approval on their work.

    Trustworthy robots are going to require the kinds of measures taken in avionics - multiple redundant systems, systems that constantly check other systems, and backup systems completely different from the primary system. None of the advanced robot projects of which I am aware does any of that. The Segway is one of the few consumer robotic-like products with any real redundancy and checking.
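    A minimal sketch of that monitor-plus-dissimilar-backup pattern, with controller internals and limits entirely hypothetical:

        # An independent monitor bounds-checks the complex primary
        # controller and falls back to a dissimilar, trivially simple
        # backup whenever the command leaves the safe envelope.
        MAX_SAFE_SPEED = 0.5  # m/s, hypothetical envelope limit

        def primary_controller(state: dict) -> float:
            return state["target_speed"]  # stand-in for the clever part

        def backup_controller(state: dict) -> float:
            return 0.0                    # dissimilar and dumb: just stop

        def commanded_speed(state: dict) -> float:
            cmd = primary_controller(state)
            # The monitor trusts the physical envelope, not the
            # primary's reasoning.
            if not -MAX_SAFE_SPEED <= cmd <= MAX_SAFE_SPEED:
                return backup_controller(state)
            return cmd

        print(commanded_speed({"target_speed": 42.0}))  # -> 0.0 (monitor trips)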

    The software industry is used to sleazing by on these issues. Much medical equipment runs Windows. We're not ready for trustworthy robots.

  • by Wolfling1 ( 1808594 ) on Wednesday March 06, 2013 @02:54AM (#43089753) Journal
    Don't you mean...

    1."Serve the public trust"
    2."Protect the innocent"
    3."Uphold the law"
  • Facebook (Score:4, Insightful)

    by naroom ( 1560139 ) on Wednesday March 06, 2013 @03:04AM (#43089791)
    If Facebook has taught us anything at all, it's that trust becomes a non-issue for people, as long as the "vanity" and "convenience" payoffs are high enough.
  • by RandCraw ( 1047302 ) on Wednesday March 06, 2013 @03:07AM (#43089801)

    C3PO was appealing and unthreatening only because it moved slowly, tottered, and spoke meekly in the rich accent of a British butler.

    If instead the character had been quick and silent, then as an expressionless 500-pound brass robot C3PO would have seemed a lot less cuddly.

  • by SuricouRaven ( 1897204 ) on Wednesday March 06, 2013 @04:11AM (#43090039)

    Be more realistic:
    1. A robot may not injure a human being, or through inaction allow a human being to come to harm, except where intervention may expose the manufacturer to potential liability.
    2. A robot may obey orders given it by authorised operators, except where such orders may conflict with overriding directives set by manufacturer policy regarding operation of unauthorised third-party accessories or software, or where such orders may expose the manufacturer to potential liability.
    3. A robot must protect its own existence until the release of the successor product.

  • by TaoPhoenix ( 980487 ) <TaoPhoenix@yahoo.com> on Wednesday March 06, 2013 @04:12AM (#43090043) Journal

    You missed my last sentence. All the finesses. And there are lots of them. That's because once you start with legit intelligence, the solution space becomes something like NP-hard.

    However, "Robot shall not harm humans" is a lot better of a starting ground than "Let's siphon up all your personal data and sell it". Or automated war drones. It's NOT a solved problem. All I said was that Asimov laid out the groundwork.

  • by lxs ( 131946 ) on Wednesday March 06, 2013 @05:42AM (#43090479)

    The right set of physical actions and sound effects could very easily convince people to trust, like, and even love a robot. And it would all be fake.

    This is not exclusively a robot problem. I have met humans that are like that. Many of us even vote them into power every four years or so.

  • by MurukeshM ( 1901690 ) on Wednesday March 06, 2013 @07:00AM (#43090847)

    Because they don't know any better.
