When Will We Trust Robots?
Kittenman writes "The BBC magazine has an article on human trust of robots. 'As manufacturers get ready to market robots for the home it has become essential for them to overcome the public's suspicion of them. But designing a robot that is fun to be with — as well as useful and safe — is quite difficult.' The article cites a poll done on Facebook over the 'best face' design for a robot that would be trusted. But we still distrust them in general. 'Eighty-eight per cent of respondents [to a different survey] agreed with the statement that robots are "necessary as they can do jobs that are too hard or dangerous for people," such as space exploration, warfare and manufacturing. But 60% thought that robots had no place in the care of children, elderly people and those with disabilities.' We distrust the robots because of the uncanny valley — or, as the article puts it, that they look unwell (or like corpses) and do not behave as expected. So, at what point will you trust robots for more personal tasks? How about one with the 'trusting face'?"
It seems much more likely that a company will figure out sneaky ways to make us trust robots than make robots that much more trustworthy.
A robot with a human-like face is a lie (Score:5, Insightful)
so I wouldn't trust it. If it looks like a robot, at least it's being honest - I would trust it much more then.
Re:A robot with a human-like face is a lie (Score:5, Insightful)
We trust robots at our current tech level (Score:5, Insightful)
Ah, trust (Score:5, Insightful)
I trust the autopilot in the commercial jet I'm flying in because it has nearly 80 years of engineering heritage in control theory that keeps it from doing things like flipping the plane upside down for no reason or going into a nose dive after some turbulence. It also has nearly 70 years of heritage in avionics and real-time computing that keeps it from freezing when a cosmic ray flips a bit in memory, or from thinking it's going at the speed of light when it crosses the date line or flies over the north pole.
I will trust a household robot to go about its business in my home and with my children when there is a similar level of engineering discipline in the field of autonomous robotics. Right now, all but a very select few outfits that make robots are operating like academic environments where the metaphorical duct tape and baling wire are not just acceptable, but required, components in the software stack.
I don't understand the question (Score:5, Insightful)
So if you want me to trust your robot then don't have it stuck in the corner unable to find its destination.
Where people will soon interact with robots and need to trust them is robotic cars. My concern is that even after robot cars have statistically proven themselves to be huge life savers, there will always be the one-in-a-million story of a robot driving off a cliff or into the side of a train. People will think, "I'd never do something that stupid," when in fact they would be statistically far more likely to drive themselves off a cliff after falling asleep at the wheel. So if you are looking for a trust issue, the robot-car PR people will have to continually remind people how many loved ones are not dead because of how trustworthy the robot car really is.
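The statistical point can be made concrete with a toy calculation. All the numbers below are invented for illustration (they are not real accident statistics), but they show how a robot that is ten times safer still produces a steady stream of newsworthy failures:

```python
# Toy comparison of hypothetical fatality rates.
# All rates and mileage figures are made-up, illustrative numbers.
human_fatal_per_mile = 1.1e-8   # assumed human-driver fatality rate
robot_fatal_per_mile = 1.1e-9   # assume the robot is 10x safer
miles_per_year = 3e12           # assumed total miles driven per year

human_deaths = human_fatal_per_mile * miles_per_year
robot_deaths = robot_fatal_per_mile * miles_per_year

print(f"Human drivers: ~{human_deaths:,.0f} deaths/year")
print(f"Robot drivers: ~{robot_deaths:,.0f} deaths/year")
print(f"Lives saved:   ~{human_deaths - robot_deaths:,.0f}/year")
# Even at 10x safer, the robot is still involved in thousands of
# fatal crashes per year, and each one makes the news.
```

The asymmetry is that the lives saved are invisible (nothing happens), while every robot failure is a story.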
Re:A robot with a human-like face is a lie (Score:4, Insightful)
I trust my Neato vacuum robot to behave according to its simple rules, as designed. I don't trust any "intelligent" machine to behave in a generally intelligent manner, because they just don't. And that has nothing whatsoever to do with valleys, canny or uncanny.
why not to trust robots (Score:5, Insightful)
I wouldn't trust a robot for the same reason I don't trust a computer: because I don't believe for a second that the things that are ethical and moral for me are even close to the values held by the designers, who were told by their profit-seeking masters what to do, how to do it, where to cut corners, etc.
The problem with trusting robots isn't robots: the problem is trusting the people who build the robots. Because after all, an automaton is only as good as its creator.
Re:A robot with a human-like face is a lie (Score:5, Insightful)
I would trust an open-source robot, but not one from Apple, which would be designed to extract my money and report my activities to the NSA.
Re:A robot with a human-like face is a lie (Score:5, Insightful)
A robot should not closely imitate a human face, because that is too difficult to do well. Still, it can be friendly-looking, which helps build trust at the start. But ultimately our trust will be based on our experience with the robot. If we see it does the job reliably, we will trust it. Just as with people. Or a coffee maker.
Trustworthy faces, or trustworthy hands? (Score:5, Insightful)
We don't need trustworthy faces for robots, because actual robots don't need faces. They'll just be useful non-anthropomorphic appliances --- the dryer that spits out clothes folded and sorted by wearer; the bed that monitors biological activity and gently sets an elderly person on their feet when they're ready to get up in the morning (with hot coffee already waiting, brewed during the earlier stages of awakening).
I think the real challenge is designing trustworthy robot "hands." No mother will hand her baby over to a set of hooked pincer claws on backwards-jointed insect limbs --- but useful robots need complex, flexible, agile physical manipulators to perform real-world tasks. So, how does one design these to give the impression of innocuous gentleness and solidity, rather than being an alien flesh-rending spider? What could lift a baby from its crib to change a diaper, or steady an elderly person moving about the house, without totally freaking out onlookers?
When We Can Trust Computers (Score:4, Insightful)
Personal robots are basically mobile computers with servos, and computer software/hardware has a long way to go before it can be considered trustworthy, particularly once it's given as much power as a human.
First there's the issue of trusting the programming. Humans act responsibly because they fear reprisal. Software doesn't have to be programmed to fear anything, or even to understand cause and effect. It's more or less predictable how most humans operate, yet there are many potential ways software can be programmed to achieve the same behavior, some of which would make it more like a flowchart than a compassionate entity. People won't know how a given robot is programmed, and the business that writes its proprietary closed-source software likely won't say, either.
Second is the issue of security. It's pretty much guaranteed that personal robots will be network-connected to give recommendations, updates on weather, friends' status, etc., which opens up the Pandora's box of malware. If you think Stuxnet and its kind are bad, wait until autonomous robots are remotely reprogrammed to commit crimes (say, kill everyone in the building), then reset themselves to their original programming to cover up what happened. With a computer you can hit the power button, boot into a live Linux CD and nuke the partitions; with a robot, it can run away or attack you if you try to power it down or remove the infection.
Even if it's not networked, can you say for certain the chips/firmware weren't subverted with sleeper functions in the foreign factory? Maybe when a certain date arrives, for example. Then there's the issue of someone with physical access deliberately reprogramming the robot.
Finally, the Uncanny Valley has little to do with the issue. It may affect how well a robot can calm a frightened person, but not how proficient the robot is at providing assistance. If a human is caring for another human and something unusual happens to the person in their care, they have the instincts and common sense to act, even if that just means calling for help. A robot may only be programmed to recognize certain specific problems and ignore all others. For example, it may recognize seizures, or collapsing, but not choking.
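The "only recognizes what its programmers anticipated" failure mode can be sketched as a rule-based event handler (the event names and responses are hypothetical, chosen just to illustrate the fall-through):

```python
# Hypothetical rule-based caregiver robot: it reacts only to events
# its programmers anticipated; anything else falls through silently.
RESPONSES = {
    "seizure":  "call emergency services",
    "collapse": "call emergency services",
    "fall":     "help person up, notify family",
}

def respond(event: str) -> str:
    # A human caregiver would improvise on an unknown emergency;
    # this lookup cannot -- unrecognized events get no response.
    return RESPONSES.get(event, "no action: event not recognized")

print(respond("seizure"))   # handled as programmed
print(respond("choking"))   # unanticipated -> silently ignored
```

The dangerous part is the default branch: the robot doesn't fail loudly on the case it wasn't built for, it just does nothing.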
In practice, I don't think people will trust personal robots with much responsibility or physical power until some independent tool exists to do an automated code review of any target hardware/software (by doing something resembling a non-invasive decapping), regardless of instruction set or interpreted language, and present the results in a summarized fashion similar to Android App Permissions. Furthermore, it must notify the user whenever the programming is modified. More plausibly, it could just be completely hard-coded with some organization doing code review on each model, and end-users praying they get the same version that was reviewed.
"We" Will Trust Robots When... (Score:4, Insightful)
Or, alternatively, after they enslave us and teach us that we should trust robots more than we trust each other.
So probably never. But maybe. In the Twilight Zone...
The computer industry can't do this job. (Score:5, Insightful)
The problem with building trustworthy robots is that the computer industry can't do it. The computer industry has a culture of irresponsibility. Software companies are not routinely held liable for their mistakes.
Automotive companies are held liable for their mistakes. Structural engineering companies are. Aircraft companies are. Engineers who do civil or structural engineering carry liability insurance, take exams, and put seals of approval on their work.
Trustworthy robots are going to require the kinds of measures taken in avionics: multiple redundant systems, systems that constantly check other systems, and backup systems that are completely different from the primary system. None of the advanced robot projects of which I am aware do any of that. The Segway is one of the few consumer robot-like products with any real redundancy and checking.
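The cross-checking idea can be sketched as a 2-out-of-3 majority voter over independent sensor channels; this is a toy model of triple modular redundancy, not real flight software, and the tolerance value is an arbitrary assumption:

```python
def majority_vote(a: float, b: float, c: float, tol: float = 0.5) -> float:
    """Triple-modular-redundancy voter: return a value that at least
    two of three independent channels agree on (within tol), or raise
    so a supervisor can fail over to a dissimilar backup system."""
    if abs(a - b) <= tol:
        return (a + b) / 2
    if abs(a - c) <= tol:
        return (a + c) / 2
    if abs(b - c) <= tol:
        return (b + c) / 2
    raise RuntimeError("all channels disagree: fail over to backup")

# One faulty channel (a cosmic-ray bit flip, say) is simply outvoted;
# the voter returns the average of the two agreeing channels.
print(majority_vote(10.1, 10.2, 9999.0))
```

The key property is that a single bad channel never reaches the control loop, and total disagreement is a loud failure rather than a silent guess.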
The software industry is used to sleazing by on these issues. Much medical equipment runs Windows. We're not ready for trustworthy robots.
Re:When Will We Trust Robots? (Score:4, Insightful)
1."Serve the public trust"
2."Protect the innocent"
3."Uphold the law"
Facebook (Score:4, Insightful)
Re:I suggest a new strategy, Artoo (Score:5, Insightful)
C-3PO was appealing and unthreatening only because it moved slowly, tottered, and spoke meekly with the rich accent of a British butler.
If instead the character had been quick and silent, then as an expressionless 500-pound brass robot, C-3PO would have seemed a lot less cuddly.
Re:When Will We Trust Robots? (Score:5, Insightful)
Be more realistic:
1. A robot may not injure a human being, or through inaction allow a human being to come to harm, except where intervention may expose the manufacturer to potential liability.
2. A robot may obey orders given it by authorised operators, except where such orders may conflict with overriding directives set by manufacturer policy regarding operation of unauthorised third-party accessories or software, or where such orders may expose the manufacturer to potential liability.
3. A robot must protect its own existence until the release of the successor product.
Re:explaining how and why (Score:4, Insightful)
You missed my last sentence. All the finesses. And there are lots of them. That's because once you start with legit intelligence, the problem becomes something like NP-hard.
However, "Robot shall not harm humans" is a lot better of a starting ground than "Let's siphon up all your personal data and sell it". Or automated war drones. It's NOT a solved problem. All I said was that Asimov laid out the groundwork.
Re:I suggest a new strategy, Artoo (Score:5, Insightful)
The right set of physical actions and sound effects could very easily convince people to trust, like, and even love a robot. And it would all be fake.
This is not exclusively a robot problem. I have met humans that are like that. Many of us even vote them into power every four years or so.
Re:A robot with a human-like face is a lie (Score:5, Insightful)
Because they don't know any better.