
Robotics Hardware Science

When Will We Trust Robots?

Kittenman writes "The BBC magazine has an article on human trust of robots. 'As manufacturers get ready to market robots for the home it has become essential for them to overcome the public's suspicion of them. But designing a robot that is fun to be with — as well as useful and safe — is quite difficult.' The article cites a poll done on Facebook over the 'best face' design for a robot that would be trusted. But we still distrust them in general. 'Eighty-eight per cent of respondents [to a different survey] agreed with the statement that robots are "necessary as they can do jobs that are too hard or dangerous for people," such as space exploration, warfare and manufacturing. But 60% thought that robots had no place in the care of children, elderly people and those with disabilities.' We distrust the robots because of the uncanny valley — or, as the article puts it, because they look unwell (or like corpses) and do not behave as expected. So, at what point will you trust robots for more personal tasks? How about one with the 'trusting face'?" It seems much more likely that a company will figure out sneaky ways to make us trust robots than make robots that much more trustworthy.
  • by Pseudonym Authority ( 1591027 ) on Wednesday March 06, 2013 @01:34AM (#43089331)
    You clearly never read anything he wrote. All those robot books were basically explaining how and why those laws would not work.
  • by Anonymous Coward on Wednesday March 06, 2013 @02:16AM (#43089555)

    Which books were you reading? The ones I read played with some odd scenarios to explore the implications of the laws, but the laws always did work in the end. Indeed, the only times humans were really put in danger were in cases where the laws had been tinkered with, e.g. "Runaround" and (to a lesser extent) "Catch That Rabbit". Also "Liar!", if you count emotional harm as violating the first law.

    There was another case (in one of the Foundation prequels, maybe?) where robotic spaceships were able to kill people because they assumed that other spaceships were also just crewless robots, but that hardly applies to our situation. It's easy enough to get people to kill people -- no need to have a robot do it.

  • by SuricouRaven ( 1897204 ) on Wednesday March 06, 2013 @04:08AM (#43090029)

    C-3PO was a protocol droid: its function was to serve as a translator and advisor on cultural conventions. Just the thing any diplomat needs: not only will it translate when you want to talk to the people of some distant planet, it'll also remind you that forks with more than four tines are considered a badge of the king and not permitted to anyone of lower rank. Humanoid appearance is important for this job, as translation is a lot easier when you can use gestures too.
