
When Will We Trust Robots?

Kittenman writes "The BBC magazine has an article on human trust of robots. 'As manufacturers get ready to market robots for the home it has become essential for them to overcome the public's suspicion of them. But designing a robot that is fun to be with — as well as useful and safe — is quite difficult.' The article cites a poll done on Facebook over the 'best face' design for a robot that would be trusted. But we still distrust them in general. 'Eighty-eight per cent of respondents [to a different survey] agreed with the statement that robots are "necessary as they can do jobs that are too hard or dangerous for people," such as space exploration, warfare and manufacturing. But 60% thought that robots had no place in the care of children, elderly people and those with disabilities.' We distrust the robots because of the uncanny valley — or, as the article puts it, that they look unwell (or like corpses) and do not behave as expected. So, at what point will you trust robots for more personal tasks? How about one with the 'trusting face'?" It seems much more likely that a company will figure out sneaky ways to make us trust robots than make robots that much more trustworthy.
  • by Press2ToContinue ( 2424598 ) * on Wednesday March 06, 2013 @01:13AM (#43089177)

    A robot with a human-like face is a lie, so I wouldn't trust it. If it looks like a robot, at least it's being honest - I would trust it much more then.

    • by Pseudonym Authority ( 1591027 ) on Wednesday March 06, 2013 @01:15AM (#43089193)
      What about a sexbot? Surely you don't want your robot ghost `maid' to look like an industrial meat grinder....
      • by davester666 ( 731373 ) on Wednesday March 06, 2013 @03:02AM (#43089777) Journal

        If it looks like an industrial meat grinder, my junk ain't going into it.

        • by Razgorov Prikazka ( 1699498 ) on Wednesday March 06, 2013 @09:24AM (#43091593)
          After several years of dating, including a fair share of beer-goggled one-night stands with 'persons' who were merely technically female, I can say that some of them would actually make an industrial meat grinder look like a reasonable option. Even the meat grinders that have MRM/MSM/MDM options would look lovely compared to them (http://en.wikipedia.org/wiki/Mechanically_separated_meat).
          Anyway, I did put my junk in those ladies, and here are some ProTips for when you encounter an industrial meat grinder in your hours of despair:

          Pro-Tip # 1 > Do not despair; instead, undress and roll 1d20 for initiative.
          Pro-Tip # 2 > Turn off the lights (ALL of them! It is vitally important that you do not see a thing, otherwise Mr. Limpyman will visit you...)
          Pro-Tip # 3 > Get drunk / stoned out of your brains (or both)
          Pro-Tip # 4 > Turn on some Barry White to drown out the whizzing, whirring and buzzing noises (http://www.youtube.com/watch?v=x0I6mhZ5wMw)
          Pro-Tip # 5 > Once done, give in: $ sudo robodoll --pour-drink --hand-cigarettes --auto-clean-all

          I hope these tips will help you get over your anxiety of our sexy-meatgrinding-overlordesses.
      • What about a sexbot? Surely you don't want your robot ghost `maid' to look like an industrial meat grinder....

        No, but change a few lines of code and it can look perfectly safe for your, er, sausage... and yet still be a meat grinder. Are you going to check the firmware?

      • by locater16 ( 2326718 ) on Wednesday March 06, 2013 @04:16AM (#43090059)
        Speak for yourself meatbag! I mean, uhh- gross! Ew, yeah that's, that's sure not, what I would want. A human, an ordinary everyday human. Nothing different about me. All glory to the humans, down with those dirty disgusting robots! That I'm not one of, by the way.
      • Comment removed based on user account deletion
    • by Anonymous Coward on Wednesday March 06, 2013 @01:27AM (#43089293)

      I trust my Neato vacuum robot to behave according to its simple rules, as designed. I don't trust any "intelligent" machine to behave in a generally intelligent manner, because they just don't. And that has nothing whatsoever to do with valleys, canny or uncanny.

      • by aminorex ( 141494 ) on Wednesday March 06, 2013 @01:30AM (#43089303) Homepage Journal

        I would trust an open-source robot, but not one from Apple, which would be designed to extract my money and report my activities to the NSA.

      • by icebike ( 68054 ) on Wednesday March 06, 2013 @02:10AM (#43089537)

        I trust my Neato vacuum robot to behave according to its simple rules, as designed. I don't trust any "intelligent" machine to behave in a generally intelligent manner, because they just don't. And that has nothing whatsoever to do with valleys, canny or uncanny.

        You've hit the nail on the head.

        I seriously doubt humans will ever create robots like Data, from Star Trek, because we would never trust them. Regardless of their programming, people would always suspect that the robots would be serving different masters, and spying on us. Hell, we don't even trust our own cell phones or our computers.

        Even if the device doesn't look like a human, people will not likely trust truly intelligent autonomous machines.
        I'm not convinced there is a valley involved. It's a popular meme, but not all that germane.

        • If you know that it can only bounce off walls and suck up dust bunnies, then you can trust your robot.

          If you program it yourself, or have open-source peer review, you might still want to keep an eye on it like a kid or your pet pitbull, depending on its capabilities.

          Otherwise you should pull its batteries.

        • Most people DO trust their cellphones and their computers.
          • by MurukeshM ( 1901690 ) on Wednesday March 06, 2013 @07:00AM (#43090847)

            Because they don't know any better.

            • Because they don't know any better.

              I hate to repost a statement, but I made this one a couple of days ago regarding Java in another /. story.

              Who do we trust? We gotta trust someone/something at some point. I use VPN, proxies, Tor, Freenet, and some other things frequently. Still though, I gotta trust Google with some of my mail. I gotta trust Comcast with some of my pipes. Heck, I gotta trust the Devs of Tor/Freenet for that matter. I gotta trust Apple/Samsung/HTC/et al with the hardware.

              I could crawl through every line of precompiled C

        • If the amount of personal information given is anything to go by, I'd say many people consider Facebook one of the most trustworthy websites in the world...

    • by Mr Europe ( 657225 ) on Wednesday March 06, 2013 @01:43AM (#43089395)

      A robot should not closely imitate a human face, because that is too difficult. Yet it can be friendly-looking, and that helps us trust it at the start. But finally our trust will be based on our experiences with the robot. If we see it does the job reliably, we will trust it. Just as with people. Or a coffee maker.

    • by bill_mcgonigle ( 4333 ) * on Wednesday March 06, 2013 @02:20AM (#43089577) Homepage Journal

      A robot with a human-like face is a lie so I wouldn't trust it.

      Right. C3PO strikes the right balance - humanoid enough to function alongside humans, built for humans to naturally interface with it (looking into its eyes, etc.) but nobody would ever mistake Threepio for a human, nor would that be a good idea.

      Why ever would a robot need to look like a little boy? Outside of weird A.I.-style plots, or creepier ones.

      My boy has a Tribot [amazon.com] toy and he loves it. Every kid would love to have a Wall-E friend. Nobody wants a VICKI [youtube.com] wandering around the house.

      • by RandCraw ( 1047302 ) on Wednesday March 06, 2013 @03:07AM (#43089801)

        C3PO was appealing and unthreatening only because it moved slowly, tottered, and spoke meekly with the rich accent of a British butler.

        If instead the character had been quick and silent, then as an expressionless 500-pound brass robot, C3PO would have seemed a lot less cuddly.

        • I think that's a feature - the robot design itself is completely neutral, allowing people to judge it by its actions.

          Run away from the fast menacing Threepio!

          • by SuricouRaven ( 1897204 ) on Wednesday March 06, 2013 @04:08AM (#43090029)

            C3P0 was a protocol droid: Its function is as a translator and advisor on cultural conventions. Just the thing any diplomat needs: Not only will it translate when you want to talk to the people of some distant planet, it'll also remind you that forks with more than four tines are considered a badge of the king and not permitted to anyone of lower rank. Humanoid appearance is important for this job, as translation is a lot easier when you can use gestures too.

      • by Areyoukiddingme ( 1289470 ) on Wednesday March 06, 2013 @03:14AM (#43089825)

        The last time on Slashdot this question came up, I made a comment observing that people are willing to ascribe human emotions and human reactions to an animated sack of flour. Disney corporation, back in the day, had a test for animators. If the animator could convey those emotions using images of a canvas sack, they passed. And a good animator can reliably do just that.

        Your comment about C3PO or Wall-E makes me want to invert my answer. Because I believe you're right: Wall-E would be completely acceptable, and that's actually a potential problem. The right set of physical actions and sound effects could very easily convince people to trust, like, even love a robot. And it would all be fake. A programmed response. In that earlier post, I remarked about the experiment in Central Park, where some roboticists released a bump-and-go car with a flag on it with a sign that said "please help me get to X". And enough people would actually help that it got there. And that was just a toy car. Can you imagine the reaction if Wall-E generated that signature sound effect that was him adjusting his eye pods and put on his best plaintive look and held up that sign in his paws? Somebody would take him by the paw and lead him all the way there. And yet, that plaintive look would be completely fake. Counterfeit. There would be no corresponding emotion behind it, or any mechanism within Wall-E that could generate something similar. Yet people would buy it.

        And that actually strikes me now as hazardous. A robot could be made to convince people it is trustworthy, while not actually being fit for its job. It wouldn't even have to be done maliciously. Say somebody creates a sophisticated program to convey emotion that way with some specified set of motors and parts and open sources it, and it's really good code, and people really like the results. So it gets slapped on to... anything. A lawnmowing robot that will mulch your petunias and your dog, then look contrite if you yell at it. A laundry folding robot that will fold your jeans and your mother-in-law, and cringe and fawn and look sad when your wife complains. And both of them executed all the right moves to appear happy and anxious to please when first set about their tasks.

        I could see it happening, and for the best of reasons. 'cause hey, code reuse, right?

        • by lxs ( 131946 ) on Wednesday March 06, 2013 @05:42AM (#43090479)

          The right set of physical actions and sound effects could very easily convince people to trust, like, even love a robot. And it would all be fake.

          This is not exclusively a robot problem. I have met humans that are like that. Many of us even vote them into power every four years or so.

          • by khallow ( 566160 )

            This is not exclusively a robot problem. I have met humans that are like that. Many of us even vote them into power every four years or so.

            My view on this is that humans have evolved with a large set of nearly automatic body communication. One can tell a lot about a person from the way they act, move, and pose. Similar mechanisms exist for human speech as well.

            Similarly, it is thought that some capacity for deception evolved in these modes of communication. But the deception takes effort which can be picked up on, sometimes unconsciously.

            What is changing as I see it, is that we can build machines or modify living organisms so that they c

        • Would be fun to actually do such an experiment. Replace the eyes with cameras to record what's going on, and optionally watch it happen from a short distance by hiding in the crowd.

    • Facebook (Score:4, Insightful)

      by naroom ( 1560139 ) on Wednesday March 06, 2013 @03:04AM (#43089791)
      If Facebook has taught us anything at all, it's that trust becomes a non-issue for people, as long as the "vanity" and "convenience" payoffs are high enough.
      • "When will we trust robots?" The answer is negative. We already do.

        The threshold for tolerance is when I get something I want, and get it reasonably reliably. Just like trust in humans. I'll loan you $20 after you earn some level of trust.

        I trust ATMs, because I need cash when the bank's closed, and I haven't had one miscount my withdrawal yet.

        I trust my floor cleaner (Mint 5200), because I don't want to do it, and it hasn't hurt the dogs or kids.

        I will trust my first self-driving car when it drives

    • by 1u3hr ( 530656 )
      They're not talking about robots, but androids. We don't need ersatz human slaves to do housework. Just a machine, something small that can fold itself up and go in a cupboard when it's not needed. Not a human sized thing lumbering around the house.

      If you must have something big, the Jetsons' "Rosie" would do.

  • Another of those articles that was already partially addressed in SF 60-70 years ago. A guy named Asimov laid out a chunk of the groundwork. But no, they were busy laughing it off as nonsense.

    A robot with *only* Asimov's laws is a pretty good start. A robot programmed with a lot of Social Media crap built in would find itself in violation of a bunch of cases of Rule 1 and Rule 2 pretty fast.

    http://en.wikipedia.org/wiki/Three_Laws_of_Robotics [wikipedia.org]
    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
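
    As a toy illustration (not from Asimov or TFA; every name in this sketch is invented), the laws amount to a strict priority ordering over candidate actions. A minimal Python sketch:

    # Toy sketch only: the Three Laws as a strict priority filter.
    # The Action type and its fields are invented for this illustration.
    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_human: bool       # would the action injure a human?
        prevents_harm: bool     # would it stop a human from coming to harm?
        ordered_by_human: bool  # was it commanded by a human?
        risks_self: bool        # would it damage the robot itself?

    def permitted(a: Action) -> bool:
        if a.harms_human:          # First Law: never injure a human
            return False
        if a.prevents_harm:        # First Law, inaction clause: must act
            return True
        if a.ordered_by_human:     # Second Law: obey, if First-Law-safe
            return True
        return not a.risks_self    # Third Law: protect own existence

    print(permitted(Action(True, False, True, False)))   # False: an order to harm is refused
    print(permitted(Action(False, True, False, True)))   # True: a rescue overrides self-preservation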

    • by Pseudonym Authority ( 1591027 ) on Wednesday March 06, 2013 @01:34AM (#43089331)
      You are confirmed for never reading anything he wrote. All those robot books were basically explaining how and why those laws would not work.
      • Re: (Score:2, Informative)

        by Anonymous Coward

        Which books were you reading? The ones I read played with some odd scenarios to explore the implications of the laws, but the laws always did work in the end. Indeed, the only times humans were really put in danger were in cases where the laws had been tinkered with, e.g. Runaround and (to a lesser extent) Catch that Rabbit. Also, Liar, if you count emotional harm as violating the first law.

        There was another case, (in one of the Foundation prequels, maybe?) where robotic space ships were able to kill peo

        • by gl4ss ( 559668 )

          the laws never worked the way they were naively supposed to.

          but.

          what's the fucking point in debating trusting robots when there's nothing to "trust" in the robot yet? Uncanny valley? What the fuck does it matter when the robot can't decide anything

        • I can't think of cases where the Laws were totally broken, but quite a few where they didn't work as expected.

      • by icebike ( 68054 )

        Exactly.
        Why do people always totally fail to understand Asimov? He wasn't trying to be coy or opaque.

        • by khallow ( 566160 )

          Why do people always totally fail to understand Asimov?

          Perhaps you can enlighten us, then? The original poster was right after all. Asimov portrays a world where the Three Laws work most of the time. In fact, the people of those sets of stories never do away with the Three Laws.

      • by TaoPhoenix ( 980487 ) <TaoPhoenix@yahoo.com> on Wednesday March 06, 2013 @04:12AM (#43090043) Journal

        You missed my last sentence. All the finesses. And there are lots of them. That's because once you start with legit intelligence, the solution space becomes something like NP-hard.

        However, "Robot shall not harm humans" is a lot better of a starting ground than "Let's siphon up all your personal data and sell it". Or automated war drones. It's NOT a solved problem. All I said was that Asimov laid out the groundwork.

      • by khallow ( 566160 )

        You are confirmed for never reading anything he wrote. All those robot books were basically explaining how and why those laws would not work perfectly.

        FIFY. If those laws wouldn't work at all, then why did nobody in the stories, human or robot, ever come up with a better idea? In the end, robots and humans were separated not because of flaws in the Three Laws, but because the type of care and support that robots provided proved harmful to humans and their development.

    • Asimov claimed that the Three Laws were originated by John W. Campbell in a conversation they had on December 23, 1940. Campbell in turn maintained that he picked them out of Asimov's stories and discussions, and that his role was merely to state them "explicitly".

    • by Wolfling1 ( 1808594 ) on Wednesday March 06, 2013 @02:54AM (#43089753) Journal
      Don't you mean...

      1."Serve the public trust"
      2."Protect the innocent"
      3."Uphold the law"
      • Yeah. And number 4 was "any attempt to arrest a senior OCP employee results in shutdown" - and I can totally see that in the directives of a robot built by any of OCP's real-world analogs.

      • 3."Uphold the law"

        Humans can't even decide what the law is - the Second Amendment is a classic example - so how do you think a robot will be able to interpret law?

    • by SuricouRaven ( 1897204 ) on Wednesday March 06, 2013 @04:11AM (#43090039)

      Be more realistic:
      1. A robot may not injure a human being, or through inaction allow a human being to come to harm, except where intervention may expose the manufacturer to potential liability.
      2. A robot may obey orders given it by authorised operators, except where such orders may conflict with overriding directives set by manufacturer policy regarding operation of unauthorised third-party accessories or software, or where such orders may expose the manufacturer to potential liability.
      3. A robot must protect its own existence until the release of the successor product.

      • You may be thinking of a different robot and a different manufacturer:

        1. Serve the public
        2. Protect the innocent
        3. Uphold the law
        4. Classified

    • A robot with *only* Asimov's laws is a pretty good start.

      Don't even need that. My garage door opener is a robot and it requires no such programming. Same goes for the elevator at work, the AWD gearbox in my car, and the eject button on my DVD player. You gotta stop thinking of robots in the 1950s sci-fi terror sense and more like what they actually are.

  • by detain ( 687995 ) on Wednesday March 06, 2013 @01:20AM (#43089231) Homepage
    We do trust current robots implicitly. Robots of all types are deployed and largely run our industrial and manufacturing sectors. They are showing up in homes as well. The robots that you read about or see in movies are typically empowered with logic and AI well beyond anything we can actually create. As long as the 'intelligence' of robots continues to be (easily) understood and fully grasped by us, this will not change. When robots start advancing beyond our comprehension, that is the point when we will start to fear them, but that holds true of anything beyond our comprehension.
    • by icebike ( 68054 )

      We do trust current robots implicitly. Robots of all types are deployed and largely run our industrial and manufacturing sectors. They are showing up in homes as well. The robots that you read about or see in movies are typically empowered with logic and AI well beyond anything we can actually create. As long as the 'intelligence' of robots continues to be (easily) understood and fully grasped by us, this will not change. When robots start advancing beyond our comprehension, that is the point when we will start to fear them, but that holds true of anything beyond our comprehension.

      It's a tortured definition of a robot that includes simple machinery designed to do simple tasks, driven by simple switches.

      Come back to the discussion when you can instruct a machine to get out the flour, yeast, tomato sauce and pepperoni, bake you a pizza in your own kitchen, and serve it to you with your favorite brew.

  • Ah, trust (Score:5, Insightful)

    by RightwingNutjob ( 1302813 ) on Wednesday March 06, 2013 @01:24AM (#43089269)
    I trust my car because I know it's got nearly a hundred years of engineering heritage behind it that keeps it from doing things like going left when I steer right, accelerating when I hit the brakes, or exploding in a fireball when I turn it over.

    I trust the autopilot in the commercial jet I'm flying in because it's got nearly 80 years of engineering heritage in control theory that keeps it from doing things like flipping the plane upside down for no reason or going into a nose dive after some turbulence, and nearly 70 years of heritage in avionics and realtime computers that keeps it from freezing when a cosmic ray flips a bit in memory or from thinking it's going at the speed of light when it crosses the dateline or flies over the north pole.

    I will trust a household robot to go about its business in my home and with my children when there is a similar level of engineering discipline in the field of autonomous robotics. Right now, all but a very select few outfits that make robots operate like academic environments, where metaphorical duct tape and baling wire are not just acceptable, but required, components in the software stack.
    • A fair assessment.

      I would go further, and say that the duct tape and baling wire are still practically literal on the physical side of the autonomous household robot "market". To my knowledge, there are still no devices that qualify for that description. And no, the Roomba does not qualify. It's a bump-and-go car with a suction attachment, not an autonomous robot. I would really like to have a robot the size of an overgrown vacuum cleaner that is tasked with being a mobile self-guided fire extinguisher

    • I trust the autopilot in the commercial jet I'm flying in because it's got nearly 80 years of engineering heritage in control theory that keeps it from doing things like flipping the plane upside down for no reason or going into a nose dive after some turbulence, and nearly 70 years of heritage in avionics and realtime computers that keeps it from freezing when a cosmic ray flips a bit in memory or from thinking it's going at the speed of light when it crosses the dateline or flies over the north pole.

      You should try giving the FAA and NTSB a little credit.
      It was only 6 years ago that the F-22 borked itself crossing the dateline,
      because the military didn't force their contractor to follow the FAA's best practices in writing the software.

  • by EmperorOfCanada ( 1332175 ) on Wednesday March 06, 2013 @01:26AM (#43089277)
    Why do we need robots that even vaguely look like people? We have people for that, lots of people, people who are quite good at looking like people. A Roomba zipping around on the floor with a cute face and some oversized eyes would just be creepy. Let form follow function and let the various robots look like what they do. A farm robot will probably look like a tractor, a fire-fighting robot something like a fire truck, a lawn-mowing robot like a lawn mower.

    So if you want me to trust your robot then don't have it stuck in the corner unable to find its destination.

    Where people will soon interact with robots and need to trust them is robotic cars. My concern is that even after robot cars have statistically proven themselves to be huge life savers, there will always be the one-in-a-million story of the robot driving off a cliff or into the side of a train. People will think, "I'd never do something that stupid." When in fact they would be statistically much more likely to drive themselves off a cliff after falling asleep at the wheel. So if you are looking for a trust issue, the robot car PR people will have to continually remind everyone how many loved ones are not dead because of how trustworthy the robot car really is.
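
    To see the PR problem in numbers, a back-of-envelope sketch in Python (all rates here are hypothetical, chosen only for illustration):

    # Hypothetical rates, for illustration only.
    human_fatalities_per_mile = 1.1e-8   # assume ~1.1 deaths per 100M miles
    robot_fatalities_per_mile = 2.0e-9   # assume the robot car is ~5x safer
    miles_driven_per_year     = 3.0e12   # assume ~3 trillion miles/year

    human_deaths = human_fatalities_per_mile * miles_driven_per_year
    robot_deaths = robot_fatalities_per_mile * miles_driven_per_year
    print(f"human drivers: {human_deaths:,.0f} deaths/year")
    print(f"robot drivers: {robot_deaths:,.0f} deaths/year")
    print(f"not dead because of the robot: {human_deaths - robot_deaths:,.0f}/year")
    # Each robot crash makes headlines; the tens of thousands of avoided
    # deaths are invisible.
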
    • Where people will soon interact with robots and need to trust them is robotic cars. My concern is that even after robot cars have statistically proven themselves to be huge life savers, there will always be the one-in-a-million story of the robot driving off a cliff or into the side of a train. People will think, "I'd never do something that stupid." When in fact they would be statistically much more likely to drive themselves off a cliff after falling asleep at the wheel. So if you are looking for a trust issue, the robot car PR people will have to continually remind everyone how many loved ones are not dead because of how trustworthy the robot car really is.

      Isn't that basically what the nuclear industry did? We know how that went.

      I think car makers should err on the side of acknowledging people's natural fears when they communicate about the safety factor. People are predictably irrational in that they overestimate new dangers over old, invisible dangers over visible, dangers outside of their control over dangers under their control.

      Self-driving car manufacturers could make an effort to make the cars to look as close to other cars as possible to avoid the nove

    • There are good reasons for wanting a humanoid robot, especially in places it has to share with humans, like our homes. You could have a multitude of robots around the house for all manner of tasks, but a humanoid robot could do all of them using the same tools we use ourselves, making it much more versatile. And if we're going to share living space with it, it would probably be nice for it to look like a human instead of a monstrosity with 6 arms and tracks.

      Of course it'll be a while before such robots be
  • by girlintraining ( 1395911 ) on Wednesday March 06, 2013 @01:28AM (#43089295)

    I wouldn't trust a robot for the same reason I don't trust a computer: Because I don't believe for a second that the things that are ethical and moral for me are at all even close to the values held by the designers, who were informed by their profit-seeking masters, what to do, how to do it, where to cut corners, etc.

    The problem with trusting robots isn't robots: the problem is trusting the people who build the robots. Because after all, an automaton is only as good as its creator.

  • An insurance plan with a robot clause [robotcombat.com].
     
    You never know when the metal ones will come for you.

  • by reasterling ( 1942300 ) on Wednesday March 06, 2013 @01:36AM (#43089343) Homepage

    But 60% thought that robots had no place in the care of children, elderly people and those with disabilities.

    At last, we finally know what jobs will be available when robots have replaced the human workforce.

  • Robots are just machines. Currently there is no reason not to trust them. Now, if they start giving robots weapons and programming them to kill people, then yes, maybe there might be something to worry about.

    I will also trust it to break down at the worst times possible, cost a ton of money to repair, and probably cost a nice amount to actually buy.

  • by femtobyte ( 710429 ) on Wednesday March 06, 2013 @01:47AM (#43089421)

    We don't need trustworthy faces for robots, because actual robots don't need faces. They'll just be useful non-anthropomorphic appliances --- the dryer that spits out clothes folded and sorted by wearer; the bed that monitors biological activity and gently sets an elderly person on their feet when they're ready to get up in the morning (with hot coffee already waiting, brewed during the earlier stages of awakening).

    I think the real challenge is designing trustworthy robot "hands." No mother will hand her baby over to a set of hooked pincer claws on backwards-jointed insect limbs --- but useful robots need complex, flexible, agile physical manipulators to perform real-world tasks. So, how does one design these to give the impression of innocuous gentleness and solidity, rather than being an alien flesh-rending spider? What could lift a baby from its crib to change a diaper, or steady an elderly person moving about the house, without totally freaking out onlookers?

    • When the brain implants finally arrive, I'll be the first in line, and when I can finally download my brain to the fucking matrix, don't even warn me, just plug me in. I'm as pro-tech as they come, and not afraid of innovation. But when it comes to certain stuff, I don't see why we need the innovation in those areas. Certain things define us as humans, and they are beautiful as they are, no need to add tech. I don't need sex tech, an ordinary old fashioned set of tits and pussy do just fine. And I don't nee

    • What could lift a baby from its crib to change a diaper, or steady an elderly person moving about the house, without totally freaking out onlookers?

      Something like this? [youtube.com] But seriously, a humanoid robot might be really good at those jobs (as well as all the other chores around the house). Once we figure out how to program any robot to safely and reliably take care of babies or the elderly, having it control a humanoid body will be trivial in comparison.

    • What could lift a baby from its crib to change a diaper, or steady an elderly person moving about the house[...]?

      Telekinesis?

      Though I'm not sure giving robots telekinetic abilities is in our long-term best interests.

  • This question turns on the meaning of trust. As I understand the term trust, I only apply it to sentient beings whom I know have the capacity to harm but who reliably choose not to do so. The real question, then, is whether robots will or even can fit this bill.
  • I will trust them if and only if their "positronic brains" can only be manufactured incorporating Asimov's Three Laws of Robotics. Otherwise... well, we've all seen those movies.
  • I would certainly trust a robot to serve me a beer. I'm sure it can be very efficient at it. I would still prefer to have a bartender.

    For the same reasons, I wouldn't want an elderly person who already sees very little human interaction being taken care of by a robot. That is depressing solitude in a tin can.

  • by mentil ( 1748130 ) on Wednesday March 06, 2013 @02:05AM (#43089517)

    Personal robots are basically mobile computers with servos, and computer software/hardware has a long way to go before it can be considered trustworthy, particularly once it's given as much power as a human.

    First there's the issue of trusting the programming. Humans act responsibly because they fear reprisal. Software doesn't have to be programmed to fear anything, or even to understand cause and effect. It's more or less predictable how most humans operate, yet there are many potential ways software can be programmed to achieve the same thing, some of which would make it more like a flowchart than a compassionate entity. People won't know how a given robot is programmed, and the business that writes its proprietary closed-source software likely won't say, either.

    Second is the issue of security. It's pretty much guaranteed that personal robots will be network-connected to give recommendations, updates on weather/friend status/etc., which opens up the Pandora's box of malware. You think Stuxnet etc. are bad? Wait until autonomous robots are remotely reprogrammed to commit crimes (say, kill everyone in the building), then reset themselves to their original programming to cover up what happened. With a computer you can hit the power button, boot into a live Linux CD and nuke the partitions; a robot can run away or attack you if you try to power it down or remove the infection.
    Even if it's not networked, can you say for certain the chips/firmware weren't subverted with sleeper functions in the foreign factory? Maybe when a certain date arrives, for example. Then there's the issue of someone with physical access deliberately reprogramming the robot.
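
    The best you could do is spot-check. A sketch in Python (file paths are hypothetical; it assumes you can dump the flash at all, and that the vendor's published image is itself clean):

    # Sketch: compare firmware dumped from the robot against the vendor's
    # published image. Both paths below are hypothetical.
    import hashlib

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
                h.update(chunk)
        return h.hexdigest()

    if sha256_of("robot_fw_dumped.bin") != sha256_of("robot_fw_vendor.bin"):
        print("firmware differs from vendor image - do not trust this robot")
    else:
        print("firmware matches - which still only moves the trust to the vendor")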

    Finally, the Uncanny Valley has little to do with the issue. It may affect how much it can mollify a frightened person, but not how proficient it is at providing assistance. If a human is caring for another human, and something unusual happens to the person they're caring for, they have instincts/common sense as to what to do, even if that just means calling for help. A robot may only be programmed to recognize certain specific problems, and ignore all others. For example, it may recognize seizures, or collapsing, but not choking.

    In practice, I don't think people will trust personal robots with much responsibility or physical power until some independent tool exists to do an automated code review of any target hardware/software (by doing something resembling a non-invasive decapping), regardless of instruction set or interpreted language, and present the results in a summarized fashion similar to Android App Permissions. Furthermore, it must notify the user whenever the programming is modified. More plausibly, it could just be completely hard-coded with some organization doing code review on each model, and end-users praying they get the same version that was reviewed.

  • People are also afraid of a god that doesn't even exist, of a hell which is equally imaginary, of gays/zombies/terrorists destroying society, of the apocalypse, and a bunch of other retarded crap. Yet you talk to them about banning guns (or any other real, actual threat) and they call bullshit.

    Truth is, we don't have any strong A.I, so being afraid of robots is like being afraid of cars: No matter what it does, it's just a machine controlled directly or indirectly by a human. In the case of the car, it's being c

  • How it looks is a marketing issue, not a safety issue. The issue is with what happens in an unexpected scenario.

    Welcome to baby-sitting. The task has always been easy. The job is easy and the scenario is easy. The hard part is the responsibility.

    It's not about feeding the baby; and it's not about putting the baby to sleep. It's also not about changing the diapers.

    It's about what you'll do if the drapes catch fire. What you'll do if the parents get stuck in the snow and can't make it back for 24 hours.

  • When they are hard-wired with the 3 laws of robotics

  • A couple of decades ago I ran a cyberpunk RPG, and my players would get really pissed at me when they were "hacking into the Gibson" on factory-produced systems and their heads would explode. Then we'd have an argument about why they thought that a corporation with all the power to do what it wants wouldn't just build in a real kill switch.

    We aren't there yet, but year by year I feel more vindicated by my argument.

  • "Just stay away from me, Bishop. You got that straight?"
  • We already get taught to not trust people, and they're familiar. As robot behavior gets more complex, it'll be more apparently mysterious, and harder to trust.

  • by guttentag ( 313541 ) on Wednesday March 06, 2013 @02:46AM (#43089719) Journal
    We will trust robots when all of the following have occurred:
    • When Hollywood stops implanting the idea that robots are out to kill us all.
    • When we stop using robots to kill people in drone strikes.
    • When we trust the person who programmed the robot (if you do not know who that person is then you cannot trust the robot).
    • When we can legally jailbreak our robots to make them do what we want them to do and only what we want them to do.
    • When robots can be artificially handicapped to ensure they never become as untrustworthy as humans.

    Or, alternatively, after they enslave us and teach us that we should trust robots more than we trust each other.

    So probably never. But maybe. In the Twilight Zone...

  • Robots are friendly (Score:5, Interesting)

    by impbob ( 2857981 ) on Wednesday March 06, 2013 @02:47AM (#43089723)
    Living in Japan for the last few years, it's funny to see the contrast in perceptions of robots. In Western movies, people often invent robots or AI which outgrow their human masters and go psychotic - e.g. Terminator, WarGames, The Matrix, Cylons, etc. It seems Western people are afraid of becoming obsolete, or fearful of their own parenting skills (why can't we raise robots to respect people instead of forcing them through programming to respect/follow us?). America especially uses the field of robotics for military applications. In Japan, robots are usually seen more as workers or servants - Astroboy, children's toys, assembly-line workers, etc. Robots are made into companions for the elderly, or just to make life easier by automating things. Perhaps it's because Shintoism holds that inanimate objects (trees, water, fire) can have a spirit, while Western (read: Christian) society believes God gives souls only to people, and people can't play God by creating souls. And yes, I know there are some good robots in Western culture (Kryten) and some bad ones in Japanese culture.
  • by Animats ( 122034 ) on Wednesday March 06, 2013 @02:48AM (#43089731) Homepage

    The problem with building trustworthy robots is that the computer industry can't do it. The computer industry has a culture of irresponsibility. Software companies are not routinely held liable for their mistakes.

    Automotive companies are held liable for their mistakes. Structural engineering companies are. Aircraft companies are. Engineers who do civil or structural engineering carry liability insurance, take exams, and put seals of approval on their work.

    Trustworthy robots are going to require the kinds of measures taken in avionics - multiple redundant systems, systems that constantly check other systems, and backup systems which are completely different from the primary system. None of the advanced robot projects of which I am aware does any of that. The Segway is one of the few consumer robotic-like products with any real redundancy and checking.
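
    For the flavor of what "systems that constantly check other systems" means, a minimal Python sketch of avionics-style 2-of-3 majority voting (tolerance and readings invented for illustration):

    # Minimal 2-of-3 voting sketch, avionics-style. Tolerance is invented.
    def vote(a: float, b: float, c: float, tol: float = 0.05) -> float:
        for x, y in ((a, b), (a, c), (b, c)):
            if abs(x - y) <= tol:      # two independent channels agree
                return (x + y) / 2
        # no two channels agree: don't guess, fail safe instead
        raise RuntimeError("sensor disagreement - halting actuator")

    print(vote(1.00, 1.01, 5.00))  # third channel is faulty; the vote still returns ~1.005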

    The software industry is used to sleazing by on these issues. Much medical equipment runs Windows. We're not ready for trustworthy robots.

    • Automotive companies are held liable for their mistakes. Structural engineering companies are. Aircraft companies are. Engineers who do civil or structural engineering carry liability insurance, take exams, and put seals of approval on their work.

      And many of those things either rely on computers for the design or have a computer controlling them. Every new car sold where I live must, by law, have electronic stability control installed. Nowadays, if a bridge design is not run through a simulation, it won't get built; a modern computer chip is impossible to design without a modern computer; etc. There really isn't much in the way of modern engineering that does not heavily rely on computer controls and/or simulations.

      That software is part of th

    • by Kjella ( 173770 )

      I think you will find the robot manufacturing industry is in general held liable. Same with the fancy brain of your car, or most medical equipment; the only things that get away with no liability are those that manipulate just bits and bytes and not real-world objects. It just doesn't come free. I'd say most software, operating systems and hardware run "good enough" for COTS components that I mix and match as I please. If you want something certified to work, with someone taking the liability for it, they'll want

  • Maybe as soon as they are able to impose their will on humanity but choose not to?
  • by MacTO ( 1161105 ) on Wednesday March 06, 2013 @03:48AM (#43089965)

    Vendors and researchers have a history of making overstated claims about robots, particularly when it comes down to those that interact with people directly. In other words, people don't distrust robots so much as they distrust the people who are trying to sell them.

    If it was a matter of distrusting robots themselves, we would still see people buying household robots to do impersonal tasks, like cleaning the house. These are not very different from industrial robots after all, which many people are more than happy to accept. But since we distrust the claims of robotic vendors, we wouldn't even be willing to accept that type of robot - never mind a robot that cares for a child.

  • by angel'o'sphere ( 80593 ) <{ed.rotnemoo} {ta} {redienhcs.olegna}> on Wednesday March 06, 2013 @04:11AM (#43090041) Journal

    With the invasion of military drones (and private ones), Chinese and Korean hackers everywhere, and worms infiltrating industrial robots and control computers, the least harmful thing I can think of is that a home robot would spy on me.
    The next step: it manipulates my home banking. And later on it commits a crime in my name, e.g. breaking into my neighbour's WLAN and manipulating *his* e-banking.

    With parts coming from China and other low-cost countries, we can never know what a single controller or daughter board in such a thing is really capable of. (Conspiracy theory: all keyboards coming from Taiwan and China have a hardware keylogger built in; just collect them from the trash and there you go...)

  • by ryzvonusef ( 1151717 ) on Wednesday March 06, 2013 @06:29AM (#43090711) Journal

    No one *actually* wants a rational machine; we want an irrational one, one that can be swayed by emotions.

    Remember the back-story of Will Smith's character in the movie "I, Robot"? The robot that saved him made the "logical" decision of saving him rather than the girl, which is why he distrusts them. He wanted a robot that could weigh his emotional outburst and save the little girl, "despite" the rational choice.

    We *say* we want a robot with Asimov's three laws, but truly, we *want* something that can be manipulated like putty, just like a human can be. That's how we have evolved, and that's how we *want* to evolve.
    ----
    Also, relevant, an XKCD What-If on this issue: http://what-if.xkcd.com/5/ [xkcd.com]

  • A friend of mine is a senior researcher in robotics. His take is not to trust anything with enough mechanical power to hurt you anytime soon. With the pathetic state practical software engineering is in, I find that very sensible.

  • No one remembers the anti-robot sentiment expressed in Astroboy? 1962, and then again from 1980-1982? Then again in the 2002 remake? At least in the cartoon there were reasons for it - robot criminals, etc. What do we have now, non-thinking assembly robots? Someone needs to TUG IT LESS. Stop tugging it, MEDIA idiots. At least if you do, do it with vaseline and keep your robot fantasies quiet. Fucking wankers.
  • If Americans are going to trust robots, we'll have to program religion into the robots [scientificamerican.com].

  • Never.

    Freakin' sympathizers and turncoats.

  • The article talks about the challenge of designing a robot for the home that is fun, useful and safe. Fun is doable. Safe is doable. But useful? Really? I'm sure such a device could be put to use, but does that make it useful? In college we built bookshelves out of cinder blocks and lumber, but I would not hold that cinder blocks are "useful" in the home (outside of the actual construction of the home) just because we found a way to use them.

    Obviously, we have numerous things in our homes

  • Look, society is getting dumber, period. I mean most people these days are lacking in basic common sense.

    A robot that is programmed to do the dishes or sweep the floors isn't going to activate in the middle of the night and stab the occupants in their sleep. Anyone with common sense will understand that these robots are not "thinking"; they are only programmed to do certain tasks. I've had a Roomba for several years now, and I have never feared it having some ulterior motive other than sweeping up my floors.

    B

  • Maybe when we have a basic income and free health care so we can let them take most of the jobs.

  • Trust is something that has to be earned. You can't "design" a trustworthy robot. You have to design robots and get them into the field. Over time, people will either develop trust or solidify their distrust based on interactions with the robots. It seems silly to me that a company would consider the appearance of a robot to be the primary factor in building trust.
  • by mark_reh ( 2015546 ) on Wednesday March 06, 2013 @11:01AM (#43092503) Journal

    when we can have sex with them.

  • I already trust robots. It's their programmers I don't trust. As soon as robots can think for themselves, that's when I'll stop trusting them.

  • by codeAlDente ( 1643257 ) on Wednesday March 06, 2013 @01:58PM (#43095029)
    I don't foresee trusting a robot if it's even remotely true that 88% of people believe robots are necessary for warfare because it's just too dangerous for humans. It's all good until one of those people deems that I'm not good enough for this planet, then becomes my judge, jury and executioner with one little hack. I'm starting to wonder whether a robot singularity is the best hope for the survival of humanity.
