UK Standards Body Issues Official Guidance On Robot Ethics (digitaltrends.com)

An anonymous reader quotes a report from Digital Trends: The British Standards Institution, the U.K.'s national standards body charged with creating technical standards and certification for various products and services, has just produced its first set of official ethics guidelines relating to robots. "The expert committee responsible for this thought there was really a need for a set of guidelines, setting out the ethical principles surrounding how robots are used," Dan Palmer, head of market development at BSI, told Digital Trends. "It's an area of big public debate right now." The catchily-named BS 8611 guidelines start by echoing Asimov's Three Laws, stating that "Robots should not be designed solely or primarily to kill or harm humans." They also take aim at more complex issues of transparency, noting that "It should be possible to find out who is responsible for any robot and its behavior." There's even discussion of whether it's desirable for a robot to form an emotional bond with its users, an awareness of the possibility that robots could be racist and/or sexist in their conduct, and other contentious gray areas. In all, it's an interesting attempt to start formalizing the way we deal with robots -- and the way roboticists need to think about aspects of their work that extend beyond technical considerations. You can check it out here -- although it'll set you back 158 pounds ($208) if you want to read the BSI guidelines in full. (Is that ethical?) "Robots have been used in manufacturing for a long time," Palmer said. "But what we're seeing now are more robots interacting with people. For instance, there are cases in which robots are being used to give care to people. These are usages that we haven't seen before -- [which is where the need for guidelines comes in]."

Comments:
  • Racist and sexist robots? Are you kidding me?

    The left really has lost its mind.

    • Maybe they had this sort of thing [kotaku.com] in mind?

    • Protect the innocent
      Serve the public trust
      Uphold the law

    • by AmiMoJo ( 196126 )

      It's already happened. We see systematic biases in algorithms all the time. Then you have Twitter bots that get tricked into repeating neo-Nazi propaganda.

      I think most companies would prefer it if their socialising robots didn't become foul-mouthed bigots, regardless of any guidelines.
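
      A minimal sketch of that kind of guard, assuming a hypothetical learning chatbot (the deny-list, function names, and canned fallback below are all illustrative, not any real vendor's API):

          import re

          # Hypothetical deny-list; a production system would use a trained toxicity classifier.
          BLOCKED_PATTERNS = [r"\bnazi\b", r"\bheil\b"]

          def is_safe(reply: str) -> bool:
              """Screen a candidate reply before the bot is allowed to post it."""
              return not any(re.search(p, reply.lower()) for p in BLOCKED_PATTERNS)

          def respond(candidate_reply: str) -> str:
              # Fall back to a canned line rather than echoing learned abuse.
              return candidate_reply if is_safe(candidate_reply) else "Let's talk about something else."

          print(respond("hello there"))          # passes the filter
          print(respond("some heil nonsense"))   # blocked; canned reply instead

      The point isn't that a deny-list solves bias (it obviously doesn't), but that "don't repeat what trolls teach you" is a design decision someone has to own, which is exactly what the guidelines' transparency clause is about.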

    • With sentient machines, the machine adapts to its owner: if an owner is racist, then their machine will tend to become racist. I am working on a Strong AI project, and this is still an unsolved problem for the future. One of what feels like hundreds of problems. Strong AI is so hard that it makes rocket science look easy, like a kid's game, and rocket science is hard.

  • >> "Robots should not be designed solely or primarily to kill or harm humans."

    Well there goes a crapload of the DARPA budget right there.

    • Well, DARPA could use the savings for robot projects that draw butterflies and daisies.
    • Autonomous mechanisms for killing humans have existed for centuries; the Chinese invented the first land mines quite a while ago.

      Funny you would single out the USA; plenty of other countries have automated killing machines.

    • We already have robots designed "solely or primarily to kill or harm humans". They're called "cruise missiles".

      • Among others. I read that the border between the Koreas is equipped with auto-firing machine guns. The funny thing is that these ethical guidelines show us clearly that we often want robots to overcome our own ethical boundaries. Another use of robots is sex toys. You bet that they will discriminate, and exactly how the user or manufacturer wants them to. Reading this article shows me that if there are ethical boundaries to be crossed, we often tend to approach them from the wrong side.
    • I like the defensive formulation of the rule. Not solely or primarily. Clearly some outstanding ethicists have been working on this.
      • by JustNiz ( 692889 )

        Yeah, it's like if a deathbot also has a can-opener mode and you call it a can-opener, not a deathbot, then that's OK.

  • Perhaps they could consider them for humans next.

    Let's legislate morality for everyone, since that's always worked so well in the past...

    • by Anonymous Coward

      Perhaps they could consider them for humans next.

      Let's legislate morality for everyone, since that's always worked so well in the past...

      Uhm... Law is a codification of common morals. Why do you think murder is illegal, but self-defense an exception?
      Legislating morality has worked extremely well. It's the laws that have nothing to do with morality that don't work.

      • Uhm... Law is a codification of common morals. Why do you think murder is illegal, but self-defense an exception?
        Legislating morality has worked extremely well. It's the laws that have nothing to do with morality that don't work.

        I hadn't realized the teen pregnancy problem had been resolved to everyone's satisfaction. Thank you for enlightening me on the effectiveness of those laws; I was under the mistaken impression that underage sex acts still occurred!

        • by e r ( 2847683 )
          Your argument boils down to "It isn't perfect! So do away with it!".

          You're arguing that since murder happens anyway it should be totally legal?
          You're arguing that underage sex acts should be legalized?
          You're arguing that teen pregnancy should be encouraged?

          No?

          Then it looks like making laws to help enforce morality does have a net effect.
          • No.

            My argument boils down to "legislating morality (rather than ethics) is about as useful as trying to legislate Pi to be 3 to make the math easier".

            If you could make a law against murder that actually *precluded* murder, you might have something. The best you can do otherwise is make it so that people fear the punishment for violating the law (as opposed to fearing the actual law -- which they don't).

            You are merely disincentivizing the behaviour, not eliminating it. The point is that it's impossible to actually preclude the behaviour; you can only punish it.

  • by Anonymous Coward

    As always, responsibility will fall on the person with the fewest and lowest-paid lawyers.

    Unless you thought Tesla et al. actually planned on accepting liability every time their self-driving cars glitch out and kill someone.

  • Unless they can prevent the UK from purchasing robots that are designed to kill, it's a pointless standard. The US is cranking out lots of killing machines that everyone likes to buy, and it won't be long before one is autonomous.

  • No, this robot with all these killing implements and ablative shielding is purely for murdering cows by the dozens. You would have to completely reprogram it by pressing this button to switch it over to killing humans!
  • What many people do not appreciate is that Asimov's books were a logical demonstration, spanning Asimov's lifetime and beyond, that the Three Laws of Robotics were a FAILURE. This is only really spelled out clearly in the works written under contract by Asimov's estate, for example in the book by Greg Bear. The Three Laws were so hard-wired into the positronic brain, with billions upon billions of checks being carried out to ensure strict compliance, that there was no room for creativity.

    • Excellent point. That is precisely what made Asimov's Three Laws of Robotics such fascinating reading.

      Asimov's stories pointed out all the edge cases, a.k.a. bugs, where the laws broke down and failed. Completely.

      If three simple laws aren't enough, and are wide open to interpretation, there's a snowball's chance in hell that any "Robot Ethics" guidelines are going to work either.
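
      To make that concrete, here is a toy encoding of the Three Laws as prioritized checks (entirely hypothetical: the boolean flags stand in for judgments like "does this harm a human?" that no real system can reliably make):

          from dataclasses import dataclass

          @dataclass
          class Action:
              name: str
              harms_human: bool     # Law 1: may not injure a human, or through inaction allow one to come to harm
              disobeys_order: bool  # Law 2: obey humans, unless that conflicts with Law 1
              destroys_self: bool   # Law 3: protect own existence, unless that conflicts with Laws 1-2

          def permitted(a: Action) -> bool:
              # Any violation, in strict priority order, forbids the action.
              return not (a.harms_human or a.disobeys_order or a.destroys_self)

          # The classic edge case: every option, including inaction, harms someone.
          options = [
              Action("shove bystander clear, bruising them", True, False, False),
              Action("do nothing; bystander is struck", True, False, False),
          ]
          print([a.name for a in options if permitted(a)])  # prints []: the Laws deadlock

      Three lines of "law", and the very first scenario already has no permitted action; that is the interpretation problem in miniature.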

  • A robot is a mechanical device coordinated by a complex system of rules (its software). A bureaucracy is an organisation coordinated by a system of rules (law and policy). The rules largely define the behaviour. Whoever is responsible for the rules being the way they are has to take a large degree of responsibility for writing those rules and for their consequences, testing, maintenance, and if necessary withdrawal. This responsibility needs to be relatively unperturbed by conflicts of interest.

    • Unless a positronic brain is created that has all the causal powers of neurons, and matches a human brain in functionality one-to-one. So the "software" isn't explicitly coded - it is built up through sensory experience. Could you even (ethically) sell such a robot? A synthetic being that is conscious of itself, free, and perhaps, moral.
  • Is there anything in it about cups of tea? Very important that robots know about tea.

    If it's any help, there is a British Standard Cup of Tea [bsigroup.com] but like this one, they want silly money for a copy.

  • "Robots should not be designed solely or primarily to kill or harm humans."

    Defense contractor: Meet Destructor, our coffee-serving robot, who incidentally can also fire fragmentation grenades from his fingertips, rip an enemy soldier to pieces, and breathe fire.
