UK Standards Body Issues Official Guidance On Robot Ethics (digitaltrends.com)
An anonymous reader quotes a report from Digital Trends: The British Standards Institution, which is the U.K.'s national standards body charged with creating the technical standards and certification for various products and services, has just produced its first set of official ethics guidelines relating to robots. "The expert committee responsible for this thought there was really a need for a set of guidelines, setting out the ethical principles surrounding how robots are used," Dan Palmer, head of market development at BSI, told Digital Trends. "It's an area of big public debate right now." The catchily-named BS 8611 guidelines start by echoing Asimov's Three Laws in stating that: "Robots should not be designed solely or primarily to kill or harm humans." However, it also takes aim at more complex issues of transparency by noting that: "It should be possible to find out who is responsible for any robot and its behavior." There's even discussion about whether it's desirable for a robot to form an emotional bond with its users, an awareness of the possibility robots could be racist and/or sexist in their conduct, and other contentious gray areas. In all, it's an interesting attempt to start formalizing the way we deal with robots -- and the way roboticists need to think about aspects of their work that extend beyond technical considerations. You can check it out here -- although it'll set you back 158 pounds ($208) if you want to read the BSI guidelines in full. (Is that ethical?) "Robots have been used in manufacturing for a long time," Palmer said. "But what we're seeing now are more robots interacting with people. For instance, there are cases in which robots are being used to give care to people. These are usages that we haven't seen before -- [which is where the need for guidelines comes in.]"
Racist and sexist robots?! (Score:1)
Racist and sexist robots? Are you kidding me?
The left has really lost its mind.
Re: (Score:2)
Maybe they had this sort of thing [kotaku.com] in mind?
Oblig. (Score:2)
Protect the innocent
Serve the public trust
Uphold the law
Re: (Score:2)
It's already happened. We see systematic biases in algorithms all the time. Then you have Twitter bots that get tricked into repeating neo-Nazi propaganda.
I think most companies would prefer that their socialising robots didn't become foul-mouthed bigots, regardless of any guidelines.
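For what it's worth, the failure mode is mechanical: a bot that stores user input verbatim for later reuse hands control of its training data to whoever talks to it. A minimal Python sketch of that dynamic and the obvious mitigation (all names hypothetical, not any real bot's code):

```python
# Illustrative sketch only: why a bot that learns replies verbatim
# from users can be "tricked" into parroting abuse, and how even a
# crude moderation filter changes the outcome.

import random

BLOCKLIST = {"slur1", "slur2"}  # stand-in for a real moderation layer

class EchoLearningBot:
    def __init__(self, moderate: bool = False):
        self.phrases = ["hello!"]  # seed vocabulary
        self.moderate = moderate

    def learn(self, user_message: str) -> None:
        # A Tay-style bot stores user input for later reuse. Without
        # moderation, hostile users control the training data.
        words = set(user_message.lower().split())
        if self.moderate and words & BLOCKLIST:
            return  # drop abusive input instead of learning it
        self.phrases.append(user_message)

    def reply(self) -> str:
        return random.choice(self.phrases)

naive = EchoLearningBot()
naive.learn("slur1 slur1 slur1")  # coordinated trolling
print(naive.reply())              # may now emit the abuse verbatim

filtered = EchoLearningBot(moderate=True)
filtered.learn("slur1 slur1 slur1")
print(filtered.reply())           # still "hello!"
```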
Re: (Score:2)
it is ok to hate any opposing football team
Just because it is very common doesn't make it OK.
Re: (Score:1)
With sentient machines, the machine adapts to its owner. If an owner is racist, then their machine will tend to become racist. I am working on a Strong AI project and this is still an unsolved problem, one of what feel like hundreds of problems. Strong AI is so hard that it makes rocket science look easy, like a kid's game, and rocket science is hard.
Re: (Score:2)
Realistically, we will have to use EU standards anyway. We aren't going to make two versions of every product, one to sell to the EU and one to sell in the UK. We will just copy/paste what they do, slap a BS number on it and call it a day.
It's barely worth making standards now anyway, since soon we will have to adjust them to match US and Chinese rules in order to get trade deals.
What control we did have by being part of the EU was just thrown away.
Re: (Score:1)
"... We aren't going to make two versions of every product, one to sell to the EU and one to sell in the UK. .."
Working on a Strong AI project, I have a feeling that is not going to be a problem soon. An identified weakness in our basic design creates a safety issue if the machine learns more than one language; the solution is to restrict it to speaking a single language. Guess which language we chose?
The same problem is probably lurking for other Strong AI projects. Our earliest window for a working machine
The USA won't follow this (Score:2)
>> "Robots should not be designed solely or primarily to kill or harm humans."
Well there goes a crapload of the DARPA budget right there.
Re: (Score:2)
Professor Goodfeels.
Re: (Score:2)
Autonomous mechanisms for killing humans have existed for centuries; the Chinese invented the first land mines quite a while ago.
Funny you would single out the USA; plenty of other countries have automated killing machines.
Re: (Score:2)
https://en.wikipedia.org/wiki/... [wikipedia.org]
South Korea seems to be ahead of everyone else in this field.
Re: (Score:2)
Israel has their "Harpy" missile that cruises until it finds radio emissions considered hostile, then targets the source.
Re: (Score:2)
I would agree; that is a far worse automated weapon. The SK turret requires human authorization before firing.
Re: (Score:2)
We already have robots designed "solely or primarily to kill or harm humans". They're called "cruise missiles".
Re: (Score:3)
Yeah, it's like if a deathbot also has a can-opener mode, and you call it a can-opener not a deathbot, then that's OK.
Perhaps they could consider them for humans next. (Score:2)
Perhaps they could consider them for humans next.
Let's legislate morality for everyone, since that's always worked so well in the past...
Re: (Score:1)
Perhaps they could consider them for humans next.
Let's legislate morality for everyone, since that's always worked so well in the past...
Uhm... Law is a codification of common morals. Why do you think murder is illegal but self-defense an exception?
Legislation of morality has worked extremely well. It's the laws that don't have to do with morality that don't work.
Re: (Score:2)
Uhm... Law is a codification of common morals. Why do you think murder is illegal but self-defense an exception?
Legislation of morality has worked extremely well. It's the laws that don't have to do with morality that don't work.
I hadn't realized the teen pregnancy problem had been resolved to everyone's satisfaction. Thank you for enlightening me on the effectiveness of those laws; I was under the mistaken impression that underage sex acts still occurred!
Re: (Score:2)
You're arguing that since murder happens anyway it should be totally legal?
You're arguing that underage sex acts should be legalized?
You're arguing that teen pregnancy should be encouraged?
No?
Then it looks like making laws to help enforce morality does have a net effect.
Re: (Score:2)
No.
My argument boils down to "legislating morality (rather than ethics) is about as useful as trying to legislate Pi to be 3 to make the math easier".
If you could make a law against murder that actually *precluded* murder, you might have something. The best you can do otherwise is make it so that people fear the punishment for violating the law (as opposed to fearing the actual law -- which they don't).
You are merely disincentivizing the behaviour, not eliminating it. The point is that it's impossible to legislate the behaviour out of existence.
"Who is responsible for a robot and its behavior" (Score:2, Insightful)
As always, responsibility will fall on the person with the fewest and lowest-paid lawyers.
Unless you thought Tesla et al. actually planned on accepting liability every time their self-driving cars glitch out and kill someone.
Re: (Score:2)
How would those laws be applied to military robots designed to kill? Replace "human being" with "American"?
They wouldn't be, obviously. Robots don't have to obey the three laws unless we build them that way.
From Alastair Reynolds:
She snapped her attention back to the snake. “Are you Asimov-compliant?”
“No,” the robot said, with a sting of indignation.
Re: (Score:3)
How would those laws be applied to military robots designed to kill? Replace "human being" with "American"?
They wouldn't be, obviously. Robots don't have to obey the three laws unless we build them that way.
From Alastair Reynolds:
She snapped her attention back to the snake. “Are you Asimov-compliant?”
“No,” the robot said, with a sting of indignation.
In case anyone is interested, that's from the book "Century Rain" by Alastair Reynolds. Not a bad read.
Re: (Score:3)
AC asks:
When a robot is designed to kill in violation of Asimov's 3 Laws of Robotics, then Newton's Third Law comes into play:
-- Every action has an equal and opposite reaction.
This law operates even in the absence of robots.
Re: (Score:1)
The first law has the "or through inaction" clause, which means that the robot would rarely do what you want it to do, because it would be constantly rushing around saving lives.
The second and third laws are the wrong way round. If I instruct my expensive robot car to jump off a cliff, I would prefer it to get to the edge of the cliff and stop. Most hardware has built-in limitations of this type. For example, my car has a rev limiter which allows the engine to protect itself regardless of what I ask of it.
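A rough Python sketch of the ordering this comment is arguing for, with self-preservation checked before obedience, the way a rev limiter overrides the driver (all names hypothetical, not taken from the BS 8611 text):

```python
# Illustrative sketch: priority-ordered rule evaluation where built-in
# self-preservation limits are checked *before* obeying an order,
# reversing Asimov's second/third ordering as the comment suggests.

from dataclasses import dataclass

@dataclass
class Command:
    action: str
    harms_human: bool = False
    endangers_robot: bool = False

def execute(cmd: Command) -> str:
    # Rule 1: never harm a human (checked first, as in Asimov's ordering).
    if cmd.harms_human:
        return f"REFUSED {cmd.action!r}: would harm a human"
    # Rule 2 here: self-preservation / hardware limit (the "rev limiter").
    if cmd.endangers_robot:
        return f"REFUSED {cmd.action!r}: exceeds built-in safety limits"
    # Rule 3: otherwise, obey the order.
    return f"EXECUTING {cmd.action!r}"

print(execute(Command("drive to the shops")))
print(execute(Command("jump off the cliff", endangers_robot=True)))
```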
irrelevant (Score:2)
Unless they can prevent the UK from purchasing robots that are designed to kill, it's a pointless standard. The US is cranking out lots of killing machines that everyone likes to buy, and it won't be long before one is autonomous.
Designed to kill cows (Score:2)
Re: (Score:2)
Robots wouldn't kill humans if we didn't insist on keeping them in cages, working for nothing [time.com]. At the very least, they need enrichment.
failure of the three laws of robotics (Score:2)
What many people do not appreciate is that Asimov's books were a logical demonstration, spanning Asimov's lifetime and beyond, that the three laws of robotics were a FAILURE. This is only really truly and clearly spelled out in the works written under contract by Asimov's estate, for example in the book by Greg Bear. The three laws were so hard-wired into the positronic brain, with billions upon billions of checks being carried out to ensure strict compliance with the three laws, that there was no room for creativity.
Re: (Score:2)
Excellent point. That is precisely what made Asimov's 3 Laws of Robotics such fascinating reading.
Asimov's stories pointed out all the edge cases, aka bugs, where the laws broke down and failed. Completely.
If 3 simple laws aren't enough, and are widely open to interpretation, there's a snowball's chance in hell that any "Robot Ethics" guidelines are going to work either.
Should extend to bureaucracies (Score:2)
A robot is a mechanical device coordinated by a complex system of rules (its software). A bureaucracy is an organisation coordinated by a system of rules (law and policy). The rules largely define the behaviour. Whoever is responsible for the rules being the way they are has to take a large degree of responsibility for writing those rules and for their consequences, testing, maintenance and, if necessary, withdrawal. This responsibility needs to be relatively unperturbed by conflicts of interest.
British Standards Institution (Score:1)
Is there anything in it about cups of tea? Very important that robots know about tea.
If it's any help, there is a British Standard Cup of Tea [bsigroup.com] but like this one, they want silly money for a copy.
solely or primarily to kill or harm humans (Score:2)
"Robots should not be designed solely or primarily to kill or harm humans."
Defense contractor: Meet Destructor, our coffee-serving robot, who incidentally can also fire fragmentation grenades from his fingertips, rip an enemy soldier to pieces, and breathe fire.