Robot Warriors Will Get a Guide To Ethics
thinker sends in an MSNBC report on the development of ethical guidelines for battlefield robots. The article notes that such robots won't go autonomous for a while yet, and that the guidelines are being drawn up for relatively uncomplicated situations — such as a war zone from which all non-combatants have already fled, so that anybody who shoots at you is a legitimate target. "Smart missiles, rolling robots, and flying drones currently controlled by humans are being used on the battlefield more every day. But what happens when humans are taken out of the loop, and robots are left to make decisions, like who to kill or what to bomb, on their own? Ronald Arkin, a professor of computer science at Georgia Tech, is in the first stages of developing an 'ethical governor,' a package of software and hardware that tells robots when and what to fire. His book on the subject, Governing Lethal Behavior in Autonomous Robots, comes out this month."
Ethical War Robots? (Score:5, Insightful)
Weird. So this fails the Asimov criteria.
More importantly, would also necessarily fail the Golden Rule and Kant's Categorical Imperative.
If this is ethics, it's a pretty limited version of it, and to be honest it sounds more like rules of engagement than actual ethics.
Re:Been there, done that (Score:5, Insightful)
Not to mention... some of the assumptions aren't great. As the article itself points out, it's been a long time since there was a civilian-free battlefield.
As for the direct example of the robot locating a sniper and being offered the choice between a grenade launcher and a rifle — how does the robot know that the buildings surrounding it aren't military targets? How do they get classified? How does a hut differ from a mosque, and how does a hut differ from some elaborate sniper cover?
I don't think this is going to work out as planned.
Smart missiles (Score:2, Insightful)
There is no such thing as a smart missile unless it immediately destroys itself safely.
Jesus Christ (Score:4, Insightful)
If you drop a fucking robot into a village where a vast majority of the people don't know how to read, what do you think they're going to do? They'll shoot at it, get the backs of their heads blown off, and then everyone will say, "Well, the dumbass shouldn't have shot at the robot!"
If this war on terror is so important, sign up. If you can't, get your brother or sister or even better, sign your kids up. If they're not of age yet, they'd better be in the JROTC. Then you can talk to me about how using drones and missiles isn't the dominion of motherfucking cowards. It's for freedom lovers defending freedom!
And if you think it isn't, imagine what the headlines would be if China landed a few thousand autonomous tanks and droids in Los Angeles. Oh, but that's right. This is about principles for others to follow, and for us to ignore.
Illegal (Score:4, Insightful)
It should never be legal for a robot to "decide" to take lethal action.... Ever.
Humans (Score:5, Insightful)
Why is this a when question, rather than an if question?
Re:Been there, done that (Score:4, Insightful)
Humans aren't actually better at it than robots; humans are notoriously bad at estimating conditional probabilities.
anyone who shoots at you is a legit target (Score:3, Insightful)
In any war zone (regardless of who has fled and who hasn't), isn't anyone who shoots at you defined as a combatant and a legitimate target?
Is this a promo? (Score:4, Insightful)
Was this article an attempt to promote Terminator 4?
Re:Been there, done that (Score:5, Insightful)
Put another way, replace the robots with the WOPR, and the humans with, well, the humans in the bunkers.
Re:Meanwhile, back in reality (Score:3, Insightful)
What? This isn't true; there have been many battlefields where civilians aren't present.
Re:Jesus Christ (Score:3, Insightful)
Meh.. If the alternative is to bomb the village, a robot that shoots only those that shoot at it sounds like a great idea.
Re:I know this *seems* like a bad idea (Score:3, Insightful)
Another thing that's nice about restricting the ability to kill to humans is that a rogue soldier, no matter how well-trained, can be killed easily enough with the right application of force. We have no idea how advanced lethal robots could be. We don't have any reasonable guarantee that a rogue robot could be stopped.
Re:Robot Warriors Will Lose (Score:3, Insightful)
Good points, but I don't think this is about robotic soldiers lumbering over battlefields just yet. I think this will, at first, be more about semi-automated fire control systems and drones. A future Predator drone, for example, might decide to wait to fire its Hellfire missile if it thinks there are too many civilians in the area and the projected accuracy is too low due to interference. Or a point-defense system might see a kid walking around in a field and decide that he's not a threat, because he's not carrying any weapons or moving in a threatening manner.
Since our drones are still somewhat dumb, most of the ethical considerations are the responsibility of the programmers and project commanders. For example, that drone might be programmed to distinguish a straight dusty road with no other cars or civilians around from a twisty road in the middle of downtown with lots of civilians walking around and a poor chance of hitting the target.
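As a toy illustration of the kind of pre-programmed rule the parent describes, here's a minimal sketch of a "hold fire" check. Everything here — the function name, the thresholds, the inputs — is invented for illustration and isn't from any real fire-control system.

```python
# Hypothetical sketch of a rule-based hold-fire check: the decision weighs
# projected accuracy against the number of civilians near the target.
# All names and threshold values are invented for illustration.

def clear_to_fire(projected_accuracy, civilians_nearby,
                  min_accuracy=0.9, max_civilians=0):
    """Return True only if accuracy is high enough AND no civilians are at risk."""
    if civilians_nearby > max_civilians:
        return False   # twisty downtown road with pedestrians: hold fire
    if projected_accuracy < min_accuracy:
        return False   # too much interference, poor shot: wait
    return True        # straight, empty dusty road: engage

print(clear_to_fire(0.95, 0))   # → True  (empty road, clean shot)
print(clear_to_fire(0.95, 12))  # → False (civilians nearby)
print(clear_to_fire(0.50, 0))   # → False (accuracy too low)
```

In this sketch the ethics live entirely in who sets `min_accuracy` and `max_civilians` — which is the parent's point about programmers and project commanders carrying the responsibility.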
Besides, if a robotic soldier were pointing a gun at you and demanding that you surrender, it would probably be tracking you with multiple sensors and would blow your face off as soon as your finger twitched in the direction of your gun.
what is the fallback mode? (Score:3, Insightful)
what is the fallback mode / data link lost?
crush kill destroy?
It is a bad thing (Score:4, Insightful)
And yet, is it fundamentally a bad thing? We give less-than-stable humans that responsibility all the time.
Yes, it is fundamentally a very bad thing. First, instead of being limited to one trigger, that unstable human can now pull hundreds of triggers simultaneously. The robot will never question its orders; it will simply comply, no matter how morally questionable the order is.
Secondly, the one big way in which democracy helps maintain peace is that the people who will do the dying in any conflict are the ones who also effectively control the government through their votes. If Western democracies can suddenly send robots in instead, then they are far more likely to go to war in the first place, which is never a good thing.
Re:Illegal (Score:3, Insightful)
Sadly, your humble, kindly engineers will just build and maintain the thing. It'll be a committee of politico-military-management-morons that decide what instructions the thing is given. :-(
Well duh (Score:5, Insightful)
These are military robots. No military robots would fall under Asimov's list.
What I think some fail to remember is that Asimov was just a science fiction author. He wrote stories. Very compelling ones, and his place in modern literature is gigantic, but they are nonetheless just fictional stories. Thus his "three laws" have nothing to do with reality. They aren't natural laws or legal standards; they are just part of a story. Thus they have no standing in the world.
They may well be how Asimov would like to see robots work, they may well be how you'd like to see robots work, however they have nothing to do with how the military wants it to work. They are not a canon of any kind.
When a robot is developed for military purposes, it should be no surprise that the ethics are considered in that context. The whole point of it will be the ability to use deadly force if necessary. The programming then determines when that is and isn't OK.
So please, let's have all us geeks lay off the Asimov "three laws" when it comes to robots. Every time something like this comes up people start talking about that like it matters to anyone. No, it really doesn't.
Re:We're doomed even if it is flawless (Score:3, Insightful)
By your logic, shooting someone at point-blank range would be significantly more difficult than shooting them from 200 yards away, which would be more difficult than shooting them with battlefield artillery from 1 mile away[...]
Correct, if we're talking about killing the same 1 target. Stabbing someone to death has to be far more difficult than watching a special ops team on a monitor halfway around the world.
The logic doesn't follow, because as you move farther away and impact more people, the decision becomes more and more difficult.
You introduced a second variable here besides distance... "more people". We don't drop nukes because we know that every time we do that we kill tens or hundreds of thousands of people. People we don't really want to kill are going to die. Lots of them. On the other hand, we frequently send cruise missiles and drop smart bombs because we can kill with them far more... discriminately than with MIRVed nukes on an ICBM.
For some reason, you are assuming that physical separation suddenly turns people into sociopaths.
Nobody said that. You've extrapolated too far all on your own.
Re:Well duh (Score:3, Insightful)
I think you missed the point of the stories. It's about what happens to robots who are built with the best intentions. Science Fiction is speculative fiction - the proverbial "What if?" He didn't try to predict what was going to happen - he tried to figure out what would happen if certain things were in place.
Verhoeven might have been the better prognosticator, but Asimov was the better guide.
Re:Been there, done that (Score:3, Insightful)
I don't see a technical reason why a robot couldn't get that, too. It would just be a negative score for any killed human, which would enter the equation when making the decision.
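The "negative score per killed human" idea above can be sketched as a simple expected-utility calculation. To be clear, this is a hypothetical illustration — the penalty weight, the option names, and the numbers are all made up, and nothing here reflects how an actual ethical governor scores actions.

```python
# Hypothetical sketch: each candidate action gets a utility score, and
# expected civilian deaths enter the equation as a heavy negative weight.
# All values are invented for illustration.

CIVILIAN_PENALTY = -1000.0   # cost assigned to each expected civilian death

def action_score(military_value, expected_civilian_deaths):
    return military_value + CIVILIAN_PENALTY * expected_civilian_deaths

def choose_action(actions):
    """Pick the highest-scoring action; return None if every option scores below zero."""
    best = max(actions, key=lambda a: action_score(a[1], a[2]))
    return best[0] if action_score(best[1], best[2]) >= 0 else None

# (name, military_value, expected_civilian_deaths) — echoing the thread's
# grenade-launcher-vs-rifle example
options = [
    ("grenade launcher", 50.0, 0.3),  # destroys the sniper's cover, risks bystanders
    ("rifle",            40.0, 0.0),  # precise single shot
]
print(choose_action(options))  # → rifle: 50 - 300 = -250 vs. 40 + 0 = 40
```

The interesting design question is the one the thread keeps circling: who picks `CIVILIAN_PENALTY`, and who produces the `expected_civilian_deaths` estimate the score depends on.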
Re:We're doomed even if it is flawless (Score:3, Insightful)
For some reason, you are assuming that physical separation suddenly turns people into sociopaths.
Well, yeah, because that's proven. Remember the Milgram experiment [wikipedia.org]?
Re:A better idea (Score:3, Insightful)
Imagine that, say, China attacks USA using robots.
Still thinking that excessive casualties are OK?
Re:A better idea (Score:3, Insightful)
General David Petraeus. See here [google.co.uk].