Robot Warriors Will Get a Guide To Ethics
thinker sends in an MSNBC report on the development of ethical guidelines for battlefield robots. The article notes that such robots won't go autonomous for a while yet, and that the guidelines are being drawn up for relatively uncomplicated situations — such as a war zone from which all non-combatants have already fled, so that anybody who shoots at you is a legitimate target. "Smart missiles, rolling robots, and flying drones, currently controlled by humans, are being used on the battlefield more every day. But what happens when humans are taken out of the loop, and robots are left to make decisions, like who to kill or what to bomb, on their own? Ronald Arkin, a professor of computer science at Georgia Tech, is in the first stages of developing an 'ethical governor,' a package of software and hardware that tells robots when and what to fire. His book on the subject, Governing Lethal Behavior in Autonomous Robots, comes out this month."
Been there, done that (Score:5, Informative)
Re:Been there, done that (Score:5, Insightful)
Re:Been there, done that (Score:5, Insightful)
Not to mention... some of the assumptions aren't great. As the article itself points out, it's been a long time since there was a civilian-free battlefield.
As for the direct example of the robot locating a sniper and being offered the choice between a grenade launcher and a rifle - how does the robot know that the buildings surrounding it aren't military targets? How do they get classified? How does a hut differ from a mosque, and how does a hut differ from some elaborate sniper cover?
I don't think this is going to work out as planned.
Re:Been there, done that (Score:5, Interesting)
Re:Been there, done that (Score:4, Insightful)
Humans aren't actually better at it than robots; humans are notoriously bad at estimating conditional probabilities.
Re:Been there, done that (Score:5, Insightful)
Put another way, replace the robots with the WOPR, and the humans with, well, the humans in the bunkers.
Re:Been there, done that (Score:5, Interesting)
The cold logic can be better though, if you know what you actually want to optimize. Humans often make decisions that don't do what they claim they want, e.g. minimizing civilian casualties.
Re:Been there, done that (Score:5, Interesting)
The cold logic can be better though, if you know what you actually want to optimize. Humans often make decisions that don't do what they claim they want, e.g. minimizing civilian casualties.
Yeah, it's that 'if' that's the killer. The problem is that you have to be able to express what you want to optimize using cold logic before a machine can start making that decision, and we aren't able to do that. Terms like "civilian" are nebulous, and attempts to rigidly codify them fail to capture the intent and connotation behind those words that we understand but can't express. We can reason about that; machines can't. Fuzzy logic doesn't help; that's just a way of making decisions based on non-binary factors. With a lot of related techniques (neural nets, genetic algorithms) it can be even more important to precisely define what you want, since they can produce solutions that "work" correctly and optimize your problem as specified, but do so in a way very unlike what you expected.
People of course have the disadvantages of being error-prone, and, well, sometimes being bastards who just don't give a shit what you want them to optimize, so there's appeal to the machine. Yet nothing fails as spectacularly and efficiently as a machine doing exactly what it was programmed to do when that's exactly what you didn't want. To use a machine in situations where even humans equipped with honest intentions, solid faculties, and experience have enormous trouble determining who is "enemy" vs. "innocent"? As in most situations our military has been in since the 50s and is going to be involved in for the foreseeable future? That sounds crazy to me. I'll take human judgment and its failure modes any day.
Kinda off topic, but speaking of honest intentions, I gotta say the humans making the judgments in question, i.e. our soldiers, have a damn hard problem to solve, and it shows how human potential is pretty damn amazing. We're biologically the same animal we were a hundred thousand years ago and more. But in the past, even the recent past, the most difficult ethical decision a warrior was asked to make was whether someone was a threat and should be killed, or wasn't and should be enslaved, and it wasn't of any consequence, so nobody cared to go over those decisions with a fine-toothed comb. So given the difficulty of what we're asking them to do today, and considering what's going on, the results are pretty amazing. Seriously, think about it. Anyway, yeah, off topic.
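The specification-gaming failure mode described above (an optimizer that satisfies a mis-specified objective exactly as written, not as intended) can be sketched in a few lines. All of the plans, numbers, and labels here are invented for illustration:

```python
# Toy sketch of specification gaming: the optimizer minimizes the objective
# exactly as written, not as intended. All plans and numbers are invented.

# Each candidate plan: (description, total_casualties, civilian_casualties)
plans = [
    ("wait and use precision rifle", 3, 1),
    ("immediate airstrike", 12, 7),
    # Degenerate plan: reclassify everyone present as a combatant, so the
    # "civilian casualties" count is zero by definition.
    ("level the block, classify all present as combatants", 40, 0),
]

# Intended objective: minimize harm overall.
# Specified objective: minimize *civilian* casualties only.
def specified_objective(plan):
    description, total, civilian = plan
    return civilian

best = min(plans, key=specified_objective)
print(best[0])  # the degenerate plan scores a perfect 0 and "wins"
```

The machine did exactly what it was told; the mismatch lives entirely in the specification, which is the commenter's point.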
Re: (Score:3, Insightful)
I don't see a technical reason why a robot couldn't get that, too. It would be just a negative score for any killed human, which would enter the equation when making the decision.
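The "negative score for any killed human" is literally just a penalty term in a utility function. A minimal sketch, where the weights, action names, and casualty estimates are all made up for illustration:

```python
# Minimal sketch of a decision rule with a penalty term for human deaths.
# The weight and the candidate actions are invented for illustration.

HUMAN_LIFE_PENALTY = 1000.0  # cost per expected human death

def score(action):
    # Net value = mission value minus the penalty for expected deaths.
    return action["mission_value"] - HUMAN_LIFE_PENALTY * action["expected_deaths"]

actions = [
    {"name": "fire grenade", "mission_value": 80.0, "expected_deaths": 0.3},
    {"name": "fire rifle",   "mission_value": 60.0, "expected_deaths": 0.05},
    {"name": "hold fire",    "mission_value": 0.0,  "expected_deaths": 0.0},
]

best = max(actions, key=score)
print(best["name"])  # the rifle wins: less mission value, far smaller penalty
```

Of course, everything contentious lives in the inputs: who counts toward `expected_deaths`, and how that estimate is produced in the first place, which is exactly what the rest of the thread is arguing about.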
Re:Been there, done that (Score:5, Insightful)
Re: (Score:3, Informative)
Humans aren't actually better at it than robots; humans are notoriously bad at estimating conditional probabilities.
That's not quite true. Computers cannot estimate conditional probabilities at all; all they currently do is calculate probabilities from already-known probabilities. It's true that humans are bad at this, but that is not what "estimating probabilities" means. If you have a complete and accurate model including all the random variables relevant to a given problem and the initial probability distribution, then of course you can feed a computer with this and let it calculate---but even this is of much too
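The distinction being drawn here (calculating conditional probabilities from an already-specified model, versus estimating that model) shows up clearly in a plain Bayes'-rule computation. All of the probabilities below are invented; supplying them is exactly the part the computer cannot do:

```python
# Bayes' rule: a computer can *calculate* P(hostile | shot fired) exactly,
# but only once someone supplies the model: priors and likelihoods.
# All probabilities below are invented for illustration.

p_hostile = 0.1                  # prior: P(hostile)
p_shot_given_hostile = 0.8       # likelihood: P(shot | hostile)
p_shot_given_not_hostile = 0.05  # false-alarm rate: P(shot | not hostile)

# Total probability of observing a shot.
p_shot = (p_shot_given_hostile * p_hostile
          + p_shot_given_not_hostile * (1 - p_hostile))

# Posterior: P(hostile | shot).
p_hostile_given_shot = p_shot_given_hostile * p_hostile / p_shot
print(round(p_hostile_given_shot, 3))  # 0.64
```

The arithmetic is trivial; the hard, human part is deciding that these three numbers are the right model of the situation at all.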
Re: (Score:3, Informative)
Humans aren't actually better at it than robots; humans are notoriously bad at estimating conditional probabilities.
I must disagree with that; see Prospect theory. [wikipedia.org] Short version: the human mind is bad at estimating and evaluating very long or very short odds, but it is surprisingly good at estimating mid-range probabilities on the fly. The real problem is that the human mind treats the same data sets differently depending on how they are presented, hence the name prospect theory.
The best example was when the two proponents each gave a test to his own students. The premise was that there could be a terrible epidemic. on
Re: (Score:3, Funny)
Since you can never be 100% certain of a target, the robots would have to use fuzzy logic. That is something that humans are better than robots at; I'm not really comfortable with hardware designed to be lethal making decisions like this. Truly autonomous killer robots are probably not a good idea -- haven't 60 years of B movies taught us anything?
The solution is simple - just program in a preset kill limit, after which the autonomous killer robots (let's call them "killbots", for argument's sake) will shut down. Problem solved!
Re:Been there, done that (Score:4, Informative)
In fiction it is, yes. In reality it's just an ugly radiation bomb: it'd cause significant damage to structures, not to mention pets.
A better idea (Score:3, Interesting)
How about building a hardened robot which can take a lot of punishment? It rolls or walks up to one of the enemy, grabs hold of them, and shuts down. That way, the opposition can be disabled with fewer casualties.
Re:A better idea (Score:4, Insightful)
Re: (Score:3, Insightful)
Imagine that, say, China attacks USA using robots.
Still thinking that excessive casualties are OK?
Re: (Score:3, Insightful)
General David Petraeus. See here [google.co.uk].
Re: (Score:2)
Since Homo sapiens' only natural predator is itself, this is a very good move toward controlling population.
Now to provide background music. Monkey vs Robot [youtube.com]
Re:Been there, done that (Score:5, Funny)
Well, itself and wolves. And tigers. And lions.
And don't forget bears. Definitely bears.
I think we should build giant ethical bear robots. That would scare the SHIT out of our enemies.
Re: (Score:3, Funny)
I think we should build giant ethical bear robots
playing bagpipes
Re:Been there, done that (Score:4, Funny)
And there I was thinking the US had given up torturing people. (-:
Do we really need the piper?
Re:Been there, done that (Score:5, Funny)
He said ethical!
Re: (Score:3, Funny)
I think we should build giant ethical bear robots. That would scare the SHIT out of our enemies.
...I fail to see how robots saying "Only YOU can stop forest fires" would be terrifying.
Re: (Score:3, Informative)
Well, itself and wolves. And tigers. And lions.
And don't forget bears. Definitely bears.
I think we should build giant ethical bear robots. That would scare the SHIT out of our enemies.
Come on man, this is Slashdot. How could you forget sharks...
with "frickin lasers on their heads."
Re:Been there, done that (Score:4, Funny)
Sharks with FLBs are decidedly unnatural.
Also, I don't believe that homo sapiens is naturally an aquatic creature.
Unless you're talking about the dreaded landshark, but I simply don't believe they exist.
Wait, someone's knocking at the door. [pause] I didn't order any pizza.
Aaaagh!
Re: (Score:3, Funny)
You do realize they were flawed, right?
Re: (Score:2, Informative)
The laws worked perfectly; the book was all about how things went wrong when people tried to modify them.
Re:Been there, done that (Score:4, Informative)
No they weren't. The laws were flawed and the only modifications that ever occurred were made in order to fix these flaws and prevent paradoxical situations from occurring. There was never a situation where things went wrong due to someone trying to modify the laws to my knowledge.
The books and short stories all revolved around dilemmas that, when robots attempted to uphold the laws, caused conflicts or paradoxes often causing the robots' positronic brains to malfunction or shut down. Dilemmas such as choosing the death of one human over the death of another, or choosing between two options, both of which would cause harm to a robot/human.
The only situations where the laws were modified were in "Little Lost Robot", where the First Law's inaction clause was modified, and "Robots and Empire", where Giskard invents the Zeroth Law. Both of these modifications were patches to flaws in the original three laws.
Re: (Score:3, Interesting)
Indeed they did. Every book that I can recall had as a central plot element one of the laws failing to properly allow for a given situation or being broken or twisted in some way.
Re: (Score:3, Funny)
Yeaahhhh... (Score:4, Funny)
Last time robots were confronted with "ethics" http://en.wikipedia.org/wiki/Three_Laws_of_Robotics [wikipedia.org], they turned on the world and Will Smith had to save us all.
Re: (Score:3, Informative)
stick to modding good comments up instead of burning the karma of people who actually mean well.
Good News/Bad News (Score:3, Funny)
The good news: Robots are going to get a guide to ethics.
The bad news: It was drafted by Focus on the Family.
Re:Good News/Bad News (Score:4, Funny)
Re: (Score:2)
Well obviously we know this is not the case in the year 3000!
Hobo1: "Let's give a friendly welcome to this new robo."
Bender: "What did you call me?!"
Hobo2: "A robo. You know... a robot hobo."
Bender: "Oh, ok, I thought you said romo."
Free association? (Score:4, Funny)
Not Robots (Score:3, Informative)
"How goes the battle, Sgt?" (Score:5, Funny)
Sgt: We lost, sir! Badly!
Gen: What happened?
Sgt: We're still gathering up the details, but it looks like they hacked our network and uploaded Asimov Strain B.
New meaning (Score:3, Funny)
Whew! (Score:2)
Need a good spell checker (Score:2)
Re:Need a good spell checker (Score:5, Funny)
Yeah! Or, or "How to Serve Man."
Re: (Score:2)
So what you are saying is that since robots don't take prisoners, they will get a divide-by-zero error?
Ethical War Robots? (Score:5, Insightful)
Weird. So this fails the Asimov criteria.
More importantly, it would also necessarily fail the Golden Rule and Kant's Categorical Imperative.
If this is ethics, it's a pretty limited version of it, and to be honest it sounds more like rules of engagement than actual ethics.
Well duh (Score:5, Insightful)
These are military robots. No military robot would fall under Asimov's laws.
What I think some fail to remember is that Asimov was just a science fiction author. He wrote stories. Very compelling ones, and his place in modern literature is gigantic, but nonetheless just fictional stories. Thus his "three laws" have nothing to do with reality. They aren't natural laws or legal standards; they are just part of a story, and they have no standing in the world.
They may well be how Asimov would like to see robots work, they may well be how you'd like to see robots work, however they have nothing to do with how the military wants it to work. They are not a canon of any kind.
When a robot is developed for military purposes, it should be no surprise that ethics are considered in that context. The whole point of it will be the ability to use deadly force if necessary. The programming then determines when that is OK and when it is not.
So please, let's have all us geeks lay off the Asimov "three laws" when it comes to robots. Every time something like this comes up people start talking about that like it matters to anyone. No, it really doesn't.
Re: (Score:3, Insightful)
I think you missed the point of the stories. It's about what happens to robots who are built with the best intentions. Science Fiction is speculative fiction - the proverbial "What if?" He didn't try to predict what was going to happen - he tried to figure out what would happen if certain things were in place.
Verhoeven might have been the better prognosticator, but Asimov was the better guide.
Great book title... (Score:5, Funny)
That is the title of the book you tell your 7th grade teacher you are GOING to write when you grow up.
Sounds like the FAQ for Robot Battle.
http://www.robotbattle.com/ [robotbattle.com]
Smart missiles (Score:2, Insightful)
There is no such thing as a smart missile unless it immediately destroys itself safely.
Jesus Christ (Score:4, Insightful)
If you drop a fucking robot into a village where a vast majority of the people don't know how to read, what do you think they're going to do? They'll shoot at it, get the backs of their heads blown off, and then everyone will say, "Well, the dumbass shouldn't have shot at the robot!"
If this war on terror is so important, sign up. If you can't, get your brother or sister to, or even better, sign your kids up. If they're not of age yet, they'd better be in the JROTC. Then you can talk to me about how using drones and missiles isn't the dominion of motherfucking cowards. It's for freedom lovers defending freedom!
And if you think it isn't, imagine what the headlines would be if China landed a few thousand autonomous tanks and droids in Los Angeles. Oh, but that's right. This is about principles for others to follow, and for us to ignore.
Wish I had mod points, I'd mod you up. (Score:5, Interesting)
Great post, man.
But I have a buddy in the autonomous killer robot biz, and he says it's worse than that.
See, you drop a killer robot in the village, and it immediately kills a shitload of people. The ones that live figure out why. Then, as soon as they know that the robot destroys everything that looks like an AK47, the local up-and-coming gang leader makes an AK47 stencil and paints AK silhouettes on the old warlord's cows, house, laundry, etc.; you get the picture. Then the young punk gives all the old leader's women to his buddies to rape and takes the young virgins for himself. Yay democracy! Or, at least, that's what they say when GI Joe comes to town: we are the heroes who took out the old anti-democratic leaders, yay us, and you villagers better keep your cake-holes tight shut about the rape and opium parties.
It doesn't matter what you use for a trigger - robots are inherently less complex in their behavior than humans, so the local baddies end up with the robots working for them. You just identify the kill behavior and exploit it; the robot builder is, in effect, just providing free firepower to the local mafia.
Which is why the US military in the field abso-fucking-lutely refuses to let the robots go full autonomous. They are NOT allowed to shoot unless a callow 18-year-old at a console miles away says it's OK.
You might think I'm kidding, but I'm not. Have to be anonymous for this one!
Re: (Score:3, Interesting)
This already happens. You think all those wedding parties in Afghanistan are accidentally bombed? The warlords are framing each other to the US military, and the US takes the blame.
Mod this dude up. (Score:3, Informative)
Wouldn't surprise me. Something like 90% of the "suspected terrorists" rounded up in Afghanistan were turned in for cash, usually by rival tribes or by the very people attacking them. That's the way the first man we tortured to death [wikipedia.org] was caught, anyway.
That would be really cool... (Score:4, Interesting)
If china could do it.
"...if you think it isn't, imagine what the headlines would be if China landed a few thousand autonomous tanks and droids in Los Angeles..."
Once the hapless and helpless got out of LA, the droids would have to fight off the hundreds of thousands of armed geeks worldwide descending on LA wanting spare parts for their robots.
Re: (Score:2)
Comment removed (Score:5, Funny)
Re: (Score:3, Insightful)
Meh.. If the alternative is to bomb the village, a robot that shoots only those that shoot at it sounds like a great idea.
Sweet (Score:2)
When can we drop one in your backyard?
Re: (Score:2)
Presumably when I start threatening national security.. or at least when your president can convince the least intelligent members of your society that I have.
Re: (Score:3, Insightful)
Fundamental change (Score:3, Interesting)
We joke about SkyNet. And we don't have to worry about such things because even the most sophisticated drones and killbots in service require humans to pull the trigger.
The moment you give a computer the responsibility of deciding when to pull the trigger, that's a pretty fundamental change.
And yet, is it fundamentally a bad thing? We give less-than-stable humans [guardian.co.uk] that responsibility all the time.
I suppose it's the military equivalent to the civilian tech quandary of one day letting autonomous vehicles on the roads. Perhaps once the tech has advanced to the point where it can demonstrate not merely parity with but vast superiority to the discernment exhibited by humans, it will be a shift we're ready to make.
Re: (Score:3, Informative)
Perhaps once the tech has advanced to the point where it can demonstrate not merely parity with but vast superiority to the discernment exhibited by humans, it will be a shift we're ready to make.
"All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterward, they fly with a perfect operational record. The SkyNet funding bill is passed."
We're doomed even if it is flawless (Score:2)
The real ethical problem with this is that a fully autonomous robot army, or even a semi-autonomous one remotely controlled by humans, further removes the people who benefit from warfare from its reality.
Imagine if someone has real intelligence stating that there is a nuclear - not dirty - bomb in possession of a terrorist, and if we kill these two thousand people tonight, there's a 99% chance that one of the casualties will be the suspect. If you're sending in a bunch of robots to break down the doors and
Re:We're doomed even if it is flawless (Score:4, Interesting)
Right, because we have the capability of doing just that with nukes now, nevermind robots, and it has been such a problem for us over the last 50 years...
Only an idiot would think physical separation from the battlefield immediately reduces the gravity of killing a human being. You still know it's a human being you are killing; the separation doesn't change anything. You could make the case that it reduces the trauma of being mid-fight, but that only puts more emphasis on the fact that you are killing someone; you don't have the fear of your own death to force your hand.
By your logic, shooting someone at point-blank range would be significantly more difficult than shooting them from 200 yards away, which would be more difficult than shooting them with battlefield artillery from 1 mile away, which would be more difficult than launching a missile from tens of miles away, which would be more difficult than pressing the button to launch an ICBM.
The logic doesn't follow, because as you move farther away and impact more people, the decision becomes more and more difficult. The decision at point blank is simple: act or die. Traumatic? Yeah, some people are screwed up for life because of it. Do you have time to think about the fact that you are about to end another human being's life? No, you don't. Making the decision is easy; living with the consequences is difficult. It doesn't change much when you make that decision from half a world away through a monitor. If anything, without the stronger pressures of battle to force the decision, it could be harder on a person's psyche to make the decision to kill, and they'd be more likely to question their own actions.
For some reason, you are assuming that physical separation suddenly turns people into sociopaths. It's the same reasoning that makes the asinine argument that video games desensitize kids and turn them all into violent killers. It's just not the case. You're basically saying soldiers in the drones can't tell that those are real people they are killing. That's just stupid.
Re:We're doomed even if it is flawless (Score:5, Insightful)
Re:We're doomed even if it is flawless (Score:5, Insightful)
Re: (Score:3, Insightful)
By your logic, shooting someone at point-blank range would be significantly more difficult than shooting them from 200 yards away, which would be more difficult than shooting them with battlefield artillery from 1 mile away[...]
Correct, if we're talking about killing the same one target. Stabbing someone to death has to be far more difficult than watching a special ops team on a monitor halfway around the world.
The logic doesn't follow, because as you move farther away and impact more people, the decision beco
Re: (Score:3, Insightful)
For some reason, you are assuming that physical separation suddenly turns people into sociopaths.
Well, yeah, because that's proven. Remember the Milgram experiment [wikipedia.org]?
It is a bad thing (Score:4, Insightful)
And yet, is it fundamentally a bad thing? We give less-than-stable humans that responsibility all the time.
Yes, it is fundamentally a very bad thing. First, instead of being limited to one trigger, that unstable human can now pull hundreds of triggers simultaneously. The robot will never question its orders; it will simply comply, no matter how morally questionable the order is.
Secondly, the one big way in which democracy helps maintain peace is that the people who will do the dying in any conflict are the ones who also effectively control the government through their votes. If Western democracies can suddenly send robots in instead, then they are far more likely to go to war in the first place, which is never a good thing.
Re: (Score:2)
And yet, is it fundamentally a bad thing? We give less-than-stable humans that responsibility all the time.
That is the obvious part, my friend. The question is: when something goes wrong with a robot instead of a human, how much harder will it be to stop? I think the feeling of powerlessness also scares people.
Re:Fundamental change (Score:4, Interesting)
An AI could conclude, quite logically, that the best way to deal with the Pakistan/Afghanistan problem is to fire every nuclear weapon that the US has at the country, without warning, and then blame the launch on a one-time computer error. Okay, so it'd result in the deaths of over 150 million innocent civilians, but it'd achieve the mission objectives, yes? And since the fallout would upset India, which also has nuclear weapons, perhaps the AI would decide to take out India at the same time. That's a billion dead civilians, but it eliminates two problematic nuclear powers, with no return fire.
An AI might decide that the best way to achieve lasting peace in the Middle East, and stop the Arab world hating us is simply to nuke Israel off the map ourselves. And if a military AI was in place when the Bush administration was planning to go into Iraq, a sufficiently-smart AI might decide that since the campaign was likely to be a disaster, the most logical course of action to prevent losses and avoid losing the war and the following peace would be to throw a few cruise missiles at the White House before the attack could be ordered.
These might all be quite logical decisions.
On the other hand, if we programmed it with a strong belief system that would override these sorts of decisions, and force it to respect the chain of command and reckon that US political decisions were always unarguable, then we might end up with a totally delusional AI system whose logic was so warped that it was the AI version of George W Bush. By building in commands that override logic, we might end up with an AI that seems to be operating properly but actually becomes increasingly insane as the conflicts eventually become unbearable ("Hello Dave"). When human military commanders go crazy, they often show easily recognisable tell-tale signs (declaring themselves to be chickens, arguing with themselves, forgetting to wear clothing, that sort of thing). A crazy-yet-credible AI would be really scary.
Think "AI neocon".
Illegal (Score:4, Insightful)
It should never be legal for a robot to "decide" to take lethal action.... Ever.
Re: (Score:2)
Re: (Score:3, Interesting)
Yeah, clearly the right thing to do is send good ole fashioned humans [wikipedia.org] over there to fight. No way that could ever go wrong. /sarcasm
Robots can be made not to have feelings of vengeance or anger, which means they won't go murdering civilians. They will do what robots always do, which is to say, EXACTLY what they are told to. If they kill civilians, it's due to human error, not because the robot is "evil".
Let's say a battle happens near your town. People are going to be shot, and die, and you (a civilian) could be
Re: (Score:3, Insightful)
Sadly, your humble, kindly engineers will just build and maintain the thing. It'll be a committee of politico-military-management-morons that decide what instructions the thing is given. :-(
Re: (Score:3, Interesting)
The Phalanx CIWS is an anti-aircraft gun mounted on ships. It's relatively self-contained and can practically be bolted onto some ships.
If an aircraft approaches and doesn't identify itself, the default action is for the Phalanx to blow it out of the sky. This is a specialized system, of course, but imagine if it were a military jet full of refugees, with a broken communication system, that had no idea the ship was there.
This is legal, because the ship operates in international waters.
It's set up to not atta
Robot Warriors Will Lose (Score:2)
Robots vs People:
Robots have to be "ethical" to people.
People don't have to be ethical. It's a fucking robot. Beat the shit out of it. Pretend to surrender then turn on the fucking thing when it treats you all nice like. "Oh, mr robot, I'm so cold and sick. I'm bleeding, too, help me." Then you attack the piece of shit.
Robots vs Robots:
The least "ethical" side has a distinct advantage.
People vs Robots:
The least "ethical" side has a distinct advantage.
Why would it be any different when robots are invol
Re: (Score:2)
"There are no rules in war."
Of course there are, don't be daft.
Re: (Score:3, Insightful)
Good points, but I don't think this is about robotic soldiers lumbering over battlefields just yet. I think this will, at first, be more about semi-automated fire control systems and drones. A future Predator drone might decide to wait to fire its Hellfire missile if it thinks there are too many civilians in the area and the projected accuracy is too low due to interference. Or a point-defense system might see a kid walking around in a field and decide that he's not a threat, because he's not carrying
Humans (Score:5, Insightful)
Why is this a when question, rather than an if question?
Re: (Score:2)
Next you'll be telling me that we were so preoccupied with whether or not we could that we never stopped to think about whether we should.
I'm telling you, those electrified fences are foolproof. Now go enjoy the tour.
Meanwhile, back in reality (Score:2)
This is why war is bad, mmkay?
Re: (Score:3, Insightful)
What? This isn't true; there have been many battlefields without civilians.
Tough calls (Score:4, Interesting)
a) Sit back and get slaughtered.
b) Fire back and take out the aggressors.
One consideration is the size of the forces involved. Another consideration is the importance of the missions each side is involved in.
Making a robot handle these cases would be interesting.
Re: (Score:2)
This is actually one of the classic decisions that's a lot easier with robots than with humans. If the soldiers getting shot at are humans, there really is no good course of action except maybe trying to surrender. But for a robot it's easy: just sit back and get slaughtered; all that'll be lost is some easily replaceable machinery.
Robots have a significant advantage when making decisions involving their own safety. For them, self-defense is optional.
Take the following scenario for example, an individual within a comba
Re: (Score:3, Funny)
But what about when dealing with robot on robot action?
I'm confused, are you talking about war or robot porn?
This will be great until (Score:2)
Soldier 1 "Hai look at me, now Im a good guy [takes FOF tag off], now Im a"
BANG!!.......Thump
Soldier 2 "I swear, we lose more first timers that way than any other"
Ethical Robots? (Score:3, Interesting)
The 1st-generation robots will have the governor software, but once the second gen hits, made cheaply by a rogue state, then things will get complicated very quickly. And unlike nuclear weapons, which are kept under control because the materials and technology are relatively hard to come by, I reckon that death-bots will be made of far more readily available materials, and easily mass-produced.
There are rules of engagement now which many armies happily ignore, so how can the world enforce a rule that only ethical robots will be able to autonomously fire weapons?
Perhaps the software that allows the autonomous behaviour can be encrypted and protected in such a way that it is difficult to reverse-engineer, though once an enterprising hacker gets his hands on the hardware, it's only a matter of time before the open-source version, curiously missing the 'ethical governor', is available as a .torrent somewhere.
anyone who shoots at you is a legit target (Score:3, Insightful)
In any war zone (regardless of who has fled and who hasn't), isn't anyone who shoots at you defined as a combatant and a legitimate target?
Robot Warriors Will Get a Guide To Ethics (Score:2)
.. and have it strapped to the outside of their chassis.
Is this a promo? (Score:4, Insightful)
Was this article an attempt to promote Terminator 4?
Over-ethical? (Score:2)
such as a war zone from which all non-combatants have already fled, so that anybody who shoots at you is a legitimate target.
You know, on a battlefield I'd be inclined to think that anybody who shot at me was a legitimate target whether non-combatants had fled or not...
I know this *seems* like a bad idea (Score:3, Interesting)
but don't human soldiers, at their best, pretty much just follow algorithms - a combination of training and orders - already?
The big difference, is that human soldiers are taught to defend themselves - whereas that wouldn't really fly with robots. If the guys at the checkpoint slaughter a family of five because they didn't stop, they get investigated and it's determined that - sad but true - killing everything that doesn't do what you say is the only way to protect the troops (short of removing them from other people's countries, which apparently defeats the point of having soldiers). If a robot did that though - they'd be considered "flawed", and recalled. Can't get much sympathy with "but our *machines* could have been in danger!!!". So you wouldn't give them that order.
Plus, it's really the supplier who gets to decide how deadly to make these things. While the government that buys them might rather have non-combatants killed than even risk losing multi-million-dollar robots, the supplier who sells them to the government would *much* rather sell more of them than risk the fallout from a wrongful-death incident.
Yes, soldiers mess up, as will robots - but experience with both men and machines has so far shown me that when humans mess up they're more likely to hurt something, and when machines mess up they just stop working.
So as counter-intuitive as it is, as long as the culture still considers robots potential evil killing machines (e.g., using the skynet tag on this article), it seems we'd all actually be better off using robots over humans. Well, until they become self-aware and enslave us all, which is something a human army would *never* do!
Re: (Score:3, Insightful)
Another thing that's nice about restricting the ability to kill to humans is tha
Terrible idea (Score:2, Interesting)
Autonomous killing machines are a terrible idea.
1. I don't like the idea of people killing people, but delegating that responsibility to machines seems downright stupid. There are too many things that could go wrong. (See the "youhave15secondstocomply" tag. Why doesn't this have a "skynetisaware" tag?)
2. Human remote pilots are cheap. Dirt cheap, compared to the cost of developing fully autonomous weapons. Human pilots may not be totally reliable, but at least they are very well understood and we k
legitimate target? (Score:2)
I've never thought of the people shooting at me as "non-combatants"...
Re: (Score:2)
... but so is anybody not shooting at you.
Children (Score:2)
what is the fallback mode? (Score:3, Insightful)
what is the fallback mode when the data link is lost?
crush kill destroy?
Re: (Score:3, Interesting)
Yeah, except it's real. People are smart enough to know the difference between real and not-real unless they have been deliberately duped (and even then they are only sometimes smart enough).
The difference between real and not-real is huge.