Weapons Systems That Kill According To Algorithms Are Coming. What To Do?
Lasrick writes "Mark Gubrud has another great piece exploring the slippery slope we seem to be traveling down when it comes to autonomous weapons systems: Quote: 'Autonomous weapons are robotic systems that, once activated, can select and engage targets without further intervention by a human operator. Advances in computer technology, artificial intelligence, and robotics may lead to a vast expansion in the development and use of such weapons in the near future. Public opinion runs strongly against killer robots. But many of the same claims that propelled the Cold War are being recycled to justify the pursuit of a nascent robotic arms race. Autonomous weapons could be militarily potent and therefore pose a great threat.'"
Skynet (Score:5, Insightful)
Yet another predictor.
Bring on the Terminators.
Re:Skynet (Score:5, Funny)
The easiest way to avoid being vaporized is to wear a shirt that reads "/dev/null". No intelligent system will send anything your way.
captcha: toasted (damn, /dev/null has never failed me before)
Re: (Score:3)
It's a shame we can't persuade the muzzies of that, but seriously I think it only counts if it's an attempt to kill non-believers.
Go directly to Hell. Do not pass Paradise, do not collect 72 virgins.
Re:Skynet (Score:5, Interesting)
That's pretty much it.
These are only a problem if they are built and used.
We cannot stop anyone from building them (in secret). But we can get updates added to the Geneva Conventions. And we can choose how we deal with anyone who uses these.
Although at the moment it looks like we (USA! USA!) will be the ones using them. So contact your Congress Critters and make sure they know that you'll support them if they vote to ban our usage of these.
I doubt other major powers are ignoring the tech (Score:3)
But we can get updates added to the Geneva Conventions. And we can choose how we deal with anyone who uses these.
I think countries would need to sign the revised Convention before they would become liable for violation.
Although at the moment it looks like we (USA! USA!) will be the ones using them.
I doubt that other major powers are ignoring such technology. I think other powers have a more closed procurement process and greater control over their design/development bureaus. We are less likely to hear about their designs until they are fielded or made available for export.
Re:Skynet (Score:4, Insightful)
These are only a problem if they are built and used.
how do you know that smart weapons won't result in fewer deaths, and fewer deaths of non-combatants?
Humans have a pretty poor track record, and it wouldn't take much to improve upon. If you think the man in the trenches is making good judgments about when and whom to kill, you should talk to a Vietnam vet.
Re: (Score:3)
Humans have a pretty poor track record, and it wouldn't take much to improve upon.
So how do you program them without said human's intervention? Humans are fallible so all things produced by them are also fallible. While "I, Robot" and "RoboCop" are fiction, they address very real concerns. It is not a matter of if but when these systems will make incorrect/inaccurate choices and kill innocents. And corruptible humans will sell these things under the table to less-than-scrupulous individuals for protection/collection purposes. Tanks and missiles go missing from the military all the time.
Re: (Score:3)
So how do you program them without said human's intervention? Humans are fallible so all things produced by them are also fallible.
Not what I meant. Yes, there will be bugs and innocents will be killed. But humans are also fallible; that's something we don't need to speculate on. I'd rather have a software engineer coding the rules of engagement in a quiet, calm environment, with peers reviewing and re-reviewing them and adding multiple levels of failsafes, than a soldier who just saw his buddy's brains splattered on the pavement, whose gun has jammed, and who has multiple bad guys bearing down on him.
You're either forgetting or ignoring another, very important aspect of humanity - namely, the ability to feel compassion, shame, empathy, etc.
I wouldn't let a machine pet a kitten because machines lack the ability to understand things like pain; why the hell would I want to program them to kill?
Re: (Score:3)
"how do you know that smart weapons won't result in fewer deaths, and fewer deaths of non-combatants?"
We don't need smart weapons for that. All we need is someone to deem that everyone who died in the attack was a combatant.
It seems to work well for the government so far.
Re: (Score:3)
You could make exactly the opposite argument with exactly the same evidence (i.e. none): that the decision should be made by someone not at immediate risk of death, because they'll be more likely to make the safer-for-others choices and clearly identify targets, rather than make the safer-for-them choices and shoot anything that moves.
Re: (Score:3)
Also, philosophically speaking, I'd say a human's decision making is just a really complex set of algorithms that we don't understand particularly well at this point. What we do know is that humans make significant mistakes with regularity, so the test isn't whether or not these autonomous systems make mistakes in difficult circumstances, but rather the ratio of mistakes compared to a human agent.
Right, humans fuck up.
And these killbots would be programmed and controlled by humans.
Therefore, it stands to reason that killbots would do everything their fuck-up human masters tell them, except without the compassion, remorse, shame, and other emotions that prevent many humans from doing fucked up things.
If a general tells a soldier, "go murder your entire family," said soldier will likely not follow that order. A robot, conversely, would always do what its master tells it, regardless of whether the mast
Re:Skynet (Score:4, Funny)
A robot, conversely, would always do what its master tells it, regardless of whether the master says, "go pick some daisies," or "go commit genocide."
ORDER RECEIVED: Pick daisies.
TARGET LOCATED: Daisy lawn, municipal park.
WEAPON SELECTED: BLU-82B ammonium nitrate/aluminium tactical thermobaric device "Daisy cutter"
EVALUATION: Commander will be so pleased.
Re:Skynet (Score:4, Interesting)
Yes, it does. They've produced *fewer* civilian deaths than the airstrikes they replaced.
Re: (Score:3)
Although at the moment it looks like we (USA! USA!) will be the ones using them. So contact your Congress Critters and make sure they know that you'll support them if they vote to ban our usage of these.
And while you're at it, land mines kill something like 70 civilians every day, including a lot of children. Remind your congressman that land mines are barbaric, that the US should be opposed to children's legs getting blown off. Urge them to sign the Ottawa treaty. Other non-signatories are the usual countries we think we're morally superior to: Russia, China, Myanmar, United Arab Emirates, Cuba, Egypt, and Iran. (Israel too, but they might think that Israel is also a good guy and take that as a sign t
Re: (Score:3)
Skynet? Really? That's the one thing /.-readers can think of that could go wrong with this technology?
So, as long as we don't develop self-aware AI that somehow decides to rise against its creator, we're fine with having weaponry that can acquire and engage human targets autonomously? We're fine with armies of these devices at the direction of a few mad men, with just a single conscience deciding the fate of thousands instead of having a human at every trigger?
We should oppose this type of weapon for the sa
Re:Skynet (Score:5, Insightful)
because all evidence shows that the weak point always lies with the soldier that has to pull the trigger and decide to kill a fellow human being.
All evidence that I've seen shows that a large number - possibly even the majority - of soldiers have been brainwashed into following orders unconditionally and will commit the most horrendous crimes against humanity when ordered to do so. And - even when not ordered - that same brainwashing includes training in not thinking of 'the enemy' as human, because that causes you to delay in the critical moment. So they dehumanise the enemy to the point that further atrocities can be committed even when not under orders to do so.
Note that I don't blame the soldiers themselves in a lot of these situations - they are often good people who given time to think and reason it through would not behave that way, but their training has so messed with them that some actions they'll take don't reflect on the person they are.
Also note that I did say "a large number of soldiers" and not all. There are plenty of cases you can find of soldiers going against orders they believe to be morally reprehensible, but the fact that OTHER soldiers then do it is a testament for the argument and not against it.
Re: (Score:3)
Oh, how I'd love for you to present said evidence, that "proves" people like my brother are mindless killing machines that do everything the government "programs" them to do...
The irony being, of course, that you just described a robot, rather than a human.
That is a serious strawman and if you think I said anything like that at all, you either lack reading comprehension or are just looking for a fight.
In case it's the former, please note that I also wrote: "Also note that I did say "a large number of soldiers" and not all. There are plenty of cases you can find of soldiers going against orders they believe to be morally reprehensible, but the fact that OTHER soldiers then do it is a testament for the argument and not against it.".
Beyond that, assuming your br
Where have I heard this before? (Score:2)
Re:Where have I heard this before? (Score:5, Funny)
Select targets? Really?
Wait until the system realizes ALL humans are targets.
Don't worry. Fail-safe measures will be implemented in order to keep the systems secure. Look at all the fabulous advances made in our computer security nowadays and rest assur... Oh, wait!
Re: (Score:2)
But then finally we'll see some kind of response to the problem, because then FINALLY there will be people dying from faulty software.
Re: (Score:3)
Select targets? Really?
Wait until the system realizes ALL humans are targets.
Don't worry. Fail-safe measures will be implemented in order to keep the systems secure. Look at all the fabulous advances made in our computer security nowadays and rest assur... Oh, wait!
The failsafe system will be contracted out to the people who profited by writing and then fixing the Affordable Care Act websites.
Be afraid. Be very afraid.
Re: (Score:3)
Re:Where have I heard this before? (Score:5, Funny)
I have your security hole right here.
Re:Where have I heard this before? (Score:5, Insightful)
We still rely on chemical energy to power our weapons, and as such they all have the ultimate fail-safe system.
Brace yourself before clicking the link. This may come as a surprise to you.
http://en.wikipedia.org/wiki/Nuclear_weapon [wikipedia.org]
Re:Where have I heard this before? (Score:5, Funny)
But many of the same claims that propelled the Cold War are being recycled to justify the pursuit of a nascent robotic arms race.
You environmental weenies are all the same: you go on and on about how we all need to recycle, but when we do it you complain about how we're not doing it "right".
We could not make them (Score:5, Insightful)
They're not "coming" as if from space. We just need to choose for them not to exist and they won't. These things will (or won't) be made by individuals who can make moral decisions.
Don't be a terrible individual; don't make or participate in the making of terrible things.
Re: (Score:2, Insightful)
Except, looking at history, they will probably lead to fewer soldier deaths, fewer bystander deaths, and more accurate targeting.
I don't know why people think they are bad.
Re:We could not make them (Score:5, Insightful)
We have more accurate weapons than ever. Compare the average cruise missile to the average arrow and tell me:
1. Which one is more accurate?
2. Which one causes more deaths?
You will notice that they are NOT mutually exclusive. Quite the opposite.
Re: (Score:3)
Actually, at their intended ranges an arrow is more accurate.
Re:We could not make them (Score:5, Insightful)
Except, looking at history, they will probably lead to fewer soldier deaths, fewer bystander deaths, and more accurate targeting.
I don't know why people think they are bad.
Extra-judicial killings of US citizens.
Re:We could not make them (Score:5, Insightful)
Except, looking at history, they will probably lead to fewer soldier deaths, fewer bystander deaths, and more accurate targeting.
I don't know why people think they are bad.
Extra-judicial killings of US citizens.
Let's call it what it is: murder of innocent US citizens.
(don't think they are innocent? They are innocent until proven guilty!)
Re:We could not make them (Score:5, Insightful)
How about: Murder of innocent citizens.
95% of them aren't Americans (me included). Why would the distinction be important?
Re:We could not make them (Score:5, Insightful)
How about: Murder of innocent citizens.
95% of them aren't Americans (me included). Why would the distinction be important?
Americans don't seem to think that non-Americans are people, and therefore not deserving of rights.
Re: (Score:3)
Hence why a lot of less-educated Americans believe all Muslims are terrorists.
Every country has its nationalism turned up to 11. Great Britain, Germany, France, Spain, Iran, China, etc... Please show me a humble country.
Re:We could not make them (Score:5, Insightful)
Or... and I know this sounds crazy... we could just not kill people anymore. I know we like to be the super heroes of the world, running around fighting everyone's wars and everything... hell, I used to think that way too. But at a certain point you just have to stand back and say "you know what? Fuck it. I'm done blowing 1/3rd of our budget dropping bombs on people I don't know for a cause I barely understand, just to have any and all progress erased in a few years, because the real problems in other parts of the world have little to do with their totalitarian leaderships."
Re: (Score:3)
Or... and I know this sounds crazy... we could just not kill people anymore. I know we like to be the super heroes of the world, running around fighting everyone's wars and everything... hell, I used to think that way too. But at a certain point you just have to stand back and say "you know what? Fuck it. I'm done blowing 1/3rd of our budget dropping bombs on people I don't know for a cause I barely understand, just to have any and all progress erased in a few years, because the real problems in other parts of the world have little to do with their totalitarian leaderships."
You think that's why we go to war? We don't go because we need to save and help people. We go because it is in the nation's perceived interest. We go to maintain and extend US hegemony. We go to make the climate friendly to US businesses. Sure, we could stop killing people. But without the threat of force, how do we get people to do what we want them to do?
The reasons given publicly for war are almost never the actual reasons. If it seems ridiculous to you that we go to war and don't achieve the obje
Re:We could not make them (Score:5, Insightful)
Yea, but they CAN'T destroy the US. It's not possible. It's like we live in a mansion and a rat ran in and shit on our floor. So now we have the entire staff chopping up the floorboards and tearing the plaster off the walls looking for the fucking thing. We're doing far more damage than the stupid rat ever could. Some pests just don't go away, so you have to keep the cheese in the fridge, put out some traps, and deal with it. Don't burn the house down around you just to win.
Re: (Score:3)
Mod this up. This is a good analogy. How many people die annually in this country of obesity-related causes? Automobile accidents? Murders (with or without guns)?
Now compare that with all terrorist deaths in the past 20 years.
You can't justify what's being done (wars, unconstitutional laws and practices) in the name of protecting us from this albeit scary sounding, but relatively inconsequential threat. Dismantle the war machine.
Re: (Score:3)
It's just like the internet: you stop a fight with trolls by ignoring them.
Re: (Score:3)
But the fact remains that the primary goal of the Afghan war and the current bombings in Pakistan and Qatar is to disrupt a large and well funded terrorist group that attacked first and has as *its* goal the destruction of the US and other Western or other non-fundamentalist-Islamic nations.
Ah yes, the terrorists that use "defence" as a justification for their actions. Perhaps your "fixing" of a non-existent problem is the actual cause for the problem? Violence leads to more violence, and the only way to break that cycle is to stop aggressing against "enemies" and just defend your own if they decide to aggress. They will eventually go away, or become big enough with their aggression to warrant stepped-up responses.
Re:We could not make them (Score:4, Interesting)
Well, I think the problem was that we thought people like us were in Iraq... that once Saddam was gone they would come out of their houses and go about being free and democratic, like Europe did after WW2. Well, they're not like us. They didn't do that. And while we do have our own problems, the kind of shit they are willing to put up with is a lot different from the kind of shit we're willing to put up with. Their society needs to change fundamentally. Something deep and eye-opening like what happened in the US during the civil rights movement. We can't help them with that, just like no one could have helped us through the 60s.
Re:We could not make them (Score:4, Insightful)
Far too easy for all humans involved to disavow any responsibility when the thing shoots up a busload of children. No ability to decide the CO has gone nutsy cuckoo and report up the chain of command. No ability to decide the CO's order is just plain illegal and refuse.
Nobody to report back home about how ugly and unnecessary it all is. Killing people, especially lots of people, should NOT be cost-effective.
Other than that, it's just great.
Re: (Score:3)
We have no history of robots selecting their targets autonomously.
It's been done. One example: homing torpedoes, especially in the WW2-ish era. More recent designs may have offered a little more control, but some of the older designs basically were told to go somewhere, find a ship, and target it. There was no Identification Friend or Foe.
Re:We could not make them (Score:5, Interesting)
I disagree. At some point a civilian smartphone, or self-driving car, will contain practically all the technology to be weaponized. (E.g. "avoid people" becomes "pursue people"!) Once you have the sensors, pattern recognition, and mobility, there's no way to control all the possible applications.
You won't even know if you're helping make them. (Score:5, Insightful)
Another guy'll be making a robot painting system that aims its spray paint at cars "to make a more profitable assembly line".
Yet another'll make a self-driving car "so you won't have to worry about drunk drivers anymore".
Once those pieces are all there (hint: today), it doesn't take much for the last guy to glue the three together, hand it a gun instead of spray paint, and load it with a database of faces you don't like.
It seems a poor comparison. (Score:5, Insightful)
My prediction is that this technology will float about the edge of popular awareness, until an unbalanced individual sets up a KILLMAX(tm) brand 'smartgun perimeter defense turret' in an elementary school and murders a bunch of children and escapes because he didn't have to be on the scene. Then national outrage will lead to mass bans on such weapons.
Should we be making such weapons? I don't know; I suppose the argument can be made that they fill the same role as land mines, but have the upside that there is less of a problem with getting rid of them when the fighting stops. I find the glee we as a species have in building better ways of killing each other to be really depressing on the whole.
Re: (Score:3)
I think that I find the glee we as a species have in building better ways of killing each other to be really depressing on the whole.
The very existence of flamethrowers proves that sometime, somewhere, someone said to themselves, "You know, I want to set those people over there on fire, but I'm just not close enough to get the job done." ~George Carlin
Re:We could not make them (Score:5, Insightful)
No. These things *will* be made, by people who make immoral decisions. The people who get to make those sorts of decisions are already mostly terrible people.
Don't be a target or act like a target... (Score:5, Funny)
Problem solved!
Sci-Fi to watch... (Score:2, Insightful)
Terminator
ST TNG: Arsenal of Freedom
Etc...
Do This! (Score:3)
Hack the system with an algorithm that kills the deployers, of course!
What does "Automatically Selecting Targets" Mean? (Score:2)
While the first thing that comes to mind is a machine that instantly targets and destroys, I wonder if this could be something more methodical. Since "friendly" human lives aren't on the line for the decision maker, these could be used to slow down the process of determining whether or not to use lethal force.
For example, much larger sets of data could be used than just "Looks like a bad guy with a gun and I think he might want to shoot me." With facial recognition, individual enemy combatants could be tr
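To make that concrete, here is a minimal, purely illustrative sketch (every name, data source, and threshold below is invented for the example, not taken from any real system) of what "use more data and slow the decision down" could look like as code:

from dataclasses import dataclass

@dataclass
class Observation:
    source: str        # e.g. "facial_recognition", "signals", "human_analyst" (hypothetical labels)
    confidence: float  # 0.0 to 1.0: how sure this source is about the identification

def engagement_allowed(observations, min_sources=3, min_confidence=0.95):
    # Only permit engagement if several independent sources all agree with
    # high confidence; the default answer is always "hold fire".
    confident_sources = {o.source for o in observations if o.confidence >= min_confidence}
    return len(confident_sources) >= min_sources

# Two confident sources out of the required three -> hold fire.
obs = [Observation("facial_recognition", 0.97), Observation("signals", 0.96)]
print("engage" if engagement_allowed(obs) else "hold fire")

The point is simply that a machine can be forced to wait for multiple independent confirmations in a way that a soldier under immediate threat cannot.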
Re: (Score:2)
I can think of some situations where you don't even have to use facial recognition per se. If you're in a vehicle and the system detects an RPG fired at you, it's pretty easy to distinguish "RPG" from background noise. It should also be relatively easy to detect the 'source' and immediately return fire.
If firing an RPG is a guaranteed way to get hit with several belts of radar/IR guided 50 caliber machine gun fire--you might have a really hard time finding people willing to pull the trigger. Similarly
Re:No tech advances can stop war (Score:5, Interesting)
Well, to be the devil's advocate, in fact fewer and fewer people are dying in wars the more advanced the weaponry gets.
I realize this is a very minority position on this page. But it's pretty easy to take a position against defense weaponry and feel on a moral high ground, and pretty easy to adopt a fearful/risk-averse position toward unknown change and new developments. It's harder to present a risk-benefit analysis showing that electronic wars are hurting more people. It's not impossible to imagine that the robots will do a better job, and we'd have fewer headlines like "US Marine Sergeant Kills 16 in Kandahar, 9 of them children". [https://en.wikipedia.org/wiki/Kandahar_massacre]
Easy (Score:5, Funny)
Wear a t-shirt with a message written in a carefully formatted font so it causes a buffer overflow, giving your t-shirt root privileges.
Mine would have the DeCSS code on it, so the drone starts shooting pirated DVDs at everybody. The RIAA will make short work of the problem at that point.
Already in use? (Score:2)
Re: (Score:2)
Two points. First, while there is no human “in the loop”, there is a human “on the loop”. They have discretion here. Second, it is basically a defensive system to shoot down incoming cruise missiles – its range of targets is pretty limited. This would be very different from an offensive autonomous system that hunts and kills on its own.
Re: (Score:2)
will keep control at the top where it belongs (till the systems at the top take control and there is no one in the missile silos to stop the launch)
Re: (Score:2)
The problem with remotely piloted vehicles is that the up and down links are the weak point. If you take out your opponent's comm links with jamming, or by shooting down their relays, you take out their entire drone capability, at least until the comm links can be restored.
If you are going to depend on drones, the only solution is that they have to be autonomous. The only other solution is that they have to be manned, and introducing pilots entails increased cost, lowers mission duration, increases risk of loss of life and capt
Weapons Systems That Kill According To Algorithms (Score:5, Interesting)
What To Do?
"Endeavor to be one of the people writing the algorithms" would probably be a good idea.
Microsoft inside (Score:2)
Killer Robots... (Score:3)
Re:Killer Robots... (Score:5, Interesting)
Select, but not fire (Score:2)
Results are known (Score:2)
Devices which can engage targets without human intervention are fairly common: landmines.
We do know that they kill hundreds of innocents every year.
Put in some cameras and algorithms and you may kill/maim fewer innocents, but you won't get to zero. You can't get to zero even when you put a human brain behind the trigger, so how do you make a machine decide which teenager is a bad guy?
Actually, let me offer a simple solution to that last question:
Connect the machine to a massive database which contains data about every
Re: (Score:3)
Which is why there's a campaign against landmines.
http://www.nobelprize.org/nobel_prizes/peace/laureates/1997/icbl-facts.html [nobelprize.org]
Can't wait until the DHS no-fly list gets integrated with the OK-to-kill software.
How do human soldiers kill? (Score:4, Insightful)
I don't get this... Aren't human soldiers killing based on something other than algorithms? Or is it that the implementations are coded in vague human languages, and that makes them feel somehow warm and fuzzy? Well, Pentagon's Ada may be considered similar, but only in jest...
I'd say whether such systems are bad or good is still up to the algorithms, not the hardware (nor pinkware) that executes them.
On the bright side (Score:3)
On the bright side, algorithm-driven machines are unlikely to pull their guns just because they have an attitude problem like some cops do.
Re: (Score:3)
On the bright side, algorithm-driven machines are unlikely to pull their guns just because they have an attitude problem like some cops do.
It also wouldn't have to worry about any of those pesky emotions like compassion or remorse slowing down its murder spree.
Re:How do human soldiers kill? (Score:4, Insightful)
I don't get this... Aren't human soldiers killing based on something other than algorithms? Or is it that the implementations are coded in vague human languages, and that makes them feel somehow warm and fuzzy? Well, Pentagon's Ada may be considered similar, but only in jest...
I'd say whether such systems are bad or good is still up to the algorithms, not the hardware (nor pinkware) that executes them.
For me the big difference is that if you activate the military to suppress its own populace when it demonstrates, the soldiers can at least choose not to follow orders.
The idea of the US (for example), with the ever-increasing suppression of constitutional rights, having robots that kill whoever they're activated against is terrifying.
We already have mines (Score:5, Interesting)
... both land and naval. They have become more sophisticated in that they can be triggered by target characteristics, and in the naval case, maneuver.
Re: (Score:3)
Yep, you also have anti-ship missiles that you can fire along a vector and that will pick their own target. Anti-radar missiles that will hang from a chute waiting for the radar to come on... And so on.
Re:We already have mines (Score:4, Informative)
Most of the world has outlawed mines, with the exception of America. I wonder: if the Geneva Conventions were being drawn up now, would the Americans boycott them?
let's play global thermonuclear war (Score:2)
what side do you want?
1. United States
2. Russia
3. United Kingdom
4. France
5. China
6. India
7. Pakistan
8. North Korea
9. Israel
Greetings Professor Falken. (Score:3)
Shit just got real.
Fictional treatment in _David's Sling_ (Score:3)
David's Sling, a novel by Marc Stiegler, is about the first "information age" weapons systems. These are autonomous robotic weapons that use algorithms to decide which targets to hit, and the algorithms are designed to take out enemy communications and decision-making. The weapons would try to identify important comm relays and take them out, and would analyze comm traffic to decide who is giving orders and take them out.
The book was written before the fall of the Soviet Union, and the big finale of the book involves a massive Soviet invasion of Europe and the automated weapons save the day.
Unlike some portrayals of technology, this book covers project planning, testing, and plausible software development. It contains tense scenes of QA testing, where the team makes sure their hardware designs are adequate and that their software mostly works. (They can remote-update the software but of course not the hardware.)
Mostly they left the weapons autonomous, but there was a memorable scene where a robot was having trouble deciding whether to kill someone, and the humans overrode the robot and had it leave the guy alone. (The guy was injured, lying there but moving a little bit, and the robot was not sure whether the guy was already killed or should be killed again. Hmm, now that I think about it, this seems rather implausible, but it was a nifty scene in the book.)
http://www.goodreads.com/book/show/3064877-david-s-sling [goodreads.com]
P.S. I bought the book when it first came out, and there was an ad for a forthcoming hypertext edition that never came out. I think it was never actually made, but I wish it had been.
Easy (Score:4)
Hack in. Make military-industrialists fit the target profile. Problem solved.
In the words of Lord Kril (Score:2)
I feel sorry for... (Score:2)
The BETA testers of this system....
Like the Death Penalty (Score:3)
Already exists: Aegis (Score:2)
From what I understand, Aegis already does this - and it did it a long time ago. Where has subby been, in the basement?
Haven't we... (Score:3)
if both sides have these robots (Score:3)
and they can just fight among themselves, it could be televised live for everyone, and war would suddenly become wholesome entertainment
False Postives (Score:4, Interesting)
I'm sure the DMCA has shown you what automated systems can do.
Turnabout is fair play (Score:3)
We developers have been killing software bugs for decades. Why can't software bugs start killing us?
Re: (Score:3)
The real problem is if the person providing the working parameters to these algorithms decides that some of us, as humans, are mistakes that must be "resolved", and that the weapons running these algorithms are the tools to perform that improvement.
It's already happened to Captain Kirk. Remember that old episode about Nomad [memory-alpha.org]?
They're here (Score:2)
It will be fine (Score:2)
Or when protecting our 'strategic interests' becomes very important. For instance, in order to protect Israel, a nation we cannot live without.
Oh, and also in case anyone pisses us off and does anything we do not like.
What to do? (Score:3)
Die, mostly.
Dearest engineers... (Score:3, Insightful)
To all the engineers working on this: you're responsible. You are doing this. You are a terrible person.
Re: (Score:3)
Well, you made me feel bad. To make amends, just send me your picture and I'll make sure it's on the do-not-kill roster.
Human soldiers are already being desensitized (Score:4, Insightful)
I can't remember the documentary; maybe Fog of War, starring Satan's favorite child Robert McNamara. But they figured out that in combat 25% of soldiers weren't actually shooting at other people. They were intentionally shooting up in the air to avoid killing. So part of the Army's training post-WWII was to get soldiers to fire without thinking. The outcome was that soldiers were more effective in battle. The consequence was that soldiers weren't evaluating the act of taking lives until AFTER they'd done it, which contributed to the increased mental issues Vietnam-era soldiers endure.
Re: (Score:3)
The 25% figure was proposed by S.L.A. Marshall in "Men Against Fire". He claimed it was from extensive interviews with US soldiers. That, at least, was a lie: he had no time to conduct all those interviews, no records have turned up, and no veterans remember such interviews. This doesn't mean the conclusion is wrong, but the claimed support doesn't exist. David Grossman in "On Killing" claimed that it was reasonably accurate (providing some evidence), and that the number had been boosted to near 100% by
Guns don't kill people (Score:4, Interesting)
Re:Guns don't kill people (Score:5, Insightful)
To kill someone with a knife, you have to stand very close to them and thrust the weapon into their body. To kill them with a gun, they have to be in your line of sight and you pull the trigger. To kill them with a drone, you need them on a live camera and you push a button. To kill them with an autonomous robot, you need a description of what they look like and what area they are located in, and you program that into the robot. Every step becomes more indirect, more emotionally detached.
"Guns don't kill people" is just a slogan. A gun is a tool. For killing people. The real questions include "Do guns deter crime or make it more violent?" and "Does home gun ownership help prevent a government from turning on its own people?", but those have no simple answers, so they are not as useful in propaganda.
Kill Algorithms. (Score:3)
Already exist: in our human behavior, for one, as part of instinct; as part of a learned moral code; as part of operational orders such as rules of engagement. Simply codifying them and allowing a machine to do it isn't necessarily a bad thing. For one, it takes away the negative mental effects it must have on human operators who have to make such life-and-death decisions.
What we are really talking about is A) how well it can be coded, and B) avoiding potential mistakes, like "Kill all Humans!" or "All Humans must Die", or, more seriously, making the distinction between soldier and non-combatant (assuming there is such a thing in the distant future).
If war has taught us anything (and apparently it hasn't), humans are perfectly capable of making mistakes and fucking that up all by themselves. Friendly fire happens all the time; I can't give you a statistic, but it is a significant issue and always has been. Civilian casualties, particularly in urban centers, have also been an issue for as long as urban centers have existed.
At least if a machine is doing it, it will do it in a consistent and discoverable way that is hopefully correctable, and not because some soldiers got mentally messed up by all the stress that putting people in those situations is bound to produce (or by attempts to desensitize them by making the enemy appear subhuman).
Hopefully in the future all wars will be fought by autonomous robots fighting other autonomous robots, which, once they kill off all the opposing robot forces, simply send a C-3PO-type representative to the defeated leadership to tell them they lost the war. I would imagine it would even make for pretty good TV (and a betting opportunity: Go 23rd Fighting Heavy Mech Robot Battalion!).
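Coming back to the "codifying" point above: as a toy sketch only (every rule name and threshold here is made up for illustration, not drawn from any real rules of engagement), the consistency and discoverability argument amounts to something like this - every decision runs the same ordered checks and logs which rule fired, so mistakes can be audited and corrected:

import logging

logging.basicConfig(level=logging.INFO)

def rules_of_engagement(target):
    # Apply every rule in a fixed order and log which one fired, so each
    # decision is consistent and can be reviewed (and corrected) afterwards.
    rules = [
        ("non_combatant",  lambda t: not t.get("combatant", False)),
        ("surrendering",   lambda t: t.get("surrendering", False)),
        ("unconfirmed_id", lambda t: t.get("id_confidence", 0.0) < 0.99),
    ]
    for name, blocks_engagement in rules:
        if blocks_engagement(target):
            logging.info("hold fire: rule %r", name)
            return False
    logging.info("engagement permitted")
    return True

# Example: a confirmed combatant but with a weak identification -> hold fire.
rules_of_engagement({"combatant": True, "surrendering": False, "id_confidence": 0.8})

Whether that's reassuring or terrifying is exactly what the rest of this thread is arguing about, but at least the rule that fired is on the record.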
Re:Source code: (Score:5, Funny)
Bender: [while sleeping] Kill all humans, kill all humans, must kill all hu...
Fry: [shakes him] Bender wake up.
Bender: I was having the most wonderful dream. I think you were in it.
Re: (Score:3)
I have a feeling it'll be closer to
while(muslims.count() > 0) {...
It will be even more depressing than that...you can't identify religious affiliation visually:
while(target.skincolor < 0.5) {....
Re: (Score:2)
Re: (Score:3)
Landmines can automatically select a target and fire (though not very intelligently), and they've been around for 100 years.
And look how the civilized world responded to that:
http://en.wikipedia.org/wiki/Mine_Ban_Treaty [wikipedia.org]
Of course the US didn't sign it.