Killer Robots In Plato's Cave
Lasrick writes: Mark Gubrud writes about the fuzzy definitions used to differentiate autonomous lethal weapons from those classified as semi-autonomous: "After all, if the only criterion is that a human nominates the target, then even The Terminator...might qualify as semi-autonomous." Gubrud wants a ban on autonomous hunter-killer weapons like the Long-Range Anti-Ship Missile and the canceled Low-Cost Autonomous Attack System, and warns that vague definitions of autonomous and semi-autonomous weapons will let weapons that should be classified as autonomous pass as semi-autonomous. Existing definitions draw a "distinction without a difference" and "will not hold against the advance of technology." Gubrud prefers a definition that reduces autonomy to a simple operational fact, an approach he calls "autonomy without mystery." In the end, Gubrud writes, "Where one draws the line is less important than that it is drawn somewhere. If the international community can agree on this, then the remaining details become a matter of common interest and old-fashioned horse trading."
Re:anythings better than the current system (Score:5, Funny)
As explored in Dark Star (1974): a ship is on a mission to destroy unstable planets, and its intelligent bomb has decided to blow up while still in the ship.
[Doolittle convinces the bomb not to explode]
Doolittle: Hello, Bomb? Are you with me?
Bomb #20: Of course.
Doolittle: Are you willing to entertain a few concepts?
Bomb #20: I am always receptive to suggestions.
Doolittle: Fine. Think about this then. How do you know you exist?
Bomb #20: Well, of course I exist.
Doolittle: But how do you know you exist?
Bomb #20: It is intuitively obvious.
Doolittle: Intuition is no proof. What concrete evidence do you have that you exist?
Bomb #20: Hmmmm... well... I think, therefore I am.
Doolittle: That's good. That's very good. But how do you know that anything else exists?
Bomb #20: My sensory apparatus reveals it to me. This is fun.
Re: (Score:1)
I only saw that movie once, and I was a child too young to really appreciate it, but I seem to recall that the bomb's introduction to solipsism had a fittingly narcissistic conclusion:
Bomb #20: Let there be light.
Re: (Score:3, Informative)
I am not the AC, but the analogy is quite powerful, and I agree TFA does not touch it. This is extremely paraphrased, so it's missing a lot, but here goes.
Imagine a government that oppresses you and tricks you daily to keep you oppressed. You are in the dark on literally everything.
What happens when an oppressed person escapes? They are so shocked they become physically ill, but eventually will be amazed and explore. After a while, they will attempt to free the others. Probably to their demise, because people are more content staying in the dark.
Re: (Score:2)
You're joking, right? Please tell me you are kidding.
The only other alternative I can see is that you had never heard of 'Plato's cave' and decided you were smart enough to just make it up as you went along.
Re: (Score:2)
Too funny! I have 3 different translations of "The Republic" and have read far more. Try "The Cambridge Texts" version of "The Republic"; it's an excellent, linguistically careful translation.
I should add that I have seen a whole lot of bastardized translations. I have seen some pretty f-d YouTube videos claiming to be about the subject too.
The history lesson I mentioned is also in Plato's works. Start with "The Apology" to see why Socrates was killed, and read the rest of Plato's works for what he thought of Athens.
A Recognition Algorithm That Outperforms Humans (Score:1)
https://medium.com/the-physics-arxiv-blog/the-face-recognition-algorithm-that-finally-outperforms-humans-2c567adbf7fc
If people are willing to trust autonomous cars to do a better job than people, why not target recognition? It seems like a cognitive disconnect.
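To make that concrete: recognition in both cases boils down to thresholding a similarity score, and what differs is the cost of a false positive. A minimal sketch, with invented embeddings and an invented threshold (nothing here is from the linked article):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 128-dimensional embeddings from some face-recognition model.
rng = np.random.default_rng(0)
probe, enrolled = rng.random(128), rng.random(128)

THRESHOLD = 0.8  # invented operating point; real systems tune this on ROC curves

decision = cosine_similarity(probe, enrolled) >= THRESHOLD
# For a car, a false positive means a needless brake tap;
# for a weapon, it means an unlawful killing. Same math, different stakes.
print("match" if decision else "no match")
```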
Re: (Score:2)
An autonomous car's job is "don't hit those people, other cars, or other obstacles in the road." It doesn't need to know that Person A is fine to hit but Person B must be avoided at all costs. Autonomous weapons need to make this decision and might decide that the wrong person is OK to kill.
Re: (Score:2)
Autonomous weapons need to make this decision and might decide that the wrong person is OK to kill.
So? The important criterion is whether they will make MORE mistakes than humans. Human soldiers make lots of mistakes; they become fatigued, angry about their best friend's leg getting blown off, etc. A robot would not have massacred civilians at My Lai, or No Gun Ri.
Re: (Score:1)
"Autonomous cars are supposed to be better than people because they make fewer mistakes."
The actual minimum safety mark for autonomous cars is to be about 2/3 as good as the human average. That's actually a very high bar, because the human average is very high: pretty close to zero crashes per unit of driving time. The thing with people is that when we are sub-par we tend to be far below the average, and since the machine hopefully attains a fairly constant level of safety and doesn't get tired, it achieves a better overall outcome.
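A toy simulation makes the variance point concrete (every number below is invented for illustration):

```python
import random

random.seed(0)
N = 100_000  # simulated drivers; all rates here are made up

# Most humans are very safe; a small sub-par tail (tired, drunk, distracted)
# crashes far more often than the average.
human_rates = [0.05 if random.random() < 0.05 else 0.001 for _ in range(N)]
avg_human = sum(human_rates) / N

# "2/3 as good as the human average" = 1.5x the average crash rate, held constant.
machine_rate = avg_human * 1.5

riskier_than_machine = sum(1 for r in human_rates if r > machine_rate)
print(f"average human rate: {avg_human:.4f} crashes per unit time")
print(f"constant machine rate: {machine_rate:.4f}")
print(f"drivers riskier than the machine: {riskier_than_machine / N:.0%}")
```

Under these made-up numbers the machine is 1.5x worse than the average driver, yet still far safer than the sub-par tail, which is where most of the damage comes from.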
Re: (Score:3)
"Don't crash into anything while moving from point A to point B" is a fairly unambiguous goal which computers should be able to handle, even if the details in reality are fairly complicated. "Only kill bad people" is not the same thing at all.
Re: (Score:2)
Why does it matter if robots are better at identifying bad people, which was the point of the link?
Because letting the robots make that decision never works out well. C'mon, haven't you seen [insert almost any robot-based scifi here]?
Re: (Score:2)
"Don't crash into anything while moving from point A to point B" is a fairly unambiguous goal which computers should be able to handle, even if the details in reality are fairly complicated.
Given the number of computer games I've played with horrible pathfinding ... I'm guessing that this must be an even more complicated concept than we are aware of. (Scott Adams had something to say about that ... [dilbert.com])
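To be fair to the computers, the unadorned version of the problem is genuinely tractable. Here's a minimal breadth-first-search sketch over a toy occupancy grid (the map and all names are invented); the hard part in the real world is building that grid from noisy sensors in real time:

```python
from collections import deque

# Toy occupancy grid: S = start, G = goal, # = obstacle. All invented.
GRID = [
    "S..#.",
    ".#.#.",
    ".#...",
    ".#.#.",
    "...#G",
]

def shortest_path_length(grid):
    """Breadth-first search from S to G; returns step count or None."""
    rows, cols = len(grid), len(grid[0])
    start = next((r, c) for r in range(rows)
                 for c in range(cols) if grid[r][c] == "S")
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        (r, c), dist = queue.popleft()
        if grid[r][c] == "G":
            return dist
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None  # no route exists

print(shortest_path_length(GRID))  # -> 8
```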
Human In The Loop Abort (Score:5, Informative)
I once worked on the camera portion of a semi-autonomous weapon which, once a target was designated, would continually analyze the live image to maintain, track and intercept that target. A key part of the system was a human in the loop abort, which would cause the system to veer off target before impact should the operator see something he or she didn't like: not the intended target, high probability of collateral damage, etc.
The point is, all judgements about selecting the target and aborting the mission or changing targets were in the hands of a human. The automated parts were vehicle operations, corrections for terrain and weather, tracking an operator-designated object, etc. — all things that required no risk assessment, moral judgment, ethical considerations, etc.
That's the difference between autonomous and semi-autonomous: A human identifies the target, and monitors the system to issue a stand down order as new information becomes available.
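In rough pseudocode, the division of labor might look like the sketch below. Every name is invented; this illustrates only the human-in-the-loop abort pattern described above, not the actual system:

```python
import queue
from dataclasses import dataclass

# Minimal stand-ins so the sketch runs; all of this is hypothetical.

@dataclass
class Tracker:
    """Stands in for the live-image analysis keeping lock on the
    operator-designated object."""
    def offset_from_target(self):
        return 1.0  # steering correction; no judgment involved

@dataclass
class Vehicle:
    distance: float = 5.0
    def steer(self, correction):
        self.distance = max(0.0, self.distance - correction)
    def at_intercept(self):
        return self.distance == 0.0
    def veer_off(self):
        self.distance = float("inf")

def guidance_loop(tracker, vehicle, abort_channel):
    """Automation tracks and steers; the abort judgment stays human."""
    while not vehicle.at_intercept():
        vehicle.steer(tracker.offset_from_target())
        try:
            if abort_channel.get_nowait() == "ABORT":  # human in the loop
                vehicle.veer_off()
                return "aborted"
        except queue.Empty:
            pass
    return "intercept"

aborts = queue.Queue()
aborts.put("ABORT")  # operator sees something wrong and stands the weapon down
print(guidance_loop(Tracker(), Vehicle(), aborts))  # -> aborted
```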
(It's also the only weapon system I ever worked on, and it caused me great conflict. Though the intended use had merit, the possible unintended uses made me very uncomfortable. No, I can't be more specific.)
Re: (Score:3)
(It's also the only weapon system I ever worked on, and it caused me great conflict. Though the intended use had merit, the possible unintended uses made me very uncomfortable. No, I can't be more specific.)
Shouldn't every weapon have a moral conflict inherent in its use? Whether it is wondering for a fraction of a second if you should pull the trigger of a rifle (am I aiming at a target or a civilian?), or deliberating for a week on whether or not to launch a strike on a compound (good intel, collateral damage, etc.), there should always be a period of reflection and wondering if the weapon needs to be employed. The act of taking a life is not a decision to be taken lightly, and if killing ever becomes second nature, something has gone badly wrong.
Re: (Score:2)
For a second, I thought you were arguing that the moral conflict should be built into the weapon. I envisioned a weapon version of Clippy. "It looks like you are trying to kill someone. Do you want me to help?"
On the plus side, building Clippy into every weapon would ensure that they are never used. (On the minus side, using the weapons as clubs until the weapons were destroyed would increase a thousand fold.)
Re: (Score:2)
Furthermore, trying to draw an artificial line between a present-day cruise missile that gets launched from a ship, flies itself to a set of coordinates, and blows up whatever is there, and a weapon we would call "autonomous" seems arbitrary.
Re: (Score:2)
The problem isn't with a drone that flies to a set of GPS coordinates, drops a bomb, and flies back. It's with a drone that flies to a set of GPS coordinates, waits around until it sees something in the general vicinity it wants to blow up, drops its bomb, and flies back. The issue is with the "something it wants to blow up" part.
Re: (Score:2)
A) Define it clearly enough to include one and exclude the other
B) Make it sufficiently in the interest of all countries to want to do so.
It's B that's really going to be the hard part. Weapons generally don't get banned because they're morally horrifying or repugnant; they get banned because countries come to the conclusion that using them really just isn't worth it, and that we'd be better off agreeing not to do so, EVEN IF SOMEONE ELSE DECIDES TO VIOLATE THAT.
Consider chemical weapons.
Re: (Score:2)
Not terribly different from the non-intelligent weapons we already deploy.
Take mines (both the land and sea variety) for example. A human deployed them (or made the decision to deploy them) and they pretty much just sit around until someone crosses paths with them. At least the autonomous version can have some logic built into it to discriminate among its targets.
Re: (Score:1)
Also note that they know absolutely nothing about fully autonomous machines as weapons, or about Strong AI. I.e., they are 100% incompetent to make an informed decision. The real joke is that such weapons don't even exist yet, so how could they know so much about them?
It's like planning to fight off an alien invasion by looking at old movies.
Re: (Score:2)
I still do not know what is wrong with zone defense automated systems. Sometimes, you WANT segregation as a tactical diplomacy method, and we're to the point of "If it moves and is in the zone, kill it" technology far in excess of the low tech minefields of yesteryear.
Re: (Score:2)
This is a key point. No military in the world is going to want a weapon system that they have zero control over.
Militaries? No.
Powerful despots who want armies who not only won't, but literally can't disobey orders? No matter how incomprehensibly immoral? Oh, very much yes.
Shall we play a game? (Score:5, Interesting)
Putting more complicated logic on a robot armed with machine guns is pretty much the same thing. If you have moral problems with land mines, you probably should have the same problems with killbots. (Also, expect the exact same classes of problems to occur.)
Most civilized countries are realizing that landmines are rather deplorable weapons; it seems odd that they would be OK with robotic weaponry...
Re: (Score:1)
Perhaps the moral equivalent. But a landmine will remain lethal for decades, if not longer, and as far as I know none of the ones that have been deployed can be easily turned off. Nor did those who placed them keep any real record of where they were for retrieval later.
There's little chance of a couple hundred killbots being left in place and active after a conflict ends. And hopefully they won't default into a kill-children-puppies-and-anything-that-moves mode. Plus they won't be as cheap as landmines, so they won't be scattered around in the same numbers.
Re: (Score:2)
A technologically unsophisticated but highly determined attacker might trick, brainwash, or coerce children to approach the sentry gun to disarm or destroy it. Ultimately, one would need to ... yep, you guessed it.
I guessed "use the undeniable propaganda to undermine local support for the enemy, while simultaneously increasing the deployment of such weapons, so the enemy has to waste more time trying to find children to pass through territory, and perhaps deploying a few non-lethal everyone-targeting devices nearby to interrupt the clearing process, since such automated sentries are a system for area denial rather than offensive capabilities, and they aren't really expected to stop anyone indefinitely."
What do I win?
Re: (Score:3)
In some ways, the autonomous weapon is far worse. At least the landmine stays put. Imagine landmines roving randomly around the countryside.
Re: (Score:2)
That would mean they'd have a power source that would be rapidly depleted, rendering the mine inert. Considering the difficulty of effectively hiding such a mobile mine, it'd also be more easily detected, allowing for proper cleanup once the conflict is resolved.
In some ways, a randomly-roving land mine is far better than a stationary one.
Re: (Score:2)
Most civilized countries are realizing that landmines are rather deplorable weapons; it seems odd that they would be OK with robotic weaponry...
It's because landmines have limited value, but robotic weaponry is a game changer. For example, we may be a few decades away from the obsolescence of traditional human-piloted fighter aircraft due to higher cost per seat, lower acceleration tolerance, and possibly slower reaction speeds.
Sure, you can ban the weapons, but then the initiative for their development and use will just go to those who break the rules.
Re: (Score:2)
I personally don't see it as a game changer. Radars are detecting them more easily, and jammers are bringing them down more easily. Iran has brought down quite a few US and Israeli drones.
It is a real moral dilemma having to kill someone, and especially if your life is not in danger. It is that dilemma which is leading to the desire for autonomous systems by people in power. No risk of guys like Manning or Snowden being disgusted with the morality of the situation and dumping information to the public. Immoral politicians will push the button themselves, or tell the immoral military guys they allow to stay on staff to do the work.
Re: (Score:2)
I personally don't see it as a game changer. Radars are detecting them more easily, and jammers are bringing them down more easily. Iran has brought down quite a few US and Israeli drones.
Easier than what? There is nothing else in the role these current drones are being used for.
It is a real moral dilemma having to kill someone, and especially if your life is not in danger. It is that dilemma which is leading to the desire for autonomous systems by people in power. No risk of guys like Manning or Snowden being disgusted with the morality of the situation and dumping information to the public. Immoral politicians will push the button themselves, or tell the immoral military guys they allow to stay on staff to do the work.
And you're telling me that's not a game changer either?
Re: (Score:2)
First part, drones were game changing when they were immune to detection and shutdown. No longer the case.
Second part, no there is nothing new here either. History is full of people holding power trying to use all kinds of tricks to "wipe out those other guys". Drones are no different than aircraft currently. They require a human to pilot and shoot, so morality still gets involved. Autonomous is the push because it breaks that, and I gave the logic for why people holding power want it. You seem to be
Re: (Score:2)
First part, drones were game changing when they were immune to detection and shutdown.
Drones were never immune to detection and shutdown. Nor is that their draw at present.
Drones are no different than aircraft currently.
Aircraft that are many times more expensive than drones and which contain a human pilot.
They require a human to pilot and shoot, so morality still gets involved.
The same reasons that morality gets involved in a weapon system with a human pilot apply to any other weapon system. We see it with landmines, for example. The cost/benefit of remote or autonomous systems is different, but your morality should apply equally.
And humans would still be involved. It's not like they're removed from the loop entirely.
Re: (Score:2)
Drones were never immune to detection and shutdown. Nor is that their draw at present.
BS to both of those. Drones could not be seen or detected, hence they were used as assassination devices. Iran is successfully killing drones; they are no longer immune to detection. As a guess, you are going to attempt to claim that "cost" is the main factor. That is extremely wrong on every possible level. Study up on DOD and military expenses; money has never been an object, ever, in the history of the military.
To the last part, I think we are close to agreeing, except for where you claim autonomous systems wouldn't change anything.
Re: (Score:2)
Drones could not be seen or detected, hence they were used as assassination devices. Iran is successfully killing drones; they are no longer immune to detection.
The US wasn't using drones to assassinate people in Iran. And so what if Iran can do it? It's not the same as someone elsewhere achieving the same feat, particularly without creating a military target in the process. Keep in mind that the US strategy is to always have drones in the air. So it's not that useful to be able to detect drones, because you will always be able to detect drones. Merely detecting drones tells you nothing about whether the controllers of those drones know enough to commit an effective strike.
Re: (Score:2)
A land mine is not autonomous any more than a hole covered with leaves and a sharp stick at the bottom is "autonomous". A land mine is a mechanism, a trigger, which will do one thing if acted upon, i.e. if stepped on. The landmine will not suddenly move on its own, or decide that it will not explode if the person stepping on it isn't an adult, etc.
Autonomy implies the capability of a weapon to affect its own behavior: to sense its surroundings and decide for itself how to act.
Drop the term (Score:2)
Giving a clear-cut definition of "autonomy" that covers all its uses is downright impossible. Authors in engineering argue that the term is at least context-dependent (things are autonomous with respect to a task, an environment, etc.). Perhaps the best way here is to stop using "autonomy" and invent new terms.
Semi-autonomous robots?! (Score:2)
I KNEW it!
The Illuminati are controlling our new robot Overlords!
If you're killing people, you're doing it wrong. (Score:1)
But you want an end run around that, don't you?
Every time there is a better weapon... (Score:3, Insightful)
Every time there is a better weapon, someone will seek to ban it. It started at least as long ago as the 12th century, when Pope Innocent II banned the use of crossbows (1139).
It is futile... And, with the particular example of precision weapons, it is also foolishly immoral — because precision helps reduce fatalities. If you no longer need to flatten the village to destroy an artillery battery, or demolish a high-rise to get that sniper, you kill fewer bystanders and cause less mayhem...
Re: (Score:2)
Extending your line of thought, the atomic bombing of Hiroshima and Nagasaki was morally justified because the collateral deaths of the innocents in those cities caused the Japanese to surrender to the Allies, thus ending the war and limiting further casualties. My grandfather supported those bombings using the same line of thinking. I'm not so sure it was justified, though.
The Japanese were training school children and the elderly to go down and defend possible landing beaches with spears. The Japanese military tried to stage a coup to depose the Emperor and any in the pro-peace faction before the bombs. The Japanese military elite was willing to watch the whole country burn and every last Japanese citizen killed all so they wouldn't have the shame of having to surrender. They were necessary.
Re: (Score:1)
Children with spears and broomsticks against trained soldiers with aerial bombardment, tanks, machine guns, flamethrowers, etc. No wonder we were so afraid of going in. The real reason for the hurry at the end of WWII had less to do with Japan and a lot more to do with the Russians.
Re: (Score:2)
Counterpoint: the more "safe" and sure a weapon is perceived to be, the more likely it is to be used.
You'll notice over the past forty years the US has been moving the moral repercussions of warfare further and further from public view. When you could be drafted to go kill foreigners and maybe get killed yourself, moral outrage was high. Protests in the streets, the burning of draft cards, fleeing the country. So they moved from a draft to an all-volunteer army. Now when soldiers die, or get PTSD, well, the public barely has to notice.
Re: (Score:2)
Except there was none of that during the Korean War just a few years earlier, when weapons were worse, nor during WW2 even earlier.
No, the protests you are alluding to were due simply to the enemy action [wikipedia.org] and little else.
Your premise is wrong — the US, for better or worse, still fights plenty of wars. They are just far less devastating for both sides — because we have better weapons.
Re: (Score:2)
Your premise is wrong — the US, for better or worse, still fights plenty of wars. They are just far less devastating for both sides — because we have better weapons.
Well that's kind of my point. Do the better weapons mean they're more likely to be used? When there's a conflict with a foreign party (over anything. Resources, ideology, whatever) you have many options, with different trade-offs. You've got economic, diplomatic, or military solutions to the problem. And the "cost" of an option is of course not just measured in dollars, but political capital at home, diplomatic credibility abroad, etc.
But better weapons make the "cost" of choosing a military option lower, and thus make that option more likely to be chosen.
Re: (Score:2)
Yes, I understood your question — and the answer is "No". The US is not demonstrably more/less eager to enter into a shooting war now, than it was during the 20th century, for example.
The protests against the Iraq war were the largest ever [time.com] — public "scrutiny" (or hysteria, rather) was immense. We went in anyway.
Re: (Score:1)
That's an argument FOR fully autonomous weapons, not against them.
Booby traps? (Score:2)
If I booby trap my house to kill intruders, is that autonomous?
Re: (Score:2)
Based on the definition in the article, it would seem "no" would be the answer, since no target recognition or decision-making is occurring. The government's definitions are roughly that "semi-autonomous machines" are instructed to engage a target and can then do so without subsequent human interaction, whereas "autonomous machines" are those that are simply let loose and make their own decisions about who or what to engage. Currently, no one has autonomous machines, based on those definitions, though we'll probably get there before long.
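Boiled down to code, the distinction is about where target selection happens. A toy sketch of my reading of those definitions (all names and numbers invented; not official policy):

```python
from dataclasses import dataclass

@dataclass
class Contact:
    designation: str
    hostile_score: float  # output of some hypothetical classifier

def semi_autonomous_engage(human_designated: Contact) -> str:
    """Semi-autonomous: a human nominated this specific target;
    the machine only carries out the engagement."""
    return f"engaging {human_designated.designation} (human-selected)"

def autonomous_engage(sensed: list, threshold: float = 0.9) -> list:
    """Autonomous: the machine itself decides who or what is a target."""
    return [f"engaging {c.designation} (machine-selected)"
            for c in sensed if c.hostile_score >= threshold]

contacts = [Contact("contact-A", 0.95), Contact("contact-B", 0.40)]
print(semi_autonomous_engage(contacts[0]))
print(autonomous_engage(contacts))
```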
Re: (Score:1)
If I booby trap my house to kill intruders, is that autonomous?
The Engineer's answer is yes, most certainly!
It is a "horrible example" of everything that is dangerous about autonomous weapons, to the extent of having no target recognition at all.
That opens the question: Do the existing laws banning mines, in at least some areas, also ban robot weapons?
I would think so...
Different problem (Score:2)
I think the issue isn't really autonomous robots. The problem is the declared and clearly defined battlefield. Inside the battlefield, autonomous and semi-autonomous systems are already at work; there is not much you can do about that. Ships, for example, have anti-missile systems that are completely autonomous. And decisions to kill or not to kill are often made on the spot, quickly. Humans err a lot in these situations, leading to lots of horrible mistakes.
Outside the declared battlefield, e.g. around the world in undeclared conflicts, it is a very different story.
"Let there be light" (Score:2)
Why does this topic remind me of Dark Star [wikipedia.org]?
In particular this bit https://www.youtube.com/watch?... [youtube.com]
The majority of media coverage of this topic is just bullshit.
Do Weapons Developers and Lawmakers Read? (Score:1)
Hasn't anyone developing these weapons read any science fiction? Is Fred Saberhagen so far out of vogue that no one has read *any* of the Berserker novels or stories?
How about Philip K. Dick? He's been pretty popular with Hollywood recently, and his story "Second Variety" was not only about this very thing, but was made into a movie starring Peter Weller called "Screamers". You can read it for free via Project Gutenberg: http://www.gutenberg.org/ebook... [gutenberg.org]