US Navy Wants Smart Robots With Morals, Ethics 165
coondoggie writes: "The U.S. Office of Naval Research this week offered a $7.5m grant to university researchers to develop robots with autonomous moral reasoning ability. While the idea of robots making their own ethical decisions smacks of SkyNet — the science-fiction artificial intelligence system featured prominently in the Terminator films — the Navy says that it envisions such systems having extensive use in first-response, search-and-rescue missions, or medical applications. One possible scenario: 'A robot medic responsible for helping wounded soldiers is ordered to transport urgently needed medication to a nearby field hospital. En route, it encounters a Marine with a fractured leg. Should the robot abort the mission to assist the injured? Will it? If the machine stops, a new set of questions arises. The robot assesses the soldier’s physical state and determines that unless it applies traction, internal bleeding in the soldier's thigh could prove fatal. However, applying traction will cause intense pain. Is the robot morally permitted to cause the soldier pain, even if it’s for the soldier’s well-being?'"
Up to 11 (Score:1)
If the enemy is more injured, should it switch sides and help them instead?
Re: (Score:2)
Re: (Score:2)
OXYMORON ALERT: "Military Morality"
Re: (Score:3)
Not at all. Every institution needs some kind of morality to guide its actions. One can debate whether a particular institution should exist; at our current level of development it seems unlikely we could do without a military, and even a completely peaceful world would still need personnel and equipment for difficult or dangerous missions such as search and rescue. But once an institution does exist, it needs rules about what's desirable or acceptable and what's not.
Re:Up to 11 (Score:4, Insightful)
It's funny, because since WWII the army has worked to get kill rates up. In WWII only 15% of soldiers shot to kill, but the army brainwashes them so that 90% do. Moral. Killers. Can't have both.
And Moral and Ethical for the NSA? LMAO.
Re: (Score:2)
In WWII only 15% of soldiers shot to kill
What kind of nonsense is that? Ever since the introduction of the rifled barrel, soldiers have been shooting to kill. (Well, those who could aim. The other ones - and their ancestors using smoothbore muskets - were obviously shooting to miss.)
Re: (Score:2)
APK, you are a fucking psycho. Enough with the stalking, already. And no, I'm not K.S. Kyosuke, nor do I have anything to do with him.
Re: (Score:2)
" In WWII only 15% of soldiers shot to kill, but they the army brainwashes them so that 90% kill"
Citation needed
Anyway, in war the soldier's (and Marine's, since the subject was the Navy) first task is to prevent the enemy from killing him. If that can be accomplished by disabling the enemy, then that's OK, but shooting him in the leg may not prevent him from firing back. A head shot may miss (unless the soldier is a marksman or sniper), so a center-body shot (i.e. the chest) is preferable; even if it's not immediately fatal its
Re: (Score:1)
http://en.wikipedia.org/wiki/On_Killing [wikipedia.org]
"The book is based on SLA Marshall's studies from World War II, which proposed that contrary to popular perception,[1] the majority of soldiers in war do not ever fire their weapons and that this is due to an innate resistance to killing. Based on Marshall's studies the military instituted training measures to break down this resistance and successfully raised soldier's firing rates to over ninety percent during the war in Vietnam."
Re: (Score:3)
Actually, with the invention of the NATO round, bullets are designed to maim instead of kill. This way, one bullet can take out 2 or 3 people from the immediate "action": one to get shot, and up to two to carry the wounded soldier to safety.
Re: (Score:2)
In World War 2, the fighter pilots had a sign that said, "A pilot's first mission is to see the bombers get home."
The new commander saw this and was appalled. He had the sign changed: "A pilot's first mission is to kill enemy fighters." His adjutant openly wept when he saw the sign change, because he understood it meant they would stop losing so many fighter pilots.
(from "The Aviators"-- great book on Rickenbacker, Lindberg*, and James Doolittle)
I think what the parent poster is trying to say is that 15% of sol
Re: (Score:2)
IIRC, Lindbergh was not a fighter pilot in WWI. Rickenbacker was.
Re: (Score:2)
You are correct. My mind can jumble things and I read the book a few months ago.
He did go on 50+ missions (over the resistance of commanding officers).
He also had a ton of flying experience, including multiple near-fatal crashes in air mail planes.
Thanks for the correction! Hate the way my mind mangles things sometimes.
It was a great book tho.
Re: (Score:2)
Yep. I read about his missions fairly recently. IIRC he was there to teach the kids how to stretch fuel for long trips.
Re: (Score:2)
In "the aviators", it is presented as not his assigned mission but while there he figured out a way to extend the range of the Lightnings by 300 miles which made a huge difference in the reach of the air forces.
Re: (Score:2)
My recollection was that that was the one and only reason for his presence.
And it was Lightnings, but I think he may have taught some Corsair pilots as well.
And, so I looked it up.
According to this, it was Corsairs first
http://www.charleslindbergh.co... [charleslindbergh.com]
And this seconds it
http://www.eyewitnesstohistory... [eyewitnesstohistory.com]
and this says that it was Corsairs, but the issue he solved was taking off with large bomb loads (the Corsair was designed as a fighter, but was in use with the Marines as the Navy didn't like its landing c
Yes. That's the zeroth law (Score:3)
The problem is not putting morality into machines. The problem is letting these machines execute this "morality" in a complex environment with life-or-death stakes.
I program protective systems in factories. A guy opens a monitored gate and walks into a conveyor area. If the conveyor runs while he's in there, he will have a messy and painful death. The conveyor "knows" it's "wrong" to move under those conditions.
We don't use the word "morality", we say "safety". When auditing the software that lets the
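A minimal sketch of that kind of permissive logic, written in Python rather than PLC ladder logic; the sensor names and inputs are illustrative, not taken from any real system:

```python
# Simplified safety interlock: the conveyor is only allowed to run when
# the monitored gate is closed AND no one is detected in the area.
# Any uncertainty (sensor fault, missing reading) must resolve to False:
# the safe state is "do not move".

def conveyor_run_permitted(gate_closed: bool, area_clear: bool) -> bool:
    """Return True only when every safety condition is satisfied."""
    return gate_closed and area_clear

# Gate opened while someone is inside -> motion is inhibited.
assert conveyor_run_permitted(gate_closed=False, area_clear=False) is False
assert conveyor_run_permitted(gate_closed=True, area_clear=True) is True
```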
Re: (Score:2)
Friendly Medical Robot (FMR): "OK, scans indicate you have a broken leg and a 4-inch cut to your abdomen. You will most likely die in 4.23 hours from infection if not treated within the next 1.01 hours."
(EM): "HELP ME!"
(FMR): "I have already notified Command of your condition. A Peace Keeper Drone will be here to monitor your actions in 3.02 minutes. Your options for help, from me are to verbally surrender. Should you attack me, I will self destruct; while holding the closest body
Re: (Score:2)
I have trouble suspending my disbelief with your scenario. You left out the banner advertisements for body armor, life insurance, and the new, improved enemy detection app, now available on the Samsung Galaxy X12.
Re: (Score:2)
Re: (Score:2)
Why all the hate?
I for one, welcome our new robotic, military theocracy overlords!
Ethics and Morals ? (Score:1)
No, they don't.
If anything, they want a robot with a particular set of morals and ethics. If it really had morals and ethics, it would refuse to kill humans, terrorists or not, who have no chance to defend themselves against such a machine.
But then again, I think of drone attacks (by people who, sitting in their comfy chairs far, far away, are not exposed to any kind of risk) as even more cowardly than the acts of snipers picking off unsuspecting targe
Re: (Score:3)
What they want is a robot that will not embarrass them, but that will do their killing for them. I want a pony, but I can't have one. The situation here is similar. Coding up a robot that makes ethical choices is so far beyond the state of the art that it's laughable. Sort of like the story the other day of the self-driving car that decides who to kill and who to save in an accident.
When will they figure out that what you really need is a robot that will walk into the insurgent's house, wrestle the
Re: Ethics and Morals ? (Score:2, Flamebait)
Snipers are cowardly? What the actual fuck.
Here's the thing. War isn't very nice. In war the objective is to stop the enemy from resisting your movements. There are lots of ways of doing this, but the best way is by killing them. In order to do this, you want to kill as many of them as necessary, while getting as few of your own guys killed as possible. This is, distilled down to its purest essence, war.
So it's not cowardly to snipe from a rooftop, drop bombs from 50,000 feet, or launch Hellfires from a continent away.
Re: (Score:3)
>There are lots of ways of doing this, but the best way is by killing them
Correction: The most effective way is killing them. There's a difference. In a real war it should always be remembered that the folks shooting back at you are just a bunch of schmucks following orders, just like you. The actual enemy is a bunch of politicians vying for power who have never set foot anywhere near an active battlefield. And not necessarily the ones giving orders to the *other* side.
Re: (Score:3)
I think what the parent means to say is that, in a war created by politicians, it should be fought by politicians. My Prime minister doesn't like your president. Ok. Grudge match! Stick em both in a ring and let them fight it out. First blood, till death, whatever. Doesn't matter. Or perhaps a forfeiture of that leader's assets should be on the line. Hit em where it hurts. You lose, you retire and lose the entirety of your assets to the victor.
Point being... leave the rest of us out of it.
Re: (Score:2)
You only care about body count, or spectacular victories ("let's put the fear of God into them"), when you don't know what you're imposing, if anything, or on whom you're imposing it. Then body count becomes the only measure of progress that you can use. It's like you're fighting a war either because you can, or because you don't know what else to do...
Besides, what made Red Army r
Re: (Score:2)
Says the guy posting as an Anonymous Coward.
Re: (Score:2)
So, "tooth and claw" only?
The moment you pick up a stick to give yourself an edge over the enemy, you have put yourself somewhere on that spectrum.
Re: (Score:2)
Re: (Score:2)
Humans Can Not (Score:5, Insightful)
Re:Humans Can Not (Score:5, Insightful)
Would the robot shoot a US commander who is about to bomb a village of men, women, and children?
The US Navy doesn't want robots with morals; they want robots that do as they're told.
Country A makes robots with morals, Country B makes robots without morals - all else being equal the robots without morals would win. Killer robots are worse than landmines and should be banned and any country making them should be completely embargoed.
Re: Humans Can Not (Score:2)
Killer robots allow conflicts to be resolved without sacrifice.
Re: (Score:2)
Dream on.
Re: Humans Can Not (Score:5, Interesting)
Killer robots allow conflicts to be resolved without sacrifice.
A conflict without a risk of sacrifice is slaughter. Only idiots would want that.
We even have casualties in our never-ending war against trees (aka logging).
Re: (Score:2)
No, slaughter is indiscriminate killing. Reducing casualties is a definite move away from that. While attaching a cost to war is one way of prohibiting it -- hence the success of M.A.D. -- the problem is someday you do wind up having to pay that cost. Overall it's better to reduce the cost than trying to make it as frightful as you can.
But if soldiers can be made obsolete, perhaps killing people can be made obsolete as well. Just as women and children have sometimes enjoyed a certain immunity for not be
Re: (Score:2)
Killer robots allow conflicts to be resolved without sacrifice.
If you think they won't be turned against you, Educate Yourself. [theguardian.com] Anti-activism is really the only reason to use automated drones: they can be programmed never to disobey orders, even orders to murder friendly people. Seriously, humans are cheaper, more plentiful, more versatile, etc. Energy resupply demands must be met any way you look at it. Unmanned drones with human operators just allow one person to do more killing -- if the lead drone of the pack gets killed, the operator switches to the next unharmed unit.
Re: (Score:2)
Re: (Score:2)
Far more likely, if that's what they intend, is a network of spiderbot mines. Making a whole expensive robot capable of surviving a simple mine is extremely difficult; incorporating sufficient intelligence in a single robot to interpret human interactions and respond acceptably is also extremely difficult. Creating a creeping, crawling mobile carpet of networked mines that share information back to a controller and decide where to go and when it is appropriate to detonate is far simpler, especially when they ca
Re: (Score:2)
Any robot that can help a wounded person could easily be re-purposed to fire weaponry instead of administering first aid -- especially if it can do injections.
And it's pretty much guaranteed that they will be coerced [schlockmercenary.com] as demands dictate.
Re: (Score:2)
Would the robot shoot a US commander who is about to bomb a village of men, women, and children?
The US Navy doesn't want robots with morals; they want robots that do as they're told.
Country A makes robots with morals, Country B makes robots without morals - all else being equal the robots without morals would win. Killer robots are worse than landmines and should be banned and any country making them should be completely embargoed.
Wars are as much political conflicts as anything else, so acting in a moral fashion, or at a bare minimum appearing to do so, is vital to winning the war. Predator drones are a perfect example of this. In terms of the cold calculus of human lives, they are probably a good thing: they are highly precise, minimize American casualties, and probably minimize civilian casualties compared to more conventional methods like sending in bombers, tanks, platoons, etc. That's cold comfort if your family is slaught
Re: (Score:2)
Killer robots are worse than landmines and should be banned and any country making them should be completely embargoed.
No.
Landmines are killer robots. And we're inventing much better killer robots.
Re: (Score:2)
Country B's killer robot, now with new and improved No-Moral(TM): Oooooh, an embargo. I'm sooooo scared. Guess I'd better not kill anyone anymore, the big bad embargo might tell me to talk to the hand at the border, or even worse, write me an angry letter.
Re: (Score:3)
Country A makes robots with morals, Country B makes robots without morals - all else being equal the robots without morals would win.
Except, will all else be equal? What are morals, from a robotic point of view? Higher-order thinking? Greater awareness of consequences? Whichever way you slice it, robots with morals will by definition need to be smarter than robots without morals - and that intelligence may well be applicable to the art of war.
I'm reminded of the Bolos [wikipedia.org], fictional military AIs which developed sentience and morals due to the simple military necessity of having to keep making them smarter - not only to effectively counter the
Re: (Score:2)
Which is going to do the 'enemy' more harm,
A) A robot that doesn't maim innocent civilians
or
B) A robot that harms medics, engineers, road builders, gas, water, electric, and comms workers, shopkeepers, delivery people, car mechanics, children (because they will grow up to be soldiers, etc.)... almost all people, robots, and infrastructure.
Re: (Score:2)
That depends. Do your child-killer robots wait to start shooting until they've reached my country? You didn't teach them morals, after all, and your country's children might grow up to be rebels....
So while your amoral robots are shooting children (in whichever country), my moral robots are shooting your amoral robots. Meanwhile, your populace - along with the rest of the world - is turning against you due to my widely distributing the HD videos my robots took of your atrocities.
Unless of course your amora
Re: (Score:2)
Practically, I suspect it will be more like a chess engine: you give everything a number of points and the robot tries to maximize its "score". How do you assign points? Well, you can start with current combat medic guidelines, then run some simulations asking real medics what they'd do in such a situation. You don't need the one true answer; you're just looking for "in this situation, 70% of combat medics would do A and 30% B, let's go with A". I suspect combat simulations will be the same, you assess the risk o
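A minimal sketch of that scoring idea, with made-up action names and weights purely for illustration; in practice the weights would come from guidelines and surveyed medics:

```python
# Illustrative "chess engine" style scoring: each candidate action gets a
# numeric score and the robot simply picks the maximum. The weights below
# stand in for whatever calibration the guidelines and medic surveys produce.

candidate_actions = {
    "continue_mission": 0.30,   # deliver medication to the field hospital
    "treat_fracture":   0.70,   # stop and apply traction to the Marine
}

def choose_action(scores: dict[str, float]) -> str:
    """Pick the highest-scoring action (ties broken arbitrarily)."""
    return max(scores, key=scores.get)

print(choose_action(candidate_actions))  # -> "treat_fracture"
```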
Re: (Score:2, Insightful)
People in the US think too much about killing. It's as if you don't understand that killing is a savage thing to do. Maybe it's the omnipresence of guns in your society, maybe it's your defense budget, but you can't seem to stop thinking about killing. That's an influence on your way of problem-solving. Killing someone always seems to be a welcome option. So final, so definite. Who could resist?
Re: (Score:3)
Wish I could mod this up. This is _the_ problem that needs to be dealt with. Taking the "Easy" way out.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
For example would a moral robot have refused to function in the Vietnam War?
The decision whether to fight in the Vietnam War is political. A robot does not have a vote, so should not participate in politics at all.
Would a drone take out an enemy in Somalia knowing that that terrorist was a US citizen?
If the enemy is judged to be seriously threatening US interests, the drone should take him out, just as a police officer would take out a dangerous criminal.
How many innocent deaths are permissible if a valuable target can be destroyed?
In this case the drone should weigh human lives against other human lives. Can it be estimated how many human lives are at risk if the valuable target remains intact?
what they should want (Score:5, Insightful)
US armed forces should want leaders with morals and ethics, instead of the usual bunch that sends them to die based on lies (I'm looking at you, Cheney, you bastard).
Re: (Score:2, Flamebait)
Why don't you ask Ambassador Chris Stevens, Sean Smith, Tyrone Woods and Glen Doherty about that........
Re: (Score:3)
Sorry, the media is too busy trying to make sure that those stories remain buried. After all, it can't embarrass Obama or Holder, or shine any light on his administration. That would be racist, or show that they're political hacks who are actively supporting a corrupt administration and playing political favorites. Never mind that the IRS targeting of conservative groups also falls into this, and was done at the behest of a high-ranking Democrat. [dailycaller.com]
Re: (Score:2)
That seems to work very well for al-Qaeda, some of whose leaders have very clear moral principles involving national autonomy and religiously based morals and ethics. Simply having strong "morals and ethics" is not enough.
I could not think of more boring questions (Score:4, Insightful)
Every single one comes down to "do I value rule X or rule Y more highly?" Who gives a shit. Morals are things we've created ourselves, you can't dig them up or pluck them off trees, so it all comes down to opinion, and opinions are like assholes: everyone's asshole is a product of the culture it grew up in.
This is going to come down to a committee deciding how a robot should respond in which situation, and depending on who on the committee has the most clout it's going to implement a system of ethics that already exists, whether it's utilitarianism, virtue ethics, Christianity, Taoism, whatever.
Re: (Score:2)
and opinions are like assholes: everyone's asshole is a product of the culture it grew up in.
Did you grow up with NAMBLA or something? ;)
it's going to implement a system of ethics that already exists, whether it's utilitarianism, virtue ethics, Christianity, Taoism, whatever.
And it'll be the PR mode, used for fighting "just wars" against vastly inferior military forces. In an actual major conflict a software update would quickly patch them to "total war" mode where the goal is victory at all costs. No matter what you do they'll never have morality as such, just restrictions on their actions that can be lifted at any moment.
Re: (Score:2)
Every single one comes down to "do I value rule X or rule Y more highly?" Who gives a shit. Morals are things we've created ourselves, you can't dig them up or pluck them off trees, so it all comes down to opinion, and opinions are like assholes:
Meh. There's always this rant about "opinion" vs. "fact" or whatever when the topic of ethics comes up.
Here's a better way of thinking about it: war (and life, for that matter) is about finding a pragmatic strategy toward success and achieving your goals.
Morality is not arbitrary in that respect. Different strategies in setting up moral systems will produce different results. For example, suppose we don't have treaties against torture, executing prisoners of war arbitrarily, killing non-combatants an
Comment removed (Score:5, Insightful)
Morals and Ethics? (Score:5, Funny)
It would be great if they could develop a politician with morals and ethics........but I doubt even the Pentagon's budget would be big enough...........
Re: (Score:2)
I'd vote for a machine. An incorruptible unfeeling machine programmed only for the good of the entire country it's in charge of. No personal agendas, no ambition, just saving the economy and the wellbeing of its citizens.
One day, one day
Why can't they do what humans do and... (Score:3)
If they calculate that you can't be helped and must be left to die, just say, "Sorry, I've been given specific orders to do X, so I can't help you."
All of this 'ethical debate' surrounding robots that can make life-or-death decisions has absolutely nothing to do with technology, or AI, or any issue that can be resolved technically at all. All it boils down to is that people are mad that they can't hurt a robot that has hurt them. See, before machine intelligence we had a pretty sweet system: when a human being commits a crime, we stick them in prison. It doesn't feel good to be in prison, therefore this is "justice." But until robots can feel pain or fear or have a self-preservation instinct, prison (or, hell, even the death sentence) wouldn't affect them at all. And that's what drives people nuts: technology has shown us that beings can exist that are smart enough to make life-or-death decisions, but that lack any concept of pain or suffering, and if they do something bad there's no way we can PUNISH them.
Another joke from US gov (Score:2)
Given how "moral" and "just" US govt is when pursuing such atrocities as Obama's drone campaign, funding and arming islamist fundamentalists in Syria, supporting and funding neo-nazis in Ukraine, murdering millions of people all over Middle East etc., I'd rather have anything but military robots with US government "ethics" onboard. They just want fully autonomous killing machines without human conscience standing in the way. Maybe they're running out of drone operators eager to blindly follow murdering orde
Sounds dangerous (Score:1)
Re-inventing the human (Score:1)
What's the point of re-inventing the human to that level? If the robot has to be so self-aware as to be moral and to compute ethics, then it starts a new debate of ethics: should we humans be ready to sacrifice, or put at risk, counterparts which are so self-aware? You will only complicate stuff... I guess PETOR = People for the Ethical Treatment Of Robots will form even before the first prototype.
Also, even if you think practically, if you can have robots which are so self-aware, why have other soldiers at al
Just give me the chassis I'll get the 7.5 million. (Score:3)
The chassis is the hard part, not the ethics. The ethics are dead simple. This doesn't even require a neural net. Weighted decision trees are such stupidly easy-to-program AIs that we are already using them in video games.
To build the AI I'll just train OpenCV to pattern-match wounded soldiers in a pixel field. Weight "help wounded" above "navigate to next waypoint", aaaaand, done. You can even have a "top priority" version of each command in case you need it to ignore the wounded to deliver evacuation orders, or whatever: "navigate to next waypoint, at any cost". Protip: this is why you should be against unmanned robotics (drones): we already have the tech to replace the human pilots, and machine ethics circuits can be overridden. Soldiers will not typically massacre their own people, but automated drone AI will. Even if you could impart human-level sentience to these machines, there's no way to prevent your overlords from inserting a dumb fallback mode with instructions like: Kill all Humans. I call it "Red Dress Syndrome" after the girl in the red dress in The Matrix. [youtube.com]
We've been doing "ethics" like this for decades. Ethics are just a special case of weighted priority systems. That's not even remotely difficult. What's difficult is getting the AI to identify entity patterns on its own, learn what actions are appropriate, and come up with its own prioritized plan of action. Following orders is a solved problem, even with contingency logic. I hate to say it, but folks sound like idiots when they discuss machine intelligence nowadays. Actually, that's a lie. I love pointing out when humans are blithering idiots.
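A minimal sketch of the weighted-priority scheme described above; the task names, weights, and the "at any cost" override flag are assumptions for illustration, and the perception step (e.g. a vision model flagging a casualty) is reduced to a pre-set interrupt:

```python
# Weighted priority list with an override: normally "help wounded" outranks
# "navigate to next waypoint", but an order marked top-priority suppresses
# lower-weight interrupts entirely.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    weight: float
    override: bool = False  # "at any cost": ignore lower-weight interrupts

def next_task(active_order: Task, interrupts: list[Task]) -> Task:
    """Return the task to execute this cycle."""
    if active_order.override:
        return active_order
    candidates = [active_order] + interrupts
    return max(candidates, key=lambda t: t.weight)

waypoint = Task("navigate_to_waypoint", weight=0.5)
wounded = Task("help_wounded", weight=0.9)  # e.g. set when vision flags a casualty

print(next_task(waypoint, [wounded]).name)                    # help_wounded
print(next_task(Task("navigate_to_waypoint", 0.5, override=True),
                [wounded]).name)                              # navigate_to_waypoint
```

Which is exactly the point the comment makes: the "ethics" live in the weights and the override, and whoever ships the software controls both.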
Right (Score:5, Insightful)
Navy says that it envisions such systems having extensive use in first-response, search-and-rescue missions, or medical applications.
Just like drones were first used for intelligence gathering, search and rescue and communications relays.
Re: (Score:2)
Just like drones were first used for intelligence gathering, search and rescue and communications relays.
And still are. What's your point? Tools are tools.
Communication!! (Score:1)
It's been covered (Score:2)
Most recently, check out the May 15 Colbert Report. He skewers the concept of military morality pretty well.
Then, take a trip in the wayback machine to another machine-orchestrated conflict [wikipedia.org] .
Re: (Score:2)
Most recently, check out the May 15 Colbert Report. He skewers the concept of military morality pretty well.
The individual video segment [cc.com] is available to watch directly -- it's relevant, funny, and even oddly poignant.
Skipping mere "technical problems" (Score:2)
Since it's all conjecture, really fiction, let's drop back to Asimov for a moment.
1 - A robot may not harm a human being, or through inaction allow a human being to come to harm.
What is a "human being"? Is it a torso with 2 arms, 2 legs, and a head? How do you differentiate that from a manniquin, a crash-test dummy, or a "terrorist decoy"? What about an amputee missing one or more of those limbs? So maybe we're down to the torso and head?? What about one of those neck-injury patients with a halo suppor
Re: (Score:2)
1 - A robot may not harm a human being, or through inaction allow a human being to come to harm.
The contradiction in that sentence makes the whole rule worthless. Suppose the robot knows an aircraft has been hijacked and is being flown toward a building full of people. It can't shoot down the aircraft, but not shooting it down means other people are harmed. This was a real-life scenario on 9/11: the fourth plane was headed toward Washington, but it would not have gotten there, because an armed fighter jet was already on the way to intercept it.
Re: (Score:2)
Issues like this are why Asimov sold a lot of books, and why the Three Laws come up whenever robots are discussed. He came up with a reasonable, minimal code of conduct, and then explored what could possibly go wrong.
I don't remember him writing about your type of situation, which is rather odd when you think about it, because that scenario is rather obvious. But his stories often lived in the cracks where it was really hard to apply the Three Laws. Two examples that come to mind, off the top of my head
Re: (Score:1)
It can't shoot down the aircraft, but not shooting it down means other people are harmed.
This is why the ends don't justify the means. As soon as someone says "the ends justify the means" it gives everyone an excuse to use any solution to a problem, even if it isn't the best solution. An intelligent robot would figure out some way to stop the plane without killing anyone.
War 101 (Score:2)
A vast majority of the local population is on your side, as they are 'your' people. Any outsider is shunned, reported, and dealt with. You win over time.
A small majority of the local population is on your side, as they see your forces as the lesser evil. Any outsider is shunned, reported, and dealt with to keep the peace. You hold and hope for a political change.
A small portion of the local population is on your side, as they see your forces as the lesser evil.
joshua (Score:2)
Let's play global thermonuclear war
My late father talked about this a lot (Score:3)
He had a formula: V = D × N × T
V is value
D is degree, i.e. how happy or unhappy somebody is
N is number, the number of people
T is time, how long they were affected
Morality is very tricky, but objective attempts to quantify and make optimal decisions cannot be a step in the wrong direction. Maybe well programmed machines will help improve human behavior.
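A minimal sketch of how that formula might be used to compare two options, with made-up numbers purely for illustration:

```python
# V = D * N * T: value = degree of (un)happiness x number of people x duration.
# Negative degree means suffering; the "better" option is the one with higher V.

def value(degree: float, number: int, time_affected: float) -> float:
    return degree * number * time_affected

options = {
    "apply_traction": value(degree=-3, number=1, time_affected=1),    # brief intense pain
    "do_nothing":     value(degree=-10, number=1, time_affected=50),  # likely fatal bleeding
}
print(max(options, key=options.get))  # -> "apply_traction"
```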
To all politicians... (Score:1)
"US Navy Wants Smart Robots With Morals, Ethics"
To all politicians...be afraid. Be very afraid.
Re: (Score:2)
Naw, as R Daneel Olivaw said: justice is "That which exists when all laws are enforced." No need for the politicians to be worried, except about how they phrase the laws. Just have to make sure that "but some humans are more equal than others" is slipped in to the middle of some 900-page agricultural subsidy act.
Far far future (Score:2)
Let's just bypass all the Slashtards saying "heh heh, the US military doesn't have any ethics anyway" and ask a more fundamental question:
Have you ever seen a robot medic that can treat a wounded person at all without a human micromanaging its every move? Even in a hospital or another non-military situation? Have you ever seen a robot that can vacuum a floor *and* can put small objects aside, use an attachment to reach under narrow spaces, and follow instructions like "stay off my antique rug"? Have you
If I Made a Robot with Ethics (Score:2)
Premise (Score:1)
Start at the top (Score:2)
Let's have everyone from the Joint Chiefs down examine morals from a human rights perspective. War is immoral from the get-go.
Re: (Score:2)
OK, now we've decided that and decided not to fight any more wars. Oops, we're now being run by a repressive dictatorship without the same moral hangups. Life sucks.
First things first (Score:1)
Re: (Score:2)
we have, as such concepts are entirely subjective and matters of opinion.
How about less-ironic robots? (Score:2)
http://www.pdfernhout.net/reco... [pdfernhout.net]
"Military robots like drones are ironic because they are created essentially to force humans to work like robots in an industrialized social order. Why not just create industrial robots to do the work instead?"
That said, sure, I've always liked Isaac Asimov's three laws of robotics. He explores how they work and how they don't work. Asimov came from a strong Jewish religious tradition, and it seems to me likely that aspects of religion influenced his thoughts on them. A big part of
I'm sorry, Dave, I can't let you launch missiles. (Score:2)
Things like nuclear war have come up before, and they're usually attributed to human error.
3 laws OS (Score:1)
machines are extensions of humans (Score:2)
the problem is that this approach omits the human from the beginning...
human medics would face the *exact same* factors in this decision situation...how would humans decide???
oh man...
I usually love it when mainstream culture learns more about tech, smarter customers make my business work better, but having to listen to 1000 idiot "ethicists" and whatnot running their head-holes ad infinitum about the "implications" and I just...arrrggghhh!!!
machines are programmed by humans to execute instructions
you can c
Read Asimov (Score:2)
Before the Navy goes this route, they need to sit down and read the short story collection "I, Robot" by Isaac Asimov. Everyone's familiar with Asimov's 3 Laws of Robotics. They seem reasonable. Yet every story in that collection (and in fact most of Asimov's robot stories) is about how the 3 Laws fail in practice. If you want to try doing a better job of writing ethical rules for robots than Isaac, you'd better be familiar with how to work through all the ways those rules can backfire on you. For instanc
From the summary, the approach is wrong (Score:1)
[And who would ever read TFA; we are in /. !]
Reading the summary, I gather the usual dross that AI has been offering over the last two generations: a pre-programmed decision tree instead of an instance of real ethics, morality, or thought. The whole scenario does not sound like the US Navy would get anything close to an autonomous apparatus to be sent out into the field, gather information, learn and improve from it, and take reasonable decisions based on a full analysis of the underlying facts. It rather rea
Game of Thrones quote (Score:3)
”So many vows. They make you swear and swear. Defend the king. Obey the king. Obey your father. Protect the innocent. Defend the weak. What if your father despises the king? What if the king massacres the innocent?” - Jaime Lannister
SkyNOT? (Score:2)
The Vacillator.
The real story (Score:2)
7.5 million dollars just went down the drain.
US Navy wants robots that can be blamed (Score:2)
If the robots have morals and ethics, there will be less opposition to them, and the commanders are not responsible for the robots' actions.
Re: (Score:2)
Sapient, not sentient; they're commonly confused but profoundly different. Sentient simply means "possessing a subjective experience of self" and is generally accepted to be common among most of the higher animals, whereas sapience is the "ability to apply knowledge or experience or understanding or common sense and insight". The degree to which it's possible to get sapience without sentience is an ongoing question being explored by AI research. Any form of data processing is an expression of at least a t