



US Navy Wants Smart Robots With Morals, Ethics
coondoggie writes: "The U.S. Office of Naval Research this week offered a $7.5m grant to university researchers to develop robots with autonomous moral reasoning ability. While the idea of robots making their own ethical decisions smacks of SkyNet — the science-fiction artificial intelligence system featured prominently in the Terminator films — the Navy says that it envisions such systems having extensive use in first-response, search-and-rescue missions, or medical applications. One possible scenario: 'A robot medic responsible for helping wounded soldiers is ordered to transport urgently needed medication to a nearby field hospital. En route, it encounters a Marine with a fractured leg. Should the robot abort the mission to assist the injured? Will it? If the machine stops, a new set of questions arises. The robot assesses the soldier’s physical state and determines that unless it applies traction, internal bleeding in the soldier's thigh could prove fatal. However, applying traction will cause intense pain. Is the robot morally permitted to cause the soldier pain, even if it’s for the soldier’s well-being?'"
Humans Can Not (Score:5, Insightful)
what they should want (Score:5, Insightful)
US armed forces should want leaders with morals and ethics, instead of the usual bunch that send them to die based on lies (I'm looking at you, Cheney, you bastard).
I could not think of more boring questions (Score:4, Insightful)
Every single one comes down to "do I value rule X or rule Y more highly?" Who gives a shit. Morals are things we've created ourselves, you can't dig them up or pluck them off trees, so it all comes down to opinion, and opinions are like assholes: everyone's asshole is a product of the culture it grew up in.
This is going to come down to a committee deciding how a robot should respond in which situation, and depending on who on the committee has the most clout it's going to implement a system of ethics that already exists, whether it's utilitarianism, virtue ethics, Christianity, Taoism, whatever.
Comment removed (Score:5, Insightful)
Re:Humans Can Not (Score:5, Insightful)
Would the robot shoot a US commander who is about to bomb a village of men, women, and children?
The US Navy doesn't want robots with morals; they want robots that do as they're told.
Country A makes robots with morals, Country B makes robots without morals - all else being equal the robots without morals would win. Killer robots are worse than landmines and should be banned and any country making them should be completely embargoed.
Re:Up to 11 (Score:4, Insightful)
It's funny, because since WWII the army has worked to get the kill rates up. In WWII only 15% of soldiers shot to kill, but now the army brainwashes them so that 90% do. Moral. Killers. Can't have both.
And Moral and Ethical for the NSA? LMAO.
Right (Score:5, Insightful)
Navy says that it envisions such systems having extensive use in first-response, search-and-rescue missions, or medical applications.
Just like drones were first used for intelligence gathering, search and rescue and communications relays.
Re:Humans Can Not (Score:2, Insightful)
People in the US think too much about killing. It's as if you don't understand that killing is a savage thing to do. Maybe it's the omnipresence of guns in your society, maybe it's your defense budget, but you can't seem to stop thinking about killing. That's an influence on your way of problem-solving. Killing someone always seems to be a welcome option. So final, so definite. Who could resist?