AI, Robotics, The Military, United States

US Navy Wants Smart Robots With Morals, Ethics

Posted by Soulskill
from the i'm-sorry-dave,-the-value-of-your-life-is-a-string-and-i-was-expecting-an-integer dept.
coondoggie writes: "The U.S. Office of Naval Research this week offered a $7.5m grant to university researchers to develop robots with autonomous moral reasoning ability. While the idea of robots making their own ethical decisions smacks of SkyNet — the science-fiction artificial intelligence system featured prominently in the Terminator films — the Navy says that it envisions such systems having extensive use in first-response, search-and-rescue missions, or medical applications. One possible scenario: 'A robot medic responsible for helping wounded soldiers is ordered to transport urgently needed medication to a nearby field hospital. En route, it encounters a Marine with a fractured leg. Should the robot abort the mission to assist the injured? Will it? If the machine stops, a new set of questions arises. The robot assesses the soldier’s physical state and determines that unless it applies traction, internal bleeding in the soldier's thigh could prove fatal. However, applying traction will cause intense pain. Is the robot morally permitted to cause the soldier pain, even if it’s for the soldier’s well-being?'"
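The scenario reads like a prioritized-rule decision problem. As a rough sketch (none of this comes from the ONR grant; the class, function, threshold, and rule ordering below are all invented for illustration), the medic dilemma might be encoded like this in Python:

    from dataclasses import dataclass

    @dataclass
    class Casualty:
        injury: str
        fatal_if_untreated: bool    # e.g. internal bleeding in the thigh
        treatment_causes_pain: bool # e.g. applying traction

    def should_divert(mission_urgency: float, casualty: Casualty) -> bool:
        """Decide whether to abort a delivery mission to treat a casualty.

        Hypothetical rule ordering: preventing a likely death outranks
        mission urgency, and causing pain is permitted when the treatment
        is life-saving (a utilitarian-style tiebreak).
        """
        if casualty.fatal_if_untreated:
            return True  # life-saving intervention takes priority
        # Otherwise, only divert when the mission itself is not urgent.
        return mission_urgency < 0.5

    marine = Casualty("fractured leg", fatal_if_untreated=True,
                      treatment_causes_pain=True)
    print(should_divert(mission_urgency=0.9, casualty=marine))  # True

Of course, the hard part the grant is actually funding is everything this sketch hides: where the rules come from, how they are ordered, and what happens when they conflict.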


  • Humans Can Not (Score:5, Insightful)

    by Jim Sadler (3430529) on Saturday May 17, 2014 @05:52AM (#47024329)
    Imagine us trying to teach a robot morality when humans have little agreement on what is moral. For example, would a moral robot have refused to function in the Vietnam War? Would a drone take out an enemy in Somalia knowing that that terrorist was a US citizen? How many innocent deaths are permissible if a valuable target can be destroyed? If a robot acts as a fair player, could it use high-tech weapons against an enemy that had only rifles made prior to WWII? If many troops are injured, should a medical robot save two enemy soldiers, or one US soldier who will take all of the robot's attention and time? When it comes to moral issues and behaviors there are often no points of agreement among humans, so just how does one program a robot to deal with moral conflicts?
  • by dmbasso (1052166) on Saturday May 17, 2014 @05:55AM (#47024341)

    US armed forces should want leaders with morals and ethics, instead of the usual bunch that sends them to die based on lies (I'm looking at you, Cheney, you bastard).

  • by kruach aum (1934852) on Saturday May 17, 2014 @05:56AM (#47024343)

    Every single one comes down to "do I value rule X or rule Y more highly?" Who gives a shit. Morals are things we've created ourselves; you can't dig them up or pluck them off trees, so it all comes down to opinion, and opinions are like assholes: everyone's is a product of the culture it grew up in.

    This is going to come down to a committee deciding how a robot should respond in which situation, and depending on who on the committee has the most clout, it will implement a system of ethics that already exists, whether that's utilitarianism, virtue ethics, Christianity, Taoism, whatever.

  • by houghi (78078) on Saturday May 17, 2014 @05:56AM (#47024347)

    If they are talking about the morals of the US government, I'd rather have the robots from Terminator.

    And they are talking about helping wounded soldiers. Why talk about the (US) Marine with the broken leg? What about the injured Al-Qaida fighter?

    The question of causing pain for the better well-being of the patient is obvious to most people. What if it means killing 1 person to save 10? What if that one person is not an enemy?

    What if it realizes that killing 5% of the US population would save the rest of the world? What if that 5% is mostly children? Even if you can answer that as a human being, would you want it enforced by robots?

  • Re:Humans Can Not (Score:5, Insightful)

    by MrL0G1C (867445) on Saturday May 17, 2014 @06:04AM (#47024357) Journal

    Would the robot shoot a US commander who is about to bomb a village of men, women, and children?

    The US Navy doesn't want robots with morals; it wants robots that do as they're told.

    Country A makes robots with morals, Country B makes robots without morals; all else being equal, the robots without morals would win. Killer robots are worse than landmines and should be banned, and any country making them should be completely embargoed.

  • Re:Up to 11 (Score:4, Insightful)

    by CuteSteveJobs (1343851) on Saturday May 17, 2014 @07:26AM (#47024537)

    It's funny because since WWII the army has worked to get kill rates up. In WWII only 15% of soldiers shot to kill, but now the army brainwashes them so that 90% kill. Moral. Killers. Can't have both.

    And Moral and Ethical for the NSA? LMAO.

  • Right (Score:5, Insightful)

    by HangingChad (677530) on Saturday May 17, 2014 @07:36AM (#47024573) Homepage

    Navy says that it envisions such systems having extensive use in first-response, search-and-rescue missions, or medical applications.

    Just like drones were first used for intelligence gathering, search and rescue, and communications relays.

  • Re:Humans Can Not (Score:2, Insightful)

    by Anonymous Coward on Saturday May 17, 2014 @07:39AM (#47024585)

    People in the US think too much about killing. It's as if you don't understand that killing is a savage thing to do. Maybe it's the omnipresence of guns in your society, maybe it's your defense budget, but you can't seem to stop thinking about killing. That influences your way of problem-solving: killing someone always seems to be a welcome option. So final, so definite. Who could resist?
