Soulskill from the i'm-sorry-dave,-the-value-of-your-life-is-a-string-and-i-was-expecting-an-integer dept.
coondoggie writes: "The U.S. Office of Naval Research this week offered a $7.5 million grant to university researchers to develop robots with autonomous moral reasoning ability. While the idea of robots making their own ethical decisions smacks of SkyNet — the science-fiction artificial intelligence system featured prominently in the Terminator films — the Navy says it envisions such systems being used extensively in first-response and search-and-rescue missions, as well as in medical applications. One possible scenario: 'A robot medic responsible for helping wounded soldiers is ordered to transport urgently needed medication to a nearby field hospital. En route, it encounters a Marine with a fractured leg. Should the robot abort the mission to assist the injured? Will it? If the machine stops, a new set of questions arises. The robot assesses the soldier's physical state and determines that unless it applies traction, internal bleeding in the soldier's thigh could prove fatal. However, applying traction will cause intense pain. Is the robot morally permitted to cause the soldier pain, even if it's for the soldier's well-being?'"