Robotics The Military Your Rights Online

How Asimov's Three Laws Ran Out of Steam 153

An anonymous reader writes "It looks like AI-powered weapons systems could soon be outlawed before they're even built. While discussing whether robots should be allowed to kill might sound like an obscure debate, robots (and artificial intelligence) are playing ever-larger roles in society, and we are figuring out piecemeal what is acceptable and what isn't. If killer robots are immoral, then what about the other uses we've got planned for androids? Asimov's three laws don't seem to cut it, as this story explains: 'As we consider the ethical implications of having robots in our society, it becomes obvious that robots themselves are not where responsibility lies.'"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Missed the point (Score:5, Insightful)

    by Anonymous Coward on Saturday December 21, 2013 @07:23AM (#45752725)

    Asimov's stories were all about how the three laws were not sufficient for the real world. The article recognises this, even if the summary doesn't.

  • by Taco Cowboy ( 5327 ) on Saturday December 21, 2013 @07:25AM (#45752727) Journal

    The three laws as laid down by Asimov are still as valid as ever.

    It's the people who willingly violate those laws who are the problem.

    Just like the Constitution of the United States - it is as valid as ever. It's the current form of the government of the United States which willingly violates the Constitution.

  • by verifine ( 685231 ) on Saturday December 21, 2013 @07:31AM (#45752743)
    The danger of autonomous kill-bots comes from the same people who willingly ignore the Constitution and the rule of law.
  • by kruach aum ( 1934852 ) on Saturday December 21, 2013 @07:59AM (#45752797)

    Robots that are not responsible for their own actions are ethically no different from guns. Both are machines designed to kill, and both need a human being to operate them; the responsibility for their operation lies with that human.

    I first wanted to write something about how morally autonomous robots would make the question more interesting, but the relation between a human and the autonomous robot he creates is no different from that between a parent and a child. Parents are not responsible for the crimes their children commit, and neither should the creators of such robots be. Up to a certain age children can't be held responsible in the eyes of the law, and up to a certain level of development neither should robots be.

  • by The Real Dr John ( 716876 ) on Saturday December 21, 2013 @08:12AM (#45752849) Homepage
    It is kind of sad that people spend so much time thinking about the moral and ethical ways to wage war and kill other people, whether robots are involved or not. Maybe a step back to think about the impossibility of moral or ethical war and killing is where we should be focusing. Then the question of whether robots can be trusted to kill morally doesn't come up.
  • by Anonymous Coward on Saturday December 21, 2013 @08:39AM (#45752909)

    The three laws as laid down by Asimov are still as valid as ever.

    Assuming you mean that amount is "not at all," as was the point of the books.

  • by couchslug ( 175151 ) on Saturday December 21, 2013 @08:39AM (#45752911)

    Mod up for use of logic!

    A person killed or maimed by AI, or by rocks and Greek fire flung from siege engines, is fucked either way.

    We can construct all sorts of laws for war, but war trumps law, as law requires force to enforce it. If instead we work to build international relationships which are cooperative and less murdery, that would accomplish a lot.

    It can be done. It took a couple of World Wars, but Germany, France, England and the bit players have found much better things to do than butcher each other for national glory. Such a state of affairs would have been regarded as a pipe dream not so long ago.

  • by Anonymous Coward on Saturday December 21, 2013 @09:29AM (#45753035)

    The danger of autonomous kill-bots comes from the same people who willingly ignore the Constitution and the rule of law.

    And the danger of a gun is the murderer holding it.

    Yes, I think we get the point already. The lawless ignore laws. News at 11. Let's move on now from this dead horse already. The kill-bot left it 30 minutes ago.

  • by WOOFYGOOFY ( 1334993 ) on Saturday December 21, 2013 @09:59AM (#45753157)

    Robots aren't the problem. Robots are the latest extension of humanity's will via technology. The fact that in some cases they're somewhat anthropomorphic (or animalpomorphic) is irrelevant. We don't have now nor will we have a human vs robot problem; we have a human nature problem.

    Excepting disease and natural catastrophes and of course human ignorance- which taken together are historically the worst inflictors of mass human suffering- the problems we've had throughout history can be laid at the feet of human nature and our own behavior to one another.

    We are creatures, like all other creatures, which evolved brains to perform some very basic social and survival functions. Sure, it's not ALL we are, but this list basically accounts for most of the "bad parts" of human history and when I say history I mean to include future history.

    At the most basic level, brains function to ensure the individual does well at the expense of other individuals; secondly, that the individual's family does well at the expense of other families; thirdly, that the individual's group does well at the expense of other groups; and finally, that the individual does well relative to members of his own group.

    The consequences for not winning in any of the above circumstances are pain, suffering and, in a worst-case scenario, genetic lineage death - you have no copulatory opportunities and/or your offspring are all killed. (cue basement-dwelling jokes)

    All of us left standing at the end of this evolutionary process are putative winners in a million-year-old repeated game. There are few, or more likely zero, representatives of the tribe who didn't want to play, because to not play is to lose, and to lose is to be extinguished for all time.

    What this means is, we are just not very nice to each other, and that niceness falls away with exponential rapidity as we travel away from any conceptual "us". Supporting and caring about each other is just not the first priority in our lives, and more bluntly, any trace of the egalitarian impulse is totally absent from a large part of the population. OTOH we're, en masse, genocidal at the drop of a hat. This is just the tale both history and our own personal experience tell.

    Sure, some billionaires give their money away after there's nowhere else for them to go in terms of the "I'm important, and better than you, genuflect (or at least do a double take) when I light up a room" type esteem they crave from other members of the tribe. Many more people under that level of wealth and comfort just continue to try to amass more and more for themselves, and then take huge pains to pass it on to their kin.

    The problem is, we are no longer suited, we are no longer a fit, to the environment we find ourselves in, the environment we are creating.

    We have two choices. We can try to limit, stop, contain, corral, monitor and otherwise control our fellow human beings so they can't pick up the fruits of this technology and kill a lot of us, or even the rest of us, one fine day. The problem here is that as technology advances, the control we need to exert will become near absolute. In fact, we are seeing this dynamic at play already with the NSA spying scandal. It's not an aberration and it's not going to go away; it's only going to get worse.

    The other choice is to face up to what we are as a species (I'm sure all my fellow /. ers are noble exceptions to these evolutionary pressures) and change what we are, using our technology, at least somewhat, so that, say, flying planeloads of people into skyscrapers doesn't seem like the thing to do to anyone, nor does it seem like a good idea to treat each other as ruinously as we can get away with in order to advantage ourselves.

    This would be using technology to better that part of the world we call ourselves, recreating ourselves in our own better image. In fact, some argue, that's the real utility of maintaining that better image - which we rarely live up to.

  • by The Real Dr John ( 716876 ) on Saturday December 21, 2013 @10:23AM (#45753261) Homepage
    How many of the wars the US has started since WWII were necessary, with the possible exception of the first Gulf War? As General Smedley Butler famously claimed, war is a racket. The US often goes to war now in order to project geopolitical power, not to defend the US. Plus there is a great profit incentive for defense contractors. Sending young people, often from families of meager means, to kill other people of meager means overseas cannot be done morally. The vast number of soldiers returning with PTSD proves that war is damaging to both the side that loses and the side that wins.
  • by dywolf ( 2673597 ) on Saturday December 21, 2013 @10:39AM (#45753349)

    It wasn't so much that the laws didn't cut it; that's too simplistic, and even in his own words not what it was about.
    It was that the robots could interpret the laws in ways we couldn't or didn't anticipate, because in nearly all the stories involving them the robots never failed to obey them.

    Asimov saw robots, seen at the time as monsters, as an engineering problem to be solved. He quite correctly saw that we would program them with limits, in the process creating the concept of robotics as a discipline. He then went about writing stories around robots that never failed to obey their programming but, as effectively sentient thinking beings, would interpret their programming in ways the society around them couldn't anticipate, because they saw the robots as mere tools, not thinking machines. And thus he created his lens (like all good sci-fi writers) for writing about society and technology.
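    The "programmed limits" idea above can be sketched as code. This is a minimal illustration (not from Asimov or any real system, and the predicate names are hypothetical) of the Three Laws as a strict priority ordering: each law is a hard veto, checked in order, so a higher law always overrides a lower one. The catch - and the point of the comment above - is that the hard part lives entirely inside the predicates: deciding what counts as "harm" or "an order" is where the unexpected interpretations come from.

    ```python
    # Hypothetical sketch: the Three Laws as a prioritized rule check.
    # `action` is a dict of boolean flags; the flags themselves stand in
    # for the hard interpretive questions a real robot would have to answer.

    def permitted(action):
        """Return True if the action is allowed under the Three Laws."""
        # First Law: may not injure a human being, or through inaction
        # allow a human being to come to harm.
        if action.get("harms_human") or action.get("inaction_harms_human"):
            return False
        # Second Law: must obey orders given by human beings, except where
        # such orders would conflict with the First Law (already vetoed above).
        if action.get("disobeys_order"):
            return False
        # Third Law: must protect its own existence, as long as that does
        # not conflict with the First or Second Law.
        if action.get("self_destructive") and not action.get("ordered"):
            return False
        return True

    print(permitted({"harms_human": True}))                         # vetoed by First Law
    print(permitted({"self_destructive": True, "ordered": True}))   # Second Law overrides Third
    ```

    Encoding the priority ordering is trivial; the stories' conflicts all arise from how the flags get set, which is exactly why the robots could obey perfectly and still surprise everyone.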

  • by fyngyrz ( 762201 ) on Saturday December 21, 2013 @04:46PM (#45755855) Homepage Journal

    Yes, the laws were flawed, and yes, that's the idea Asimov mined to produce some interesting stories.

    But the thing here is that those laws require both a free-thinking intelligence that can reason non-linearly and a locked-down, computer-like slavish obedience to simplistic concepts. As we have yet to put any kind of actual AI in the field, we not only don't have such a magic combo, we don't even know how to make one.

    The only high-level intelligence we know of is us; and getting one of us to rigidly obey the three laws would be an exercise in utter frustration. No reason to think it'd be any more practical in Robbie the Robot, esq., citizen of the Consolidated Intelligences Union.

