How Asimov's Three Laws Ran Out of Steam 153
An anonymous reader writes "It looks like AI-powered weapons systems could soon be outlawed before they're even built. While discussing whether robots should be allowed to kill might seem like an obscure debate, robots (and artificial intelligence) are playing ever-larger roles in society, and we are figuring out piecemeal what is acceptable and what isn't. If killer robots are immoral, then what about the other uses we've got planned for androids? Asimov's three laws don't seem to cut it, as this story explains: 'As we consider the ethical implications of having robots in our society, it becomes obvious that robots themselves are not where responsibility lies.'"
Missed the point (Score:5, Insightful)
Asimov's stories were all about how the three laws were not sufficient for the real world. The article recognises this, even if the summary doesn't.
Asimov's three laws do not run out of steam (Score:3, Insightful)
The three laws as laid down by Asimov are still as valid as ever.
It's the people who willingly violate those laws who are the problem.
Just like the Constitution of the United States — the laws are as valid as ever. It's the current form of the government of the United States that willingly violates the Constitution.
Re:Asimov's three laws do not run out of steam (Score:2, Insightful)
These robots are not different from guns (Score:4, Insightful)
Robots that are not responsible for their own actions are ethically no different from guns. Both are machines designed to kill that need a human being to operate them, and the responsibility for their operation lies with that human.
I first wanted to write something about how morally autonomous robots would make the question more interesting, but the relation between a human and the autonomous robot he creates is no different from that between a parent and the child he brings into the world. Parents are not responsible for the crimes their children commit, and neither should the creators of such robots be. Up to a certain age, children can't be held responsible in the eyes of the law, and up to a certain level of development, neither should robots be.
ethics of killing and warfare (Score:4, Insightful)
Re:Asimov's three laws do not run out of steam (Score:2, Insightful)
Assuming you mean that amount is "not at all," as was the point of the books.
Re:ethics of killing and warfare (Score:3, Insightful)
Mod up for use of logic!
A person killed or maimed by AI, or by rocks and Greek fire flung from siege engines, is fucked either way.
We can construct all sorts of laws for war, but war trumps law, because law requires force to enforce. If instead we work to build international relationships that are cooperative and less murdery, that would accomplish a lot.
It can be done. It took a couple of World Wars, but Germany, France, England, and the bit players have found much better things to do than butcher each other for national glory. Such a state of affairs would have been regarded as a pipe dream not so long ago.
Re:Asimov's three laws do not run out of steam (Score:2, Insightful)
The danger of autonomous kill-bots comes from the same people who willingly ignore the Constitution and the rule of law.
And the danger of a gun is the murderer holding it.
Yes, I think we get the point already. The lawless ignore laws. News at 11. Let's move on now from this dead horse already. The kill-bot left it 30 minutes ago.
This is why transhumanism is not a joke. (Score:5, Insightful)
Robots aren't the problem. Robots are the latest extension of humanity's will via technology. The fact that in some cases they're somewhat anthropomorphic (or zoomorphic) is irrelevant. We don't have now, nor will we have, a human-vs-robot problem; we have a human nature problem.
Excepting disease, natural catastrophes, and of course human ignorance (which taken together are historically the worst inflictors of mass human suffering), the problems we've had throughout history can be laid at the feet of human nature and our own behavior toward one another.
We are creatures, like all other creatures, which evolved brains to perform some very basic social and survival functions. Sure, it's not ALL we are, but this list basically accounts for most of the "bad parts" of human history and when I say history I mean to include future history.
At the most basic level, brains function to ensure that the individual does well at the expense of other individuals; second, that the individual's family does well at the expense of other families; third, that the individual's group does well at the expense of other groups; and finally, that the individual does well relative to members of his own group.
The consequences for not winning in any of the above circumstances are pain, suffering, and, in a worst-case scenario, genetic lineage death — you have no copulatory opportunities and/or your offspring are all killed. (Cue basement-dwelling jokes.)
All of us who have been left standing at the end of this evolutionary process are putative winners in a million-year-old repeated game. There are few, or more likely zero, representatives of the tribe who didn't want to play, because to not play is to lose, and to lose is to be extinguished for all time.
What this means is, we are just not very nice to each other, and that niceness falls away with exponential rapidity as we travel away from any conceptual "us." Supporting and caring about each other is just not the first priority in our lives, and, more bluntly, any trace of the egalitarian impulse is totally absent from a large part of the population. OTOH, we're, en masse, genocidal at the drop of a hat. This is the tale both history and our own personal experience tell.
Sure, some billionaires give their money away after there's nowhere else for them to go in terms of the "I'm important, and better than you, genuflect (or at least do a double take) when I light up a room" esteem they crave from other members of the tribe. Many more people below that level of wealth and comfort just continue to try to amass more and more for themselves, and then take huge pains to pass it on to their kin.
The problem is, we are no longer suited — no longer a fit — to the environment we find ourselves in, the environment we are creating.
We have two choices. We can try to limit, stop, contain, corral, monitor, and otherwise control our fellow human beings so they can't pick up the fruits of this technology and kill a lot of us, or even the rest of us, one fine day. The problem here is that as technology advances, the control we need to exert will become near absolute. In fact, we are seeing this dynamic at play already with the NSA spying scandal. It's not an aberration, and it's not going to go away; it's only going to get worse.
The other choice is to face up to what we are as a species (I'm sure all my fellow /. ers are noble exceptions to these evolutionary pressures) and change what we are using our technology, at least somewhat, so that, say, flying planeloads of people into skyscrapers doesn't seem like the thing to do to anyone, and neither does treating each other as ruinously as we can get away with in order to advantage ourselves.
This would be using technology to better that part of the world we call ourselves, recreating ourselves in our own better image. In fact, some argue, that's the real utility of maintaining that better image — which we rarely live up to.
Re:ethics of killing and warfare (Score:3, Insightful)
Re:Missed the point (Score:5, Insightful)
It wasn't so much that the laws didn't cut it — that's too simplistic, and even in his own words not what it was about.
It was that the robots could interpret the laws in ways we couldn't or didn't anticipate; in fact, in nearly all the stories involving them, the robots never failed to obey them.
Asimov saw robots, seen at the time as monsters, as an engineering problem to be solved. He quite correctly saw that we would program them with limits, in the process creating the concept of computer science. He then went about writing stories around robots that never failed to obey their programming, but that, as effectively sentient thinking beings, would interpret their programming in ways the society around them couldn't anticipate, because that society saw the robots as mere tools, not thinking machines. And thus he created his lens (like all good sci-fi writers) for writing about society and technology.
Point is irrelevant (Score:4, Insightful)
Yes, the laws were flawed, and yes, that's the idea Asimov mined to produce some interesting stories.
But the thing here is that those laws require both a free-thinking intelligence that can reason non-linearly and a locked-down, computer-like slavish obedience to simplistic concepts. As we have yet to put any kind of actual AI in the field, we not only don't have such a magic combo, we don't even know how to make one.
The only high-level intelligence we know of is us, and getting one of us to rigidly obey the three laws would be an exercise in utter frustration. There's no reason to think it'd be any more practical in Robbie the Robot, Esq., citizen of the Consolidated Intelligences Union.