How Asimov's Three Laws Ran Out of Steam
An anonymous reader writes "It looks like AI-powered weapons systems could soon be outlawed before they're even built. While discussing whether robots should be allowed to kill might seem like an obscure debate, robots (and artificial intelligence) are playing ever-larger roles in society, and we are figuring out piecemeal what is acceptable and what isn't. If killer robots are immoral, then what about the other uses we've got planned for androids? Asimov's three laws don't seem to cut it, as this story explains: 'As we consider the ethical implications of having robots in our society, it becomes obvious that robots themselves are not where responsibility lies.'"
Re:"robots are immoral" (Score:5, Informative)
The correct term is "amoral": robots have no moral sense whatsoever. "Immoral" would imply they had a moral sense but were actively engaging in behavior that goes against that morality.
Re:Asimov's three laws do not run out of steam (Score:5, Informative)
Stop saying that. That isn't it at all, and you have failed to grasp his points, even though he himself spelled out his thinking in his essays on the topic.
Asimov never thought the rules he created were "not at all valid." On the contrary.
Asimov saw robots, seen at the time as monsters, as an engineering problem to be solved. He quite correctly saw that we would program them with limits (coining the word "robotics" in the process).
He then went about writing stories about robots that never failed to obey their programming but, as effectively sentient thinking beings, interpreted that programming in ways the society around them couldn't anticipate, because that society saw the robots as mere tools, not thinking machines. And thus he created his lens (like all good sci-fi writers) for writing about society and technology.
He NEVER said the laws were not valid or were insufficient.
That was NEVER the point.