Robotics | The Military | Your Rights Online

How Asimov's Three Laws Ran Out of Steam

Posted by timothy
from the droning-on-and-on-is-a-capital-offense dept.
An anonymous reader writes "It looks like AI-powered weapons systems could soon be outlawed before they're even built. While discussing whether robots should be allowed to kill might seem like an obscure debate, robots (and artificial intelligence) are playing ever-larger roles in society, and we are figuring out piecemeal what is acceptable and what isn't. If killer robots are immoral, then what about the other uses we've got planned for androids? Asimov's three laws don't seem to cut it, as this story explains: 'As we consider the ethical implications of having robots in our society, it becomes obvious that robots themselves are not where responsibility lies.'"

  • Re:Missed the point (Score:5, Interesting)

    by girlintraining (1395911) on Saturday December 21, 2013 @08:25AM (#45752871)

    Asimov's stories were all about how the three laws were not sufficient for the real world. The article recognises this, even if the summary doesn't.

    Dice Unlimited Profits And Robotics, Inc., would like to remind you that its new, hip brand of robotic authors has just enough AI to detect when something is sufficiently nerdy to post, but lacks the underlying wisdom of knowing why it is nerdy. Unfortunately, I expect our future killer robots in the sky will have similar pattern-recognition problems... and will wind up exterminating everyone deemed insufficiently [insert ethnicity, nationality, race, etc., here] in pursuit of blind perfectionism.

    Common sense has never been attributed to either Slashdot authors or robotic evil overlords.

  • by dak664 (1992350) on Saturday December 21, 2013 @10:47AM (#45753401) Journal

    Moral killing may not be that hard to define. Convert the three laws of robotics into three laws of human morals by taking them in reverse order:

    1) Self-preservation
    2) Obey orders if no conflict with 1
    3) Don't harm others if no conflict with 1 or 2

    To be useful in war, an AI would have to follow those laws, except that self-preservation would apply to whichever human overlords constructed it.
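
    A minimal sketch of that priority ordering; the predicates threatens_self, ordered, and harms_other are hypothetical, invented purely for illustration (Python):

        # Sketch: evaluate the reversed laws in strict priority order.
        # All three predicates are hypothetical stand-ins.
        def permitted(action, threatens_self, ordered, harms_other):
            """Reversed laws: 1) self-preservation, 2) obedience, 3) no harm."""
            if threatens_self(action):        # law 1 overrides everything
                return False
            if ordered(action):               # law 2: obey when law 1 allows
                return True
            return not harms_other(action)    # law 3: otherwise avoid harm

        # Reading "self" as the constructing overlords, per the comment:
        threatens_self = lambda a: a == "shell_own_factory"
        ordered = lambda a: a == "fire_missile"
        harms_other = lambda a: a != "patrol"

        for a in ("fire_missile", "shell_own_factory", "patrol"):
            print(a, permitted(a, threatens_self, ordered, harms_other))
        # fire_missile True, shell_own_factory False, patrol True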

  • We don't have now, nor will we have, a human vs. robot problem; we have a human nature problem.

    While I agree to an extent, I think that is too simplistic a statement. You are not special. Any sufficiently complex interaction is indistinguishable from sentience, because that is all sentience is. You have an ethics problem, one that does involve your cybernetic creations. It's not necessarily a human nature problem; I suspect genes have far less to do with your alleged problems than perception does.

    I study cybernetics, in both organic and artificial neural networks. There is no real difference between organic and machine intelligence. I can model a certain worm's 11-neuron brain all too easily. It takes more virtual neurons, since organic neurons are multi-function (influenced by multiple electrochemical properties), but the organic neurons can be approximated quite well, and the resulting artificial behaviors can be indistinguishable from those of the organic creature. Scaling up isn't a problem: more complex neural nets yield more complex emergent behaviors.
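
    For illustration only, a toy of that kind of model: a handful of leaky rate neurons with random weights. Nothing below is a real connectome; every constant is invented (Python):

        import numpy as np

        # Toy only: a tiny recurrent network of leaky rate neurons.
        rng = np.random.default_rng(0)
        N = 11                                 # the 11-neuron example above
        W = rng.normal(0.0, 0.5, size=(N, N))  # invented synaptic weights
        leak = 0.9                             # internal-state decay per step
        state = np.zeros(N)

        def step(state, stimulus):
            """Decay the old state, add weighted recurrent input, squash."""
            return np.tanh(leak * state + W @ state + stimulus)

        stimulus = np.zeros(N)
        stimulus[0] = 1.0                      # poke one "sensory" neuron
        for t in range(5):
            state = step(state, stimulus)
            print(t, np.round(state, 2))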

    At the most basic, brains function to ensure that the individual does well at the expense of other individuals; second, that the individual's family does well at the expense of other families; third, that the individual's group does well at the expense of other groups; and finally, that the individual does well relative to members of his own group.

    No. The brain is not to blame for this behavior; it exists at a far higher complexity level than the concept. Brains may be the method of expressing this behavior in humans, but they are not required for it to occur. At the most basic, brains are storehouses of information that pattern-match against the environment to produce decision logic in response to stimuli, rather than carrying out a singular codified action sequence. More complex brains have more complex instincts and can handle more complex situations. Highly complex brains can adapt to new stimuli and solve problems not coded for at the genetic level. The most complex brains on this planet are aware of their own existence. Awareness is the function of brains; the preservation drive functions at a much lower level of complexity and needn't involve brains at all, as evidenced by many organic and artificial neural networks that have brain function but no self-preservation. [youtube.com]
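
    To picture that pattern-matching distinction, a sketch with made-up stimuli and responses (Python):

        # A fixed, codified action sequence: the same steps every time.
        def reflex():
            return ["extend", "grasp", "retract"]

        # Stored associations pattern-matched against the environment,
        # with a fallback for stimuli never coded for. Entries are invented.
        memory = {"heat": "withdraw", "food": "approach", "light": "orient"}

        def decide(stimulus):
            return memory.get(stimulus, "explore")

        print(reflex())          # always the same sequence
        print(decide("heat"))    # withdraw
        print(decide("magnet"))  # explore: a novel stimulus, handled anyway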

    The consequences of not winning in any of the above circumstances are pain, suffering, and, in the worst case, genetic lineage death: you have no copulatory opportunities and/or your offspring are all killed. (cue basement-dwelling jokes)

    The thing to note is that selection and competition are inherent, and that pain is a state requiring a degree of overall system-state knowledge (a degree of self-awareness); neither RNA nor DNA feels pain. In my simplified atomic-evolution sims, atoms of various charge can link, break links, and be attracted to or repelled by others, nothing more. The first "assembling" interactions produce tons of long molecular chains, but these are destroyed or interrupted long before complete domination; entropy takes its toll (you must have entropy, or there is no mutation and a single dominant structure just forms). From these bits of chains, more complex interactions occur. The first self-reproducing interaction dominates the entire sim for ages, until enough non-harmful extra cruft has piggybacked into the reproduction that other, more complex traits emerge, such as inert sections serving as shields for vital components. As soon as there is any differentiation that survives replication, the molecular competition begins: a replicator that destroys itself after n+1 reproductions so that offspring molecules can feed on its atoms; an unstable tail of highly charged atoms, appended just before the end of replication, that tangles up other replicators, which then break.
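
    A stripped-down caricature of that kind of sim; every constant is arbitrary and the "atoms" are just +1/-1 charges in a row (Python):

        import random

        random.seed(1)

        # Toy: adjacent opposite charges may bond; entropy randomly
        # breaks bonds each step, so only some chains survive.
        atoms = [random.choice((-1, 1)) for _ in range(60)]
        bond = [False] * (len(atoms) - 1)  # bond[i] links atoms i and i+1
        ENTROPY = 0.05                     # per-step chance a bond breaks

        for _ in range(200):
            i = random.randrange(len(bond))
            if atoms[i] != atoms[i + 1]:   # opposite charges attract
                bond[i] = True
            for j in range(len(bond)):     # entropy takes its toll
                if bond[j] and random.random() < ENTROPY:
                    bond[j] = False

        # Chain lengths: runs of consecutive bonds, counted in atoms.
        chains, run = [], 1
        for b in bond:
            if b:
                run += 1
            else:
                chains.append(run)
                run = 1
        chains.append(run)
        print("longest surviving chain:", max(chains))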

