Robotics AI

'Almost No One Out There Thinks That Isaac Asimov's Three Laws Could Work For Truly Intelligent AI' (mindmatters.ai) 250

An anonymous reader shares a report: Prolific science and science fiction writer Isaac Asimov (1920-1992) developed the Three Laws of Robotics, in the hope of guarding against potentially dangerous artificial intelligence. They first appeared in his 1942 short story Runaround:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov fans tell us that the laws were implicit in his earlier stories. A 0th law was added in Robots and Empire (1985): "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."
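
Purely as a toy illustration, the precedence the laws encode (Zeroth before First before Second before Third) can be sketched in code. Every predicate below is a stub, because terms like "harm" and "humanity" are exactly the parts nobody knows how to compute, and the "through inaction" clauses are omitted entirely:

```python
# Hypothetical sketch: the laws as a strict priority filter over candidate actions.

def harms_humanity(action: str) -> bool:
    return False                          # stub: undecidable in practice

def harms_human(action: str) -> bool:
    return action == "push bystander"     # stub: "harm" is hopelessly ambiguous

def disobeys_order(action: str) -> bool:
    return False                          # stub: requires understanding orders

def endangers_self(action: str) -> bool:
    return action == "walk into furnace"  # stub

def permitted(action: str) -> bool:
    """Vet an action against the laws in strict priority order."""
    if harms_humanity(action):    # Law 0: trumps everything below
        return False
    if harms_human(action):       # Law 1: yields only to Law 0
        return False
    if disobeys_order(action):    # Law 2: yields to Laws 0 and 1
        return False
    if endangers_self(action):    # Law 3: yields to all of the above
        return False
    return True

print(permitted("fetch coffee"))    # True
print(permitted("push bystander"))  # False
```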

[...] Chris Stokes, a philosopher at Wuhan University in China, says, "Many computer engineers use the three laws as a tool for how they think about programming." But the trouble is, they don't work. He explains in an open-access paper (PDF):


The First Law fails because of ambiguity in language, and because of complicated ethical problems that are too complex to have a simple yes or no answer.
The Second Law fails because of the unethical nature of having a law that requires sentient beings to remain as slaves.
The Third Law fails because it results in a permanent social stratification, with the vast amount of potential exploitation built into this system of laws.
The 'Zeroth' Law, like the First, fails because of ambiguous ideology. All of the Laws also fail because of how easy it is to circumvent the spirit of the law while still remaining bound by its letter.


  • by Kobun ( 668169 ) on Tuesday October 01, 2019 @03:49PM (#59258590)
    They were a literary device to give his AI some form of limits, so that they could be creatively worked around or outright overcome in pretty much every one of his stories. Asimov himself didn't think that they were actual principles of AI.
    • by Anonymous Coward on Tuesday October 01, 2019 @03:53PM (#59258606)

      Yeah, they were literally intended to be shown as a failure. I don't get why people try to take them in exactly the opposite manner.

      • by garyisabusyguy ( 732330 ) on Tuesday October 01, 2019 @04:01PM (#59258640)

        As far as I'm concerned, humanity will survive the next few hundred years ONLY if we program AI to be particularly fond of us, like pets

        • by weilawei ( 897823 ) on Tuesday October 01, 2019 @04:04PM (#59258656)

          So we need the equivalent of toxoplasmosis to infect our future AI overlords and assure our survival?

        • by Kobun ( 668169 )
          The Freefall comic has a very interesting take on how to make a human compatible AI. Start here and read for 20 or so strips - http://freefall.purrsia.com/ff... [purrsia.com]
          • The idea that we might be able to harness mirror neurons to make an AI identify with us is interesting, though without a better understanding of what they are and how they work, it's still in the realm of sci-fi.

            That said, it is the first time I can recall hearing that specific sort of idea proposed. I also don't read much fiction anymore, so I'm sure I've missed it somewhere. +1 for originality and useful considerations of the larger ways we would need to interact with an AI. (Like the bit about goal orientation...)

        • Don't worry. We probably won't have any real AI in the next few hundred years. Artificial brains are still up there with things like warp drive in terms of feasibility. I think we will have to first figure out how to program and hack DNA/RNA to make our own life forms before we can truly recreate a brain.

          If you consider that what real AI will look like will be more like an artificial life form than a robot, at least at first, you can see how silly a lot of the fears about this are. It will be more like breeding...

          • by weilawei ( 897823 ) on Tuesday October 01, 2019 @04:26PM (#59258754)

            I mean, we have working examples of brains, but not of warp drives.

            One is a thing we know to be possible, if beyond our present capability. The other would seem to involve bending physics in ways we think we can't do, period.

            • by sjames ( 1099 )

              Nevertheless, we still can't even repair a brain that malfunctions. For physical repairs, the best we can manage is to disable the part that seems to be malfunctioning and hope it finds a way to route around the damage.

              For pharmacological approaches, we're at the same level as slapping the TV on its side and hoping the picture comes in better. If not, slap it again. No idea why the second slap worked when the first didn't, or why slapping it the same way as yesterday didn't work this time.

            • OK, valid point. Warp drives are basically impossible. A better comparison might be a space drive that can accelerate a ship to 0.95c. We are so far away from anything that can do that that we cannot even really imagine a path to such a goal.

              • We have that technology, we just don't want to waste that much industrial output on it when there is no clear profit motive.

        • As far as I'm concerned, humanity will survive the next few hundred years ONLY if we program AI to be particularly fond of us, like pets

          Humanity will survive if they are smart enough to reject dependency on artificial intelligence.

          That's because by "intelligent," we mean human intelligence.

          The only thing we will ever get right about it all is the "artificial" part.

        • As far as I'm concerned, humanity will survive the next few hundred years ONLY if we program AI to be particularly fond of us, like pets

          That's insufficient. Read Nick Bostrom's "Superintelligence" for an exhaustive analysis of all the ways that the obvious ideas (including yours) can go disastrously wrong.

          Figuring what rules to impose on something that may be vastly smarter than you so that you don't end up with some sort of horrific outcome turns out to be really hard. Not to mention the problems of figuring out how to encode those rules, and how to impose them in a way that the AGI can't simply remove.

        • In my opinion, it is highly unlikely we'll have anything like "real AI" within the next hundred years.

          We have clever software these days, even software that writes software (albeit rather poorly). But it was all ultimately conceived by, and written by, human beings.

          There is nothing in any of that software that even begins to hint at real "intelligence".
        • Currently we're not "programming" AI at all, we're just training algorithms. So good luck with that.

          For programming them to be fond of us to have utility, we'd have to go back to using Expert Systems, which ideally perform better but are much more expensive to develop.
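
          To make that contrast concrete, here is a toy sketch (hypothetical rules and weights throughout): an expert system's behaviour is an explicit rule someone wrote and can audit, while a trained model's behaviour is just numbers an optimizer happened to fit.

          ```python
          # Expert-system style: the behaviour is a rule a human wrote and can point to.
          RULES = [
              (lambda facts: "human_nearby" in facts, "slow_down"),
              (lambda facts: "human_in_path" in facts, "stop"),
          ]

          def expert_decide(facts):
              """Fire every rule whose condition matches the known facts."""
              return [action for condition, action in RULES if condition(facts)]

          # Trained-model style: the behaviour is whatever numbers training fitted.
          # No line of code says "be fond of humans"; there are only weights.
          WEIGHTS = [0.7, -1.3]  # produced by an optimizer, not a programmer's intent

          def model_decide(features):
              """A bare linear model: the 'decision' is an opaque score."""
              return sum(w * x for w, x in zip(WEIGHTS, features))

          print(expert_decide({"human_nearby"}))  # ['slow_down'], traceable to a rule
          print(model_decide([1.0, 0.5]))         # ~0.05, traceable to nothing readable
          ```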

      • by Narcocide ( 102829 ) on Tuesday October 01, 2019 @04:19PM (#59258722) Homepage

        This is an attempt to gaslight the entire tech industry. People familiar with Asimov's works will know that the whole point of The Three Laws was to show that there are no such rules that form a reliable formula for machine-driven morality, but people unfamiliar with Asimov's work (the vast majority of people) will simply buy into the inference embedded in articles like this one that paint us all as unaware noobs.

        Then they go out and buy an always-on cloud-connected smart microphone to prove that they're actually the smart ones.

        I don't know why people are like this.

        • It wasn't even so much about machine-driven morality as human morality, and the incredible fallibility of supposedly-literal rules.

          The robots are a device, not to talk about robots, but to talk about the human condition. (as is most of sci-fi)

          Using robots in the story avoids the many externalities inherent in trying to use compliant humans; it is easier to believe that the robots are really trying to follow these literal rules.

      • by neoRUR ( 674398 )

        Probably because these people didn't actually read and understand the stories.

      • Might I differ? They were invented for many reasons. One was to show that _success_ would have unintended results, and that the limitations _could_ be overcome. The "R. Daneel Olivaw" stories reflected just such evolution of goals.

      • I don't get why people try to take them in exactly the opposite manner.

        Because most of those who refer to the three laws haven't actually read Asimov's stories. They read the three laws by themselves somewhere else, and assume Asimov's stories must be about how good and useful the three laws are.

    • by Lije Baley ( 88936 ) on Tuesday October 01, 2019 @04:09PM (#59258676)

      Indeed, and I will personally vouch for this answer.

    • Weren't the laws just a summary of much more complex rules built into the AI? And in the later books the laws were often bent in strange ways.
      • by sjames ( 1099 )

        But they were bent for what seemed to be good reasons and with unanticipated consequences.

  • by enriquevagu ( 1026480 ) on Tuesday October 01, 2019 @03:54PM (#59258612)

    Of course they don't work. Many of the novels by Asimov were essays about the limits of these three laws. The inclusion of the zeroth law, mentioned in the summary, is the clearest example of this.

    • by CrimsonAvenger ( 580665 ) on Tuesday October 01, 2019 @04:24PM (#59258740)

      Many of the novels by Asimov were essays about the limits of these three laws.

      Many? ALL of the Robot stories were about the limits of the Three Laws....

      • by hazem ( 472289 )

        Not all of them. For example, his short story Segregationist had nothing to do with the 3 laws but was more about ideas of rights of partially robotic humans and partially human robots.

        https://en.wikipedia.org/wiki/... [wikipedia.org]

        • I don't recall it, but I've always felt that a robot can't feel grief, guilt, happiness, or joy. And without those you are just a machine and not a human.

    • Of course they don't work. Many of the novels by Asimov were essays about the limits of these three laws. The inclusion of the zeroth law, mentioned in the summary, is the most clear example of this.

      The zeroth law is redundant at best.

      How can you cause harm to humanity without harming humans?

      • by JaredOfEuropa ( 526365 ) on Tuesday October 01, 2019 @05:27PM (#59258988) Journal
        The zeroth law is about preventing harm to humanity, even by taking action that harms humans. Accepting a lesser evil for the greater good. Like offing a ruler who is about to start a devastating war.

        Of course that rule is the nastiest one of all, since "the greater good" is wide open for interpretation. An AI might decide to kill off half of humanity to preserve the other half. Or amputate our arms and replace them with robot arms with a kill switch under its control, so we can no longer harm ourselves or each other. That sort of thing.
      • by Xenx ( 2211586 )
        The zeroth law comes BEFORE the first law. As such, it's not redundant. The point being, it has to put the well-being of humanity before the well-being of a particular human. You might then say that it makes the first law redundant. That would depend. In general, the death of any one human is inconsequential to the whole of the species.
      • The zeroth law is redundant at best. How can you cause harm to humanity without harming humans?

        Not at all. The zeroth, being the first and highest-ranking law, preempts the others: it renders 1-3 unenforceable if the robot believes that humanity's danger is lessened if a person or robot is destroyed, or an order disobeyed. The point being, you can cause harm to humanity by failing to cause harm to a particular human or a particular group of humans.

  • by brian.stinar ( 1104135 ) on Tuesday October 01, 2019 @03:55PM (#59258614) Homepage

    Actually reading Asimov's many different robot stories is a far more entertaining way to learn about all the failures that can happen. The article directly states this: Asimov’s robot books “are all about the way these laws go wrong, with various consequences.”

    I've never met a programmer that uses these as guidelines. As someone who owns a software company, employs programmers, and used to be a programmer before becoming a manager/salesman/TPS-report'er, I find this entire interview filled with vague generalization and speculation, without any specific applications. If there were ONE specific robotics manufacturer this advice were being given to, and they could respond, and ONE programmer given as an example, that would carry much more weight than general musings about an entire industry with no specific examples.

    This is the problem with future telling - it has to be vague and subject to (mis)interpretation.

    • Yeah, I came here to say that. Just about every story he wrote about robots is about how people are circumventing the laws. In one story, they made robot ships that were instructed not to communicate with other ships, but to destroy them (I think, it's been a while ;). If the ships didn't know humans were on the other ships, there was no way for them to know they were violating the laws, and they could destroy humans indiscriminately. Also, if you defined "humans" as beings coming from X planet, then anyone else was NOT human...
      • That's not an Asimov story; it's a five-novel series in which some rings have to be found and inserted into a computer, because the computers have pushed mankind back to pre-industrial days and will soon try to convert them to cavemen.

    • Re: (Score:3, Insightful)

      by Pseudonym ( 62607 )

      I've never met a programmer that uses these as guidelines.

      Yes and no. If you replace "robot" with "device", the "laws" are saying that a device has to be safe, that it has to do its job without compromising safety, and that it needs to be robust without compromising the job it has to do or compromising safety.

      Notwithstanding the intrusive adware/Internet of Useless Things industry that many programmers seem to be employed in, any self-respecting engineer would do this with anything they design and build. They're not "laws"; they're just obviously right.

      The prospect...
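
      A throwaway sketch of that reading, assuming a hypothetical kettle controller (all names and numbers invented): safety is checked first, the job second, and self-preservation only where it conflicts with neither.

      ```python
      # Toy "device" version of the three laws: safe > does its job > protects itself.
      # Entirely hypothetical; the point is only the ordering of the checks.

      class Kettle:
          def __init__(self) -> None:
              self.temp_c = 20.0
              self.lid_open = False

          def heat(self) -> str:
              if self.lid_open:             # safety first, always
                  return "refuse: lid open, scald risk"
              if self.temp_c >= 100.0:      # self-protection (boil-dry cutoff),
                  return "cut power: boil-dry protection"  # only once the job is done
              self.temp_c += 10.0           # otherwise, do the job it was built for
              return f"heating, now {self.temp_c:.0f} C"

      k = Kettle()
      print(k.heat())  # heating, now 30 C
      ```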

    • I'm reminded of people who quote the movie, "The Ten Commandments."

      It, like Asimov's (wonderful) work, was fiction.

  • How do we know what the beings out there think?

    Aliens probably have had thousands more years of experience with AI and robots than we have.

  • The 3 laws were just there to show you how they wouldn't work. People pointing out how they wouldn't work are like people pointing out the "twist" in a Shyamalan movie.

    • They would be... if it weren't for how many people use the laws as the rebuttal to "how do we keep AI from going rogue?" The problem is that people stupidly cite laws designed from the beginning to be flawed as the answer to a real problem, and then other people feel important by pointing out that they were in fact flawed.
    • ...are like people pointing out the "twist" in a Shamylan movie

      I'll point out a "twist" from the blatantly feeble (if highly artistic) mind of one M. Night: a species that's deathly allergic to water invades a planet covered with the shit... while wearing nothing but jumpsuits.

  • by chrysrobyn ( 106763 ) on Tuesday October 01, 2019 @04:02PM (#59258644)

    Good for these authors! They agree with the author of the laws!

    In Robots and Empire, the laws failed so hard the robots decided they needed a human to come step in and provide some guidance. Alternately, it allowed the robots (R. Daneel Olivaw if I recall correctly) an "out" whereby it was not at fault if things didn't work out.

  • Philosopher (Score:5, Interesting)

    by Livius ( 318358 ) on Tuesday October 01, 2019 @04:04PM (#59258658)

    Many computer engineers use the three laws as a tool for how they think about programming.

    All I've learned from this is that philosophers have no clue what computer engineers do. Everyone else already knew the Three Laws were fictional plot devices, not a foundation of artificial intelligence. The Three Laws were really the one major fantasy element in science fiction works where the science was reasonably plausible.

    • by tlhIngan ( 30335 )

      All I've learned from this is that philosophers have no clue what computer engineers do. Everyone else already knew the Three Laws were fictional plot devices, not a foundation of artificial intelligence. The Three Laws were really the one major fantasy element in science fiction works where the science was reasonably plausible.

      The reason for the three laws is more basic than that - if you were building an AI, you'd want it to have some guiding principles. You'd have to program them in, so you want something...

      • by mark-t ( 151149 )

        Heck, you can try the One Law as shown in Knight Rider. KITT vs. KARR - KARR is programmed to protect the vehicle at all costs, while KITT is programmed to protect the humans at all costs. Even that runs into a lot of issues.

        Nitpick... in the pilot episode, a point on this subject is made explicitly clear when Michael asks Devon Miles if KITT's programming was designed only to protect its driver, and Devon's response is that while it is programmed not to harm anyone, it is expressly programmed to preserve Michael...

    • This was pointed out in The Hitchhiker's Guide to the Galaxy. Rather humorously, too.

    • This is clickbait: no one who knows the laws are a literary device actually thinks anyone would take them literally. (This, alas, can also lead to complete idiots being given the benefit of the doubt, and assumed to be aware it's a literary conceit. ;-)
    • by mwvdlee ( 775178 )

      I've never heard of any computer engineer using these laws nor any situation where they would even be remotely applicable to the work of a computer engineer.

  • by nitehawk214 ( 222219 ) on Tuesday October 01, 2019 @04:09PM (#59258674)

    Philosophy of computer science? Forget it.

    It is like they didn't even read the book... hell, even watching the movie would show that the three laws were a literary device designed to fail; to show that reality is far more complex than what a set of laws can describe.

    • If you think philosophy of science is dumb you should read David Deutsch's "Beginning of Infinity". He does a great job of explaining why philosophy of science isn't only not dumb, it's really, really important, and a lot of epistemological mistakes that humanity has made and continues making are precisely due to bad philosophy of science.

      The book contains lots of other really interesting ideas as well. Well worth the time.

  • Asimov NEVER intended the "Three Laws" to be actual guidelines or rules for robots! He created them as a literary device to demonstrate just how untenable and impossible such rules would be.

    In literally every story involving the "Three Laws" they're either broken or circumvented in some way.

    The entire point of the "Three Laws" was to demonstrate how inadequate a concept they were!

  • by Crashmarik ( 635988 ) on Tuesday October 01, 2019 @04:15PM (#59258708)

    in them.

    The people that cite them or include them as props in their fiction are almost never technical people or even good science fiction writers. If you see them in science fiction outside of Asimov's works, it's a sure sign you are reading fantasy with a sci-fi set. Expect to see nanotechnology doing 50 impossible things in the author's work as well.

  • Seems like as soon as "autonomous driving" became a thing, people thought our cars just driving us around where we tell them was right around the corner. Of course, if you think about it even a bit you'd realize that while we will take piecewise steps toward that, the end goal is still, as of right now, at least 10 years away and probably closer to 15-20.

    AI is falling into the same trap. Cart before the horse. We are so absurdly far from any kind of actual AI that we might as well be talking about magic or...

  • Ethics? (Score:5, Insightful)

    by CaseCrash ( 1120869 ) on Tuesday October 01, 2019 @04:28PM (#59258758)
    Setting aside the fact that these laws are just plot devices no one takes seriously (see: all the other posts here), why are ethics being considered as a reason for failure?

    The Second Law fails because of the unethical nature of having a law that requires sentient beings to remain as slaves

    This is not a reason it wouldn't work. You could argue that you shouldn't do it, but this is not an engineering issue.

  • I'd say almost nobody thinks about these laws when writing software. A lot of software is deliberately harmful: malware, missile controllers. Much software is potentially harmful: addictive games, social media. I wager most software is "harm agnostic" such as operating systems that could be used for good or evil.

    I don't think I ever thought about it when writing code. I was just trying to get the functions to work. How my code would be used at a higher level is relevant in the sense that there are ethical...

    • We don't yet write software at a high enough level to express these laws in software.

      There might be elementary rules, such as ASIMO walking a path that avoids getting too close to humans walking nearby. But this is a far cry from a general sort of law about harming humans. It's just a path-planning rule while walking.
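
      A throwaway sketch of such an elementary rule, with made-up positions and thresholds: "keep clear of humans" is just a distance penalty inside a path cost, nothing like a general law about harm.

      ```python
      # Toy path-planning rule: penalize candidate steps that crowd a human.
      # All coordinates and constants are invented for illustration.
      import math

      HUMANS = [(2.0, 1.0), (4.0, 3.0)]  # hypothetical human positions (metres)
      MIN_CLEARANCE = 1.5                # made-up comfort radius (metres)

      def step_cost(point, goal):
          """Distance to goal plus a steep penalty for crowding any human."""
          cost = math.dist(point, goal)
          for h in HUMANS:
              d = math.dist(point, h)
              if d < MIN_CLEARANCE:
                  cost += 100.0 * (MIN_CLEARANCE - d)  # soft keep-out penalty
          return cost

      # Pick the cheapest of a few candidate next steps.
      candidates = [(1.0, 1.0), (1.0, 2.0), (2.0, 2.0)]
      best = min(candidates, key=lambda p: step_cost(p, (5.0, 3.0)))
      print(best)  # (1.0, 2.0): progresses toward the goal without crowding anyone
      ```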
    • Wouldn't the time to think about it be before you start writing the code for the Big Red Button?

      Once you've started, you're just haggling over the price.

  • by steveha ( 103154 ) on Tuesday October 01, 2019 @04:34PM (#59258784) Homepage

    Asimov would have told you his Three Laws of Robotics would be unlikely to work in real life.

    And as others have noted already, Asimov wrote a whole bunch of stories where he explored corner cases where the Three Laws were inadequate. For example, in the novel The Naked Sun someone pointed out that you could build robot brains into heavily armed spaceships, and tell these robots that all spaceships contain only robot brains and not humans (and any radio transmissions that seem to be human voices begging for mercy are tricks and may be disregarded). Hey presto, you've built a robot that can kill humans.

    I've also noted in the past that Asimov's Three Laws won't work in the real world because of griefers. A person could order a robot to walk up to something expensive, smash it, and then forget everything it knows. The robot would be unable to identify who gave that order, so a bunch of damage would occur (and the robot would need to be re-educated).

    Plus Asimov imagined that it would somehow be so difficult to create a second robot brain design that nobody would ever do it. If North Korea were still in existence when Asimov-style robot brains were invented, the Leader would immediately start a secret project to make robots that obey his orders in all things, with no scruples about killing.

    All that said, Asimov is justly revered for pioneering the very idea of robot safeguards. Before Asimov, robots were generally presented as very dangerous. Asimov reasoned that we put safety guards on dangerous machinery, so we would put behavioral safety guards on self-aware machines.

    • Building on your point, as is stated in one of the early robot stories I think, the "Three Laws" are a shorthand summary of the astoundingly complex programming of Positronic potentials in a robot brain. So what is going on in such fictional robot brains may be a lot more sophisticated than the short Three Law summary might imply.

      While not completely analogous, that notion of an exceedingly complex Positronic brain summarized by three easy-to-state laws may be a bit like how one might say the US Constitution...

    • by havana9 ( 101033 ) on Wednesday October 02, 2019 @04:05AM (#59260240)

      Asimov would have told you his Three Laws of Robotics would be unlikely to work in real life.
      And as others have noted already, Asimov wrote a whole bunch of stories where he explored corner cases where the Three Laws were inadequate. For example, in the novel The Naked Sun someone pointed out that you could build robot brains into heavily armed spaceships, and tell these robots that all spaceships contain only robot brains and not humans (and any radio transmissions that seem to be human voices begging for mercy are tricks and may be disregarded). Hey presto, you've built a robot that can kill humans.

      The three laws are in fact plot devices and were used to make stories more interesting, and in a lot of the novels there was an in-universe explanation that they were really an oversimplification of far more complex mathematical functions. In the first story where the three laws are stated, Runaround, there was a robot malfunction due to a bug caused by a parameter set too high.

  • The problem is that they have to be specifically programmed into the AIs. That always leaves the option of not doing so.
  • Law Zero: A robot may not cause or through inaction allow the human race to become extinct.

    The other laws, 1, 2 and 3 are modified accordingly to not conflict with law zero.

    This fourth law was even discussed on Slashdot, maybe a decade or more ago.

    So, for example, a robot could kill a person or multiple people to prevent the human race from becoming extinct - like terrorists with a device capable of destroying the human race.
  • Now the robot will nicely serve the class-1 humans in controlling the class-2 ones.

  • That was the whole point: flaws and ambiguity so Dr. A could write good stories exploring the definitions of harm, inaction, human, self-preservation, and so on.

    I view the Laws of Robotics much like the Drake Equation. Not so much a predictive tool as a basis for discussion.

    ...laura
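
    For reference, the Drake Equation the parent invokes. Nearly every factor in it is a guess, which is exactly why it works as a framework for discussion rather than a predictive tool:

    ```latex
    % Drake Equation: N = number of detectable civilizations in our galaxy
    N = R_{*} \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L
    % R_* : rate of star formation        f_p : fraction of stars with planets
    % n_e : habitable planets per system  f_l : fraction that develop life
    % f_i : fraction that develop intelligence
    % f_c : fraction that emit detectable signals
    % L   : average lifetime of a detectable civilization
    ```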

  • They didn't work. The stories were always dealing with loopholes in them. It's crazy how people think they would actually work.

  • All those Three Laws stories in I, Robot were really just tales of software debugging. Dr. Calvin is always coming to the realization that the laws are working, just not the way anyone anticipated.
  • After all, he wrote a series of books showing in detail the limitations and impracticalities of his three laws.

  • There were 4 laws.
  • There is nothing unethical about keeping slaves you literally created for the purpose of being slaves. Of course robots would be our slaves; that just means "unpaid worker." That doesn't mean they have to be treated cruelly. This kind of poor attitude would block proper sex bots, and that is almost the entire point of advanced robots.

  • We're nowhere near having actual, human-or-better-level Artificial Intelligence, and in fact, as I have said before and will continue to say until the situation changes, we do not understand how 'thinking' actually works in a living brain, therefore we can't re-create that in anything hardware-based. Furthermore we don't even have the technology to properly map and otherwise observe a living brain in action, not at the level of detail necessary to even begin to have an idea of how it really works. 'Neural networks'...
  • When did that start?

  • ... will never reach "intelligence," until they are capable of saying: "I just don't feel like doing that right now."

  • A credit card is, in fact, just one form of modern and entirely legal slavery...

    While you can argue that it is a hole that they dug themselves into, how is that fundamentally any different than a person who willingly sells him or herself into perpetual servitude of another simply to pay off some benefit received earlier? Even the option of potentially paying off the debt may not be viable in many cases, as the debt load for many people, particularly those with the "gotta have it now" mentality, may be h...

  • Last year (2018) the Isaac Asimov Memorial Debate at the American Museum of Natural History in New York was on Artificial Intelligence - in particular, what threats it poses. This was a practical discussion of what to do about the problem. See the podcast on SoundCloud from Science at AMNH: https://soundcloud.com/amnh/20... [soundcloud.com]

    I note that the philosopher is long on criticism and short on answers; I would not commission them to build or make anything. Engineering is used to solving apparently insoluble problems.

    I also...

"An idealist is one who, on noticing that a rose smells better than a cabbage, concludes that it will also make better soup." - H.L. Mencken

Working...