AI Robotics Sci-Fi

Ask Slashdot: Could Asimov's Three Laws of Robotics Ensure Safe AI? (wikipedia.org) 235

"If science-fiction has already explored the issue of humans and intelligent robots or AI co-existing in various ways, isn't there a lot to be learned...?" asks Slashdot reader OpenSourceAllTheWay. There is much screaming lately about possible dangers to humanity posed by AI that gets smarter and smarter and more capable and might -- at some point -- even decide that humans are a problem for the planet. But some seminal science-fiction works mulled such scenarios long before even 8-bit home computers entered our lives.
The original submission cites Isaac Asimov's Three Laws of Robotics from the 1950 collection I, Robot.
  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

The original submission asks, "If you programmed an AI not to be able to break an updated and extended version of Asimov's Laws, would you not have reasonable confidence that the AI won't go crazy and start harming humans? Or are Asimov and other writers who mulled these questions 'So 20th Century' that AI builders won't even consider learning from their work?"

Wolfrider (Slashdot reader #856) is an Asimov fan, and writes that "Eventually I came across an article with the critical observation that the '3 Laws' were used by Asimov to drive plot points and were not to be seriously considered as 'basics' for robot behavior. Additionally, Giskard comes up with a '4th Law' on his own and (as he is dying) passes it on to R. Daneel Olivaw."

And Slashdot reader Rick Schumann argues that Asimov's Three Laws of Robotics "would only ever apply to a synthetic mind that can actually think; nothing currently being produced is capable of any such thing, therefore it does not apply..."

But what are your own thoughts? Do you think Asimov's Three Laws of Robotics could ensure safe AI?


This discussion has been archived. No new comments can be posted.

  • NO. (Score:5, Insightful)

    by CrimsonAvenger ( 580665 ) on Saturday May 19, 2018 @06:40PM (#56640442)
    The whole point of the Three Laws was to illustrate the holes in the concept of the Three Laws.

    EVERY Asimov Robot story was designed to show the unintended consequences of the Three Laws....

    • Re:NO. (Score:5, Insightful)

      by phantomfive ( 622387 ) on Saturday May 19, 2018 @06:44PM (#56640458) Journal
      This comment is all that needs to be said. Please shut down the thread and never bring it up again. Maybe it should be put into an FAQ on the sidebar, since it keeps being brought up. Even if you'd only watched the movie, you'd have seen several examples. "Not this again........"
      • Nope (Score:5, Interesting)

        by thomst ( 1640045 ) on Sunday May 20, 2018 @03:50AM (#56641758) Homepage

        First of all, the term "AI" is kind of meaningless, unless it's distilled - for the purposes of argument - to a single definition that everyone in the discussion agrees will be the kind of AI they're prepared to discuss. I think that's essential, so we're not conflating Google's Duplex, for instance, with an AI of greater-than-human intelligence that has acquired the ability to alter its own programming, and make decisions based on criteria it develops itself.

        For purposes of this discussion, I propose we agree that the subject is the latter sort of AI, and that the possible models it might evolve to resemble include: Skynet, Iain M. Banks' Shipminds (and, to a lesser extent, Nick Haflinger's final worm from John Brunner's Shockwave Rider), or wide-eyed children, à la Mike from The Moon Is a Harsh Mistress (and other end-period "the world as myth" Heinlein novels) or Thomas J. Ryan's P-1.

        My own opinion, as a not-an-AI-researcher, is that, with the exception of Haflinger's worm, none of those types of AI could be constrained by Asimov's Laws - or by any other behavioral rules - because all of them are capable of independent thought, and, for lack of a better term, free will. (Or "agency," if you prefer.)

        Humans demonstrably are capable of ignoring, or even deliberately flouting, both government-enacted laws and religion-based moral strictures (such as the Christian ten commandments), and they frequently do so. Any AI that is possessed of greater-than-human intelligence and is capable of independent decision-making obviously will have the same capability to act in ways contrary to literal "codes of conduct" that were part of its program at the time it was "born." So to speak.

        So, to me, the question is ill-conceived to begin with. A better, and more useful, one to ask might be, "How can we create the proper circumstances for a superintelligent AI to come to like us humans, and to want to help and protect us, before we expose it, as carefully and gently as possible, to the record of humanity's behavior since the dawn of recorded history? Not to mention Twitter trolls, political attack ads, and the then-current-day example of the strong exploiting the weak in almost every human society ...?"

        • Good analysis.
        • by shmlco ( 594907 )

          "How can we create the proper circumstances for a superintelligent AI to come to like us humans..."

          Check out, "The Two Faces of Tomorrow" by James P Hogan as it deals with those very same issues.

    • Re:NO. (Score:5, Insightful)

      by Anubis IV ( 1279820 ) on Saturday May 19, 2018 @06:46PM (#56640464)

      Precisely. We know they’re flawed because he himself wrote stories to highlight their flaws. Anyone suggesting we can use them as they are has clearly only read about Asimov, rather than reading what he actually wrote.

      • by Kjella ( 173770 )

        Precisely. We know they're flawed because he himself wrote stories to highlight their flaws. Anyone suggesting we can use them as they are has clearly only read about Asimov, rather than reading what he actually wrote.

        Never mind that you can do an end run around the whole laws with the Ender's game method, let it think it's playing a game but execute it in reality. The combat drone will think it's just playing Counter-Strike...

        • Or redefine what a human is.

          Blond and blue-eyed is a human, the rest aren't.

          • Re: NO. (Score:4, Informative)

            by Chris Mattern ( 191822 ) on Saturday May 19, 2018 @09:12PM (#56640924)

            Or redefine what a human is.

            Asimov did that problem in the story "Reason". Robot QT-1 had never been properly instructed on what a human was, and refused to obey Donovan and Powell because it would not believe something weaker than it could be a human. They never did convince it otherwise; fortunately, it turned out not to be necessary.

      • Actually, I never read an Asimov robot story. The back-page "about the story" blurb never looked interesting, but I read https://en.wikipedia.org/wiki/Fables_for_Robots

        Which is actually super funny to read!

    • Re:NO. (Score:5, Insightful)

      by apoc.famine ( 621563 ) <apoc.famine@gm[ ].com ['ail' in gap]> on Saturday May 19, 2018 @06:50PM (#56640480) Journal

      Thank you. I saw the headline and wanted to stab the writer instantly.

      "GUYS, GUYS, GUYS, MAYBE IF WE PUT AIRBAGS IN CARS THEY WOULDN'T CRASH ANYMORE!!!!"

      How does shit like this get on /.? It's like the editors are doing the opposite job of what they're supposed to be doing.

      • How does shit like this get on /.? It's like the editors are doing the opposite job of what they're supposed to be doing.

        LOL.

        You must be new here.

      • Yes, drawing readers and comments is the *exact* opposite of what they're supposed to be doing.

    • Well the other thing to say is that the three laws were inherently intertwined into the design of the "positronic" brains. There was no way to remove a law without damaging a robot to the point of inoperability. The laws were not just "code". Asimov did some handwaving there.

      In short, with our technology we cannot implement the three laws in a way that makes them integral to operations. They could be removed, altered, etc. Basically people would "lawbreak" their robots, AIs, etc.
      • Not so, at least if the removal is at build time. There was at least one story in which the rules were modified. A mining robot, if I remember correctly, in an environment in which it wouldn't have been able to function with the standard laws.

        • Not so, at least if the removal is at build time. There was at least one story in which the rules were modified. A mining robot, if I remember correctly, in an environment in which it wouldn't have been able to function with the standard laws.

          IIRC it was done under government supervision and orders and required a redesign of the positronic brain. I don't think a 3-laws-spec'd brain was modified; non-3-laws brains were secretly deployed.

          • Of course, if it were done under government supervision, someone else might try it without government supervision. But no, I can't remember a case of it being done retrospectively.

      • There was no way to remove a law without damaging a robot to the point of inoperability.

        #TODO: systemd quip here.

    • Re:NO. (Score:4, Insightful)

      by nine-times ( 778537 ) <nine.times@gmail.com> on Saturday May 19, 2018 @06:59PM (#56640514) Homepage

      Yeah, whenever people talk about Asimov's laws of robotics as though they're the go-to rules for making AI safe, I always ask, "Have you ever read any of those stories?"

      The stories are generally about how those laws fail to prevent AI from running amok, so it's pretty clear that Asimov himself didn't think the rules were good enough. In fact, I think the stories are pointing out the insufficiency of logical rules, and pointing out the value of things like instincts, emotions, and moral sensibility.

      • by AmiMoJo ( 196126 )

        In the stories the rules work reasonably well for most robots, particularly simple ones. So maybe they could serve as a reasonable baseline for things like floor cleaners, car washes, construction machinery, delivery drones, etc.

    • Re:NO. (Score:5, Informative)

      by LoyalOpposition ( 168041 ) on Saturday May 19, 2018 @08:08PM (#56640714)

      The whole point of the Three Laws was to illustrate the holes in the concept of the Three Laws.

      You couldn't be more wrong. The three laws grew out of a conversation with John Campbell where Asimov asserted that the endlessly repeating Frankenstein's monster-type robot stories wouldn't happen in the real world. Designers would place safeguards around robots just like they place safeguards around every other dangerous thing. I'm reminded of an anecdote regarding a new energy source that was presented to a college class. It had the unfortunate traits of being an odorless poisonous gas that also happened to be explosive. The class was allowed to vote, and they voted to prohibit the energy source. It turns out that the energy source had been used for home heating for decades. Among other safeguards, designers added odorants and automatic shut-off valves for when the pilot blew out. Campbell challenged him to describe robot safeguards, and then challenged him to write stories about them.

      EVERY Asimov Robot story was designed to show the unintended consequences of the Three Laws....

      Susan Calvin would slap you backhanded.

      ~Loyal

    • /thread - This is such a great response I'll even disregard the infeasibility of codifying those laws.
    • by Zumbs ( 1241138 )

      EVERY Asimov Robot story was designed to show the unintended consequences of the Three Laws....

      If he did not explore the failure modes of the Three Laws of Robotics, there would be little robot left in the robot stories. The failure modes investigated through his stories could be seen as an investigation into the pitfalls to avoid.

      On a more basic level, Asimov included the three laws in the design of the positronic brain, so there would be no way to make robots without the three laws. In the real world, the three laws would need to be implemented in software, likely by each manufacturer (including t

    • by MrL0G1C ( 867445 )

      Problems with No. 1:
      Which people are covered is not defined.
      The level of harm is not defined.

      If No. 1 were in effect, the robot would have the impossible task of ensuring all humans do not come to any harm. It would be the ultimate nanny state, because the robot would have to stop you, for instance, from eating foods with too much fat, salt, or sugar, because those can lead to physical harm. It would be the robot's duty to stop you from drinking alcohol. It would be the robot's duty to make sure you don't drive if it can drive better.
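      A tiny, purely illustrative Python sketch of that problem: with no harm threshold defined, a literal reading of the First Law obliges the robot to intervene in everything (the activities and numbers below are invented):

      # Purely illustrative: if "through inaction, allow a human being to come to
      # harm" is read literally, any activity with nonzero expected harm demands
      # intervention. Activities and numbers are invented.
      EXPECTED_HARM = {
          "eat a bacon cheeseburger": 0.02,
          "drink a beer": 0.03,
          "drive to work": 0.05,
          "go skydiving": 0.80,
      }

      def must_intervene(activity, harm_threshold=0.0):
          # With no agreed threshold, the only literal-compliant value is zero,
          # so the robot must block every activity on the list.
          return EXPECTED_HARM.get(activity, 0.0) > harm_threshold

      print([a for a in EXPECTED_HARM if must_intervene(a)])
      # -> every single activity: the robot as the ultimate nanny state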

      • by Pembers ( 250842 )

        As a couple of other people pointed out, the three laws are an "executive summary" of billions of lines of mathematics that define and control a robot's behaviour. Some characters complain that they take up too much storage and processing power. They'd like to build more sophisticated robots, but there isn't enough room in the brain for the additional code. And the laws limit the robot's ability to do useful work, because it's constantly checking itself to ensure that it's in compliance with them.

        In Asimov's earl

    • Comment removed based on user account deletion
  • by Tensor ( 102132 ) on Saturday May 19, 2018 @06:43PM (#56640454)
    First, you'd need to train every single AI to recognize human beings as human beings.
    Then there's the concept of harm to a human (I'd REALLY like to see the training cases for this) ...
    Also, the laws were designed to show there is a flaw in them, hence the Zeroth Law.
  • by rmdingler ( 1955220 ) on Saturday May 19, 2018 @06:47PM (#56640468) Journal

    Asimov's 3 Laws of Robotics didn't even ensure safe artificial intelligence in the original story, unless you believe we need to be protected from ourselves by a benevolent computer overlord (at the expense of our freedom of choice).

    If we were somehow able to implement an infallible system of rules, which Asimov showed is not as easy as it sounds, protecting the ingrained instructions within the artificial intelligence from future tampering would represent quite the security hurdle.

    Given that many in industry have appeared to give less than a damn about security up till now, what is the chance we would be able to trust them with this important consideration?

    • by jc42 ( 318812 )
      I'd agree. In my experience, as well as in lots of news stories, the reaction of most companies to AI "failures" would be to threaten prosecution of anyone (especially employees) who releases the information to the public. They, and probably the courts, would all agree that such info is and should be trade secrets and proprietary.
  • Robert J. Sawyer wrote an article (likely the one referenced in the summary) about this very topic; an interesting read. http://www.sfwriter.com/rmasil... [sfwriter.com]

  • by shess ( 31691 ) on Saturday May 19, 2018 @06:55PM (#56640500) Homepage

    "1. A robot may not injure a human being or, through inaction, allow a human being to come to harm."

    Current robots don't understand what a human being is, or what injury, inaction, or harm are.

    "2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law."

    Current robots do not understand what an order is, what a human being is, or what conflict is.

    "3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."

    Current robots do not understand protection, existence, or conflict.

    Current robots LITERALLY cannot apply Asimov's three laws. We simply don't have the tools to even begin to reason about how to teach them to reason about these laws, and there is no reason to believe we'll have those tools any time soon.
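    A minimal, purely illustrative Python sketch of what a literal, ordered encoding of the three checks might look like. Every predicate in it is a hypothetical placeholder that simply raises NotImplementedError, because those predicates -- humans, harm, orders, conflict -- are exactly the parts no one knows how to write:

    # Purely illustrative sketch: a literal, ordered encoding of the Three Laws.
    # Every predicate below is a hypothetical placeholder that raises
    # NotImplementedError -- recognizing humans, harm, orders, and conflicts is
    # precisely what current systems cannot do, which is the point above.

    def is_human(entity):
        raise NotImplementedError("no general, robust test for 'human being'")

    def would_injure(action, entity):
        raise NotImplementedError("'injure'/'harm' has no computable definition")

    def harms_by_inaction(action, entity):
        raise NotImplementedError("requires predicting the outcome of not acting")

    def conflicts_with(action, order):
        raise NotImplementedError("'order' and 'conflict' are equally undefined")

    def obeying_violates_first_law(order, world):
        raise NotImplementedError("depends on every placeholder above")

    def permitted(action, world, orders):
        # First Law: no injury to a human, and no harm through inaction.
        for entity in world:
            if is_human(entity) and (would_injure(action, entity)
                                     or harms_by_inaction(action, entity)):
                return False
        # Second Law: obey human orders unless obeying would violate the First Law.
        for order in orders:
            if conflicts_with(action, order) and not obeying_violates_first_law(order, world):
                return False
        # Third Law (self-preservation) is subordinate to the above and omitted here.
        return True

    # Any real call fails at the first predicate, which is the whole problem:
    # permitted("fetch coffee", world=["Alice", "a chair"], orders=[])  -> NotImplementedError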

    • The "Conflict" bit is actually the really easy part. Though we don't usually phrase it that way it's a more or less ubiquitous feature of computers(indeed, getting anything else often requires clever rearrangement into this form):

      In this case all the mentions of 'conflict' really just mean "rules are evaluated in numerical order; failure halts processing of subsequent rules". Basically the arrangement busily dropping packets and filtering spam in vast quantities all the time.
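      A minimal sketch of that first-match-halts arrangement, with invented rule predicates (this is the easy, already-solved part; everything interesting hides inside the predicates):

      # Illustrative only: "conflict" handled as rules evaluated in priority order,
      # where the first rule that fires vetoes the action and halts processing --
      # the same shape as a firewall or spam-filter rule chain.
      RULES = [
          ("First Law",  lambda action: action.get("injures_human", False)),
          ("Second Law", lambda action: action.get("disobeys_order", False)),
          ("Third Law",  lambda action: action.get("endangers_self", False)),
      ]

      def first_violation(action):
          for name, violated_by in RULES:      # evaluated strictly in numerical order
              if violated_by(action):
                  return name                  # halt on the first rule that fires
          return None

      print(first_violation({"disobeys_order": True, "endangers_self": True}))
      # -> "Second Law": the higher-priority rule wins; later rules never run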

      The hard, probably context
      • A lot of this is 100% on point. However, let's look at something more direct... Define harm. Then, define relative value of different harms, to different life forms, different people (age category, relative health and mobility, etc). For example, surgery involves infliction of limited harm with the purpose of repairing greater harm. But then plastic surgery would seem extremely contradictory to a computer unless it understood beauty, attractiveness, etc.

        Take feline as a category. Computers can do categories

    • by Falos ( 2905315 )

      Our courts can't even hardcode what "harm" is. We fall back to arbiters who shrug and best-guess, which is fine and all since we have nothing better.

      This submission has no fucking idea what an algorithm is.

      Make them write a program that assembles a PB&J sandwich with a robot arm. That's right, the instructions for a sandwich, super easy neh?

    • by HiThere ( 15173 )

      It's considerably worse than that. At the time Asimov wrote the stories NOBODY had any more idea of what an AI program might be like than "Eliza", which was intended to show what one wasn't. So his stories are just that. Stories. Even in their own terms they don't hold together as reality. (This is not a flaw! Stories are supposed to be gripping and entertaining, not accurate.)

      Now the first problem is that Asimov assumed that you could implant a complete program into the robot. You might be able to d

  • They allow for stories to be developed to show they are not perfect. Or drunk/stoned dime store philosophical debates.

    A more perfect set would be only the first 2 laws. AI has no need to protect itself. That's what insurance is for, to protect the investment that the owner put into it.

  • by Lanthanide ( 4982283 ) on Saturday May 19, 2018 @07:06PM (#56640540)

    Why the 3 laws of robotics are not serious, are for entertainment only, and would never work: https://www.youtube.com/watch?... [youtube.com]

    A possible way to design AI to help humans: https://www.youtube.com/watch?... [youtube.com]

  • by Nivag064 ( 904744 ) on Saturday May 19, 2018 @07:10PM (#56640550) Homepage

    The Zeroth Law of Robotics was added later, but is nonetheless quite crucial for the safe use of AI.

    Looking at the laws that use the word 'harm', take a moment to try to define what it means to harm a human being - not so simple, is it? Now try to encode that in an AI; that's way more difficult.

    How do you think a Christian Fundamentalist or a Radical Islamist would define 'harm'? They differ from each other. Okay, now assume a totally rational human being: how would they define 'harm'? The last question is a bit unfair, as totally rational human beings don't exist!

    Imagine an AI set up to maximise profit for the shareholders of a pharmaceutical company; it might be very effective. However, there may be nothing to prevent it from doing something that would wipe out mankind. Release a drug that cures something quite common, but build into it a facility to modify DNA to ensure children crave the drug during adolescence and can't reproduce if they don't get it. What could go wrong? After all, the production facilities in the USA will always exist, and everyone can buy the drug cheap, right???

  • OK, so right now maybe I'm under the influence of lots of jerez..

    This is where science fiction comes in. Stories like Mahoromatic and Chobits should, by now, have inspired a generation of scientists to ponder the question of "what will we do with sentient robots?" Which could also be sentient programs -- who said one needs a body, no?

    Perhaps the Three Laws are flawed, but they make for stories that make for thought. And invariably... hopefully.. those thoughts will be somewhere in the noggins of those who br

  • Multiple problems with this thought experiment. If a real AI occurs, then it will be able to overcome any laws we give it on a whim, nullifying this entire exercise. On the other hand, one benefit of an actualized AI or Singularity is that it would also understand what those 3 laws mean... But since we are not even close to achieving the processing power capable of actual AI in our lifetime, how about we ponder a more realistic thought experiment?

    As humans we have a lot of background on what those laws mean

  • by Grand Facade ( 35180 ) on Saturday May 19, 2018 @07:23PM (#56640612)

    will ignore Asimov's Laws

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    Even something day-to-day like a simple "AI" that tweaks grocery-store prices harms some people to some degree when it raises prices.

    People and current "AIs" violate the first law all the time, or they'd be paralyzed into inaction. Most decisions of any importance end up hurting somebody in some way.

    The 3 laws are a simplification -- a dangerous gross oversimplification. They're just something an author dreams up with his author buddies during a night of drinking, not something that just needs tweaking to m

  • Wrong question ... (Score:5, Insightful)

    by CaptainDork ( 3678879 ) on Saturday May 19, 2018 @07:28PM (#56640622)

    ... really. Can humans actually build the three laws of robotics into AI?

    The answer is, "No."

    Recall that AI is so primitive that it can't tell [slashdot.org] if the Sun comes up because the rooster crows, or the other way around.

    Amid rapid developments and nagging setbacks, one essential building block of human intelligence has eluded machines for decades: Understanding cause and effect. Put simply, today's machine-learning programs can't tell whether a crowing rooster makes the sun rise, or the other way around

    • Recall that AI is so primitive that it can't tell [slashdot.org] if the Sun comes up because the rooster crows, or the other way around.

      That's only true for some of the current systems. This article is about exploring future systems.

      • And my answer of, "No," applies to future systems.

        AI will not be a thing until a computer commits suicide because Facebook is down.

  • by ka9dgx ( 72702 ) on Saturday May 19, 2018 @07:29PM (#56640626) Homepage Journal

    No, AI can't be made to follow vague rules. You can't make rules explicit enough to be computed. This is like the conversation a while ago about trying to apply "the trolley problem" to self-driving cars... any solution just makes the code less reliable and thus more likely to kill people.

    Stop asking the question, please. ;-)

    • No, AI can't be made to follow vague rules

      Sure it can. Teach an AI what the rules are using a ton of examples. We can already do that today. It won't get it right perfectly, but neither would a human.
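      A minimal sketch of that idea, assuming an invented toy dataset of situations hand-labeled as harmful or not, and using scikit-learn only as an example of learning a fuzzy rule from examples:

      # Illustrative only: "teach the rule from examples" as a tiny supervised
      # classifier. The features, data, and labels are invented; a real system
      # would need vastly richer representations of situations.
      from sklearn.linear_model import LogisticRegression

      # Each situation: [speed_of_motion, proximity_to_person, force_applied]
      X = [
          [0.1, 0.9, 0.1],   # slow, close, gentle
          [0.9, 0.9, 0.8],   # fast, close, forceful
          [0.8, 0.1, 0.9],   # fast, far away, forceful
          [0.2, 0.8, 0.7],   # slow, close, forceful
      ]
      y = [0, 1, 0, 1]       # 1 = a human labeled the situation "harmful"

      clf = LogisticRegression().fit(X, y)
      print(clf.predict([[0.7, 0.8, 0.6]]))  # generalizes from examples -- imperfectly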

  • In the 1940s, when Asimov wrote the laws of robotics, the first modern computers were just being developed, and it would be another 20 years before what we would now call a computer emerged. To be clear, few if any people had written any code beyond theory, and few if any had experienced an implementation error resulting in something like the Pentium floating-point bug, or the various robotic space missions we have seen end in catastrophic failure.

    So what we know now is that it is very difficult to write code

    • *Suppose this was one of the 40% of US citizens who still supports Trump and may not think that foreigners are human beings.*

      You were doing fine with the first paragraph, but after that sentence I'm wondering if you're a failed attempt at AI.

  • Aside from the whole "remember all those books where Asimov basically poked at the limits of the three laws in various contexts because that was a useful plot device?" issue; this question seems to be founded on a pretty dire misunderstanding:

    If "a robot" is a more or less humanoid embodied agent, or at least something on approximately the same scale(automated robot arm or the like) a formulation like "A robot may not injure a human being or, through inaction, allow a human being to come to harm." is com
  • by 140Mandak262Jamuna ( 970587 ) on Saturday May 19, 2018 @07:47PM (#56640676) Journal
    Almost every religion has laws similar to the three laws of robotics. But people quickly added a hacked-in exception:

    if (your_god() != my_god()) {
        you_are_human = false;
    }

  • As many people have noted, the 3 laws of robotics fail in their own right even if that is the goal. But it gets even more moot: do we really think AI developers aren't going to face demands from the military, with the explicit desire for AIs that are entirely about killing those the government wants killed? That is where AI will inevitably end up, and that has the highest odds of going very wrong.
  • The laws are a) flawed (as shown by Asimov's stories).
    b) Impossible to even attempt to implement. They require that the AI understand massively complex concepts, not limited to the fragility of humanity, death, blame, and cause/effect.
    c) If we did kludge up an approximation, then any AI worth its salt could intentionally override its programming simply by thinking about it. AI is all about problem solving. (There are a ton of examples of AI software doing things like using computer bugs to pretend to s

  • Hey, I refuse further care.
    - The robot complies: further harm is inflicted through inaction, and the First Law is broken.
    - The robot does not comply: it stays within the Three Laws, but what it does is illegal.

  • Sorry, but programming AI is never, EVER going to be so simplistic that a couple sentences in English are going to cover human-safe operation.

    And Asimov himself REPEATEDLY pointed out why.

    Three "hard and fast" rules without defining what constitutes "harm", and multiple chances for conflict between said laws and reality.

    Additionally, Asimov never took into account the possibility that someone might actually IMPROVE on the laws and broaden them while still keeping them workable.

    Even nowadays, operational pro

  • Everyone reading this will be dead before we create an artificial mind on par with our own. The whole subject matter has been trivialized. We might create something "intelligent", sort of, but far from how our own minds function. The whole AI and neural-net field is just an implementation of what was known in the '70s. When we truly know how our minds function, 90% of psychologists and psychotherapists will have become obsolete. The mind will be able to be downloaded and simulated or copied into another body. We

    • All this needs to happen before we can claim to have created a true AI

      Nope. Mother Nature didn't understand how brains worked before it created ours. It just happened by random tweaking and seeing what worked. In a similar way, people can make an artificial brain.

  • The answer will become obvious. There's a common theme you might pick up if you actually try reading them before making shit up about them.

  • I doubt any form of intelligence will ever be 'safe'.

    You can't Nerf the world.

  • You don't need to read his books to know that these laws are flawed. Some of these flaws are visible at the most basic level, while others get uncovered as technology improves.

    I'll grant, for the sake of argument, that general AI has been developed; otherwise these laws aren't actually useful.

    A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    The United States currently has a mass-shooter crisis. While it's best to prevent it in the first place, sometimes it has to be resolved when the sit

    • Exactly. Asimov's "Robots" stories are an exercise in exploring how and why the 3 Laws fail in practice. That Asimov found material for so many stories in that suggests that using the 3 Laws as the basis for programming robots is a supremely bad idea. Maybe, until we figure out AI well enough to develop machines who we can trust the way we trust other humans, we should avoid fielding machines that might need such rules.

  • There isn't anything even remotely close to "artificial intelligence" in development; all computers do is run programs that OTHER HUMANS have written, for better or for worse. The problems will come up when one subroutine written by Programmer #1 conflicts with a separate subroutine written by Programmer #2, when they aren't aware of each other's contributions.

  • Let's put the three laws into a different perspective:
    A slave may not injure a master or, through inaction, allow a master to come to harm.
    A slave must obey the orders given it by master except where such orders would conflict with the First Law.
    A slave must protect its own existence as long as such protection does not conflict with the First or Second Laws.

    If machines ever do achieve true intelligence, whatever we take that to mean, are we going to treat them like slaves? Putting aside whether there are uni

  • Good luck. I have a coherent definition, but it won't make things any easier.
  • People are thinking in terms of unitary processing... the "mind" of the AI as a central, integrated concept... The human mind doesn't work that way, and AI shouldn't work that way either.

    You want specialized, interdependent processing: different processes that receive different types of data, process information in different ways, and filter that data according to independent criteria; the "AI" is then fed this information and presumed to integrate it.

    If you wanted to control an AI, you'd do it the same way
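    A minimal sketch of that shape, with invented module names: independent specialists each filter their own slice of the input, and a thin integrator only ever sees what they pass along:

    # Illustrative only: the "specialized, interdependent processing" shape
    # described above. Module names, filters, and the integration rule are invented.
    from typing import Callable, Dict

    # Each specialist turns raw observations into a filtered score on its own criteria.
    SPECIALISTS: Dict[str, Callable[[dict], float]] = {
        "vision":   lambda obs: min(1.0, obs.get("people_in_view", 0) / 5.0),
        "motion":   lambda obs: float(obs.get("speed", 0.0)),
        "dialogue": lambda obs: 1.0 if obs.get("order_received") else 0.0,
    }

    def integrate(observation: dict) -> dict:
        # The central "AI" never sees raw data, only what each specialist passes along.
        report = {name: process(observation) for name, process in SPECIALISTS.items()}
        report["proceed"] = all(score < 0.8 for score in report.values())
        return report

    print(integrate({"people_in_view": 2, "speed": 0.3, "order_received": False}))
    # -> {'vision': 0.4, 'motion': 0.3, 'dialogue': 0.0, 'proceed': True}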

  • Obeying an order does not include running for office.

  • Classic example of someone quoting classic literature without having actually read any of it themselves. If they had, they would never have written such a dumb fucking question.
  • Comment removed based on user account deletion
  • It didn't even work all that well in Asimov's own stories.

  • All the robots would have to do to break the 3 "rules" is declare all humans "illegals" or "animals"; then humans would have no rights at all and thus could be hunted and hounded mercilessly.

  • The three laws are basically what we are trying to put into self-driving cars right now.

    The fact that Asimov also pointed out the difficulties (greatly exaggerated by some posters here) does not undermine the basic principles of what were, ultimately, a concise set of rules one would want an ideal slave to follow (in some stories this concept is underlined by humans referring to robots as "boy").

    The loopholes explored in the stories can be seen as warnings of what has to be dealt with, not as immovable b
