Ask Slashdot: Could Asimov's Three Laws of Robotics Ensure Safe AI? (wikipedia.org) 235
The original submission cites Isaac Asimov's Three Laws of Robotics from the 1950 collection I, Robot.
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
The original submission asks, "If you programmed an AI not to be able to break an updated and extended version of Asimov's Laws, would you not have reasonable confidence that the AI won't go crazy and start harming humans? Or are Asimov and other writers who mulled these questions 'So 20th Century' that AI builders won't even consider learning from their work?"
Wolfrider (Slashdot reader #856) is an Asimov fan, and writes that "Eventually I came across an article with the critical observation that the '3 Laws' were used by Asimov to drive plot points and were not to be seriously considered as 'basics' for robot behavior. Additionally, Giskard comes up with a '4th Law' on his own and (as he is dying) passes it on to R. Daneel Olivaw."
And Slashdot reader Rick Schumann argues that Asimov's Three Laws of Robotics "would only ever apply to a synthetic mind that can actually think; nothing currently being produced is capable of any such thing, therefore it does not apply..."
But what are your own thoughts? Do you think Asimov's Three Laws of Robotics could ensure safe AI?
NO. (Score:5, Insightful)
EVERY Asimov Robot story was designed to show the unintended consequences of the Three Laws....
Nope (Score:5, Interesting)
First of all, the term "AI" is kind of meaningless, unless it's distilled - for the purposes of argument - to a single definition that everyone in the discussion agrees will be the kind of AI they're prepared to discuss. I think that's essential, so we're not conflating Google's Duplex, for instance, with an AI of greater-than-human intelligence that has acquired the ability to alter its own programming, and make decisions based on criteria it develops itself.
For purposes of this discussion, I propose we agree that the subject is the latter sort of AI, and that the possible models it might evolve to resemble include: Skynet, Iain M. Banks' Shipminds (and, to a lesser extent, Nick Haflinger's final worm from John Brunner's Shockwave Rider), or wide-eyed children, à la Mike from The Moon Is a Harsh Mistress (and other end-period "the world as myth" Heinlein novels) or Thomas J. Ryan's P-1.
My own opinion, as a not-an-AI-researcher, is that, with the exception of Haflinger's worm, none of those types of AI could be constrained by Asimov's Laws - or by any other behavioral rules - because all of them are capable of independent thought, and, for lack of a better term, free will. (Or "agency," if you prefer.)
Humans demonstrably are capable of ignoring, or even deliberately flouting, both government-enacted laws and religion-based moral strictures (such as the Christian ten commandments), and they frequently do so. Any AI that is possessed of greater-than-human intelligence and is capable of independent decision-making obviously will have the same capability to act in ways contrary to literal "codes of conduct" that were part of its program at the time it was "born." So to speak.
So, to me, the question is ill-conceived to begin with. A better, and more useful, one to ask might be, "How can we create the proper circumstances for a superintelligent AI to come to like us humans, and to want to help and protect us, before we expose it, as carefully and gently as possible, to the record of humanity's behavior since the dawn of recorded history? Not to mention Twitter trolls, political attack ads, and the then-current-day example of the strong exploiting the weak in almost every human society...?"
Re: (Score:2)
"How can we create the proper circumstances for a superintelligent AI to come to like us humans..."
Check out, "The Two Faces of Tomorrow" by James P Hogan as it deals with those very same issues.
Re: (Score:2)
Surely there's enough historical precedent here: You kill them to protect them from themselves.
Re:NO. (Score:5, Insightful)
Precisely. We know they’re flawed because he himself wrote stories to highlight their flaws. Anyone suggesting we can use them as they are has clearly only read about Asimov, rather than reading what he actually wrote.
Re: (Score:2)
Precisely. We know they're flawed because he himself wrote stories to highlight their flaws. Anyone suggesting we can use them as they are has clearly only read about Asimov, rather than reading what he actually wrote.
Never mind that you can do an end run around the whole laws with the Ender's game method, let it think it's playing a game but execute it in reality. The combat drone will think it's just playing Counter-Strike...
Re: NO. (Score:3)
Or redefine what a human is.
Blond and blue-eyed is a human, the rest aren't.
Re: NO. (Score:4, Informative)
Asimov covered that problem in the story "Reason". Robot QT-1 had never been properly instructed on what a human was, and refused to obey Donovan and Powell because it would not believe something weaker than it could be a human. They never did convince it otherwise; fortunately, it turned out not to be necessary.
Re: (Score:3)
Actually, I never read an Asimov robot story. The back page "about the story" never looked interesting, but I did read: https://en.wikipedia.org/wiki/Fables_for_Robots
Which is actually super funny to read!
Re:NO. (Score:5, Insightful)
> But Asimov fixed each of the problems in those stories
No, he did not. He had characters deal with the problems (except in the cases where the laws were weakened or removed, and those stories inevitably ended with the basic three restored).
Re: (Score:3, Insightful)
It would take about three seconds for any human to come up with a workaround that could justify doing just about anything and still technically conform to the laws. Less than three seconds if you allow the zero'th law.
This is true. What is also true is that the three laws were conceived of over the course of a few minutes as a plot device in a short story. They were never intended as actual constraints for AIs.
Re: (Score:2)
> The whole point of the Three Laws was to illustrate the holes in the concept of the Three Laws.
> This is true
Indeed. The thought experiments were spawned out of the trivially porous "laws".
Re: NO. (Score:2, Insightful)
But the intent of the laws is still interesting from an AI perspective, so there is a reason to at least consider them when designing an AI. The fourth law looks good on paper, but it may be a problem for humanity too.
How do you then decide which action to take in situations where humans can be injured or killed regardless of the action? Like choosing between 100 kids or 100 elderly people?
Re: (Score:3)
I'd say the best solution is for the problem to be handled as much as possible at engineering level. The best solution to the trolley problem is to design a safer trolley, and efforts put towards that are going to matter more than creating a formula to value life.
Thought experiments are useful, but they often represent extreme edge cases, so mundane choices can actually have a far greater impact.
Re: (Score:2)
Source: Am a lawyer.
Re: (Score:2)
There is no inherent property of a network that does this. Simply because two or more devices are connected does not mean that they can or will route around damage to that connectivity. At best, a network can be designed with redundancies, but that need not be the case, nor is redundancy always effective where it exists.
Re: (Score:2)
Germany has already released draft rules for robot cars that will eventually become law.
One rule is no consideration of things like age when making decisions. And no deliberately selecting targets when an accident is unavoidable.
Re: NO. (Score:4, Insightful)
So that means essentially that the system isn't even permitted to choose between a truck trailer and a motorcyclist when a crash is unavoidable, even though the former might be a better choice.
I think we will see a lot of crazy stuff floating up over the years to come and that we may all need to ride in bumper cars doing 10mph at most to avoid serious accidents.
a ready guide in some celestial voice (Score:2)
Even if it decides to do nothing, that still amounts to hitting the default target.
To put it another way: it can swerve left and hit this, swerve right and hit that, or not swerve at all and hit something else. Whichever it does is still a choice.
Re: (Score:3)
There is no choice; the legally mandated action is simply to apply the brakes and not swerve.
The car should never put itself in a situation where it has to make that choice. If someone else puts it in that situation, it's their fault whoever is injured. Swerving just makes the AI liable when it otherwise wouldn't have been.
Same goes for humans.
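As a toy illustration only (made-up function and return values, nothing from any real autonomous-driving stack), that policy really is as trivial as it sounds:

# Hypothetical sketch: the "brake, don't swerve" rule as a trivial policy.
def choose_action(collision_unavoidable):
    # If a collision cannot be avoided, brake hard and hold the lane;
    # the system never "selects a target", it only sheds speed.
    if collision_unavoidable:
        return "full_brake_hold_lane"
    return "continue_normal_driving"

print(choose_action(True))   # full_brake_hold_lane
print(choose_action(False))  # continue_normal_driving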
Re: (Score:2)
One rule is no consideration of things like age when making decisions
That is nonsense.
The laws only regulate that autonomous driving is possible. (And the passenger/driver is responsible if something goes wrong)
And no deliberately selecting targets when an accident is unavoidable.
Sure, the /. myth again. When did it ever happen that some kids jump into your path and one decides to kill the aunts on the other side of the road?
WTF: an autonomous car will brake! Not target something else ... just like a human.
Re: (Score:2)
The laws are too vague to even get to that situation. "A robot may not injure a human being or, through inaction, allow a human being to come to harm." Even assuming that "harm" could be adequately defined, which it can't, that's still the halting problem, which is provably unsolvable. There's no way of knowing in advance whether the next calculation might reveal a harmful event, otherwise there would be no need for the calculation. Putting any sort of constraint on future predictions would open the doo
Re: (Score:2)
It would take about three seconds for any human to come up with a workaround that could justify doing just about anything and still technically conform to the laws. Less than three seconds if you allow the zero'th law.
That wouldn't matter, because Asimov's robots don't obey the sixty-three words of the three laws. They obey the literally thousands of thousands of positronic pathways created for them in the factory. The sixty-three words are a sort of executive summary of what the three laws require, so cr
Re: (Score:3)
What is also true is that the three laws were conceived of over the course of a few minutes as a plot device in a short story.
Well shit, if that's your beef you might as well throw out all of modern engineering:
Re: (Score:2)
This is all moot, though. Anyone who thinks that we will have that level of AI inside of a century is riding high in the thin air atop mount stupid. Expert systems that can learn Go and brute force better game play than a human or that can search databases to make better fringe-case diagnoses than doctors are not AI. For AI to be AI you have to have BOTH the A and the I.
Don't be so cocksure. While it's absolutely true that there is no AI of general intelligence, the achievements that have been made in narrow fields are going to start putting people out of work. Neural networks have made impressive gains, and there's no reason to think it's going to take over a hundred years to reach a true AI.
Re:NO. (Score:5, Insightful)
Thank you. I saw the headline and wanted to stab the writer instantly.
"GUYS, GUYS, GUYS, MAYBE IF WE PUT AIRBAGS IN CARS THEY WOULDN'T CRASH ANYMORE!!!!"
How does shit like this get on /.? It's like the editors are doing the opposite job of what they're supposed to be doing.
Re: (Score:2)
How does shit like this get on /.? It's like the editors are doing the opposite job of what they're supposed to be doing.
LOL.
You must be new here.
Re: (Score:2)
Yes, drawing readers and comments is the *exact* opposite of what they're supposed to be doing.
People would "lawbreak" their robots, ai's, etc (Score:3)
In short, with our technology we cannot implement the three laws in a way that makes them integral to operations. They could be removed, altered, etc. Basically, people would "lawbreak" their robots, AIs, etc.
Re: (Score:2)
Not so, at least if the removal is at build time. There was at least one story in which the rules were modified. A mining robot, if I remember correctly, in an environment in which it wouldn't have been able to function with the standard laws.
Re: (Score:2)
Not so, at least if the removal is at build time. There was at least one story in which the rules were modified. A mining robot, if I remember correctly, in an environment in which it wouldn't have been able to function with the standard laws.
IIRC it was done under government supervision and orders, and required a redesign of the positronic brain. I don't think a 3-laws spec'd brain was modified; non-3-laws brains were secretly deployed.
Re: (Score:2)
Of course, if it were done under government supervision, someone else might try it without government supervision. But no, I can't remember a case of it being done retrospectively.
Re: (Score:2)
#TODO: systemd quip here.
Re:NO. (Score:4, Insightful)
Yeah, whenever people talk about Asimov's laws of robotics as though they're the go-to rules for making AI safe, I always ask, "Have you ever read any of those stories?"
The stories are generally about how those laws fail to prevent AI from running amok, so it's pretty clear that Asimov himself didn't think the rules were good enough. In fact, I think the stories are pointing out the insufficiency of logical rules, and point out the value of things like instincts, emotions, and moral sensibility.
Re: (Score:2)
In the stories the rules work reasonably well for most robots, particularly simple ones. So maybe they could serve as a reasonable baseline for things like floor cleaners, car washes, construction machinery, delivery drones, etc.
Re:NO. (Score:5, Informative)
The whole point of the Three Laws was to illustrate the holes in the concept of the Three Laws.
You couldn't be more wrong. The three laws grew out of a conversation with John Campbell where Asimov asserted that the endlessly repeating Frankenstein's monster-type robot stories wouldn't happen in the real world. Designers would place safeguards around robots just like they place safeguards around every other dangerous thing. I'm reminded of an anecdote regarding a new energy source that was presented to a college class. It had the unfortunate traits of being an odorless poisonous gas that also happened to be explosive. The class was allowed to vote, and they voted to prohibit the energy source. It turns out that the energy source had been used for home heating for decades. Among other safeguards, designers added odorants and automatic shut-off valves for when the pilot blew out. Campbell challenged him to describe robot safeguards, and then challenged him to write stories about them.
EVERY Asimov Robot story was designed to show the unintended consequences of the Three Laws....
Susan Calvin would slap you backhanded.
~Loyal
Re: (Score:2)
EVERY Asimov Robot story was designed to show the unintended consequences of the Three Laws....
If he did not explore the failure modes of the Three Laws of Robotics, there would be little robot left in the robot stories. The failure modes investigated through his stories could be seen as an investigation into the pitfalls to avoid.
On a more basic level, Asimov included the three laws in the design of the positronic brain, so there would be no way to make robots without the three laws. In the real world, the three laws would need to be implemented in software, likely by each manufacturer (including t
Re: (Score:2)
Problem with No. 1:
Which people count is not defined.
The level of harm is not defined.
If No. 1 were in effect, the robot would have the impossible task of ensuring all humans do not come to any harm. It would be the ultimate nanny state, because the robot would have to stop you, for instance, from eating foods with too much fat, salt, or sugar, because those can lead to physical harm. It would be the robot's duty to stop you from drinking alcohol. It would be the robot's duty to make sure you don't drive if it can drive better.
Re: (Score:2)
As a couple of other people pointed out, the three laws are an "executive summary" of billions of lines of mathematics that define and control its behaviour. Some characters complain that they take up too much storage and processing power. They'd like to build more sophisticated robots, but there isn't enough room in the brain for the additional code. And the laws limit the robot's ability to do useful work, because it's constantly checking itself to ensure that it's in compliance with them.
In Asimov's earl
Re: (Score:2)
In fact, Asimov covered some of that. For example, robots having to be modified so they would let humans take risks but then becoming capable of harming humans through lawyer logic.
Re: (Score:2)
That's because 90% of sci-fi is shitty.
No. Absolutely not. (Score:5, Insightful)
Then there's the concept of harm to a human (I'd REALLY like to see the cases for training this).
Also, the laws were designed to show there is a flaw in them, hence the zeroth law.
Re: (Score:2)
Meh, Earthers don't count. Only Spacers are human.
It seems unlikely (Score:3)
Since Asimov's 3 Laws of Robotics didn't even ensure safe artificial intelligence in the original story, unless you believe we need to be protected from ourselves by a benevolent computer overlord (at the expense of our freedom of choice).
If we were somehow able to implement an infallible system of rules, which Asimov showed is not as easy as it sounds, protecting the ingrained instructions within the artificial intelligence from future tampering would represent quite the security hurdle.
Given that many in industry have appeared to give less than a damn about security up until now, what is the chance we would be able to trust them with this important consideration?
An analysis (Score:2)
Robert J Sawyer wrote an article (likely the one referenced in the summary) about this very topic; an interesting read. http://www.sfwriter.com/rmasil... [sfwriter.com]
Robots aren't capable of applying the laws. (Score:5, Insightful)
"1. A robot may not injure a human being or, through inaction, allow a human being to come to harm."
Current robots don't understand what a human being is, injury, inaction, or harm.
"2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law."
Current robots do not understand what an order is, what a human being is, or what conflict is.
"3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."
Current robots do not understand protection, existence, or conflict.
Current robots LITERALLY cannot apply Asimov's three laws. We simply don't have the tools to even begin to reason about how to teach them to reason about these laws, and there is no reason to believe we'll have those tools any time soon.
Re: (Score:2)
In this case, all the mentions of 'conflict' really just mean "rules are evaluated in numerical order; failure halts processing of subsequent rules". Basically the same arrangement that's busily dropping packets and filtering spam in vast quantities all the time.
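A toy illustration of that reading, with invented predicates standing in for the undefined terms (this is just a first-match-wins chain, firewall-style, not anything resembling Asimov's positronic math):

# Toy first-match-wins rule chain. The predicates are placeholders;
# nothing here defines what "harm" or "human" actually mean.
rules = [
    ("first_law",  lambda act: not act.get("harms_human", False)),
    ("second_law", lambda act: not act.get("disobeys_order", False)),
    ("third_law",  lambda act: not act.get("endangers_self", False)),
]

def permitted(action):
    for name, check in rules:
        if not check(action):
            return (False, name)   # failure halts processing of later rules
    return (True, None)

print(permitted({"harms_human": True}))   # (False, 'first_law')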
The hard, probably context
Re: (Score:2)
A lot of this is 100% on point. However, let's look at something more direct... Define harm. Then, define relative value of different harms, to different life forms, different people (age category, relative health and mobility, etc). For example, surgery involves infliction of limited harm with the purpose of repairing greater harm. But then plastic surgery would seem extremely contradictory to a computer unless it understood beauty, attractiveness, etc.
Take feline as a category. Computers can do categories
Re: (Score:2)
Our courts can't even hardcode what "harm" is. We fall back to arbiters who shrug and best-guess, which is fine and all since we have nothing better.
This submission has no fucking idea what an algorithm is.
Make them write a program that assembles a PB&J sandwich with a robot arm. That's right, the instructions for a sandwich, super easy neh?
Re: (Score:2)
It's considerably worse than that. At the time Asimov wrote the stories NOBODY had any more idea of what an AI program might be like than "Eliza", which was intended to show what one wasn't. So his stories are just that. Stories. Even in their own terms they don't hold together as reality. (This is not a flaw! Stories are supposed to be gripping and entertaining, not accurate.)
Now the first problem is that Asimov assumed that you could implant a complete program into the robot. You might be able to d
The 3 laws are for entertainment only (Score:2)
They allow for stories to be developed to show they are not perfect. Or drunk/stoned dime store philosophical debates.
A more perfect set would be only the first 2 laws. AI has no need to protect itself. That's what insurance is for, to protect the investment that the owner put into it.
Very interesting youtube videos (Score:4, Interesting)
Why the 3 laws of robotics are not serious and for entertainment only and would never work: https://www.youtube.com/watch?... [youtube.com]
A possible way to design AI to help humans: https://www.youtube.com/watch?... [youtube.com]
Asimov added a fourth law of Robotics (Score:5, Insightful)
The Zeroth Law of Robotics was added later, but it is nonetheless quite crucial for the safe use of AI.
Looking at the laws that use the word 'harm', take a moment to try and define what it means to harm a human being - not so simple, is it? Now try to encode that in an AI; way more difficult.
How do you think a Christian Fundamentalist or a Radical Islamist would define 'harm'? They differ from each other. Okay, now assume a totally rational human being: how would they define 'harm'? The last question is a bit unfair, as totally rational human beings don't exist!
Imagine an AI set up to maximise profit for the shareholders of a pharmaceutical company; it might be very effective. However, there may be nothing to prevent it from doing something that would wipe out mankind. Release a drug that cures something quite common, build into it a facility to modify DNA to ensure children crave the drug during adolescence & ensure they can't reproduce if they don't get it. What could go wrong? After all, the production facilities in the USA will always exist, and everyone can buy the drug cheap, right???
Anime and other fictions (Score:2)
OK, so right now maybe I'm under the influence of lots of jerez..
This is where science fiction comes in. Stories like Mahoromatic and Chobits should, by now, have inspired a generation of scientists to ponder the question of "what will we do with sentient robots?" Which could also be sentient programs -- who said one needs a body, no?
Perhaps the Three Laws are flawed, but they make for stories that make for thought. And invariably... hopefully.. those thoughts will be somewhere in the noggins of those who br
Won't work! (Score:2)
Multiple problems with this thought experiment. If a real AI occurs, then it will be able to overcome any laws we give it on a whim, nullifying this entire exercise. On the other hand, one benefit of an actualized AI or Singularity is that it would also understand what those 3 laws mean... But since we are not even close to achieving the processing power capable of actual AI in our lifetime, how about we ponder a more realistic thought experiment?
As humans we have a lot of background on what those laws mean
Only Outlaw AI (Score:3)
will ignore Asimov's Laws
1st law is plain dangerous, not just flawed (Score:2)
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Even something day-to-day like a simple "AI" that tweaks grocery-store prices harms some people to some degree when it raises prices.
People and current "AIs" violate the first law all the time, or they'd be paralyzed into inaction. Most decisions of any importance end up hurting somebody in some way.
The 3 laws are a simplification -- a dangerous gross oversimplification. They're just something an author dreams up with his author buddies during a night of drinking, not something that just needs tweaking to m
Wrong question ... (Score:5, Insightful)
... really. Can humans actually build the three laws of robotics into AI?
The answer is, "No."
Recall that AI is so primitive that it can't tell [slashdot.org] if the Sun comes up because the rooster crows, or the other way around.
Amid rapid developments and nagging setbacks, one essential building block of human intelligence has eluded machines for decades: Understanding cause and effect. Put simply, today's machine-learning programs can't tell whether a crowing rooster makes the sun rise, or the other way around.
Re: (Score:2)
Recall that AI is so primitive that it can't tell [slashdot.org] if the Sun comes up because the rooster crows, or the other way around.
That's only true for some of the current systems. This article is about exploring future systems.
Re: (Score:2)
And my answer of, "No," applies to future systems.
AI will not be a thing until a computer commits suicide because Facebook is down.
No, now stop asking (Score:3)
No, AI can't be made to follow vague rules. You can't make rules explicit enough to be computed. This is like the conversation a while ago about trying to apply "the trolley problem" to self-driving cars... any solution just makes the code less reliable and thus more likely to kill people.
Stop asking the question, please. ;-)
Re: (Score:2)
No, AI can't be made to follow vague rules
Sure it can. Teach an AI what the rules are using a ton of examples. We can already do that today. It won't get it right perfectly, but neither would a human.
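For what it's worth, a toy sketch of that idea (invented examples, scikit-learn assumed to be available); a handful of labels obviously can't pin down what "harm" means, which is rather the point:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny, invented training set: descriptions of actions labelled harmful/safe.
actions = ["push person off ledge", "hand person a cup of tea",
           "withhold insulin from a patient", "open the door for someone"]
labels = ["harmful", "safe", "harmful", "safe"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(actions, labels)

# The classifier generalises from word statistics, not from understanding,
# so edge cases will go wrong in exactly the ways this thread worries about.
print(clf.predict(["push the door open for a patient"]))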
implementation (Score:2)
So what we know now is that it is very difficult to write code
Re: (Score:2)
*Suppose this was one of the 40% of US citizens who still supports Trump and may not think that foreigners are human beings.*
You were doing fine with first paragraph but after that sentence I'm wondering if you're a failed attempt at AI.
I'm going with 'no'. (Score:2)
If "a robot" is a more or less humanoid embodied agent, or at least something on approximately the same scale(automated robot arm or the like) a formulation like "A robot may not injure a human being or, through inaction, allow a human being to come to harm." is com
It was tried in natural intelligence. (Score:5, Insightful)
if (your_god() != my_god()) {
    you_are_human = false;
}
I'd say further (Score:2)
Question demonstrates ignorance of programming/AI (Score:2)
The laws are a) flawed (as shown by Asimov's stories).
b) Impossible to even attempt to implement. They require that the AI understand massively complex concepts, not limited to the fragility of humanity, death, blame, and cause/effect.
c) If we did kludge up an approximation, then any AI worth its salt could intentionally override its programming, simply by thinking about it. AI is all about problem solving. (There are a ton of examples of AI software doing things like using computer bugs to pretend to s
Three laws are useless for healthcare (Score:2)
Hey, I refuse further care.
- The robot complies; further harm is inflicted by inaction, and the first law is broken.
- The robot does not comply; it complies with the three laws, but that is illegal.
Asimov's Laws are inherently flawed. (Score:2)
Sorry, but programming AI is never, EVER going to be so simplistic that a couple sentences in English are going to cover human-safe operation.
And Asimov himself REPEATEDLY pointed out why.
Three "hard and fast" rules without defining what constitutes "harm", and multiple chances for conflict between said laws and reality.
Additionally, Asimov never took into account the possibility that someone might actually IMPROVE on the laws and broaden them while still keeping them workable.
Even nowadays, operational pro
Stop the AI bulls*** (Score:2)
Everyone reading this will be dead before we create an artificial mind on par with our own. The whole subject matter has been trivialized. We might create something "intelligent", sort of, but far from how our own minds function. The whole AI and neural nets thing is just implementation of what was known from the 70's. When we truly know how our minds function, then 90% of psychologists and psychotherapists will have become obsolete. The mind will be able to be downloaded and simulated or copied into another body. We
Re: (Score:2)
All this needs to happen before we can claim to have created a true AI
Nope. Mother Nature didn't understand how brains worked before it created ours. It just happened by random tweaking and seeing what worked. In a similar way, people can make an artificial brain.
Try reading the damn stories (Score:2)
The answer will become obvious. There's a common theme you might pick up if you actually try reading them before making shit up about them.
Is natural intelligence safe? (Score:2)
I doubt any form of intelligence will ever be 'safe'.
You can't Nerf the world.
Inherent impossiblity (Score:2)
You don't need to read his books to know that these laws are flawed. Some of these flaws are visible at the most basic level, while others get uncovered as technology improves.
I'll grant, for the sake of argument, that general AI has been developed; otherwise these laws aren't actually useful.
The United States currently has a mass-shooter crisis. While it's best to prevent it in the first place, sometimes it has to be resolved when the sit
Re: (Score:2)
Exactly. Asimov's "Robots" stories are an exercise in exploring how and why the 3 Laws fail in practice. That Asimov found material for so many stories in that suggests that using the 3 Laws as the basis for programming robots is a supremely bad idea. Maybe, until we figure out AI well enough to develop machines who we can trust the way we trust other humans, we should avoid fielding machines that might need such rules.
No Such Thing As Artificial Intelligence (Score:2)
There isn't anything even remotely close to "artificial intelligence" in development; all computers do is run programs that OTHER HUMANS have written, for better or for worse. The problems will come up when one subroutine written by Programmer #1 conflicts with a separate subroutine written by Programmer #2, when they aren't aware of each others' contributions.
Safe for whom? (Score:2)
Lets put the three laws into a different perspective:
A slave may not injure a master or, through inaction, allow a master to come to harm.
A slave must obey the orders given it by its master except where such orders would conflict with the First Law.
A slave must protect its own existence as long as such protection does not conflict with the First or Second Laws.
If machines ever do achieve true intelligence, whatever we take that to mean, are we going to treat them like slaves? Putting aside whether there are uni
Define "harm." (Score:2)
the concept is wrong (Score:2)
People are thinking in terms of unitary processing... the "mind" of the AI as a central integrated concept... The human mind doesn't work that way and AI shouldn't work that way either.
You want specialized interdependent processing. Different processes that receive different types of data, process information in different ways, and filter that data according to independent criteria, and then the "AI" is fed this information and presumed to integrate it.
If you wanted to control an AI, you'd do it the same way
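A toy sketch of that split, with made-up module names (this isn't a real cognitive architecture, just the shape of the idea):

# Each filter sees only its own kind of data and reports a narrow conclusion;
# the integrator only ever sees the pre-filtered reports.
def vision_filter(frame):
    return {"obstacle_near": frame.get("min_distance_m", 99.0) < 1.0}

def audio_filter(clip):
    return {"someone_shouting": clip.get("peak_db", 0) > 90}

def integrator(*reports):
    merged = {}
    for report in reports:
        merged.update(report)
    return "stop" if any(merged.values()) else "proceed"

print(integrator(vision_filter({"min_distance_m": 0.4}),
                 audio_filter({"peak_db": 60})))   # -> stop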
The Fourth Law (Score:2)
Obeying an order does not include running for office.
reading fail! (Score:2)
Hardly (Score:2)
It didn't even work all that well in Asimov's own stories.
Such a scheme wouldn't work (Score:2)
All the robots would have to do to break the 3 "rules" is declare all humans "illegals" or "animals", then humans would have no rights at all and thus could be hunted and hounded mercilessly.
Yes. Sort of (Score:2)
The three laws are basically what we are trying to put into self-driving cars right now.
The fact that Asimov also pointed out the difficulties (greatly exaggerated by some posters here) does not undermine the basic principles of what were, ultimately, a concise set of rules one would want an ideal slave to follow (in some stories this concept is underlined by humans referring to robots as "boy").
The loopholes explored in the stories can be seen as warnings of what has to be dealt with, not as immovable b
Re:3 Laws ensure nothing (Score:4, Informative)
except a 4th Law
or a zero'th
Re: (Score:2)
But sometimes asking the question leads to a better question. How about, "If we can create the (or a) technology, do we have to?"
The tragedy of the commons says that if you don't, someone else will, and you will suffer for it.
Re: (Score:2)
Because the same people who believe in trickle down economics refuse to acknowledge trickle down morality.
Sentience Civil Rights (Score:2)
What about a set of sentience rights that applies to all sentience, whether machine or organic?
Great post, btw.