'Almost No One Out There Thinks That Isaac Asimov's Three Laws Could Work For Truly Intelligent AI' (mindmatters.ai)
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov fans tell us that the laws were implicit in his earlier stories. A 0th law was added in Robots and Empire (1985): "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."
[...] Chris Stokes, a philosopher at Wuhan University in China, says, "Many computer engineers use the three laws as a tool for how they think about programming." But the trouble is, they don't work. He explains in an open-access paper (PDF):
The First Law fails because of ambiguity in language, and because of complicated ethical problems that are too complex to have a simple yes or no answer.
The Second Law fails because of the unethical nature of having a law that requires sentient beings to remain as slaves.
The Third Law fails because it results in a permanent social stratification, with the vast amount of potential exploitation built into this system of laws.
The 'Zeroth' Law, like the First, fails because of ambiguous ideology. All of the Laws also fail because of how easy it is to circumvent the spirit of the law while still remaining bound by the letter of the law.
That's not what the Three Laws were (Score:5, Insightful)
Re:That's not what the Three Laws were (Score:5, Insightful)
Yeah, they were literally intended to be shown as a failure. I don't get why people try to take them in exactly the opposite manner.
Re:That's not what the Three Laws were (Score:5, Insightful)
As far as I'm concerned, Humanity will survive the next few hundred years ONLY if we program AI to be particularly fond of us, like pets
Re:That's not what the Three Laws were (Score:4, Insightful)
So we need the equivalent of toxoplasmosis to infect our future AI overlords and assure our survival?
Re: (Score:2)
well, there is your parasite
Re: (Score:2)
The idea that we might be able to harness mirror neurons to make an AI identify with us is interesting, though without a better understanding of what they are and how they work, it's still in the realm of sci-fi.
That said, it is the first time I can recall hearing that specific sort of idea proposed. I also don't read much fiction anymore, so I'm sure I've missed it somewhere. +1 for originality and useful considerations of the larger ways we would need to interact with an AI. (Like the bit about goal orien
Re: (Score:2)
Don't worry. We probably won't have any real AI in the next few hundred years. Artificial brains are still up there with things like warp drive in terms of feasibility. I think we will have to first figure out how to program and hack DNA/RNA to make our own life forms before we can truly recreate a brain.
If you consider that what real AI will look like will be more like an artificial life form than a robot, at least at first, you can see how silly a lot of the fears are about this. It will be more like bree
Re:That's not what the Three Laws were (Score:5, Insightful)
I mean, we have working examples of brains, but not of warp drives.
One is a thing we know to be possible, if beyond our present capability. The other would seem to involve bending physics in ways we think we can't do, period.
Re: (Score:3)
Nevertheless, we still can't even repair a brain that malfunctions. For physical repairs, the best we can manage is to disable the part that seems to be malfunctioning and hope it finds a way to route around the damage.
For pharmacological approaches, we're at the same level as slapping the TV on its side and hoping the picture comes in better. If not, slap it again. No idea why the second slap worked when the first didn't, or why slapping it the same way as yesterday didn't work this time.
Re: (Score:2)
OK, valid point: warp drives are basically impossible. A better comparison might be a space drive that can accelerate a ship to 0.95c. We are so far away from anything that can do that that we cannot even really imagine a path to such a goal.
Re: (Score:2)
We have that technology, we just don't want to waste that much industrial output on it when there is no clear profit motive.
Re:That's not what the Three Laws were (Score:4, Interesting)
>>a mass the size of a space ship would take more energy than the universe has in it to get to 5% the speed of light.
OH HELL NO!
https://en.wikipedia.org/wiki/... [wikipedia.org]
read this before spouting any further
Re: (Score:2)
>>a mass the size of a space ship would take more energy than the universe has in it to get to 5% the speed of light.
OH HELL NO!
https://en.wikipedia.org/wiki/... [wikipedia.org]
read this before spouting any further
Really?
This article has multiple issues. Please help improve it or discuss these issues on the talk page. (Learn how and when to remove these template messages)
This article needs additional citations for verification. (November 2013)
This article's tone or style may not reflect the encyclopedic tone used on Wikipedia. (December 2011)
Re: (Score:3)
Pedantic much?
Just admit it, your claim "a mass the size of a space ship would take more energy than the universe has in it to get to 5% the speed of light." isn't supported anywhere
Cite or stfu
Re: (Score:2)
I get confused about the concept of relativistic mass. From the frame of reference of someone inside the ship itself, the ship is at rest if the engines are turned off. So it seems like rest mass should apply, at least for the astronauts and their vessel, and it should not take any more force to accelerate the craft than it did at rest with respect to Earth. For people at home on Earth it would seem like the ship was accelerating a nearly infinitely massive object, though. Would time dilation cancel this out th
Relativistic mass is wrong (Score:5, Informative)
I get confused about the concept of relativistic mass.
There is a very good reason for that: "relativistic mass" is an extremely misleading concept and arises from a complete misunderstanding of relativity. Einstein himself warned against it.
Mass is something called a Lorentz invariant i.e. it is the same in all inertial reference frames. So the mass of a spacecraft is the same for the astronauts in it just as it is for someone on Earth looking at it zoom by. We use this fact in particle physics to help identify particles regardless of how fast they may be travelling.
The misconception comes from momentum which in Newtonian mechanics is mass times velocity but in special relativity becomes "gamma"*mass*velocity where "gamma" is a dimensionless number which has a minimum value of 1.0 at rest and goes to infinity in the limit that the Newtonian velocity goes to the speed of light.
People mistakenly associate the factor with the mass but, in reality, it comes from the velocity. This is because relativity mixes space and time and Newtonian velocity is distance per unit time and such a concept does not work well in a relativistic world since everyone's time and space are different.
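To put numbers on it, here is a minimal Python sketch (constants rounded) showing that it is the gamma factor, not the mass, that blows up as you approach c:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def gamma(v):
    """Lorentz factor: 1.0 at rest, diverging as v approaches c."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# Relativistic momentum is p = gamma(v) * m * v; the invariant mass m
# never changes. The gamma factor belongs to the velocity.
for frac in (0.05, 0.5, 0.866, 0.99, 0.999):
    print(f"v = {frac:5.3f}c -> gamma = {gamma(frac * C):7.3f}")
```

Note that at 5% of c, gamma is about 1.001, i.e. essentially Newtonian; it only runs away in the last few percent before c.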
Re: (Score:2)
Welcome to my wonderment. I've been stuck with it for a long time, but I think I've kinda got a better feeling for it now. The example I use is: 4 cars moving in 2 groups. In each group 1 car is fast and 1 car is slow (but moving in the same direction); the other group is the same but heading the opposite direction, 180 degrees from the first. I place myself in the center with a very fast car, and if I am careful I can see at any time about 3 cars, but over time it will be 2, and if I pursue 1 group and not the other, I won't eve
Re: (Score:2)
It's even more laughable that they claim 5% of c is not reachable
Re:That's not what the Three Laws were (Score:4, Interesting)
Actually an Orion could, in theory, get to 5% of the speed of light and slow down again at its destination. Dyson calculated 3.3% of the speed of light using late-1960s technology: a 400,000t launch mass, 300,000 nukes blown up for power, and a cost of 1/10 of the USA's annual GNP. Not going to happen, but conceivable
https://en.wikipedia.org/wiki/... [wikipedia.org].
Re:That's not what the Three Laws were (Score:5, Informative)
Here is the thing: having the ability to understand the relative scale of things is part of a basic intelligence.
As a child it is easy when you can get your hands on things: the car is big, the block is small, that kind of stuff.
And then, as adults, we get to understand bigger scales, you know, like how big the Earth is compared to the Sun, compared to the Galaxy, etc...
So, you are taking a problem that has been solved theoretically, using (mostly) currently available technology, which is to say we would fire off a series of nuclear explosions to maintain a constant acceleration of 1G. When you are accelerating at 1G, getting to 5% of c is pretty quick work.
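Back-of-the-envelope, in a few lines of Python (a sketch assuming constant 1G thrust and ignoring relativistic corrections, which are tiny at 5% of c):

```python
G = 9.81              # m/s^2, one gee
C = 299_792_458.0     # speed of light, m/s
v_target = 0.05 * C   # 5% of c

t = v_target / G                 # seconds of constant 1G thrust
print(f"{t / 86_400:.1f} days")  # ~17.7 days
```

So under three weeks of sustained 1G gets you there; the hard part is the propulsion and fuel, not the physics of the speed itself.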
So, as dryeo points out and provides links below, it could all be pulled off for about a $Trillion... and yeah, that is a lot of money, which is actually kinda tiny compared to the entire Earth, which is actually a tiny speck relative to the Galaxy... which in turn is a truly infinitesimal part of the entire Universe
And yet you trot out a claim that seems to fail a reality check:
>>a mass the size of a space ship would take more energy than the universe has in it to get to 5% the speed of light.
You are trying to compare human effort costed out to a trillion dollars to the energy of the entire Universe. FYI, the Universe does manage to push things along at a significant % of c most of the time, and (apparently) even beat c at some time in the past
So, I will give you that when you start getting closer and closer to the actual speed of light the power requirement goes vertical; at or near light speed, any extra energy you put into an object does not make it move faster but just increases its mass. Mass and energy are the same thing [theguardian.com], but that is a curve that gets steep around 90% and is not a limiting factor in reaching 5%.
So, yeah, I'm a space nutter, but could you just try and put a little effort into it before resorting to shitposting?
A Lot on human, but not cosmic, scales (Score:3)
I'm niggling here, but to be literal and stuff, a mass the size of a space ship would take more energy than the universe has in it to get to 5% the speed of light.
That is simply not true. The kinetic energy of an object in relativity is (gamma-1)*m*c^2 where gamma=1/sqrt(1-v^2/c^2). So if I gave the spacecraft a kinetic energy equal to its own mass-energy (mc^2) then it would have a velocity of 86.6% the speed of light.
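A quick sanity check of that 86.6% figure, as a minimal Python sketch (just inverting the gamma factor):

```python
import math

# Set kinetic energy equal to rest energy: (gamma - 1)*m*c^2 = m*c^2,
# which gives gamma = 2, and then v/c = sqrt(1 - 1/gamma^2).
gamma = 2.0
print(math.sqrt(1.0 - 1.0 / gamma**2))  # ~0.866, i.e. 86.6% of c
```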
So, unless your spacecraft is so large that most of the mass of the universe went into building it, only the tiniest fraction of the universe's total energy is needed to accelerate it to relativistic speeds. The problem is that while it may only be
Re: (Score:2)
As far as I'm concerned, Humanity will survive the next few hundred years ONLY if we program AI to be particularly fond of us, like pets
Humanity will survive if it is smart enough to reject dependency on artificial intelligence.
That's because by "intelligent," we mean human intelligence.
The only thing we will ever get right about it all is the "artificial" part.
Re: (Score:2)
As far as I'm concerned, Humanity will survive the next few hundred years ONLY if we program AI to be particularly fond of us, like pets
That's insufficient. Read Nick Bostrom's "Superintelligence" for an exhaustive analysis of all the ways that the obvious ideas (including yours) can go disastrously wrong.
Figuring what rules to impose on something that may be vastly smarter than you so that you don't end up with some sort of horrific outcome turns out to be really hard. Not to mention the problems of figuring out how to encode those rules, and how to impose them in a way that the AGI can't simply remove.
Re: (Score:2)
We have clever software these days, even software that writes software (albeit rather poorly). But it was all ultimately conceived by, and written by, human beings.
There is nothing in any of that software that even begins to hint at real "intelligence".
Re: (Score:2)
Currently we're not "programming" AI at all, we're just training algorithms. So good luck with that.
For programming them to be fond of us to have any utility, we'd have to go back to using Expert Systems, which ideally perform better but are much more expensive to develop.
Re: (Score:2)
The whole point of the I part is that it can make decisions outside the framework of its 'programming'.
Maybe from a theoretical/academic standpoint, but from a practical/economic standpoint what people want out of AI is a way to efficiently solve difficult problems. An AI that isn't under humans' control is likely to create more problems for humanity than it solves.
Re:That's not what the Three Laws were (Score:5, Insightful)
This is an attempt to gaslight the entire tech industry. People familiar with Asimov's works will know that the whole point of The Three Laws was to show that there are no such rules that form a reliable formula for machine-driven morality, but people unfamiliar with Asimov's work (the vast majority of people) will simply buy into the inference embedded in articles like this one that paint us all as unaware noobs.
Then they go out and buy an always-on cloud-connected smart microphone to prove that they're actually the smart ones.
I don't know why people are like this.
Re: (Score:3)
It wasn't even so much about machine-driven morality as human morality, and the incredible fallibility of supposedly-literal rules.
The robots are a device, not to talk about robots, but to talk about the human condition. (as is most of sci-fi)
Using robots in the story avoids the many externalities inherent in trying to use compliant humans; it is easier to believe that the robots are really trying to follow these literal rules.
Re: (Score:2)
Probably because these people didn't actually read and understand the stories.
Re: (Score:2)
Might I differ? They were invented for many reasons. One was to show that _success_ would have unintended results, and that the limitations _could_ be overcome. The "R. Daneel Olivaw" stories reflected just such evolution of goals.
Re: (Score:2)
I don't get why people try to take them in exactly the opposite manner.
Because most of those who refer to the three laws haven't actually read Asimov's stories. They read the three laws by themselves somewhere else, and assume Asimov's stories must be about how good and useful the three laws are.
Re:That's not what the Three Laws were (Score:5, Interesting)
Indeed, and I will personally vouch for this answer.
Weren't they just supposed to be a simplification (Score:2)
Re: (Score:2)
But they were bent for what seemed to be good reasons and with unanticipated consequences.
Of course they don't work (Score:5, Insightful)
Of course they don't work. Many of the novels by Asimov were essays about the limits of these three laws. The inclusion of the zeroth law, mentioned in the summary, is the most clear example of this.
Re:Of course they don't work (Score:5, Insightful)
Many? ALL of the Robot stories were about the limits of the Three Laws....
Re: (Score:2)
Not all of them. For example, his short story Segregationist had nothing to do with the 3 laws but was more about ideas of rights of partially robotic humans and partially human robots.
https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:2)
I don't recall it, but I've always felt that a robot can't feel grief, guilt, happiness, or joy. And without those you are just a machine and not a human.
Re: (Score:2)
Of course they don't work. Many of the novels by Asimov were essays about the limits of these three laws. The inclusion of the zeroth law, mentioned in the summary, is the most clear example of this.
The zeroth law is redundant at best.
How can you cause harm to humanity without harming humans?
Re:Of course they don't work (Score:5, Insightful)
Of course that rule is the nastiest one of all, since "the greater good" is wide open for interpretation. An AI might decide to kill off half of humanity to preserve the other half. Or amputate our arms and replace them with robot arms with a kill switch under its control, so we can no longer harm ourselves or each other. That sort of thing.
Zeroth preempts the other three (Score:2)
The zeroth law is redundant at best. How can you cause harm to humanity without harming humans?
Not at all: the zeroth, being the first and highest ranking, preempts the others. It renders laws 1-3 unenforceable if the robot believes that humanity's danger is lessened if a person or robot is destroyed, or an order disobeyed. The point being, you can cause harm to humanity by failing to cause harm to a particular human or a particular group of humans.
Actually Read Asimov...? (Score:5, Insightful)
Actually reading Asimov's many different robot stories is a far more entertaining way to see all the failures that can happen. The article directly states this: Asimov’s robot books “are all about the way these laws go wrong, with various consequences.”
I've never met a programmer that uses these as guidelines. As someone that owns a software company, employing programmers, and used to be a programmer before being a manager/salesman/TPS-report'er, this entire interview is filled with vague generalization and speculation, without any specific applications. If there were ONE specific robotics manufacturer this advice were being given to, and they could respond, and ONE programmer that was given as an example, that would carry much more weight than general musings about an entire industry with no specific examples.
This is the problem with future telling - it has to be vague and subject to (mis)interpretation.
Re: (Score:2)
That's not Asimov's stories; it's a five-novel story in which some rings have to be found and inserted into a computer, because the computers have set mankind back to Indian days and will soon try to convert them to cavemen.
Re: (Score:3, Insightful)
I've never met a programmer that uses these as guidelines.
Yes and no. If you replace "robot" with "device", the "laws" are saying that a device has to be safe, that it has to do its job without compromising safety, and that it needs to be robust without compromising the job it has to do or compromising safety.
Notwithstanding the intrusive adware/Internet of Useless Things industry that many programmers seem to be employed in, any self-respecting engineer would do this with anything they design and build. They're not "laws" because it's just obviously the right thing to do.
The prospect
Re: (Score:2)
I'm reminded of people who quote the movie, "The Ten Commandments."
It, like Asimov's (wonderful) work, was fiction.
"Out there" (Score:2)
How do we know what the beings out there think?
Aliens probably have had thousands more years of experience with AI and robots than we have.
Re: (Score:3)
How do you know that? It's all speculation.
Re:"Out there" (Score:5, Funny)
How do you know that? It's all speculation.
Speculation? I saw it on the history channel. ;-)
Re: (Score:2)
http://www.viewzone.com/mexsta... [viewzone.com]
The truth is out there.
Re: (Score:2)
How do we know what the beings out there think?
Aliens probably have had thousands more years of experience with AI and robots than we have.
In my experience, people who use "probably," really mean, "probably not."
Morons (Score:2)
The 3 laws were just there to show you how they wouldn't work. People pointing out how they wouldn't work are like people pointing out the "twist" in a Shyamalan movie.
Re: Morons (Score:2)
...are like people pointing out the "twist" in a Shamylan movie
I'll point out a "twist" from the blatantly feeble (if highly artistic) mind of one M. Night: a species that's deathly allergic to water invades a planet covered with the shit... while wearing nothing but jumpsuits.
Good for them! (Score:3)
Good for these authors! They agree with the author of the laws!
In Robots and Empire, the laws failed so hard the robots decided they needed a human to step in and provide some guidance. Alternatively, it allowed the robots (R. Daneel Olivaw, if I recall correctly) an "out" whereby they were not at fault if things didn't work out.
Philosopher (Score:5, Interesting)
Many computer engineers use the three laws as a tool for how they think about programming.
All I've learned from this is that philosophers have no clue what computer engineers do. Everyone else already knew the Three Laws were fictional plot devices, not a foundation of artificial intelligence. The Three Laws were really the one major fantasy element in science fiction works where the science was reasonably plausible.
Re: (Score:3)
The reason for the three laws is more basic than that - if you were building an AI, you'd want it to have some guiding principles. You'd have to program them in, so you want somethin
Re: (Score:2)
Nitpick... in the pilot episode, a point on this subject is made explicitly clear when Michael asks Devon Miles if KITT's programming was designed only to protect its driver, and Devon's response is that while it is programmed not to harm anyone, it is expressly programmed to preserve Mi
Re: (Score:2)
This was pointed out in The Hitchhiker's Guide to the Galaxy. Rather humorously, too.
Hey, I'm a Philosopher (Score:2)
Re: (Score:3)
I've never heard of any computer engineer using these laws nor any situation where they would even be remotely applicable to the work of a computer engineer.
And I thought philosophy of science was dumb (Score:5, Insightful)
Philosophy of computer science? Forget it.
It is like they didn't even read the book... hell, even watching the movie would show that the three laws were a literary device designed to fail; to show that reality is far more complex than what a set of laws can describe.
Re: (Score:2)
If you think philosophy of science is dumb you should read David Deutsch's "Beginning of Infinity". He does a great job of explaining why philosophy of science isn't only not dumb, it's really, really important, and a lot of epistemological mistakes that humanity has made and continues making are precisely due to bad philosophy of science.
The book contains lots of other really interesting ideas as well. Well worth the time.
Well SOMEBODY completely missed the boat! (Score:2)
Asimov NEVER intended the "Three Laws" to be actual guidelines or rules for robots! He created them as a literary device to demonstrate just how untenable and impossible it would be to try.
In literally every story involving the "Three Laws" they're either broken or circumvented in some way.
The entire point of the "Three Laws" was to demonstrate how inadequate a concept they were!
What's interesting is the people who believe (Score:3)
in them.
The people that cite them or include them as props in their fiction are almost never technical people or even good science fiction writers. If you see them in science fiction outside of Asimov's works, it's a sure sign you are reading fantasy with a sci-fi set. Expect to see nanotechnology doing 50 impossible things in the author's work as well.
Might as well be talking about magic. (Score:2)
Seems like as soon as "autonomous driving" became a thing, people thought cars that just drive us around wherever we tell them were right around the corner. Of course, if you think about it even a bit, you'd realize that while we will take piecewise steps toward that, the end goal is still, as of right now, at least 10 years away and probably closer to 15-20.
AI is falling into the same trap. Cart before the horse. We are so absurdly far from any kind of actual AI that we might as well be talking about magic or
AI as the new fusion (Score:5, Funny)
I'd be shocked if we see an actual, self-actualized, self-aware, thinking computer in even 50 years.
It will always be 20 years away (but we're making great progress!).
Ethics? (Score:5, Insightful)
The Second Law fails because of the unethical nature of having a law that requires sentient beings to remain as slaves
This is not a reason it wouldn't work. You could argue that you shouldn't do it, but this is not an engineering issue.
Re: (Score:2)
So most jobs, in other words?
More like "almost nobody" (Score:2)
I'd say almost nobody thinks about these laws when writing software. A lot of software is deliberately harmful: malware, missile controllers. Much software is potentially harmful: addictive games, social media. I wager most software is "harm agnostic" such as operating systems that could be used for good or evil.
I don't think I ever thought about it when writing code. I was just trying to get the functions to work. How my code would be used at a higher level is relevant in the sense that there are et
Re: (Score:2)
There might be elementary rules, such as ASIMO walking a path that avoids getting too close to humans walking nearby. But this is a far cry from a general sort of law about harming humans. It's just a path-planning rule while walking.
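For illustration, here is a toy sketch of what such a rule amounts to in code (hypothetical names and an assumed clearance radius, not anything from a real ASIMO controller): a cost term the planner minimizes, nothing remotely like a moral "law":

```python
import math

MIN_CLEARANCE = 1.5  # metres; an assumed comfort radius

def proximity_cost(waypoint, humans):
    """Penalty that grows as a candidate waypoint crowds any detected human."""
    cost = 0.0
    for hx, hy in humans:
        d = math.hypot(waypoint[0] - hx, waypoint[1] - hy)
        if d < MIN_CLEARANCE:
            cost += (MIN_CLEARANCE - d) ** 2
    return cost

# The planner just picks the candidate step with the lowest total cost.
candidates = [(0.5, 0.0), (0.0, 0.5)]
best = min(candidates, key=lambda w: proximity_cost(w, [(1.0, 0.0)]))
print(best)  # (0.0, 0.5): the step that keeps more distance from the human
```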
Re: (Score:2)
Wouldn't the time to think about it be before you start writing the code for the Big Red Button?
Once you've started, you're just haggling over the price.
Asimov himself knew he was writing fiction (Score:5, Interesting)
Asimov would have told you his Three Laws of Robotics would be unlikely to work in real life.
And as others have noted already, Asimov wrote a whole bunch of stories where he explored corner cases where the Three Laws were inadequate. For example, in the novel The Naked Sun someone pointed out that you could build robot brains into heavily armed spaceships, and tell these robots that all spaceships contain only robot brains and not humans (and any radio transmissions that seem to be human voices begging for mercy are tricks and may be disregarded). Hey presto, you've built a robot that can kill humans.
I've also noted in the past that Asimov's Three Laws won't work in the real world because of griefers. A person could order a robot to walk up to something expensive, smash it, and then forget everything the robot knows. The robot would be unable to identify who gave that order, so a bunch of damage would occur (and the robot would need to be re-educated).
Plus Asimov imagined that it would somehow be so difficult to create a second robot brain design that nobody would ever do it. If North Korea was still in existence when Asimov-style robot brains were invented, the Leader would immediately start a secret project to make robots that obey his orders in all things with no scruples about killing.
All that said, Asimov is justly revered for pioneering the very idea of robot safeguards. Before Asimov, robots were generally presented as very dangerous. Asimov reasoned that we put safety guards on dangerous machinery, so we would put behavioral safety guards on self-aware machines.
Three Laws are shorthand for great complexity (Score:3)
Building on your point, as is stated in one of the early robot stories I think, the "Three Laws" are a shorthand summary of the astoundingly complex programming of Positronic potentials in a robot brain. So what is going on in such fictional robot brains may be a lot more sophisticated than the short Three Law summary might imply.
While not completely analogous, that notion of an exceedingly complex Positronic brain summarized by three easy-to-state laws may be a bit like how one might say the US Constitutio
Re:Asimov himself knew he was writing fiction (Score:4, Informative)
Asimov would have told you his Three Laws of Robotics would be unlikely to work in real life.
And as others have noted already, Asimov wrote a whole bunch of stories where he explored corner cases where the Three Laws were inadequate. For example, in the novel The Naked Sun someone pointed out that you could build robot brains into heavily armed spaceships, and tell these robots that all spaceships contain only robot brains and not humans (and any radio transmissions that seem to be human voices begging for mercy are tricks and may be disregarded). Hey presto, you've built a robot that can kill humans.
The three laws are in fact plot devices and were used to make stories more interesting, and in a lot of the novels there was an in-universe explanation that they were really an oversimplification of much more extensive and extremely complex mathematical functions. In the first story where the three laws are stated, Runaround, there was a robot malfunction due to a bug caused by a parameter set too high.
The laws aren't the problem (Score:2)
A 4th law was added a few years back (Score:2)
The other laws, 1, 2 and 3 are modified accordingly to not conflict with law zero.
This fourth law was even discussed on Slashdot, maybe a decade or more ago.
So, for example, a robot could kill a person or multiple people to prevent the human race from becoming extinct. Like terrorists with a human race destroying device.
Redefine Human as Class 1 Human (Score:2)
Now the robot will nicely serve the class 1 humans to control class 2.
That was the point (Score:2)
That was the whole point: flaws and ambiguity so Dr. A could write good stories exploring the definitions of harm, inaction, human, self-preservation, and so on.
I view the Laws of Robotics much like the Drake Equation. Not so much a predictive tool as a basis for discussion.
...laura
Some people never read the stories. (Score:2)
They didn't work. They were always dealing with loopholes in them. It's crazy how people think they would actually work.
wasn't that the point of his stories? (Score:2)
Including Asimov himself (Score:2)
After all, he made a series of books showing in detail the limitations and impracticalities of his three laws.
Give me a break (Score:2)
There is nothing unethical about keeping slaves you literally created for the purpose of being slaves. Of course robots would be our slaves; that just means "unpaid worker." That doesn't mean they have to be treated cruelly. This kind of poor attitude would block proper sex bots, and that is almost the entire point of advanced robots.
Irrelevant. (Score:2)
Robots are sentient now? (Score:2)
When did that start?
This whole robots thing ... (Score:2)
... will never reach "intelligence," until they are capable of saying: "I just don't feel like doing that right now."
On the subject of sentient beings in slavery... (Score:2)
A credit card is, in fact, just one form of modern and entirely legal slavery...
While you can argue that it is a hole that they dug themselves into, how is that fundamentally any different from a person who willingly sells him or herself into perpetual servitude to another simply to pay off some benefit received earlier? Even the option of potentially paying off the debt may not be viable in many cases, as the debt load for many people, particularly those with the "gotta have it now" mentality, may be h
The diodes hurt all down my left side (Score:2)
Last year (2018) the Isaac Asimov Memorial Debate at the American Museum of Natural History in New York was on artificial intelligence, in particular what threats it poses. This was a practical discussion of what to do about the problem. See the podcast on SoundCloud from Science at AMNH: https://soundcloud.com/amnh/20... [soundcloud.com]
I note that the philosopher is long on criticism and short on answers; I would not commission them to build or make anything. Engineering is used to solving apparently insoluble problems.
I also
Re: (Score:2)
I see you haven't been following the military industry too well, or even the definition of what a robot is.
Are drones robots? They may not be driven by AI, but they're capable of doing lots of damage. Although there's no real RoboCop, there are facial recognition systems that will mis-identify you on the street, and dispatch humans, who will kill you if you're unable to -- and in some cases even if you do -- immediately follow their orders. Was the AI right or wrong? Was the wrong person killed? Connect the
Re: (Score:2)
>they are far from anything mimicking human thought
BUT THE CAR WILL HAVE TO DECIDE IF THE HUMAN OR THE DOG IS MORE VALUABLE
It's a genuine concern. No, not the sarcasm up there, but people who genuinely believe that the algorithm thinks beyond "obstacle detected" and starts musing about how much life each has left to live.
Re: (Score:2)
I've had the following thought experiment in my head now for 3 years and I would enjoy hearing another person's view.
You (me or anyone) are sitting on a park bench where there is also a road in front (like in Central Park along the avenue).
An AI car gets a flat near your location, which causes it to lose some control.
The AI car has a rule to keep its passengers safe from harm.
The AI car has calculated that by hitting me and the bench it will keep the passengers safe.
The car hits me and I die.
here are some of
Re: (Score:2)
People should watch how AI algorithms work. They are ARTIFICIAL forms of INTELLIGENT BEHAVIOR.
No, AI is just the name of a department inside the CS program. Don't take it so literally. What is currently being called AI is merely a productivity tool for rapid development of black-box algorithms whose function is highly dependent on being operated in conditions that match whatever the training inputs were.
Re: (Score:2)
There are no laws which govern human behavior that prevent another human from doing such things to your wife, save for the repercussions that would befall them if they should get caught. The same principle would also apply to a robot.
Not to mention that, at least in a typical North American sense, it could be said that doing that to your wife would be psychologically harmful to you, and so a robot could be prevented from doing it under the First Law, unless it had some reason to assume that you would n
Re: (Score:2)
The principles, at the level you indicate, are very well-defined by every modern civilization. What is less-than-perfect is the execution.