Ethical Questions For The Age Of Robots 330
balancedi writes "Should robots eat? Should they excrete? Should robots be like us? Should we be like robots? Should we care? Jordan Pollack, a prof in Comp Sci at Brandeis, raises some unusual but good questions in an article in Wired called 'Ethics for the Robot Age.'"
Ethical Questions (Score:4, Insightful)
In fact, a lot of the potential problems he alludes to seem to stem from human fears about things humans can or have done to each other in the past. I think that what we really need to be concerned about is creating a new form of "life" that is too much like us without the knowledge we've gained so far.
Think about it. We build this system that can do the thinking of 5,000 human years in a day, but it doesn't have the KNOWLEDGE to necessarily back it up. What then? We've got a brand new self-interested lifeform that just evolved 1.5 million years in thirty seconds. I mean, Mr. Roboto may come to the logical conclusion that xyz group needs to be euthanized because it's interfering with abc group without, it would appear, any benefit. For example, if you have all these people in Southeast Asia who might get dangerously ill and spread disease to otherwise healthy people, isn't the most logical conclusion to either quarantine them and let them die, or to euthanize them so they don't suffer?
Well... sort of, but that doesn't square with human motivations and desires, something the robot may not have taken into consideration because it lacks the knowledge of human history that's shaped us to this point and brought us to the conclusion that it's best to HELP them, not rid the world of them.
I think machines ought to be barred from rapid critical human thinking until we have stepped through the process with them. The problem might become that the computer can outthink humans by so many orders of magnitude that we can't error check the process in development because there's too much data coming out for humans to walk through.
All that said, perhaps the future lies in alleviating some of the bottlenecks to human thinking and expanding our capabilities in new ways by merging with machines. In that way, the human can throttle the computer, and the computer can tap the human's experiences and knowledge in order to come up with a wider range of "logical" conclusions than might otherwise be possible within the limited scope of programming directives.
The real questions (Score:3, Insightful)
The real questions we should be asking are: is it ethical to make people believe they need to work harder than their parents to get less when physical products are easier than ever to produce? Is it ethical for both parents to work so much that they never see their kids?
Market (Score:2, Insightful)
Now, considering past market characteristics, that is either a good thing or a bad thing depending on your point of view.
Should a hammer have ethics (Score:3, Insightful)
A robot is a tool. Any attempt to insist that robots should have ethics anthropomorphizes them far beyond what they are or will ever be. Asking if a robot should have ethics is like asking if a hammer should have ethics.
Dumb Dumb Dum Dum (Score:4, Insightful)
This is so silly it numbs my mind. If future roboticists use internal combustion engines on their robots, they are morons. Fuel cells, solar cells, rechargeable batteries...
what? (Score:2, Insightful)
I hate to tell you, Mr. University Professor, but any robot that does something uses energy, and that energy comes from somewhere. Whether my tin-man friend eats its energy via food or gets it from a battery, it's still competing for resources with me. This is a dumb question to ask, unless you want to make a point about anthropomorphizing robots.
Dammit, I want a professorship... my job is too hard... wait, I'm just reading.
Re:3 laws (Score:2, Insightful)
The real question . . . (Score:5, Insightful)
This is a nice, simple article on some interesting questions, but it barely scratches the surface of all the concerns we're likely to face in the next 50 years. Just a few:
When is someone responsible for a machine that functions independently, but that they configured?
What resources will be affected by robotic production? Do we really NEED these robots?
When a human and a robot work together on something, who gets the blame for failure?
Of course anyone here can come up with more.
The problem is that as technology improves around us, too few people are asking these questions, and even fewer are coming up with usable answers.
The future is coming. I wish we weren't watching "Who's your Daddy" while it approaches.
Re:ethics vs good manners (Score:3, Insightful)
Legal Affairs also considers AI Rights (Score:2, Insightful)
Should robots... (Score:3, Insightful)
Unidimensional Ethical Systems (Score:3, Insightful)
The answer is in having a multidimensional ethical system. One such previously published system suggested these dimensions (paraphrased)
The situation re: the tsunami is easily resolved, as the many contributions are pro-survival on a pan-tribal level, and there are few if any political quandaries tied into the situation.
Working with robots raises interesting questions because here we are dealing with creatures who have the potential to be our equals, or possibly our superiors. This is scary to folks who are used to handling people and things on a commodity basis. What if the things they dispose of start fighting back? See this Calvin and Hobbes cartoon [ucomics.com]
Re:Should a hammer have ethics (Score:2, Insightful)
The article wasn't really about robots themselves having ethics, though. It was more about how we should apply ethics to creating robots.
Thinking it through (Score:3, Insightful)
Lord knows we've done the opposite with computers -- making it up as we go along, screwing each other with IP, DRM, shoddy software and locked-into architecture for the maximized benefit (profit) of a few.
How does any rational person see us proceeding with robots/cyborgs any differently?
I foresee patents; robots running on Windows (you'll know, because they'll have to be rebooted frequently, will be infested with parasites (virii/worms), will regularly patrol their environment doing things they shouldn't (whether through defects, the software vendor, or a cracker, you'll never know), and will need to download pest scanning/diagnostics/patches on a daily basis); robots running on Linux (two distros duking it out in the parking lot while a Debian one waits to fight the winner); and upgrade and service schedules that'll make your checkbook spin.
Seriously, how altruistic does anyone expect robot manufacturing to be?
Stupid questions (Score:3, Insightful)
Should robots excrete?
Stupid questions. Unless someone invents a 100% efficient perpetual-motion machine, robots, like any system, will have to consume energy and will produce waste byproducts.
Duh.
Re:The real questions (Score:5, Insightful)
Sorry, but he seems more like a wannabe academic-wanker who wishes he were in an ivory tower. Believe me, I've known some academic wankers in ivory towers, and he's not qualified.
Considering "should robots eat?" as some sort of a deep or important ethical question is absurd. Why on earth *would* they eat? "Should they excrete?"?! Excrete what?! Why even speculate about the possible byproducts of 'robots' which don't exist yet?
How are these issues of ethics rather than engineering? And should 'robots' be given patents? WTF?!
It sounds like this guy is a little out of his element here. Ethics is a complicated subject. So is engineering. Predicting how the introduction of technology will impact the environment and political climate on a global scale is no easy matter, but apparently some CS professor from Brandeis thinks he's got a real handle on it.
The whole article sounds like a 10 year old talking about, "In the future, we might create giant robots who would fly and shoot people, but if we did this, we can only assume they would poop a previously-unknown and highly toxic material. So, we might want to be careful about making flying super-robots." Great. Glad he's on the case.
Re:Ethical Questions (Score:3, Insightful)
The author talks about robots manning call centers. This, IMHO, is an absurd use of humanoid robots. It would be infinitely more practical to build an "intelligent" telephone or EPABX than to employ a humanoid robot to answer phones all day long. The same holds true for most other cases. Even if you take a hazardous job such as mining, I'm sure that specialized machines with specific domain intelligence in mining will easily outperform and be more cost-effective than a humanoid robot.
The way I see it, humanoid robots will always be a novelty and will not serve any practical purpose. I know I run the risk of making a "640kb RAM should be enough for anybody" kind of comment. However, in real terms, I don't see humanoid robots (costing millions) substituting for specialized machines or even human beings anytime soon, if ever. A more practical scenario, in my opinion, would be an integration of man with robot. Medical prosthetics has already made significant progress in this regard. This will only continue, someday moving upward until it reaches the brain.
Another thing is that the root cause of this discussion is not the robots themselves but the AI driving the robots. My guess is that human beings would rather integrate AI into their own brains and rely on the AI to augment their own knowledge and thinking power. Compared to this, using AI-enabled humanoid robots to clean one's dishes simply does not make sense.
Not even worth asking (Score:3, Insightful)
We, as humans, should stop trying to play god by creating sentient beings. Robots as tools are much more useful to us. But, you say, you want something that can think independently and do stuff for us. What you are looking for here are "slaves": beings that can do their own things but still obey you.
Why do we even bother with all of this? If you don't make a super-intelligent robot that can learn and think independently, then you don't need the 3 laws. You don't need to worry about robots killing all humans and taking over the world. All of these problems that sci-fi says we will be afflicted with arise because we want to play god and be lazy.
We are doomed.
Re:Best? For whom? (Score:4, Insightful)
It is, if you happen to be Homo sapiens.
Re:Dumb Dumb Dum Dum (Score:2, Insightful)
Re:Ethical Questions (Score:3, Insightful)
Re:Best? For whom? (Score:1, Insightful)
Is it really? If you have children, are they not effectively replacing you when you die?
Besides, if we design the machines to be more humane than us humans have managed, and they are truly superior, I would hope that it would not be so much a genocide as an assisted co-evolution. Certainly, in my opinion, there are enhancements I would embrace if they were available: better vision (I'm nearsighted and profoundly colorblind), better memory, perhaps a math co-processor. As we age, bits fail; how much better to upgrade than to die (at least for the individual).
One of the reasons death is not necessarily to be feared is that the world changes, and those set in their ways, holding onto old, outmoded beliefs with the power to prevent change, will eventually die. To prevent death is to prevent change, unless we can upgrade to the new standard. How much more wonderful life would be if we could truly see and enjoy the changes that are going to be coming our way over the next decades and centuries, if we can live long enough and change with the times.
Unless human beings are an evolutionary dead end, there will come a time when we are replaced. I would rather it be by a race we designed and evolved with and into than by another sentient species hostile to us, or, worse, that we simply disappear from the universe with no progeny.
--doug