Robotics Science

Ethical Questions For The Age Of Robots 330

balancedi writes "Should robots eat? Should they excrete? Should robots be like us? Should we be like robots? Should we care? Jordan Pollack, a prof in Comp Sci at Brandeis, raises some unusual but good questions in an article in Wired called 'Ethics for the Robot Age.'"
This discussion has been archived. No new comments can be posted.

  • Ethical Questions (Score:4, Insightful)

    by the_mad_poster ( 640772 ) <shattoc@adelphia.com> on Wednesday January 12, 2005 @12:14PM (#11335891) Homepage Journal
    I don't think he's really asking questions that haven't been asked before in other media.

    In fact, a lot of the potential problems he alludes to seem to stem from human fears about things humans can or have done to each other in the past. I think that what we really need to be concerned about is creating a new form of "life" that is too much like us without the knowledge we've gained so far.

    Think about it. We build this system that can do the thinking of 5,000 human-years in a day, but it doesn't have the KNOWLEDGE to necessarily back it up. What then? We've got a brand new self-interested lifeform that just evolved 1.5 million years in thirty seconds. I mean, Mr. Roboto may come to the logical conclusion that xyz group needs to be euthanized because it's interfering with abc group without, it would appear, any benefit. For example, if you have all these people in Southeast Asia who might get dangerously ill and spread disease to otherwise healthy people, isn't the most logical conclusion to either quarantine them and let them die, or to euthanize them so they don't suffer?

    Well... sort of, but that doesn't sit well with human motivations and desires, something the robot may not have taken into consideration because it lacks the knowledge of human history that has shaped us to this point and led us to conclude that it's best to HELP them, not rid the world of them.

    I think machines ought to be barred from rapid critical human thinking until we have stepped through the process with them. The problem might be that the computer can outthink humans by so many orders of magnitude that we can't error-check the process during development, because there's too much data coming out for humans to walk through.

    All that said, perhaps the future lies in alleviating some of the bottlenecks to human thinking and expanding our capabilities in new ways by merging with machines. That way, the human can throttle the computer, and the computer can tap the human's experiences and knowledge to come up with a wider range of "logical" conclusions than might otherwise be possible within the limited scope of its programming directives.
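
    A rough sketch in Python of the "throttle" idea above; the risk threshold, audit rate, and review step are all invented for illustration, not taken from the article:

    ```python
    import random

    # Sketch of a human-in-the-loop throttle (all names and numbers invented):
    # the machine may propose conclusions at full speed, but anything above a
    # risk threshold is held for human sign-off, and a random sample of the
    # rest is spot-checked, since humans can't walk through every output.

    RISK_THRESHOLD = 0.7   # assumed 0..1 risk score supplied by the system
    AUDIT_RATE = 0.05      # fraction of low-risk conclusions pulled for review

    def human_review(conclusion: str) -> bool:
        """Stand-in for an actual person reading and judging the proposal."""
        return input(f"Approve '{conclusion}'? [y/N] ").strip().lower() == "y"

    def gate(conclusion: str, risk: float) -> bool:
        """Return True if the machine may act on this conclusion."""
        if risk >= RISK_THRESHOLD:
            return human_review(conclusion)   # hard gate on risky conclusions
        if random.random() < AUDIT_RATE:
            return human_review(conclusion)   # random spot check on the rest
        return True                           # low risk: auto-approved
    ```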
  • The real questions (Score:3, Insightful)

    by Anonymous Coward on Wednesday January 12, 2005 @12:15PM (#11335909)
    Sorry, but the questions this guy is asking tell me he's an academic wanker in an ivory tower somewhere.

    The real questions we should be asking are: Is it ethical to make people believe they need to work harder than their parents to get less, when physical products are easier than ever to produce? Is it ethical for both parents to work so much that they never see their kids?

  • Market (Score:2, Insightful)

    by frankthechicken ( 607647 ) on Wednesday January 12, 2005 @12:20PM (#11335983) Journal
    Let's be honest here: the robots mankind winds up making will be the robots that sell the best.

    Now, considering past market characteristics, that is either a good thing or a bad thing, dependent on your point of view.
  • by Silver Sloth ( 770927 ) on Wednesday January 12, 2005 @12:21PM (#11336000)

    A robot is a tool. Any attempt to insist that they should have ethics is anthropomorphising them far beyond what they are or will ever be. Asking if a robot should have ethics is like asking if a hammer should have ethics.

  • Dumb Dumb Dum Dum (Score:4, Insightful)

    by auburnate ( 755235 ) on Wednesday January 12, 2005 @12:24PM (#11336029)
    Pollack says "Imagine the pollution levels if we add hundreds of millions of robots powered by internal combustion engines."

    This is so silly it numbs my mind. If future roboticists use internal combustion engines on their robots, they are morons. Fuel cells, solar cells, rechargeable batteries... etc.


  • what? (Score:2, Insightful)

    by j1bb3rj4bb3r ( 808677 ) on Wednesday January 12, 2005 @12:25PM (#11336044)
    4. Should robots eat? There are proposals to allow robots to gain energy by combusting biological matter, either food or waste items. If this mode of fuel becomes popular, will we really want to compete for resources against our own technological progeny?

    I hate to tell you, Mr. University Professor, but any robot that does something uses energy, and that energy comes from somewhere. Whether my tin-man friend eats its energy via food or gets it from a battery, it's still competing for resources with me. This is a dumb question to ask, unless you want to make a point about anthropomorphizing robots.
    Dammit, I want a professorship... my job is too hard... wait I'm just reading /. all day. Nevermind.
  • Re:3 laws (Score:2, Insightful)

    by un1xl0ser ( 575642 ) on Wednesday January 12, 2005 @12:28PM (#11336090)
    That's fine and dandy, but what about following the laws of a country or state, or other moral guidelines that most humans find acceptable? If I tell a robot to grow pot for me and then program him to lie to the law about who gave him the orders, should that be in their code? What about illegal search and seizure, robot instruction data, et cetera? Should robots be programmed to give any information required to the proper authorities? Should they be able to recognize warrants, et cetera? I've got to assume there is some book by Stanislaw Lem that covers something like this, but I haven't read it yet. Still working on The Futurological Congress.
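
    One way the "should that be in their code?" question could be answered, as a hedged sketch (the action names and logging scheme are invented): every order is checked against a prohibited-action list before acceptance, and the identity of whoever gave it goes into an append-only log, so the robot can't later be programmed to lie about who gave the orders.

    ```python
    import datetime

    # Invented prohibited-action list; a real one would come from actual law.
    PROHIBITED = {"grow pot", "destroy evidence", "lie to authorities"}
    audit_log = []  # in practice this would live in tamper-evident storage

    def receive_order(action: str, ordered_by: str) -> bool:
        """Log the order unconditionally, then accept or refuse it."""
        stamp = datetime.datetime.now().isoformat()
        audit_log.append((stamp, ordered_by, action))  # recorded either way
        if action in PROHIBITED:
            print(f"Refused '{action}' from {ordered_by}")
            return False
        return True

    print(receive_order("grow pot", "un1xl0ser"))  # -> Refused ... False
    ```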
  • by Badgerman ( 19207 ) on Wednesday January 12, 2005 @12:29PM (#11336115)
    . . . is why aren't we asking more of these questions, and why aren't they in the public eye?

    This is a nice simple article on some interesting questions, but it barely scratches the surface of all the concerns we're likely to face in the next 50 years. A few alone:

    When is someone responsible for a machine that functions independently, but that they configured?

    What resources will be affected by robotic production? Do we really NEED these robots?

    When a human and a robot work together on something, who gets the blame for failure?

    Of course anyone here can come up with more.

    The problem is that as technology improves around us, more people aren't asking these questions, and even fewer are coming up with usable answers.

    The future is coming. I wish we weren't watching "Who's your Daddy" while it approaches.
  • by leinhos ( 143965 ) on Wednesday January 12, 2005 @12:35PM (#11336187) Homepage Journal
    At best these questions are ones of economics. The ethical questions start when the robots begin requesting/demanding rights as living beings. If your Roomba wants to leave your house to pursue a career as a Segway (or to clean another person's home), are you ethically/morally obligated to let it?
  • by buddhaunderthetree ( 318870 ) on Wednesday January 12, 2005 @12:36PM (#11336206)
    Check out this article [legalaffairs.org] in the current Legal Affairs to see some thoughts on what rights AIs should be granted.
  • Should robots... (Score:3, Insightful)

    by exp(pi*sqrt(163)) ( 613870 ) on Wednesday January 12, 2005 @12:41PM (#11336292) Journal
    What does 'should' mean? There are groups of people: workers, company owners, geeks, consumers, not necessarily mutually exclusive, all of whom have their own different interests. You can never answer the question 'should' without knowing whose interests you are talking about. For a manual worker, robots shouldn't take their jobs. For a company owner, maybe they should. If he isn't prepared to even touch on this issue, how can this guy think his article about what robots should and shouldn't do has any value?
  • by Alien54 ( 180860 ) on Wednesday January 12, 2005 @12:42PM (#11336300) Journal
    The questions you raise arise typically from a unidimensional ethical system.

    The answer is in having a multidimensional ethical system. One such previously published system suggested these dimensions (paraphrased):

    1. personal self interest/survival
    2. sexuality
    3. family
    4. tribal/group/national
    5. ecological/cross species
    6. expressive/artistic
    This list is incomplete. Feel free to add others as desired. Working out the formulas for balancing the parameters and vectors in order to achieve the highest overall and most positive result is left as an exercise for the interested reader.
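
    A minimal sketch of that exercise in Python, with the weights and per-dimension outcome scores entirely made up:

    ```python
    # Each ethical dimension gets a weight; each candidate action gets a
    # per-dimension outcome in [-1, 1]. The "most positive result" is the
    # action with the highest weighted sum. All values here are invented.

    WEIGHTS = {
        "self_interest": 1.0,
        "sexuality": 0.5,
        "family": 1.5,
        "tribal_national": 1.0,
        "ecological": 2.0,
        "expressive": 0.5,
    }

    def score(outcomes: dict) -> float:
        """Weighted sum over the ethical dimensions an action touches."""
        return sum(WEIGHTS[d] * v for d, v in outcomes.items())

    def best_action(candidates: dict) -> str:
        """Pick the candidate action with the highest overall score."""
        return max(candidates, key=lambda a: score(candidates[a]))

    # Example: quarantine vs. aid, scored on a few dimensions (made-up values)
    actions = {
        "quarantine": {"self_interest": 0.8, "tribal_national": -0.6, "family": -0.9},
        "aid":        {"self_interest": -0.2, "tribal_national": 0.7, "family": 0.9},
    }
    print(best_action(actions))  # -> "aid" with these weights
    ```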

    The situation re: the tsunami is easily resolved, as the many contributions are pro-survival on a pan-tribal level, and there are few if any political quandaries tied into the situation.

    Working with robots raises interesting questions because here we are dealing with creatures who have the potential to be our equals, or possibly our superiors. This is scary to folks who are normally used to handling people and things on a commodity basis. What if the things they dispose of start fighting back? See this Calvin and Hobbes cartoon [ucomics.com].

  • by KillerDeathRobot ( 818062 ) on Wednesday January 12, 2005 @12:43PM (#11336306) Homepage
    If we develop a hammer that can think (so to speak) and act independently, I strongly hope that we do instill some ethics into the thing. I'd hate to be victim to a crazy hammer rampage.

    Not that the article was really about robots themselves having ethics, though. It was more about how we should apply ethics to creating robots.
  • by ackthpt ( 218170 ) * on Wednesday January 12, 2005 @12:46PM (#11336349) Homepage Journal
    I think machines ought to be barred from rapid critical human thinking until we have stepped through the process with them.

    Lord knows we've done the opposite with computers -- making it up as we go along, screwing each other with IP, DRM, shoddy software, and locked-in architecture for the maximized benefit (profit) of a few.

    How does any rational person see us proceeding with robots/cyborgs any differently?

    I foresee patents; robots running on Windows (you'll know, because they have to be rebooted frequently, are infested with parasites (viruses/worms), regularly patrol their environment doing things they shouldn't (whether defective, under guidance by the software vendor, or cracked, you'll not know), and need to download pest-scanning/diagnostics/patches on a daily basis); Linux robots (two distros duking it out in the parking lot while a Debian one waits to fight the winner); and upgrades and servicing on a schedule that'll make your checkbook spin.

    Seriously, how altruistic does anyone expect robot manufacturing to be?

  • Stupid questions (Score:3, Insightful)

    by Rick Genter ( 315800 ) <rick@genter.gmail@com> on Wednesday January 12, 2005 @12:56PM (#11336494) Homepage Journal
    Should robots eat?
    Should robots excrete?

    Stupid questions. Unless someone invents a 100% efficient perpetual-motion machine, robots, like any system, will have to consume energy and will produce waste byproducts.

    Duh.
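
    The parent's point is just conservation of energy; as a toy worked example (numbers made up):

    ```python
    # Toy energy balance: whatever a robot does, energy in = useful work + waste.
    # The numbers below are illustrative only.

    power_in = 100.0   # watts drawn while working (assumed)
    efficiency = 0.30  # fraction converted to useful work (assumed)

    useful_work = power_in * efficiency      # 30.0 W of work done
    waste = power_in * (1.0 - efficiency)    # 70.0 W "excreted" as heat
    print(f"useful: {useful_work:.0f} W, waste: {waste:.0f} W")
    ```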
  • by nine-times ( 778537 ) <nine.times@gmail.com> on Wednesday January 12, 2005 @01:08PM (#11336677) Homepage
    Sorry, but the questions this guy is asking tell me he's an academic wanker in an ivory tower somewhere.

    Sorry, but he seems more like a wannabe academic-wanker who wishes he were in an ivory tower. Believe me, I've known some academic wankers in ivory towers, and he's not qualified.

    Considering "should robots eat?" as some sort of a deep or important ethical question is absurd. Why on earth *would* they eat? "Should they excrete?"?! Excrete what?! Why even speculate about the possible byproducts of 'robots' which don't exist yet?

    How are these issues of ethics rather than engineering issues? And should 'robots' be given patents? WTF?!

    It sounds like this guy is a little out of his element here. Ethics is a complicated subject. So is engineering. Predicting how the introduction of technology will impact the environment and political climate on a global scale is no easy matter, but apparently some CS professor from Brandeis thinks he's got a real handle on it.

    The whole article sounds like a 10 year old talking about, "In the future, we might create giant robots who would fly and shoot people, but if we did this, we can only assume they would poop a previously-unknown and highly toxic material. So, we might want to be careful about making flying super-robots." Great. Glad he's on the case.

  • by asliarun ( 636603 ) on Wednesday January 12, 2005 @01:10PM (#11336706)
    While your points on ethics are valid, what's the practical use of humanoid robots anyway?

    The author talks about robots manning call centers. This, IMHO, is an absurd use of humanoid robots. It would be infinitely more practical to make an "intelligent" telephone or EPABX than to employ a humanoid robot to answer phones all day long. The same holds true for most other cases. Even if you take a hazardous job such as mining, I'm sure that specialized machines with specific domain intelligence in mining will easily outperform and be more cost-effective than a humanoid robot.

    The way I see it, humanoid robots will always be a novelty and will not serve any practical purpose. I know I run the risk of making a "640KB of RAM should be enough for anybody" kind of comment. However, in real terms, I don't see humanoid robots (costing millions) substituting for specialized machines or even human beings anytime soon, if ever. A more practical scenario, in my opinion, would be an integration of man with robot. Medical prosthetics has already made significant progress in this regard. This will only continue, and someday move upwards until it reaches the brain.

    Another thing is that the root cause of this discussion is not the robots themselves but the AI driving the robots. My guess is that human beings would rather integrate AI into their own brains and rely on the AI to augment their own knowledge and thinking power. Compared to this, using AI-enabled humanoid robots to clean one's dishes simply does not make sense.
  • by DeepDarkSky ( 111382 ) on Wednesday January 12, 2005 @01:16PM (#11336806)
    Depends on your definition of robot. I think these questions are only applicable to "sentient" robots or robots with advanced artificial intelligence. Most "robots" as we call them today do not qualify, so none of these questions are applicable.

    We, as humans, should stop trying to play god by creating sentient beings. Robots as tools are much more useful to us. But, you say, you want something that can independently think and do stuff for us. What you are looking for here are "slaves": beings that can do their own things but still obey you.

    Why do we even bother with all of this? If you don't make a super-intelligent robot that can learn and think independently, then you don't need the 3 laws. You don't need to worry about the robots killing all humans and taking over the world. All of these are problems that sci-fi says will afflict us because we want to play god and be lazy.

    We are doomed.
  • Re:Best? For whom? (Score:4, Insightful)

    by Cro Magnon ( 467622 ) on Wednesday January 12, 2005 @01:23PM (#11336900) Homepage Journal
    If homo sapiens is replaced by silicon sapiens, is it really such a bad thing?


    It is, if you happen to be a homo sapien.
  • by vector_prime ( 575757 ) on Wednesday January 12, 2005 @02:52PM (#11338188)
    I think the author was making a point. When people started using ICEs in early automobiles, no one imagined that the exhaust fumes would become a problem in later years. If we're not careful, when there are millions or billions of aging robots using some new fuel source with supposedly negligible output, we could have yet another environmental crisis.
  • by ArsonSmith ( 13997 ) on Wednesday January 12, 2005 @04:25PM (#11339450) Journal
    That's what always pissed me off about Spock from Star Trek. If he were logical, he would have studied the human psyche and wouldn't be so surprised by human emotion. Hell, he was supposed to be half human, yet he was continuously caught off guard by human emotion and desire.
  • Re:Best? For whom? (Score:1, Insightful)

    by Anonymous Coward on Wednesday January 12, 2005 @05:19PM (#11340166)
    "It[being replaced by silicon sapiens] is [a bad thing], if you happen to be a homo sapien."

    Is it really? If you have children, are they not effectively replacing you when you die?

    Besides, if we design the machines to be more humane than we humans have managed to be, and they are truly superior, I would hope that it would not be so much a genocide as an assisted co-evolution. Certainly, in my opinion, there are enhancements I would embrace if they were available: better vision (I'm nearsighted and profoundly colorblind), better memory, perhaps a math co-processor. As we age, bits fail; how much better to upgrade than to die (at least for the individual).

    One of the reasons death is not necessarily to be feared is that the world changes, and those set in their ways, holding onto old, outmoded beliefs with the power to prevent change, will eventually die. To prevent death is to prevent change - unless we can upgrade to the new standard. How much more wonderful life would be if we could truly see and enjoy the changes coming our way over the next decades and centuries - if we can live long enough, and change with the times.

    Unless human beings are an evolutionary dead end, there will come a time when we are replaced. I would rather it be by a race we designed and evolved with, and into, than by another sentient species hostile to us, or, worse, that we simply disappear from the universe with no progeny.

    --doug
