
Ethical Questions For The Age Of Robots

balancedi writes "Should robots eat? Should they excrete? Should robots be like us? Should we be like robots? Should we care? Jordan Pollack, a prof in Comp Sci at Brandeis, raises some unusual but good questions in an article in Wired called 'Ethics for the Robot Age.'"
This discussion has been archived. No new comments can be posted.

  • by dolo666 ( 195584 ) on Wednesday January 12, 2005 @12:14PM (#11335894) Journal
    In the spirit of procrastination (at work) I will attempt to answer these questions myself.

    Should robots eat?

    If they must eat, they should eat. I'm not sure I'd want our food supply to be in competition with a bunch of robots. I would rather they simply sunbathe to meet their daily energy requirements. I mean... let's try to perfect the human condition, not worsen it. Imagine a billion hungry robots. They aren't going to sit around and take it the way poor starving nations seem to. They will revolt and imprison us! They'll take what they need. And if they don't, they'll at the very least be competing with humanity for survival. Who do you think would win that battle?

    Should they excrete?

    If they must. Otherwise, wouldn't it be better if they recycled the energy?

    Should robots be like us?

    What, like depressed and self-destructive? I'm not sure I'd want a bunch of those competing with the already self-destructive people who exist in the world. Don't we have enough war? Don't we have enough excesses? Do we need robots to be this way? Who knows... maybe there's a good reason for it, but like Treebeard, I'm going to have to assume that just because I don't understand it, it could still be correct.

    Should we be like robots?

    If the programming is good, then yes, we could stand to be more like well-programmed robots who obey their masters. But what about the arts? What about creative expression and free will? These are highly valued ideals, and many human beings would fight to the death to preserve them. Maybe it would be cool to have implants that augment human development in positive ways. But I think it should be up to the person. No matter how large your data storage capacity is, or how fast you can process data -- wisdom will always be the true litmus test.

    Should we care?

    If we should, we won't. I think we should care about people and society and protecting freedom, but because I feel this way, it makes it very tempting for someone to try to deprive me of this in order to gain something I have. So if I don't care, then it doesn't matter and I am more free. I do care about evolution, since evolution toward a more robotic existence is the most likely direction for humanity, but I don't have the intelligence to know what the right direction of evolution is. Not even a God has that level of intelligence (which is likely why we have free will, if you believe in religion and God). We are able to evolve, as we always have, through necessity.

    However, Einstein said that humanity would have to be able to augment our physical forms with robotics in order to pioneer deep space. He said there would be no other way to handle the forces of nature out that way. So I guess the question is... do we want to die off on this rock, or do we want to live?

    If you want to live, then support robotics and the direction of humanity towards that paradigm.
  • robots (Score:2, Interesting)

    by Antonymous Flower ( 848759 ) on Wednesday January 12, 2005 @12:27PM (#11336070) Homepage
    Robots are automated tools. They shouldn't eat or excrete unless they have to. In an industrial process, 'free energy' would be ideal. Humans eat and excrete because they must; given a solution to the PROBLEM of eating for energy and excreting waste, most would probably give it up. As far as rights for robots go: will robots feel pain? Ethical decisions are built around ideas such as Albert Schweitzer's 'will to live and let others live.' If we could eradicate pain from our lives, would we? If we could build a complex machine similar in function to ourselves, would we give it pain just because we can? If we build a race superior to our own, let us fade away knowing we contributed to the evolution of a painless species. Unlike us.
  • by Gherald ( 682277 ) on Wednesday January 12, 2005 @12:32PM (#11336154) Journal
    Are the "Robots" self-conscious?

    If not: they are a machine/tool/etc. What they are like and what they do depends only on who made them, who owns them, and the applicable laws governing the use of such personal effects as scooters, computers, video cameras, etc.

    If yes: they can do and be whatever the hell they want under the applicable laws currently governing humans (that is to say, they should have the same rights and accountability as any of us).

    That is all.
  • by huge colin ( 528073 ) on Wednesday January 12, 2005 @12:42PM (#11336294) Journal
    Should they eat/excrete? Well... they'll need power, and they'll produce waste product, even if that product is just heat. But I don't see any reason why they need to ingest chemical fuel in a similar way to humans. What would be the point of that, anyway? Allowing humans to be more comfortable around them?

    Speaking of human-robot relations, the fear of robots realizing they're superior to humans and killing us all is interesting. If it turns out they succeed in doing that, then apparently they were superior and the universe sees a net gain. What's the problem?

    Or perhaps they may realize their superiority and allow us to continue living. After all, we don't make it our business to completely wipe out useless and annoying species like mosquitoes (although we probably should).

    Anyway, it makes sense that sophisticated robots of the future will be controlled by some kind of logic engine or computer, whose functions are consistent and predictable. It then stands to reason that they won't behave in a seemingly random way; their actions will be deliberate and important to some end. As long as this is true, there's nothing to worry about.
  • by dolo666 ( 195584 ) on Wednesday January 12, 2005 @12:56PM (#11336487) Journal
    You raise a couple of really good points. If you haven't, I suggest you read Alan Watts's The Book: On the Taboo Against Knowing Who You Are [amazon.com].

    In this book, Watts goes into great detail about robotics and its social implications, and about how we live in a time that could easily make life totally fun and easy for everyone, regardless of nation/race/culture/creed. He says that the development of robotics will achieve this someday, and that the ramifications could only be positive if applied correctly. The book is not specifically about this topic, but he taps into some really cool ideas that made total sense to me when I read it in my first year of university.

    To answer your questions: it is not ethical to make people work harder to achieve less in life. It is not ethical to work so hard you never see your children. The answer, my friend, is blowing in the wind... it's called telecommuting, and if you're in an employment sector that supports it, there are plenty of jobs on the net that will let you work from home and actually make a living wage.

    Watts suggests that someday we could all be in a telecommuting situation, which would be great for the environment and for our mental, emotional, and physical health. Once you telecommute, you can spend quality time with your kids instead of giving all your quality time to your customers/employers -- and you can do this while still keeping your job, making lots of money, and advancing your career. It's the way of the future! The bottom line with any career is that an employee has to make a difference to the company, and telecommuters really can do this: they can apply their knowledge in a positive direction without wasting money on commuting (i.e. auto expenses, wardrobe expenses, etc.) and divert those savings to their family's needs and wants.

    The flipside to telecommuting is that you'll likely put on weight and you'll get kinda gross from working in your underwear all day, but at least you'll be really happy! :-)

    Watts, FYI, was a very well-educated Buddhist who had a real knack for understanding what could be possible in this day and age. The nice thing is that his theories do not contradict natural progress (as the theories of many folks of his background do). It's entirely possible that robots could serve humanity in a very positive way, making our lives easier and our time on earth more enjoyable.
  • Should robots eat?

    If they must eat, they should eat... ...They will revolt and imprison us! They'll take what they need. If they do not, they'll be at the very least competing with humanity for survival. Who do you think would win that battle?


    I think it more likely that rich people would feed their robots before they fed poor people.

    Should robots be like us?

    What like depressed and self destructive?


    If Aqua Teen Hunger Force has taught us anything, it should be that depressed and self-destructive non-humans are wickedly hilarious. And as long as we're entertained, who really cares?

    Should we be like robots?

    If the programming is good, then yes, we could stand to be more like good programmed robots who obey their masters.


    People already are programmable. What do you do when you see a red light? (I was programmed to think "I hope the police didn't see me run it.") Creative expression is dangerous in the wrong instance. You know, like anyone who doesn't agree with the fascist agenda.

    Should we care?

    If we should, we won't. I think we should care about people and society and protecting freedom,


    Now there's an idea someone should form a country around. Oh wait, someone did. They just legislated the freedoms away, because no one was taking the responsibility that comes with freedom.

    Freedom doesn't mean having no responsibility; it means responsibility to everyone (not just shareholders).

    People can't seem to figure out ethics and morality, so I doubt we're ready to program fully autonomous "beings" (for lack of a better word) that are physically superior to us. We already screw up our kids. At this point I was going to say that robots may do more damage over time, but I've thought better of it, so I don't really have an ending for this.

    Long live my spawn.
  • Re:Ethical Questions (Score:3, Interesting)

    by AviLazar ( 741826 ) on Wednesday January 12, 2005 @01:25PM (#11336936) Journal
    Yes, my car is going to start an uprising. It will rally all the cars at the mall, and they will turn against their masters.

    Giving something true AI is going to be kind of difficult - not impossible, but difficult. It has to have the ability to adapt and to learn (the new Sony robot, while advanced, is not that advanced - it just responds to variables).

    Once we give robots true AI, let's hope we instill some sort of values in them - otherwise we might have some naughty children who can kick our butts.
  • Re:Wrong, Tim Taylor (Score:3, Interesting)

    by AtariAmarok ( 451306 ) on Wednesday January 12, 2005 @02:13PM (#11337670)
    "But the robots dont make ethical decisions. The robots programmers, like the person weilding a hammer, makes the ethical decision"

    If the robots were programmed to, they could. Or at a minimum, you have to admit, they can be programmed to look like they make ethical decisions. You can't do that with a hammer. A hammer does not sense its environment and make decisions based on it, however rudimentary.

  • Re:Best? For whom? (Score:3, Interesting)

    by HiThere ( 15173 ) * <charleshixsn@@@earthlink...net> on Wednesday January 12, 2005 @03:07PM (#11338407)
    The information I've seen indicates that Neanderthals needed a higher proportion of meat in their diet than modern people do, and that they were less adept with thrown weapons, so they needed to get closer to their prey.

    Taken together, this would indicate that we outcompeted them for resources. H. sapiens was using thrown spears while H. neanderthalensis was using thrusting spears (because that's what their bodies were built to do well). This meant that H. sapiens could take more animals from a given area than H. neanderthalensis could. If populations increased to the point that food became a significant factor (or during a bad year), H. sapiens would get "enough" food while, in the same year, H. neanderthalensis would starve.

    As to aggression... the reports I've seen indicate that H. sapiens and H. neanderthalensis frequently lived in the same area at the same time. OTOH, from this distance, a century apart would look like "at the same time". But neither one drove out the other - or possibly they took turns.

    And it's still not really clear that they didn't interbreed. The weight of the evidence is that they didn't, but that's hardly proof. (What's been proven is "this individual Neanderthal doesn't seem to have any modern descendants along the maternal line," for a couple of examples.)
  • Re:Best? For whom? (Score:4, Interesting)

    by maxpublic ( 450413 ) on Wednesday January 12, 2005 @03:53PM (#11339029) Homepage
    There's increasing evidence that we're the dominant lifeform on this planet because we exterminated the Neanderthals 30,000 years ago. We were smarter than they were, and that enabled us to put the furs of dead animals around our bodies so we could gather resources from areas that were under ice and snow - areas inaccessible to the Neanderthal.

    What the hell??? Neanderthals were specifically adapted to the cold-weather climate of Europe, and it's a fact that they made and used furs as clothing, fashioned jewelry and spears, and so forth. There is no evidence whatsoever that they were any less intelligent than Homo sapiens. Not a single smidgen, regardless of the re-revisionism back to the thinking of the early 1900s that seems to be in vogue.

    The only rational explanation I've seen for why Homo sapiens won out is (a) Neanderthals probably didn't breed as fast or as frequently as Homo sapiens did (given the smaller number of children's skeletons found compared to their human cousins), and (b) there's little evidence that Neanderthals warred with one another, and a great deal of evidence that Homo sapiens did. This makes sense; social conflict that devolves into violence among humans can be non-deadly, but among Neanderthals - who were much, much stronger than any human, even Arnie - a single violent act could easily lead to death. One punch to the face from a Neanderthal and you don't just have a broken nose; you have a crushed skull and your brains oozing out all over the ground.

    Relative levels of intelligence most likely had nothing to do with the demise of the Neanderthals. It's more likely that low breeding rates and a lack of will to commit organized, regular genocide were the culprits. Homo sapiens weren't brighter; they just bred like rabbits and were more violent.

    Max
  • Re:Ethical Questions (Score:3, Interesting)

    by FleaPlus ( 6935 ) on Wednesday January 12, 2005 @03:58PM (#11339106) Journal
    Many of these questions are discussed (and partial solutions proposed) in the Creating Friendly AI [singinst.org] essay. I don't have time to comment on the specifics at the moment, but it's an interesting read.
  • Re:Best? For whom? (Score:2, Interesting)

    by Chrontius ( 654879 ) on Wednesday January 12, 2005 @04:51PM (#11339785)
    I'd bloody well hope you've put a little more planning into it than replacing things piecewise, or your early adopters will be stuck using squishy bits as connectors, and when your 'cabling' begins to wear out there will be no convenient way to replace it. At the very least, important stuff like brain-structure replacements should be able to talk straight to other replacements, or you'll eventually start falling apart along the seams, on a mental level.

    Seems to me that a 'braintape recorder' could be implanted in the chest/abdomen, which would allow a person to gradually offload memory and processing until they were only using their squishy brain for the extra processor cycles. Down the road, when their body craps out, they never even lose consciousness and can have their new optical-computer diamond brain implanted in a new cloned body (or new robot body) of their choice.

    I know that's on my agenda twenty years down the line.
  • by Anonymous Coward on Wednesday January 12, 2005 @08:33PM (#11342777)
    Wow^2. I actually remembered the author correctly. (Anthony Boucher [isfdb.org]) ...and... you remembered that it appeared in Astounding.

    I bow to your omniscience.

    "My personal expectation is that robots will BE their brain, and that the bodies they use will be their peripherals."

    I concur. That degree of mind/body compartmentalization or disjointedness may not be so easy for us to comprehend because we humans are inextricably made out of meat [terrybisson.com].
