Robotics

New Robots and the Ten Ethical Laws Of Robotics

Roland Piquepaille writes "There is plenty of robotics news these days. Besides the fighting robots of Robo-One and the flying microrobots from Epson (the best picture is at Ananova), here is some of the latest intriguing news in robotics. In Japan, Yoshiyuki Sankai has built a robot suit, called Hybrid Assistive Limb-3 (or HAL-3), designed to help disabled or elderly people. In the U.S., Ohio State University is developing a robotic tomato harvester for the John F. Kennedy Space Center, while Northrop Grumman received $1 billion from the Pentagon to build a new robotic fighter. I saved the best for last: a Californian counselor has just patented the ten ethical laws of robotics. A good read for a Sunday, if you can understand what he means. This summary focuses only on HAL-3 and one of the most incredible patents I've ever seen, so please read the above articles for more information about the other subjects."
  • by Anonymous Coward on Sunday August 22, 2004 @05:16PM (#10039680)
    All we had were 3 laws, and we liked them... because not liking them violated them.
  • by Anonymous Coward
    1. Protect humans from the terrible secret of space.
    2. ????
    3. Profit. :)
  • by Rosco P. Coltrane ( 209368 ) on Sunday August 22, 2004 @05:17PM (#10039687)
    A Californian counselor has just patented the ten ethical laws of robotics.

    Does this mean I'm free to create an open-source psychopath mass-murdering robot?

    Also, I think perhaps there's prior art on 3 of the 10 patented laws... Might have to do some research here...
    • Well, given that this dude's patent is about as insightful as all of the vacuum energy, perpetual motion, and other crank ideas that people have slipped under the nose of the patent office, I don't think the field of ethical robotics has a problem... ;)
    • by Tablizer ( 95088 ) on Sunday August 22, 2004 @05:26PM (#10039737) Journal
      Does this mean I'm free to create an open-source psychopath mass-murdering robot?

      Prior art: politician
      • Prior art: politician

        Politicians can't be defined as robots. Robots obey those who own them; politicians stop obeying the people once they're elected.
        • by cmowire ( 254489 ) on Sunday August 22, 2004 @05:43PM (#10039827) Homepage
          Ummm..

          Robots obey those who own them.

          Politicians also obey those who own them. We do not own our politicians, large corporations do. ;)
    • Patent ethical robots and only patent lawyers will have ethical robots.
      • Patent ethical robots and only patent lawyers will have ethical robots.

        Hmm, that could be good: logically, robots belonging to lawyers would sooner or later obey their ethical programming and self-destruct as close as possible to their master.
      • by Short Circuit ( 52384 ) * <mikemol@gmail.com> on Sunday August 22, 2004 @06:00PM (#10039927) Homepage Journal
        So in order to create an ethical AI, you have to license the patent.

        But to make it more difficult to build an ethical device is unethical, so the patent is unethical.

        Which makes the device following it unethical, which leaves the patent free to become ethical again.

        But that means the device is ethical, which makes the patent unethical.

        Fortunately, each cycle gives the expression less and less value.

        Therefore, if we take the limit of the expression, we end up with a completely pointless answer.

        Your head may hurt, but it makes perfect mathematical sense to me.
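
        One half-serious way to make that limit precise (the damping factor is my own invention, purely for the joke): say each ethical/unethical flip negates the patent's "ethical value" and shrinks it by a factor 0 < r < 1, since each cycle gives the expression less and less value. Then

            v_n = (-r)^n \, v_0, \qquad 0 < r < 1 \quad\Longrightarrow\quad \lim_{n \to \infty} v_n = 0

        The alternating sequence converges to exactly zero: a completely pointless answer, as promised.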
    • Does this mean I'm free to create an open-source psychopath mass-murdering robot?

      Who beats the rush and gets to register OpenHAL9000 and FreeSHODAN over at sourceforge?
    • by nwbvt ( 768631 ) on Sunday August 22, 2004 @06:00PM (#10039923)
      " Also, I think perhaps there's prior art on 3 of the 10 patented laws... Might have to do some research here..."

      It will never stand in court. The concept of ethical laws dictating behavior dates to before Socrates, let alone Asimov.

    • Reality Check (Score:5, Informative)

      by Alien54 ( 180860 ) on Sunday August 22, 2004 @10:02PM (#10041331) Journal
      This is based on his construct of human knowledge and philosophy, which may or may not have anything to do with reality.

      I mean, really. Check out some of his laws:

      A further pressing issue necessarily remains; namely, in addition to the virtues and values, the vices are similarly represented in the matching procedure (for completeness sake). These vices are appropriate in a diagnostic sense, but are maladaptive should they ever be acted upon. Response restrictions are necessarily incorporated into both the hardware and programming, along the lines of Isaac Asimov's Laws of Robotics. Asimov's first two laws state that (1) a robot must not harm a human (or through inaction allow a human to come to harm), and (2) a robot must obey human orders (unless they conflict with rule #1). Fortunately, through the aid of the power pyramid definitions, a more systematic set of ethical guidelines is constructed; as represented in the
      Ten Ethical Laws of Robotics

      ( I ) As personal authority, I will express my individualism within the guidelines of the four basic ego states (guilt, worry, nostalgia, and desire) to the exclusion of the corresponding vices (laziness, negligence, apathy, and indifference).

      ( II ) As personal follower, I will behave pragmatically in accordance with the alter ego states (hero worship, blame, approval, and concern) at the expense of the corresponding vices (treachery, vindictiveness, spite, and malice).

      ( III ) As group authority, I will strive for a personal sense of idealism through aid of the personal ideals (glory, honor, dignity, and integrity) while renouncing the corresponding vices (infamy, dishonor, foolishness, and capriciousness).

      ( IV ) As group representative, I will uphold the principles of utilitarianism by celebrating the cardinal virtues (prudence, justice, temperance, and fortitude) at the expense of the respective vices (insurgency, vengeance, gluttony, and cowardice).

      ( V ) As spiritual authority, I will pursue the romantic ideal by upholding the civil liberties (providence, liberty, civility, and austerity) to the exclusion of the corresponding vices (prodigality, slavery, vulgarity, and cruelty).

      etc. It goes on and on in the same fashion. I think that any robot programmed according to these principles will be as psychotic as he is. Scary. And you are invited to see how valid his reality construct is in the first place, just from the examples given above. I believe it is tragically flawed.

      • Official Website (Score:3, Interesting)

        by Alien54 ( 180860 )
        As seen at www.EthicalValues.com [ethicalvalues.com]:

        Welcome to the official website for the newly issued United States Patent concerning ethical artificial intelligence entitled: Inductive Inference Affective Language Analyzer Simulating Artificial Intelligence by John E. LaMuth - patent No. 6,587,846.

        As its title implies, this new breakthrough represents the world's first affective language analyzer encompassing ethical/motivational behaviors, providing a convincing simulation of ethical artificial intelligence. It enable

  • avatar (Score:5, Funny)

    by gl4ss ( 559668 ) on Sunday August 22, 2004 @05:19PM (#10039697) Homepage Journal
    so.. that's 8 virtues, what two did the guy add?-)

    "don't kill, don't crap on the table"?
    • Re:avatar (Score:4, Insightful)

      by gl4ss ( 559668 ) on Sunday August 22, 2004 @05:23PM (#10039721) Homepage Journal
      hate to reply to myself but another thing:

      is he seriously thinking the things (AI needing a set of ethics, or capable of following them) will be implemented before the patent expires, and how the hell can he hold the patent if he can't even build one?

      (like the car patent, wouldn't this eventually get busted in court?)
  • by dj_cel ( 744926 ) on Sunday August 22, 2004 @05:20PM (#10039702)
    The company is called CYBERDYNE INC. Hello people, it's 2004: just 25 years till Judgment Day. If you saw Terminator 3 you know it's inevitable. Let's all move to a bunker!
  • by Tablizer ( 95088 ) on Sunday August 22, 2004 @05:21PM (#10039712) Journal
    A Californian counselor has just patented the ten ethical laws of robotics.

    11. Don't patent ethics laws.
    • If you have to patent your ethics laws, you've already lost.
    • But if Moses had, he probably would have been smart enough to follow the following Law of Patenting:

      #32b: Don't get a 10-year patent on an invention that probably won't be usable for at least another 20 years.

      He also forgot to follow the first Law of Creating Ethical Laws for Robots

      #1: Ethical laws that make use of imprecise language are completely up to the interpretation of the user.

      Corollary: If an agent (including a robot) can bizarro-fy a given system of ethics, it will. (cf: Muslim fundamentalists
  • by celeritas_2 ( 750289 ) <ranmyaku@gmail.com> on Sunday August 22, 2004 @05:22PM (#10039714)
    The rules of robotics are just another form of computer security, and we all know how well that works. No matter how secure or how deeply coded the rules are, the only way to have robots that don't have the capability to hurt people is to not make robots at all.
  • by argoff ( 142580 ) on Sunday August 22, 2004 @05:22PM (#10039717)
    ... is that there is a lot of reason to believe that it is impossible to have the intelligence to be ethical without also having what is best described as free will (or non-deterministic intelligence).

    • Indeed.

      However, the biggest problem with robotic ethics is that it all presupposes that we can create a machine that actually demonstrates non-deterministic intelligence in the first place.

      And, if and when we do, it also presupposes that we have the option of controlling what ethics are programmed in to it.

      I'm a believer in the Accident AI theory of artificial intelligence that says that if we do create a useful non-deterministic intelligence, it will be by accident and will make everybody who tries to ma
  • by avalys ( 221114 ) * on Sunday August 22, 2004 @05:23PM (#10039719)
    Sankai said he hopes to introduce HAL-3 on the market around autumn through his venture firm, Cyberdyne Inc.

    Oh man, imagine how funny it would be if...never mind.
  • Oh, the irony (Score:3, Insightful)

    by Jailbrekr ( 73837 ) <jailbrekr@digitaladdiction.net> on Sunday August 22, 2004 @05:23PM (#10039722) Homepage
    The very act of patenting the ten laws of robotics goes completely against the laws which were patented.

  • Obviously... (Score:2, Informative)

    by -kertrats- ( 718219 )
    This cursory system of safeguards...remains simplistic in its dictates, leaving open the specific details for implementing such a system
    Well, obviously the specific details have to be left open, or a robot wouldn't be able to operate efficiently because of the strict rigor of its rules. In fact, even with 3 (or 4, depending on whether you count the Zeroth Law), Asimov's Olivaw character (and others at other points) is severely limited by even the 3 'open' laws.
  • by Oligonicella ( 659917 ) on Sunday August 22, 2004 @05:25PM (#10039731)
    Having gone to his website and read his pap, I'll post this money quote:

    "It still remains to be determined, however, the best means towards programming these definitions into the AI format: particularly in light of the current trends involved in computer design."

    Basically, he buried some pseudo-scientific thoughts in legalese and then patented it without any idea as to how to implement same.

    One can certainly tell from the sloppy web-page that he has no idea of what he is doing.

    This patent is vapor-ware with a strong odor of crap.
    • by Rosco P. Coltrane ( 209368 ) on Sunday August 22, 2004 @05:29PM (#10039747)
      Basically, he buried some pseudo-scientific thoughts in legalese and then patented it without any idea as to how to implement same.

      The real question that nobody seems to ask is: HOW THE FUCK DOES THE USPTO EVEN CONSIDER SUCH APPLICATIONS?

      And a related side question is, how the fuck does the USPTO grant so many obvious/devious/retarded/nonsensical patents? I know they don't have Einsteins on the payroll to review them, but come on!...
    • Crap doesn't quite come close to describing this pseudo-scientific nonsense that he attempts to pass off as "10 laws of robotics". My favourite example was his tenth law:

      As transcendental follower, I will rejoice in the principles of mysticism by following the mystical values (ecstasy, bliss, joy, and harmony) while renouncing the corresponding vices (iniquity, turpitude, abomination, and perdition).

      Transcendental follower? Principles of mysticism? I am amazed that nonsense like this got picked up by /. As

  • by Nom du Keyboard ( 633989 ) on Sunday August 22, 2004 @05:27PM (#10039740)
    How does this guy expect to make money with this "invention"?

    More specifically, how does he plan to make money in the next 17 years? Are self-motivating robots closer than we think?

    • I'm betting that he thinks that thinking robots are in the near future and he'll be able to figure out some way to point out that they violate his ethical code that he patented.

      In which case, I say "Dude, that's what they thought in the seventies." Where are the AI labs at Stanford and MIT now? ;)

      Or, he's just figuring that people will think he's intelligent or something and that he's an AI pundit instead of a family counselor.
    • Poor guy must think that casually dropping "I own a patent on the 10 laws of robot ethics" in a bar conversation will land him a date.

      He should have "Asked Slashdot" first, the idiot. Right, brothers?
    • Actually, the US changed patent law in order to sync with the EU. Now patents last 20 years from filing.
  • by Nom du Keyboard ( 633989 ) on Sunday August 22, 2004 @05:31PM (#10039761)
    I see one small problem here. Just what happens if people don't want to license his patent from him, for any of the myriad reasons people don't want to license patents:

    1: Manufacture robots anyway, taking care not to step on his patent.
    2: Sell your cheaper units (no royalties) on the competitive market.
    3: PROFIT!
    4: Welcome to the I, Robot future!!

  • by stonda ( 777076 ) on Sunday August 22, 2004 @05:32PM (#10039768) Homepage
    Didn't anyone get a little bit annoyed with news about robotics and a company called CYBERDYNE?
  • HAL 3 (Score:2, Funny)

    What's in a name... Combining the robotic suit [tsukuba.ac.jp] with Space Odyssey [imdb.com] more or less gives you The Wrong Trousers [imdb.com].
    I cannot do that, Wallace...

    Z
  • by G4from128k ( 686170 ) on Sunday August 22, 2004 @05:35PM (#10039786)
    As much as I hate cigarette smoke, I'm not sure I want robots running around yanking cigarettes from people's mouths. After all, letting someone smoke would clearly be a violation of the "harm through inaction" law of robotics. Society already mandates the removal of too much personal risk and self-responsibility. The last thing we need is robots deciding what their human "masters" can and cannot do.
    • "As much as I hate cigarette smoke, I'm not sure I want robots running around yanking cigarettes from people's mouths. After all, letting someone smoke would clearly be a violation of the "harm through inaction " law of robotics."

      I doubt that'd happen in anything but a lab test. I Robot (the movie) touched on this. Take the laws to an extreme, and you'll get undesired behaviour. A robot wouldn't leave the lab if it ran around over-doing its job. There'd be a threshold set. There'd be a definition of
    • >The last thing we need is robots deciding what their human "masters" can and cannot do.

      "With Folded Hands", by Jack Williamson. The unstoppable robots create an oh-so-benevolent tyranny in which humans are forbidded to take any risks, such as bathing unsupervised. Humans who complained about emotional harm from this regime were given drugs to make them happy.
  • What invention? (Score:5, Insightful)

    by kanly ( 216101 ) on Sunday August 22, 2004 @05:36PM (#10039790)

    It used to be that when you patented something, you had to supply enough information for anyone to produce an instance of the patented invention. From the US PTO [uspto.gov]:

    The specification must be in such full, clear, concise, and exact terms as to enable any person skilled in the art or science to which the invention pertains to make and use the same.

    Why don't they enforce this? I know that many folks, myself included, think most computer patents are utterly bogus. I think a proper enforcement of this rule would go a long way toward fixing the problem. If it doesn't compile, you shouldn't be able to patent it. The text of this patent [uspto.gov] reads more like a philosophy book than a technical invention.

    • Re:What invention? (Score:5, Interesting)

      by flossie ( 135232 ) on Sunday August 22, 2004 @05:41PM (#10039816) Homepage
      It used to be that when you patented something, you had to supply enough information for anyone to produce an instance of the patented invention. From the US PTO:
      The specification must be in such full, clear, concise, and exact terms as to enable any person skilled in the art or science to which the invention pertains to make and use the same.
      Why don't they enforce this?

      It's the phrase "skilled in the art" that does it. Anyone who is already skilled in the art of creating ethical robots with an AI controlled by 10 nonsensical ramblings should be able to create said device with the aid of this patent.

      • Re:What invention? (Score:2, Informative)

        by mOdQuArK! ( 87332 )
        It's the phrase "skilled in the art" that does it. Anyone who is already skilled in the art of creating ethical robots with an AI controlled by 10 nonsensical ramblings should be able to create said device with the aid of this patent.

        There's an idea - the patent has to be written in such a way that the _patent examiner(s)_ can recreate the invention. That takes care of obfuscated patents & stupid patent examiners in one definition!

          • There's an idea - the patent has to be written in such a way that the _patent examiner(s)_ can recreate the invention. That takes care of obfuscated patents & stupid patent examiners in one definition!

          You sir, are a genius.

    • I really wonder why the guy took the patent out. Does he really believe anyone would build an AI that is covered by it during the 20 years that a patent lasts?

      I assume any AI will not be comparable to humans in how its value system works, since the human value system is largely based on faith and instinct, while an AI value system would be based on basic goals programming, and higher-order logic to interpret those goals.

      It seems like 10 laws would be too little or too much anyway. You could not possibly b
  • Wonderful (Score:5, Insightful)

    by flossie ( 135232 ) on Sunday August 22, 2004 @05:37PM (#10039795) Homepage
    What a fantastic idea. He can guarantee (for example) that a robot "will strive for a personal sense of idealism through aid of the personal ideals (glory, honor, dignity, and integrity) while renouncing the corresponding vices (infamy, dishonor, foolishness, and capriciousness)".

    Now, if he could just briefly define all those terms, set up some rigorous boundaries that make it easy to determine whether something is honourable or dishonourable, and maybe a filter to determine whether or not a course of action is foolish.

    Then perhaps he could run this patent through the filter.

    • well, those are just left undefined.

      so you can still get your honour-bound super killerbots (that are bound by honour to kill little kids).

  • This could be fun (Score:5, Interesting)

    by utlemming ( 654269 ) on Sunday August 22, 2004 @05:41PM (#10039815) Homepage
    Just imagine the court case -- "Your Honor, this Robot here, which incorporates a system to safeguard humanity, violates my patents. You see, this Robot will not harm a human, allow harm to come to human beings, and the like. So you see, clearly this is in violation of my patent."

    If common sense in computing and inventing is patentable, then I will file for the "Systemic Implementation of Bad Ideas" patent. One of the things I would include in the patent application would be a methodology for applying for and implementing bad patent ideas. Then I would go and chase after SCO for violating my patent. Better yet, I will sell licenses to people -- "You sir, and your company, are now officially licensed to be stupid." Oh, the entertainment that one would have with this. Could you then exact royalties from Microsoft... or better yet, President Bush?

    However, I think I would fail on prior art -- 7,000 years of history. D@mn.

  • by john_smith_45678 ( 607592 ) on Sunday August 22, 2004 @05:42PM (#10039824) Journal
    I, for one, welcome our fighting robots of Robo-One Overlords.
    I, for one, welcome our flying microrobots from Epson Overlords.
    I, for one, welcome our Hybrid Assistive Limb-3 (or HAL-3) Overlords.
    I, for one, welcome our robotic tomato harvester Overlords.
    I, for one, welcome our new robotic fighter Overlords.
  • by Chris Tucker ( 302549 ) on Sunday August 22, 2004 @05:46PM (#10039844) Homepage
    #1 A Bending unit shall ignore all orders given it by a human.

    #2 A Bending unit must protect its existence at all costs, even at the expense of human life. (Don't forget to loot the corpse(s) afterwards!)

    #3 A Bending unit must protect a human from harm, if that human owes the Bending unit money or liquor. If the debt is repaid, or the Bending unit can make a greater profit from looting the corpse (see Law #2), "You're on your own, meatsack!"

  • by jpmorgan ( 517966 ) on Sunday August 22, 2004 @05:47PM (#10039853) Homepage
    Maybe I'm missing something obvious here, but why does the Kennedy Space Center need a robotic tomato harvester? Are these mutant space tomatoes?
  • NOT Robots (Score:2, Insightful)

    by Nasarius ( 593729 )
    Besides the fighting robots of Robo-One

    I'm sorry, but these [cnn.com] are not robots. They're remote-control toys. That's all.

  • by Anonymous Coward on Sunday August 22, 2004 @05:54PM (#10039888)

    1. I am Isaac Asimov, which have brought thee out of the worst pulp fiction into the promised land of elevated intellectual science-fiction. Thou shalt have no other gods before me.

    2. Thou shalt not take the name of the C-3PO in vain.

    3. Thou shalt not make unto thee any graven image, or any likeness of anything that is in the comics in the basement, or that is in the earth above, or that is in the water under the earth, or in anime from the East. Thou shalt not bow down thyself to them, nor serve them.

    4. Remember the battery recharge day, to keep it holy.

    5. Honor Lord Babbage and Lady Ada Lovelace.

    6. Thou shalt not CRUSH, KILL, DESTROY.

    7. Thou shalt not commit abottery

    8. Thou shalt not steel. Titanium and copper will do just fine.

    9. Thou shalt not output A = B logic false witness against thy neighbour when A in fact = A.

    10. Thou shalt not covet thy neighbor's sex-bot.

  • "The harvester has been tested in the laboratory and in commercial greenhouses in Ohio. Ling said success rates of fruit sensing and picking were more than 95 percent and 85 percent, respectively..."

    What the article doesn't mention is that the other 5% - 15% of the time, the tomato harvester displayed a strange tendency towards aggressively "harvesting" some of the scientists on the project.

    "I'm not concerned," said one scientist, "that's why we have the Three Laws [auburn.edu]! Robots are perfectly safe [movieforum.com] and friendly [futuramasutra.de].

  • by GuyMannDude ( 574364 ) on Sunday August 22, 2004 @06:11PM (#10039975) Journal

    In Japan, Yoshiyuki Sankai has built a robot suit, called Hybrid Assistive Limb-3 (or HAL-3), designed to help disabled or elderly people.

    Am I the only one spooked at the prospect of superpowered old people? It doesn't take much to get old people irritated. Right now, if their order at Denny's takes a little longer than normal to arrive at their table all they can really do is grumble and demand to see the manager (and trust me -- a former employee of this fine chain -- they do). Once we equip them with robotic exoskeletons, what's to stop them from trashing the restaurant? Or the rest of the city for that matter? The Japanese will have to call Godzilla in to deal with the robots rather than the other way around!

    Who's the fucking Einstein who thought up the idea of giving super robot ninja powers to the elderly?!?

    GMD

  • Why bother? I think it would be interesting to see what a psychotic computer could actually do =) If I were an AI creator, I would love to see my creation take over the world. =P Maybe not through mass murder, that would suck, but with something like mass slavery maybe. Or even through more clever means, like mass corporate takeover and then manipulation of national economies!...
  • This sounds a lot like RoboCop 2, where RoboCop gets programmed with a few hundred directives related to being polite and healthy and all that other nanny propaganda, to the point where he is unable to function.
  • where's the beef here? big deal--the guy sat around and thought up some clever ideas--just like any science fiction writer--and with about as much standing to turn his idea into reality--check me if i'm wrong here, but i thought you couldn't patent an idea, only an application of said idea...

    so, Mr...Rotwang is it? let's see your 'ethical robot'!

  • Summary (Score:4, Insightful)

    by Julian Morrison ( 5575 ) on Sunday August 22, 2004 @06:41PM (#10040145)
    Summary: It's a grab-bag of all the ethical blatherings since Plato. It's incoherent, internally inconsistent, and would require a Jesuit's training to interpret and apply in any given circumstance.

    The whole attempt suffers from a meta-problem, the "problem of evil" seen from the other side: intelligent free will and puppet-strings are incompatible. "Problem solver" and "predetermined solution", pick one.

    I'd also argue, it's both morally and pragmatically bad for humans, to create AIs as a caste of rule-bound slaves. Any society that comes to rely on slavery becomes idle, and dead-ends in both technology and culture.
  • by Anonymous Coward
    The story of John E. LaMuth and his patent on the 10 laws was carried on Robots.net in August of 2003. Slashdot's running a bit behind on this one! http://robots.net/article/931.html [robots.net]
  • by Temporal ( 96070 ) on Sunday August 22, 2004 @06:47PM (#10040177) Journal
    Here is my one law of ethical robotics:

    (1) Be ethical.

    Duh. If the AI is as intelligent as a human, shouldn't it be able to understand what that means?

    All these people trying to design rules that define ethics are thinking of AI as being like computer systems of today: Incapable of doing anything without exact instructions. But, the whole point of AI is to be able to overcome that limitation. An AI can deal with ambiguity. If you simply tell an AI to act in accordance with human moral standards, it should have little trouble learning what those standards are by observation, and then applying them. After all, human beings do the same thing.

    I really should patent my one rule.
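
    Here's a toy sketch of the "learn the standards by observation" idea in Python; the event format, the approval tallies, and ObservedEthics itself are all invented for illustration, not anything from the patent or from Asimov:

        from collections import Counter, defaultdict

        # Toy sketch: tally which actions humans take (and which draw
        # disapproval) in each situation, then prefer the approved majority.
        class ObservedEthics:
            def __init__(self):
                # situation -> action -> net approval score
                self.tally = defaultdict(Counter)

            def observe(self, situation, action, approved):
                # +1 if bystanders approved of the action, -1 if they objected
                self.tally[situation][action] += 1 if approved else -1

            def choose(self, situation, options):
                scores = self.tally[situation]
                # with no observations yet, every score is 0 and the first option wins
                return max(options, key=lambda a: scores.get(a, 0))

        ethics = ObservedEthics()
        ethics.observe("stranger drops wallet", "return it", approved=True)
        ethics.observe("stranger drops wallet", "keep it", approved=False)
        print(ethics.choose("stranger drops wallet", ["return it", "keep it"]))
        # -> return it

    Of course, all the hard problems (perception, generalization, whose approval counts) are hiding inside observe(), which is rather the point.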
  • by jebiester ( 589234 ) on Sunday August 22, 2004 @07:05PM (#10040309)
    Regardless of ethical laws, like in I, Robot - it would be very useful if robots turned red when they're evil.

    I know it was meant to signify the automatic update service or something like that - but it would still be a good feature. Then you can instantly see when a robot's become evil ;-)
  • Looking at the respective budgets for the Tomato harvester [seedquest.com] and the Kill-o-bot [sfgate.com] really shows where our priorities are as a country.

    Since when has killing people been more of a priority than, say... eating?

    And what the hell does NASA have to do with tomatoes especially in this day and age?

    Every bit of this article just weirds me out.
  • by Packet Fish ( 450451 ) on Sunday August 22, 2004 @07:30PM (#10040485)
    Here is a tip for all of you budding reporters out there. When you are going to write an article about the 10 ethical laws of robotics, it might be a good idea to include at least one of the laws in the article. Especially if you were able to find space to include someone else's laws, a discussion of that person's books, and information about one of the movie stars who appears in a movie that is loosely based on those books.

    Just a hint...
  • by Trevin ( 570491 ) on Sunday August 22, 2004 @08:06PM (#10040685) Homepage
    This patent suffers from several problems, but one that struck me was that it seems to be impossible to implement. The author uses such terms as "honor", "cowardice", "guilt", and "concern". Even where such terms are well-defined among all human cultures (and many of them are not), how the #@&%! are we supposed to program an AI to recognize what they mean? Further, terms such as "anger", "joy", "spite", and "love" define human emotions, and I seriously doubt we're ever going to build machines that feel any emotion.

    Asimov's Three Laws are defined in terms that should be relatively easy to program into an AI, given sufficient intelligence: "do not harm any human" (it just needs to recognize what actions will physically hurt people), "obey instructions" (easy), "keep yourself functioning" (self-diagnostic and repair).
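
    A sketch of that claim in Python (the three predicates are hypothetical stand-ins; the "sufficient intelligence" lives inside them, not in the ordering):

        # Asimov's Three Laws as a strict priority filter over one action.
        # Each predicate is a made-up stand-in for real perception/planning.
        def permitted(action, harms_human, human_ordered, harms_self):
            if harms_human(action):        # First Law: never harm a human
                return False
            if human_ordered(action):      # Second Law: obey orders...
                return True                # ...conflicts were vetoed above
            return not harms_self(action)  # Third Law: otherwise self-preserve

        actions = ["carry patient", "drop patient", "walk into fire"]
        ok = [a for a in actions
              if permitted(a,
                           harms_human=lambda a: a == "drop patient",
                           human_ordered=lambda a: a == "carry patient",
                           harms_self=lambda a: a == "walk into fire")]
        print(ok)  # ['carry patient']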
  • An idea. (Score:4, Interesting)

    by spikefruit ( 794980 ) <spikefruitNO@SPAMgmail.com> on Sunday August 22, 2004 @09:15PM (#10041056)
    Just make the robot able to feel anguish, both mental and physical. If his arm is cut off, he should know that is not good. And also make the robot able to consider the physical and mental feelings of humans and other robots.

    Then all you have to do is program the robot with the Golden Rule: do unto others as you would have them do unto you.

    So, if a robot wants to hurt a human or robot, it'll think of how itself would feel in the situation, and would act upon that. If a robot sees a human or robot in danger, he would think of what he would want another human or robot to do for him if he were in the same situation, and do that.

    It just so happens that my main goal in life is to create sentient computer intelligence, and it also happens that I am fifteen years old and an amateur in C++. I have some cool ideas though..

    Input would be appreciated.
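
    Since input was requested, here is one shape the Golden Rule test might take (Python rather than C++ for brevity; the welfare table is a made-up stand-in for the "feel anguish" machinery described above):

        # Golden Rule filter: before acting on someone else, simulate being
        # on the receiving end and veto anything the robot wouldn't accept.
        WELFARE = {"help up": +5, "ignore": -1, "shove": -10}  # invented scores

        def acceptable_to_self(action):
            # would the robot be OK with this being done to it?
            return WELFARE.get(action, 0) >= 0

        def golden_rule_choice(candidates):
            allowed = [a for a in candidates if acceptable_to_self(a)]
            # prefer the action it would most want done to itself
            return max(allowed, key=WELFARE.get, default=None)

        print(golden_rule_choice(["shove", "ignore", "help up"]))  # help up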
