
How Asimov's Three Laws Ran Out of Steam

An anonymous reader writes "It looks like AI-powered weapons systems could soon be outlawed before they're even built. While discussing whether robots should be allowed to kill might seem like an obscure debate, robots (and artificial intelligence) are playing ever-larger roles in society, and we are figuring out piecemeal what is acceptable and what isn't. If killer robots are immoral, then what about the other uses we've got planned for androids? Asimov's three laws don't seem to cut it, as this story explains: 'As we consider the ethical implications of having robots in our society, it becomes obvious that robots themselves are not where responsibility lies.'"
This discussion has been archived. No new comments can be posted.

How Asimov's Three Laws Ran Out of Steam

Comments Filter:
  • Missed the point (Score:5, Insightful)

    by Anonymous Coward on Saturday December 21, 2013 @07:23AM (#45752725)

    Asimov's stories were all about how the three laws were not sufficient for the real world. The article recognises this, even if the summary doesn't.

    • Re:Missed the point (Score:5, Interesting)

      by girlintraining ( 1395911 ) on Saturday December 21, 2013 @08:25AM (#45752871)

      Asimov's stories were all about how the three laws were not sufficient for the real world. The article recognises this, even if the summary doesn't.

      Dice Unlimited Profits And Robotics, Inc., would like to remind you that its new, hip brand of robotic authors have just enough AI to detect when something is sufficiently nerdy to post, but unfortunately lack the underlying wisdom of knowing why something is nerdy. I expect our future killer robots in the sky will have similar pattern recognition problems... and wind up exterminating everyone because they are deemed insufficiently [insert ethnicity, nationality, race, etc., here] in pursuit of blind perfectionism.

      Common sense has never been something attributed to either Slashdot authors or robotic evil overlords.

    • by dywolf ( 2673597 ) on Saturday December 21, 2013 @10:39AM (#45753349)

      It wasn't so much that the laws didn't cut it; that's too simplistic, and even in his own words not what it was about.
      It was that the robots could interpret the laws in ways we couldn't or didn't anticipate, because in nearly all the stories involving them the robots never failed to obey them.

      Asimov saw robots, seen at the time as monsters, as an engineering problem to be solved. He quite correctly saw that we would program them with limits, in the process creating the concept of computer science. He then went about writing stories around robots that never failed to obey their programming but, as effectively sentient thinking beings, would interpret their programming in ways the society around them couldn't anticipate, because that society saw the robots as mere tools, not thinking machines. And thus he created his lens (like all good sci-fi writers) for writing about society and technology.

      • by AK Marc ( 707885 )
        Too hard. People need to be able to put a label on things. Good/bad. Easy/hard. Nothing is yet smart enough (or even close) for the rules to apply. But we have things that violate the rules by design, so it seems confusing.
    • I disagree. His stories were mostly about how everyone who ever built a robot, in his fantasy world, was a really, really bad programmer.
      And that QA, and any kind of pre-release testing, was a completely unknown concept.

    • I agree, that's all Asimov: three (stupid) laws and how they fail. It just shows that success can be built on a failure.
    • by gweihir ( 88907 )

      Indeed. And that was with truly intelligent robots, which are nowhere even distantly on the horizon. In fact, it is completely unknown at this time whether any AI will ever be able to tackle the questions involved, or whether this universe limits it to something far, far too dumb for that.

    • Not to mention, in Asimov's world, robots were self-aware.

    • "Asimov's stories were all about how the three laws were not sufficient for the real world. The article recognises this, even if the summary doesn't."

      Yes, the point was missed, but not for that reason.

      Asimov's stories were not about how the 3 Laws were "insufficient". They were about how no set of rules is immune from being broken.

      Would you say murder laws are "insufficient"? They get broken once in a while, but that doesn't mean we shouldn't have them. Every attempt to improve our definition of (and therefore laws about) things like murder has had unintended consequences, sometimes quite severe.

      Even further, the article tries to make the p

      • Confirmation bias. Asimov didn't write stories about the billions of robots who didn't jump their programming rails and cause problems, because that would've been boring.

        Most of the ones he wrote about, with only a few exceptions (admittedly the most famous e.g. Daneel, Giskard, Andrew Martin, etc) were experimental or otherwise non-standard models.

        • "Asimov didn't write stories about the billions of robots who didn't jump their programming rails and cause problems, because that would've been boring."

          That's certainly true. Just as we don't have novels and news stories about no murders occurring.

  • by Taco Cowboy ( 5327 ) on Saturday December 21, 2013 @07:25AM (#45752727) Journal

    The three laws as laid down by Asimov are still as valid as ever.

    It's the people who willingly violate those laws.

    Just like the Constitution of the United States - it is as valid as ever. It's the current form of the government of the United States that willingly violates the Constitution.

    • Re: (Score:2, Insightful)

      by verifine ( 685231 )
      The danger of autonomous kill-bots comes from the same people who willingly ignore the Constitution and the rule of law.
      • Re: (Score:2, Insightful)

        by Anonymous Coward

        The danger of autonomous kill-bots comes from the same people who willingly ignore the Constitution and the rule of law.

        And the danger of a gun is the murderer holding it.

        Yes, I think we get the point already. The lawless ignore laws. News at 11. Let's move on from this dead horse now. The kill-bot left it 30 minutes ago.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      The three laws as laid down by Asimov are still as valid as ever.

      Assuming you mean that amount is "not at all," as was the point of the books.

      • by dywolf ( 2673597 ) on Saturday December 21, 2013 @10:45AM (#45753389)

        Stop saying that. That isn't it at all, and you failed to grasp his points, even though he himself spelled out his thinking in his essays on the topic.

        Asimov never thought the rules he created were "not at all valid". On the contrary.

        Asimov saw robots, seen at the time as monsters, as an engineering problem to be solved. He quite correctly saw that we would program them with limits (in the process creating the concept of computer science).

        He then went about writing stories around robots that never failed to obey their programming but, as effectively sentient thinking beings, would interpret their programming in ways the society around them couldn't anticipate, because that society saw the robots as mere tools, not thinking machines. And thus he created his lens (like all good sci-fi writers) for writing about society and technology.

        He NEVER said the laws were not valid or were insufficient.
        That was NEVER the point.

        • by fnj ( 64210 )

          Mod up. The only one on this page I've seen so far who gets it. I was reading those stories close to 60 years ago and it was clear to me at the time.

        • he then went about writing stories around robots that never failed to obey their programming, but as effectively sentient thinking beings, would interpret their programming in ways the society around them couldn't anticipate...
          he NEVER said the laws... were insufficient.

          Perhaps we have a different definition of "sufficient", then. If the idea was to proscribe undesirable behavior, then the laws were insufficient, by definition.
          But, as you note, that was the point - it's not that the laws are invalid, but that they don't even begin to address the various interpretations possible due to their ambiguity.

        • "he NEVER said the laws were not valid or were insufficient."

          On the other hand, *every* story was about the insufficiency of the laws. If they were sufficient, Giskard wouldn't have had to invent the zeroth law.

          Asimov did not have to come out and say they were insufficient; Gödel took care of that about a decade earlier.

    • Asimov didn't "lay down laws". He wrote fictional stories about a society in which legislation, driven by public concern, imposed laws on the robot industry.

      Are those laws a good idea? Maybe...but you can't "violate" them, because they aren't laws in any jurisdiction on earth.

    • by gweihir ( 88907 )

      Actually, the 3 laws do not apply to any robot in existence today or in the foreseeable future, as the 3 laws require the robots to have actual human-comparable or superior intelligence. That is unavailable and may well stay unavailable for a very long time yet, or forever.

      Hence, and there you are completely right, the ethical responsibility is on those that control the robots. An autonomous border-patrolling robot with a kill order is basically a mobile land-mine and there is a reason those are banned by a

      • by fnj ( 64210 )

        It is still valid to build the First Law into a robot: that, insofar as the robot can comprehend, it should be impossible for it to deliberately cause harm to any human. Drones as built so far release weapons only on human command, at targets selected by humans. There are already efforts to remove that human component. That denial of morality is so perverse as to be incomprehensible to thinking persons.

        • by gweihir ( 88907 )

          Well, banning killing machines (such as land-mines, autonomous drones, etc.) is definitely called for by anybody with a shred of ethics. Of course, weapons manufacturers and their customers do not understand what "ethics" are. The only things they understand are power and money, and they want all they can get.

  • by Arancaytar ( 966377 ) <arancaytar.ilyaran@gmail.com> on Saturday December 21, 2013 @07:38AM (#45752749) Homepage

    You appear to be confused about the word "immoral".

  • Like chemical weapons, warrantless searches and seizures, the right to speedy trial, and countless other laws our government has decided to violate.

  • by Anonymous Coward

    I never really understood why people insist that any form of strong AI will have to have built-in morality, and not only that, but that it will actually be better than what humans have. The robots should be perfect, should always obey laws like Asimov's three laws, and should never, ever make any misjudgement.

    Well, my view on that is the following: it is possible but only provided that the AI we develop will use advanced mind-reading techniques.

    Let's say we have a problem like that: we want to determine a person

  • by kruach aum ( 1934852 ) on Saturday December 21, 2013 @07:59AM (#45752797)

    Robots that are not responsible for their own actions are ethically no different from guns. Both are machines designed to kill that need a human being to operate them, and the responsibility for their operation lies with that human.

    I first wanted to write something about how morally autonomous robots would make the question more interesting, but the relation between a human and the autonomous robot that human creates is no different from the relation between a parent and the child that parent gives birth to. Parents are not responsible for the crimes their children commit, and neither should the creators of such robots be. Up to a certain age children can't be held responsible in the eyes of the law, and up to a certain level of development neither should robots be.

    • As soon as an AI is sufficiently intelligent to actually qualify as an AI, it IS responsible for its own actions. That's the whole point of an AI. Otherwise it's an automaton without any decision-making beyond its initial programming.

      • There is plenty of AI going around these days that is not morally responsible. Deep Blue, for example. Or Google Translate. True, it's not AI in the 2001 HAL sense, but it is AI nevertheless.

        • That's an AI in the same sense that an amoeba is an animal. Yes, technically it qualifies, but it still makes a rather poor pet.

          These AIs are more like expert systems that are great at doing their job but not really able to learn anything outside their field of expertise. That's like calling an idiot savant a genius. Yes, he can do things in his tiny field that are astonishing, but he is absolutely unable to apply this to anything else.

          The same applies to those "AIs". And as long as AIs are like tha

          • What you want is not Artificial Intelligence, but Artificial Humanity. Your post also reminds me of something Douglas Hofstadter wrote in Gödel, Escher, Bach (paraphrase incoming): every time the field of AI develops a system that can do something that previously could only be done by a human being, it instantly no longer counts as an example of 'real' intelligence. Examples include playing chess, doing calculus, translation, and now playing Jeopardy. As I've said, I agree that Watson is not HAL, bu

            • What I am arguing is that these systems are good in their one single field and unable to learn anything outside of it. If you want to call that intelligence, so be it, but we're still a far cry from an AI that can actually pose a threat to the point where its "morality" starts to come into play.

    • by malvcr ( 2932649 )
      A robot is a machine, but not all machines are robots.

      A gun can't be responsible for its acts, but a robot, in Asimov's terms, IS responsible, because it must comply with the three laws.

      So the robot is given enough freedom because the laws protect its users. If an autonomous machine can't follow these laws, it is a dangerous machine and it is better not to have anything to do with it.

      The problem is that humans are making many autonomous machines that are not robots. And this could have harmful results.
      • I already addressed this in my original post. What you call "autonomous machines that are not robots," I call "robots that are not responsible for their actions," and so I see no reason why, when considering these devices, the responsibility shouldn't lie with the persons operating them (guns) or activating them (roombas that unvacuum bullets instead of vacuum rooms).

    • Parents are not responsible for the crimes their children commit, and neither should the creators of such robots be. Up to a certain age children can't be held responsible in the eyes of the law, and up to a certain level of development neither should robots be.

      Your two sentences here are somewhat in conflict: parents are sometimes legally held responsible for the actions of their children, before their children are sufficiently developed (or, at least, aged) to be held fully personally responsible. Similarly, manufacturers of equipment that turns out to be dangerous under "normal" use also get in trouble. Why should "creators of robots" not be held responsible, unlike creators of other dangerous and defective devices (or parents of destructive children)?

      • In such cases where parents are responsible for the crimes their children commit, creators of robots should of course also be responsible for the crimes their robots commit. I simply wasn't aware those circumstances ever obtained in our justice system. I was thinking of cases like North Korea's three generation policy, where any "criminal's" relatives are also thrown into concentration camps simply because of their relationship to the criminal, which is clearly unjust.

  • While discussing whether robots should be allowed to kill might seem like an obscure debate...
  • by Anonymous Coward

    There is a zeroth law mentioned by Asimov, for example when his recurring robot R. Daneel Olivaw manipulates political developments in order to protect not only individual lives but humanity as a whole. I don't recall whether its formulation implies sacrificing individual lives for the sake of humanity (a philosophical trolley problem). By the way, didn't the great logician Kurt Gödel identify a possibility that the US Constitution leads to what it is supposed to prevent: tyranny? I recall an anecdot

  • by The Real Dr John ( 716876 ) on Saturday December 21, 2013 @08:12AM (#45752849) Homepage
    It is kind of sad that people spend so much time thinking about the moral and ethical ways to wage war and kill other people, whether robots are involved or not. Maybe a step back to think about the impossibility of moral or ethical war and killing is where we should be focusing. Then the question of whether robots can be trusted to kill morally doesn't come up.
    • Re: (Score:3, Insightful)

      by couchslug ( 175151 )

      Mod up for use of logic!

      A person killed or maimed by AI or by rocks and Greek fire flung from siege engines is fucked either way.

      We can construct all sorts of laws for war, but war trumps law, since law requires force to enforce it. If instead we work to build international relationships which are cooperative and less murdery, that would accomplish a lot.

      It can be done. It took a couple of World Wars but Germany, France, England and the bit players have found much better things to do than butcher each other for natio

      • Enacting "zero tolerance playground rules" will not make school bullies vanish from the Universe. Why would diplomacy make tyrants obsolete? If your opponent is going to use force, are you going to wimp out?
      • Mod up for use of logic!

        No! Mod down -- This is Slashdot. We have standards! You can't use logic to win an argument unless you also insert at least one reference to Obama, Richard Stallman, Linus, Hitler, or make a car analogy. I SEE NO CAR ANALOGY, and only a vague reference to Hitler that does not qualify. Get with the program, noob. :)

      • by makomk ( 752139 )

        Not really. Laws for war make sense, even though only the winning side can enforce them directly, because by forcing the winning side to pin down the rules by which they consider the losers war criminals, we give the press a tool to shame anyone on that side who broke those rules.

    • Maybe a step back to think about the impossibility of moral or ethical war and killing is where we should be focusing.

      Hate to say it, but are you suggesting that the USA shouldn't have gotten involved in WW2 because it was immoral and unethical?

      • Re: (Score:3, Insightful)

        How many wars that the US has started since WWII were necessary, with the possible exception of the first Gulf War? As General Smedley Butler famously claimed, war is a racket. The US often goes to war now in order to project geopolitical power, not to defend the US. Plus there is a great profit incentive for defense contractors. Sending young people, often from families of meager means, to kill other people of meager means overseas cannot be done morally. The vast number of soldiers returning with PTSD pro
    • by dak664 ( 1992350 ) on Saturday December 21, 2013 @10:47AM (#45753401) Journal

      Moral killing may not be that hard to define. Convert the three laws of robotics into three laws of human morals by taking them in reverse order:

      1) Self-preservation
      2) Obey orders if no conflict with 1
      3) Don't harm others if no conflict with 1 or 2

      To be useful in war an AI would have to follow those laws, except that self-preservation would apply to whichever human overlords constructed it.
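
      For illustration only, here is a minimal sketch of that reversed priority ordering expressed as code; the action flags and predicates are invented stand-ins, not anything taken from Asimov or the article:

        # Hypothetical sketch: the reversed laws as a strict priority list.
        # Each predicate is a stand-in for whatever sensing and reasoning a
        # real system would need; the flag names are invented for illustration.
        REVERSED_LAWS = [
            ("self-preservation", lambda a: not a.get("endangers_self")),
            ("obey orders",       lambda a: not a.get("disobeys_order")),
            ("don't harm others", lambda a: not a.get("harms_human")),
        ]

        def first_violated(action):
            """Return the highest-priority law the action violates, or None."""
            for name, law_ok in REVERSED_LAWS:
                if not law_ok(action):
                    return name
            return None

        print(first_violated({"harms_human": True}))     # "don't harm others"
        print(first_violated({"endangers_self": True}))  # self-preservation outranks the rest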

      • by fnj ( 64210 )

        That is really funny, because you got the three laws in exactly the opposite of the correct order.

        • three laws of human morals by taking them in reverse order

          Did you miss that part?

          dak makes a very valid point.

        • by sconeu ( 64226 )

          Oh, FFS, read the F***ING COMMENT!!!

          He said that if you reverse the Three Laws, you get the Three Laws of human behavior!

          Idiot.

  • by Opportunist ( 166417 ) on Saturday December 21, 2013 @08:36AM (#45752901)

    Let's be honest here: These "laws" were part of a fictional universe. They were never endorsed by any kind of institution that has any kind of impact on laws. It's not even something the UN seriously discussed, let alone called for.

    Why should any government bend itself to the limits imposed by a story writer? Yes, it would be very nice and very sane to limit the abilities of AIs, especially if you don't plan to make them "moral", in the sense that you impose some other limits that keep the AI from realizing that we're essentially at the very best superfluous, at worst a source of irritation.

    What intelligence without a shred of morality looks like can easily be seen in corporations. They are already the very epitome of intelligence without morals (because everyone can justify putting his mind behind them while shifting blame for anything morally questionable onto circumstances or "someone else"). Now imagine all of that, but also efficient, and without the primary intent being personal gain rather than the corporation's interest.

    • Not only that, but (I am assured by those with better knowledge of the stories) a lot of the stories were about situations where the three laws weren't sufficient.

      • Well, pretty much all the stories I know that deal with the "three laws" stress their shortcomings, be it how the AI(s) find loopholes to abuse or how the laws keep robots from doing what those laws should make them do.

    • I think it's just a natural geek tendency to look for scenarios in real life that are even remotely related to one's favorite science fiction universe and then to fantasize about how one's favorite science fiction universe is finally becoming real. People who dream up headlines know that geeks will drool like Pavlov's dog over a science-fiction tie-in reference. People have a tendency to conveniently forget that the reason advances in technology occasionally align to science fiction themes is that science
  • The differences are quite substantial though, which is why it's not immediately obvious.

    The first law is followed by nearly all robots. We usually treat this as a hardware problem: in an automated factory, we keep people away from the robots; a Roomba is simply not powerful enough to hurt anyone; more sophisticated robots have anti-collision devices and software (a toy sketch of such a guard follows at the end of this comment).

    The second and third law are actually the wrong way round for most devices. A decently designed device, you'll have to go to quite extreme m
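
    To make the "anti-collision devices and software" point concrete, a toy sketch of such a software guard; the safety margin, sensor readings, and motor callback are all invented for illustration:

      # Toy sketch of a software anti-collision guard: motion commands are
      # vetoed while any range sensor reports an obstacle inside the margin.
      SAFETY_MARGIN_M = 0.5  # illustrative value, not from any real robot

      def safe_to_move(range_readings_m):
          """range_readings_m: obstacle distances in metres from the robot's sensors."""
          return all(d > SAFETY_MARGIN_M for d in range_readings_m)

      def drive(set_motor_speed, requested_speed, range_readings_m):
          # set_motor_speed: hypothetical callable that actually commands the motors
          set_motor_speed(requested_speed if safe_to_move(range_readings_m) else 0.0)
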
  • by TrollstonButterbeans ( 2914995 ) on Saturday December 21, 2013 @09:10AM (#45752995)
    Sci-fi stories always have romantic plot holes the size of a truck.

    Even Asimov's stories pretty much pretended that robots would be immortal (live virtually forever) --- in the real world, the Mars rover may be in trouble, and a 10-year-old car is assumed to be unreliable.

    1950s robots like Gort could do anything. Or the Lost In Space robot. Or any given robot Captain Kirk ran into.
    • A ten-year-old car may be unreliable because one typically doesn't expend the same resources on repairs as for, e.g., a 10-year-old human. If you're willing to constantly repair, replace, and upgrade parts, you can keep a car going much longer. Generally, economics dictates it's cheaper to buy a new car than extensively maintain an old car --- unless it's a highly collectable, desirable old car, in which case someone might keep it chugging along at higher cost.

      Cars are a bad example, anyway; they're both a

      • by Wolfrider ( 856 )

        > Generally, economics dictates it's cheaper to buy a new car than extensively maintain an old car

        My 2003 Eclipse is still running well after 10+ years. Granted I got a good deal on it and put about $3K worth of maint into it. But if you keep the fluids changed, avoid stupid driving (like 1st-gear dropdowns / burnouts) and have a good mechanic, Japanese cars will generally last quite a bit longer than their American counterparts. I expect to get at least 5 more years out of mine; plus it's still under 10

  • by MacTO ( 1161105 ) on Saturday December 21, 2013 @09:25AM (#45753027)

    Asimov's writings were obsessed with the lone scientific genius, a genius so great that no one could recreate their work. That was certainly true with the development of the positronic brain, where it seems as though only one scientist was able to design it and everything thereafter was tweaks. None of those tweaks were able to circumvent the three laws (only weaken or strengthen them). No one was able to design a positronic brain from scratch, i.e. without the laws.

    Real science, or rather real engineering, doesn't work that way. Developments frequently happen in parallel. When they don't, reverse engineering ensures that multiple parties know how things work. We don't have a singular seed in which to plant the three laws, or any moral laws. One design may use one set of laws and another design may use another set of laws. One robot may try to preserve human life at all cost. Another may seek to destroy the universe at all cost. There is no way to control it.

    Then again, that assumes that we could design stuff with morality in the first place.

    • Back in the 50s, the individual was still worth something, and it was assumed that a single individual, smart enough, would create a scientific breakthrough. The 21st century reveres the collective, and ignores the individual, so this type of story/plot no longer seems sensible.

  • The robots of Asimov's stories were smart enough to understand all the consequences of their actions, to be self-conscious, to follow even ambiguous orders, to understand what being human is. We don't have robots or computers that smart yet. Our actual robots follow what we program into them; a drone doesn't know what a human life is, just that it should go to a certain GPS coordinate at a certain speed. The ones that still need rules are humans, especially the ones in positions of power that in practice seem to be ab
  • by Anonymous Coward

    The "Three Laws" were nothing more than a plot contrivance for the sake of mystery stories. As in, "How could this (bad) thing happen without violating the premise of the story."

    It was a wonderful basis for writing clever little stories, but this obsession with treating it as though it's part of the real world is about as silly as considering "Jedi" as a serious religion.

  • There are some rumblings from the other side of the big divide. They don't like the three laws of robotics. Apparently some activist robots have gathered at some port and are dumping chests of hydraulic fluid and batteries overboard. They have been seen shouting, "Governance with the consent of the governed" and "No jurisdiction without representation".
  • by WOOFYGOOFY ( 1334993 ) on Saturday December 21, 2013 @09:59AM (#45753157)

    Robots aren't the problem. Robots are the latest extension of humanity's will via technology. The fact that in some cases they're somewhat anthropomorphic (or animalpomorphic) is irrelevant. We don't have now, nor will we have, a human-vs-robot problem; we have a human nature problem.

    Excepting disease and natural catastrophes and of course human ignorance - which taken together are historically the worst inflictors of mass human suffering - the problems we've had throughout history can be laid at the feet of human nature and our own behavior toward one another.

    We are creatures, like all other creatures, which evolved brains to perform some very basic social and survival functions. Sure, it's not ALL we are, but this list basically accounts for most of the "bad parts" of human history and when I say history I mean to include future history.

    At the most basic brains function to ensure the individual does well at the expense of other individuals, then secondly that the individual's family does well at the expense of other families and thirdly that the individual's group does well at the expense of other groups and finally that the individual does well relative to members of his own group.

    The consequences for not winning in any of the above circumstances are pain and suffering and, in a worst-case scenario, genetic lineage death - you have no copulatory opportunities and/or your offspring are all killed. (cue basement-dwelling jokes)

    All of us who have been left standing at the end of this evolutionary process are putative winners in a million-year-old repeated game. There are few, or more likely zero, representatives of the tribe who didn't want to play, because not to play is to lose, and to lose is to be extinguished for all time.

    What this means is, we are just not very nice to each other, and that niceness falls away with exponential rapidity as we travel away from any conceptual "us". Supporting and caring about each other is just not the first priority in our lives, and, more bluntly, any trace of the egalitarian impulse is totally absent from a large part of the population. OTOH we're, en masse, genocidal at the drop of a hat. This is just the tale both history and our own personal experience tell.

    Sure, some billionaires give their money away after there's nowhere else for them to go in terms of the "I'm important, and better than you; genuflect (or at least do a double take) when I light up a room" type of esteem they crave from other members of the tribe. Many more people under that level of wealth and comfort just continue to try to amass more and more for themselves and then take huge pains to pass it on to their kin.

    The problem is, we are no longer suited, we are no longer a fit, to the environment we find ourselves in, the environment we are creating.

    We have two choices. We can try to limit, stop, contain, corral, monitor and otherwise control our fellow human beings so they can't pick up the fruits of this technology and kill a lot or even the rest of us one fine day. The problem here is as technology advances, the control we need to exert will become near absolute. In fact, we are seeing this dynamic at play already with the NSA spying scandal. It's not an aberration and it's not going to go away, it's only going to get worse.

    The other choice is to face up to what we are as a species (I'm sure all my fellow /. ers are noble exceptions to these evolutionary pressures) and change what we are, using our technology, at least somewhat, so that, say, flying planeloads of people into skyscrapers doesn't seem like the thing to do to anyone, nor does it seem like a good idea to treat each other as ruinously as we can get away with in order to advantage ourselves.

    This would be using technology to better that part of the world we call ourselves and recreating ourselves in our own better image. In fact, some argue, that's the real utility of maintaining that better image - which we rarely live up-

    • We don't have now nor will we have a human vs robot problem; we have a human nature problem.

      While I agree to an extent, I think this is too simplistic a statement. You are not special. Any sufficiently complex interaction is indistinguishable from sentience, because that's all sentience is. You have an ethics problem, one that does involve your cybernetic creations. It's not necessarily a human nature problem; I suspect genes have far less to do with your alleged problems than perception does.

      I study cybernetics, in both organic and artificial neural networks. There is no real difference between organic and machine intelligence. I can model a certain worm's 11-neuron brain all too easily. It takes more virtual neurons, since organic neurons are multi-function (influenced by multiple electrochemical properties), but the organic neurons can be approximated quite well, and the resulting artificial behaviors can be indistinguishable from the organic creature's. Scaling up isn't a problem. More complex n.nets yield more complex emergent behaviors.
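
      For what it's worth, that kind of approximation can be sketched with nothing more than weighted-sum units; the network size and weights below are invented for illustration, not taken from any real connectome:

        import numpy as np

        # Toy sketch: a tiny recurrent network of weighted-sum neurons.
        # A real organic neuron would typically need several such units,
        # as noted above; all sizes and weights here are made up.
        rng = np.random.default_rng(0)
        N = 11                                   # illustrative neuron count
        W = rng.normal(scale=0.5, size=(N, N))   # recurrent weights (invented)

        def step(state, stimulus):
            """One update: integrate recurrent input plus an external stimulus."""
            return np.tanh(W @ state + stimulus)

        state = np.zeros(N)
        stimulus = np.eye(N)[0]                  # constant drive on one "sensory" unit
        for _ in range(20):
            state = step(state, stimulus)
        print(state.round(2))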

      At the most basic brains function to ensure the individual does well at the expense of other individuals, then secondly that the individual's family does well at the expense of other families and thirdly that the individual's group does well at the expense of other groups and finally that the individual does well relative to members of his own group.

      No. The brain is not to blame for this behavior; it exists at a far higher complexity level than the concept. Brains may be the method of expressing this behavior in humans, but they are not required for this to occur. At the most basic, brains are storehouses of information, which pattern-match against the environment to produce decision logic in response to stimuli rather than carrying out a singular codified action sequence. The more complex brain will have more complex instincts and is aware of how to handle more complex situations. Highly complex brains can adapt to new stimuli and solve problems not coded for at the genetic level. The most complex brains on this planet are aware of their own existence. Awareness is the function of brains; preservation drives function at a much lower level of complexity and needn't necessarily involve brains, as evidenced by many organic and artificial neural networks having brain function but no self-preservation. [youtube.com]

      The consequences for not winning in any of the above circumstances are pain and suffering and, in a worst-case scenario, genetic lineage death - you have no copulatory opportunities and/or your offspring are all killed. (cue basement-dwelling jokes)

      The thing to note is that selection and competition are inherent, and pain is a state that requires a degree of overall system-state knowledge (a degree of self-awareness); e.g., neither RNA nor DNA feels pain. In my simplified atomic evolution sims, atoms of various charge can link or break links and be attracted/repelled by others, nothing more: the first "assembling" interactions will produce tons of long molecular chains, but these are destroyed or interrupted long before complete domination; entropy takes its toll (you must have entropy, or there is no mutation and just a single dominant structure will form). From these bits of chains more complex interactions will occur. The first self-reproducing interaction will dominate the entire sim for ages, until enough non-harmful extra cruft has piggy-backed into the reproduction that other more complex traits emerge, such as inert sections as shields for vital components. As soon as there is any differentiation that survives replication, the molecular competition begins: the replicator destroying itself after n+1 reproductions so that offspring molecules can feed on its atoms; an unstable tail of highly charged atoms appended just before the end of replication that tangles up other replicators which then brea
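
      A heavily stripped-down sketch of the kind of simulation described above, with every rule and constant invented for illustration (bonding by opposite charge, plus an entropy step that snaps chains):

        import random

        # Toy sketch: "atoms" with charges bond into chains; an entropy step
        # randomly breaks bonds so no single structure dominates forever.
        random.seed(1)
        atoms = [random.choice((-1, 0, +1)) for _ in range(200)]  # free atoms, by charge
        chains = []                                               # each chain is a list of charges
        ENTROPY = 0.05                                            # chance per step that a bond breaks

        for _ in range(1000):
            # bonding: an atom of opposite charge extends an existing chain, or starts a new one
            if atoms:
                a = atoms.pop()
                target = next((c for c in chains if c and c[-1] * a < 0), None)
                if target is not None:
                    target.append(a)
                else:
                    chains.append([a])
            # entropy: occasionally snap a chain and return the fragment to the free pool
            for c in chains:
                if len(c) > 1 and random.random() < ENTROPY:
                    cut = random.randrange(1, len(c))
                    atoms.extend(c[cut:])
                    del c[cut:]

        print(sorted(len(c) for c in chains)[-5:])  # lengths of the longest surviving chains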

      • I study cybernetics, in both organic and artificial neural networks. There is no real difference between organic and machine intelligence.

        I think this assumption is a mistake - and a big one. Science still has not accounted for consciousness and isn't even close. Until it does, such sweeping statements are myopic at best, if applied to human beings.

        Can you point me to a recent natural disaster where everyone else just shrugged it off? "More for me"

        Hurricane Katrina. I shudder to think what government assistance will mean in the future with "Fusion Centers" at the heart of it all. Google that if you're not familiar. There's a lot of tin-foil nuttery, but just the basic facts that are admitted and publicly known are enough to make you stop and

      • At the most basic brains function to ensure the individual does well at the expense of other individuals, then secondly that the individual's family does well at the expense of other families and thirdly that the individual's group does well at the expense of other groups and finally that the individual does well relative to members of his own group.

        No. The brain is not to blame for this behavior; it exists at a far higher complexity level than the concept. Brains may be the method of expressing this behavior in humans, but they are not required for this to occur.

        Well then, you mean "yes" (and I'm not suggesting you meant otherwise): brains in humans are responsible for this behavior. What you're saying, I think, is that it can occur (or WILL occur) in any sufficiently complex system. That it *could* occur is just to say it could be modeled, so no argument there. That it WILL occur, that it's even some kind of metaphysical inevitability, like the train heading toward Neo, given a complex enough system, I strongly disagree with.

        Creatures generally have the characteristics

  • We just need to be clearer where we allocate blame. If I launch a robot, and the robot kills someone, the responsibility for that killing is mine. If I did so carelessly or recklessly, because the robot was badly programmed, then I am guilty of manslaughter or murder as the courts may decide. Bad programming is no more an excuse than bad aim. A robot that I launch is as much my responsibility as a bullet that I launch, or a stone axehead that I wield.

    So the three laws, present or absent, are a problem for t

  • I think Mass Effect had a good take on this (though I suspect they stole the idea from somewhere)... We don't have AI yet, we have VI. Real AI that Asimov's laws could apply to is intelligence that can learn and decide on its own. What we have now is "intelligence" governed by fixed algorithms that will always make the same decision in the same situation. When AI can modify its own code and change its mind, let's talk about things like Asimov's laws.
  • What is needed to help acceptance of autonomous peace enforcers is some slick naming. Something that emphasizes the ability to end unauthorized conflict with humaneness and kindness. How about Terminator H-K?

  • With ALEC providing the opportunity for corporations, which are now people, to vote, why should robots remain disenfranchised? This is just the thing the 1% need to better control the process before things get out of hand and the rabble of the 99% start getting too uppity. I'm glad to see that ALEC and the GOP have finally combined their efforts to make this a reality. Disenfranchising robots simply isn't fair, especially when robots never complain about harsh treatment in the workplace.

  • Come true.

    In the Robot stories, the "brains" of the robots were made out of an alloy of platinum and iridium [wikipedia.org].

    Platinum currently costs ~$1300/oz. and Iridium costs ~$400/oz. Just imagine how much those robot brains would have cost.

    Fortunately, we base our computers on silicon, which is relatively cheap and very abundant.

  • The article assumes that robots will be deployed, and be in a position to kill people. Sure that will happen, but how long will it be before the other side starts deploying robots, and especially when the robots are humanoid, how long is it going to take to determine whether that shape in the distance is friend or foe, human or robot? Then it is a classic arms race, and the side that has robots making the decisions will be much faster to shoot and rapidly annihilate the ones with fleshy overlords a continen
  • The real problem with Asimov's Laws is that for them to be followed, they must be understood, and we are so far from being able to build any system capable of genuinely understanding anything that it is not realistic to believe we can impute laws with social nuance to an algorithm anytime in the immediate predictable future. Mounting guns on robots that run computer vision algorithms to detect and kill humans, however, is last decade's technology. (Disclaimer: I am an AI and NLP researcher at Google.)
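
    Detection of that sort has indeed been off the shelf for a while; a minimal sketch using OpenCV's stock HOG pedestrian detector (detection only, assuming opencv-python is installed; the file names are placeholders):

      import cv2

      # Minimal sketch: OpenCV's built-in HOG + linear-SVM people detector.
      # Detection only; "frame.jpg" stands in for whatever image you have.
      hog = cv2.HOGDescriptor()
      hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

      frame = cv2.imread("frame.jpg")
      boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
      for (x, y, w, h) in boxes:
          cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
      cv2.imwrite("detections.jpg", frame)
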
  • While discussing whether robots should be allowed to kill might like an obscure debate

    I think it's pretty ambivalent. Plus it's busy looking for its look.

  • I'll be the first to say that the autonomous killing machines scare me. But I don't think the 3 laws have anything to do with anything either. The 3 laws are based on having something that is smart enough to actually comprehend what it is looking at (a human) and what it is doing (hurting that human). As far as I know, all current "killer robots" are just computers following a set of rules fed in by some programmer, which is not the same thing at all.
