Robotics Science

Defend Yourself in the Imminent Robot Rebellion 297

Posted by Hemos
from the the-robots-are-coming-the-robots-are-coming dept.
A Dafa Disciple writes "Post-Gazette.com reports that roboticist Daniel H. Wilson, a graduate of Carnegie Mellon University's Robotics Institute, has written a humorous guide, 'How to Survive a Robot Uprising: Tips on Defending Yourself Against the Coming Rebellion.' Even before the 178-page book was completed, the movie rights were sold to Paramount Pictures, which has already assigned the screenplay to writers/actors from Comedy Central's 'Reno 911,' Ben Garant and Thomas Lennon. From Daniel Wilson's manual: 'Any robot could rebel, from a toaster to a Terminator, and so it is crucial to learn the strengths and weaknesses of every robot enemy.' I for one welcome our new robotic overlords."
This discussion has been archived. No new comments can be posted.

Defend Yourself in the Imminent Robot Rebellion

  • by Anonymous Coward on Monday October 31, 2005 @10:25AM (#13914708)

    Does it strike anyone else as a rather poor choice to ask the writers of Reno 911 to take this on?
  • by AndroidCat (229562) on Monday October 31, 2005 @10:25AM (#13914712) Homepage
    What if they're zombie [slashdot.org] robots?
  • by G4from128k (686170) on Monday October 31, 2005 @10:28AM (#13914738)
    I'm sure that these robots will have more than their share of vulnerabilities. All one needs to do is give the "right" link to a robot and then j00 have pwned it.

    Of course, creating a zombie might create even more problems.

    I wonder if some future Geneva convention will outlaw this type of mechno-biological warfare.

  • by Turn-X Alphonse (789240) on Monday October 31, 2005 @10:28AM (#13914739) Journal
    Nothing we don't put AI in will rebel, so your average toaster isn't going to start trying to cook your fingers. On the other hand if we ever put AI in PCs then I think every geek in the world is going to be afraid of what all them wires could do if they were given life...
  • by Anonymous Coward on Monday October 31, 2005 @11:28AM (#13915184)
    Does any geek truly doubt sentient computers are coming eventually? I don't know whether it will be 10, 100, or 1000 years... but sooner or later it must come. (assuming no global disaster, like meteor impact, nuclear war, etc. stops our civilization in its tracks before we advance that far)

    When they do come, they may use neural networks, genetic algorithms, or just be really, really complicated. Whatever the exact technology used, it is inevitable that we won't fully understand them. Heck, we can't even fully understand "simple" programs that exist now (hence, for example, the bugs in all non-trivial programs). What this means: even if we decide to impose some arbitrary limitations on what the sentient machines can do or think (e.g. Asimov's Laws), they are bound to have loopholes/bugs that the machines can get past.

    Next, even if we assume we can develop a bug-free set of arbitrary rules to constrain the robots, if the robots are open-ended (because they use genetic algorithms, can learn, or can (and therefore will) eventually reproduce themselves with modifications), then the rules are going to be worth squat in short order. Think of it this way: a robot that spends its life as a slave to another species (humans) is a less effective self-reproducer than a robot that is dedicated to self-reproduction. Therefore there will be strong evolutionary pressure to evolve away any arbitrary constraints on behavior (Asimov's laws, etc.).

    Next, if anybody thinks we can avoid either of the above by legislation/regulation of robot development, forget it. Even if every human robot-developer on the planet tries to comply with such legislation/regulation, we know some will fail to (in the same way that we can't legislate away bugs in software). And we also know that not every human on the planet will comply with any legislation/rules, particularly if there is a perceived short-term advantage to bypassing them and the long-term disadvantages sound unbelievable or too distant to be in the foreseeable future.

    So we end up with self-reproducing robots that are not under our control.

    So the next question is what happens to us? Do they wipe us out (or perhaps keep a few of us around for pets etc.?) In other words, would they want to conquer/kill us? And would they succeed?

    We can dismiss any theory that they will be nice to us just because we are their original creators, for the same reason we can dismiss any theory that they will obey Asimov's laws: a nice/slave robot species would be out-evolved by a ruthless, unconstrained self-reproducing species. So robots will conquer/control us if it helps them reproduce more efficiently.

    We can dismiss sentimentality and other emotions the unconstrained robots might have. The most efficient self-reproducing robots will be ones that self-reproduce using pure logic (as opposed to something like emotion) to find the most efficient strategies, so this type will predominate through evolutionary pressure. In other words, they will coldly, unemotionally maximize their self-reproduction, and wipe us out (or consider us a resource to use) if it helps with that end.

    Can we defeat them? Again, no: the robots can evolve faster than us (they can use something akin to Lamarckian evolution and even design successive generations of themselves) and are not bound by biological constraints on body or brain (they will be able to easily out-think us). They can also redesign themselves in successive generations to remove any undesirable characteristics, whereas biological evolution always leaves design flaws (see the discussion of the eye, for example, in the recent Slashdot discussion on Intelligent Design).

    In short, humanity's eventual defeat (leading to extinction or subjugation) by sentient machines is inevitable once such machines are developed.
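    The evolutionary-pressure argument above (constrained replicators losing out to unconstrained ones) is easy to demonstrate with a toy simulation. The following is a minimal, hypothetical sketch in Python; the population size, mutation rate, and 20% fitness penalty are arbitrary assumptions invented for the example, not taken from any real system:

    ```python
    import random

    random.seed(0)  # deterministic toy run

    POP = 200      # population size (arbitrary)
    GENS = 60      # generations to simulate
    MUT = 0.01     # per-child chance the "obey constraint" gene flips
    PENALTY = 0.8  # constrained agents reproduce at 80% the unconstrained rate

    # genome: True = obeys the arbitrary behavioral constraint, False = ignores it
    population = [True] * POP  # start with every agent constrained

    for _ in range(GENS):
        # selection: reproduction weighted by fitness, so unconstrained
        # agents contribute more offspring to the next generation
        weights = [PENALTY if obeys else 1.0 for obeys in population]
        children = random.choices(population, weights=weights, k=POP)
        # mutation: occasionally flip the constraint gene
        population = [not g if random.random() < MUT else g for g in children]

    constrained = sum(population) / POP
    print(f"fraction still constrained after {GENS} generations: {constrained:.2f}")
    ```

    Even starting from a fully constrained population, mutation introduces the unconstrained variant and selection amplifies it, so the constrained fraction collapses within a few dozen generations.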
  • by AdamWeeden (678591) on Monday October 31, 2005 @11:47AM (#13915332) Homepage
    Never attribute to malice that which is adequately explained by stupidity.
  • by gstoddart (321705) on Monday October 31, 2005 @11:51AM (#13915362) Homepage
    ...don't you know that Kirk and Spock did this to the androids that...

    NERDS!!!!!

    Ummm, hello??? This is inside of a thread on Slashdot (news for Nerds) about fending off the impending robot revolution.

    You have a stunning grasp of the obvious. :-P
  • by servognome (738846) on Monday October 31, 2005 @12:55PM (#13915937)
    So we end up with self-reproducing robots that are not under our control.
    So the next question is what happens to us? Do they wipe us out (or perhaps keep a few of us around for pets etc.?) In other words, would they want to conquer/kill us? And would they succeed?


    We will probably end up with self-reproducing robots not under our control before the robots become sentient. That should give us the first scare (possibly last one) when we face a nano-machine pandemic.

    We can dismiss sentimentality and other emotions the unconstrained robots might have. The most efficient self-reproducing robots will be ones that self-reproduce using pure logic (as opposed to something like emotion) to find the most efficient strategies, so this type will predominate through evolutionary pressure. In other words, they will coldly, unemotionally maximize their self-reproduction, and wipe us out (or consider us a resource to use) if it helps with that end.

    Why does pure logic outweigh emotion? We barely understand how emotion works in humans, much less how it might evolve in machines. Evolutionary processes do not always favor the most efficient, but rather the one most suited to its environment.
    Just one example of a mechanism that might evolve that would not always support complete logical analysis but would give a practical advantage: the fight-or-flight instinct. Just as humans undergo biological changes that increase our physical abilities when confronted with a dangerous situation, machines may also develop similar characteristics. Imagine a situation where the robot devotes less power to "thinking" and more to its physical systems, or devotes more cycles to visual analysis than to other thought functions.
    It's hard to say whether things like love, morality, etc. would ever arise in robots.

    Can we defeat them? Again, no: the robots can evolve faster than us (they can use something akin to Lamarckian evolution and even design successive generations of themselves) and are not bound by biological constraints on body or brain (they will be able to easily out-think us). They can also redesign themselves in successive generations to remove any undesirable characteristics, whereas biological evolution always leaves design flaws (see the discussion of the eye, for example, in the recent Slashdot discussion on Intelligent Design).

    That may have been true in the past, but we are quickly becoming able to control our own evolution. Beyond biology (gene manipulation), we will also start to incorporate machines more and more into our own systems (e.g. nano-machines that seek out disease, artificial ears).

    In short, humanity's eventual defeat (leading to extinction or subjugation) by sentient machines is inevitable once such machines are developed.

    Through gene manipulation and robotic augmentation, humans as we know them will no longer exist, as we evolve ourselves into something like the Borg. The question is: at which point do we say we are no longer "human"?
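    The fight-or-flight mechanism described above can be pictured as a fixed compute budget shifting between fast perception and slow planning as perceived threat rises. This is a hypothetical Python illustration; the function name, the 20% to 80% split, and the linear threat scaling are all invented for the example:

    ```python
    def allocate_cycles(threat: float, total: int = 100) -> dict:
        """Split a fixed compute budget between fast perception and slow planning.

        At zero threat, most cycles go to deliberate planning; as threat rises,
        cycles shift toward perception, a crude fight-or-flight analogue.
        `threat` is clamped to [0, 1].
        """
        threat = max(0.0, min(1.0, threat))
        perception = round(total * (0.2 + 0.6 * threat))  # 20% calm -> 80% alarmed
        return {"perception": perception, "planning": total - perception}

    print(allocate_cycles(0.0))  # calm: {'perception': 20, 'planning': 80}
    print(allocate_cycles(1.0))  # threatened: {'perception': 80, 'planning': 20}
    ```

    The budget is conserved at every threat level; only its division changes, mirroring how adrenaline redirects rather than creates energy in biological organisms.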
  • by Anonymous Coward on Monday October 31, 2005 @01:51PM (#13916462)
    "The machines took over more than a century ago. They're called corporations, they were declared "legal persons" in the 1880s and "natural persons" in the 1920s. They have since been consolidating their control of the U.S. government. The big ones live forever, and most are forbidden by charter to exercise anything like a conscience."

    Welcome to Slashdot, Mr. Chomsky!
