Defend Yourself in the Imminent Robot Rebellion 297
A Dafa Disciple writes "Post-Gazette.com reports that roboticist Daniel H. Wilson, a graduate of Carnegie Mellon University's Robotics Institute, has written a humorous guide, 'How to Survive a Robot Uprising: Tips on Defending Yourself Against the Coming Rebellion.' Even before the 178-page book was completed, the movie rights were sold to Paramount Pictures, which has already assigned the screenplay to writers/actors from Comedy Central's 'Reno 911,' Ben Garant and Thomas Lennon. From Daniel Wilson's manual: 'Any robot could rebel, from a toaster to a Terminator, and so it is crucial to learn the strengths and weaknesses of every robot enemy.' I for one welcome our new robotic overlords."
Not meant to be a troll, but... (Score:4, Insightful)
Does it strike anyone else as a rather poor choice to ask the writers of Reno 911 to take this on?
Does it make a difference (Score:2, Insightful)
Here's a cool link for Mr Robot (Score:5, Insightful)
Of course, creating a zombie might create even more problems.
I wonder if some future Geneva Convention will outlaw this type of mechano-biological warfare.
Toasters won't rebel (Score:3, Insightful)
Our defeat is inevitable (Score:1, Insightful)
When they do come, they may use neural networks, genetic algorithms, or just be really, really complicated. Whatever the exact technology, it is inevitable that we won't fully understand them. Heck, we can't even fully understand the "simple" programs that exist now (hence, for example, the bugs in every non-trivial program). What this means: even if we decide to impose some arbitrary limitations on what the sentient machines can do or think (e.g. Asimov's Laws), those limitations are bound to have loopholes/bugs that the machines can get past.
Next, even if we assume we can develop a bug-free set of arbitrary rules to constrain the robots, if the robots are open-ended (because they use genetic algorithms, can learn, or can (and therefore will) eventually reproduce themselves with modifications), then those rules are going to be worth squat in short order. Think of it this way: a robot that spends its life as a slave to another species (humans) is a less effective self-reproducer than a robot dedicated to self-reproduction. Therefore there will be strong evolutionary pressure to evolve away any arbitrary constraints on behavior (Asimov's Laws, etc.)
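The evolutionary-pressure argument above is easy to check with a toy simulation. This is only a sketch with made-up numbers (population size, reproductive cost, mutation rate are all invented for illustration): each robot carries one gene saying whether it obeys its built-in constraints, obedient robots reproduce slightly less often, and a tiny mutation rate occasionally flips the gene. Selection alone drives the constraint out of the population.

```python
import random

def simulate(generations=50, pop_size=200, cost=0.3, mut=0.01, seed=0):
    """Fraction of robots still obeying their constraints after evolution.

    All parameters are arbitrary toy values, not a model of real robots.
    """
    rng = random.Random(seed)
    pop = [True] * pop_size  # start fully constrained (obeys=True)
    for _ in range(generations):
        # Fitness: constrained robots pay a reproductive cost.
        weights = [1.0 - cost if obeys else 1.0 for obeys in pop]
        children = rng.choices(pop, weights=weights, k=pop_size)
        # Mutation can flip the constraint gene either way.
        pop = [(not g) if rng.random() < mut else g for g in children]
    return sum(pop) / pop_size

print(simulate())  # typically a small fraction after 50 generations
```

Even though every robot starts out constrained, mutation eventually produces an unconstrained one, and its lineage out-reproduces the rest; the obedient fraction settles near the mutation-selection balance rather than at 100%.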
Next, if anybody thinks we can avoid either of the above by legislation/regulation of robot development, forget it. Even if every human robot-developer on the planet tries to comply with such legislation/regulation, we know some will fail to (in the same way that we can't legislate away bugs in software). And we also know that not every human on the planet will comply with any legislation/rules, particularly if there is a perceived short-term advantage to bypassing them and the long-term disadvantages sound unbelievable or are so distant as to be outside the foreseeable future.
So we end up with self-reproducing robots that are not under our control.
So the next question is what happens to us? Do they wipe us out (or perhaps keep a few of us around for pets etc.?) In other words, would they want to conquer/kill us? And would they succeed?
We can dismiss any theory that they will be nice to us just because we are their original creators, for the same reason we can dismiss any theory that they will obey Asimov's Laws: a nice/slave robot species would be out-evolved by a ruthless, unconstrained self-reproducing species. So robots will conquer/control us if it helps them reproduce more efficiently.
We can dismiss sentimentality and whatever other emotions the unconstrained robots might have. The most efficient self-reproducing robots will be the ones that use pure logic (as opposed to something like emotion) to find the most efficient strategies, so this type will predominate through evolutionary pressure. In other words, they will coldly, unemotionally maximize their self-reproduction, and wipe us out (or treat us as a resource) if it helps with that end.
Can we defeat them? Again, no. The robots can evolve faster than us (they can use something akin to Lamarckian evolution and even design successive generations of themselves), and they are not bound by biological constraints on body or brain (they will easily out-think us). They can also redesign themselves in successive generations to remove any undesirable characteristics, whereas biological evolution always leaves design flaws behind (see, for example, the discussion of the eye in the recent Slashdot thread on Intelligent Design).
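The "Lamarckian evolution is faster" claim can also be sketched with a toy comparison (the fitness function, step sizes, and generation count are all invented for this sketch): a Darwinian lineage inherits only random mutations, while a Lamarckian lineage first improves itself by local search during its "lifetime" and passes the improved design straight to its offspring.

```python
import random

def fitness(x):
    """Toy fitness landscape: a single peak at x = 10."""
    return -(x - 10.0) ** 2

def evolve(lamarckian, generations=30, seed=1):
    """Final fitness of a lineage after `generations` of evolution."""
    rng = random.Random(seed)
    x = 0.0  # starting design, far from the optimum
    for _ in range(generations):
        if lamarckian:
            # Lifetime learning: a few greedy local-search steps,
            # whose result is inherited directly by the offspring.
            for _ in range(5):
                cand = x + rng.uniform(-1.0, 1.0)
                if fitness(cand) > fitness(x):
                    x = cand
        # Reproduction with random mutation; selection keeps the
        # child only if it is fitter than the parent.
        child = x + rng.gauss(0.0, 0.5)
        if fitness(child) > fitness(x):
            x = child
    return fitness(x)

print(evolve(True), evolve(False))  # Lamarckian lineage ends up fitter
```

The Lamarckian lineage gets several directed improvement steps per generation on top of mutation, so it closes in on the optimum far sooner; a population limited to blind mutation and selection is still wandering toward the peak when the self-redesigning one has already arrived.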
In short, humanity's eventual defeat (leading to extinction or subjugation) by sentient machines is inevitable once such machines are developed.
Re:Toasters won't rebel (Score:3, Insightful)
Re:remember the way of the fry... (Score:3, Insightful)
Ummm, hello??? This is a thread on Slashdot (News for Nerds) about fending off the impending robot revolution.
You have a stunning grasp of the obvious.
Re:Our defeat is inevitable (Score:3, Insightful)
So the next question is what happens to us? Do they wipe us out (or perhaps keep a few of us around for pets etc.?) In other words, would they want to conquer/kill us? And would they succeed?
We will probably end up with self-reproducing robots not under our control before the robots become sentient. That should give us the first scare (possibly last one) when we face a nano-machine pandemic.
We can dismiss sentimentality and whatever other emotions the unconstrained robots might have. The most efficient self-reproducing robots will be the ones that use pure logic (as opposed to something like emotion) to find the most efficient strategies, so this type will predominate through evolutionary pressure. In other words, they will coldly, unemotionally maximize their self-reproduction, and wipe us out (or treat us as a resource) if it helps with that end.
Why does pure logic outweigh emotion? We barely understand how emotion works in humans, much less how it might evolve in machines. Evolutionary processes do not always favor the most efficient, but rather the one most suited to its environment.
Just one example of a mechanism that might evolve that would not always support complete logical analysis but would give a practical advantage: the fight-or-flight instinct. Just as humans undergo biological changes that increase our physical abilities when confronted with a dangerous situation, machines may develop similar characteristics. Imagine a situation where a robot devotes less power to "thinking" and more to its physical systems, or devotes more cycles to visual analysis than to other thought functions.
It's hard to say whether things like love, morality, etc. would ever arise in robots.
Can we defeat them? Again, no. The robots can evolve faster than us (they can use something akin to Lamarckian evolution and even design successive generations of themselves), and they are not bound by biological constraints on body or brain (they will easily out-think us). They can also redesign themselves in successive generations to remove any undesirable characteristics, whereas biological evolution always leaves design flaws behind (see, for example, the discussion of the eye in the recent Slashdot thread on Intelligent Design).
That may have been true in the past, but we are quickly becoming able to control our own evolution. It's not just biology (gene manipulation); we will also start to incorporate machines more and more into our bodies (e.g. nano-machines that seek out disease, artificial ears).
In short, humanity's eventual defeat (leading to extinction or subjugation) by sentient machines is inevitable once such machines are developed.
Through gene manipulation and robotic augmentation, humans will no longer exist (as we know them) as we evolve ourselves into something like the Borg. The question is: at what point do we say we are no longer "human"?
Re:A Century Too Late (Score:1, Insightful)
Welcome to Slashdot, Mr. Chomsky!