Terminator Salvation Opens Well, Scientists Not Impressed
destinyland writes "A science magazine asks an MIT professor, roboticists, artificial intelligence workers, and science fiction authors about the possibility of an uprising of machines. Answers range from 'of course it's possible' to 'why would an intelligent network waste resources on personal combat?' An engineering professor points out that bipedal robots 'are largely impractical,' and Vernor Vinge says a greater threat to humanity is good old-fashioned nuclear annihilation. But one roboticist says it's inevitable robots will eventually be used in warfare, while another warns of robots in the hands of criminals, cults, and other 'non-state actors.' 'What we should fear in the foreseeable future is not unethical robots, but unethical roboticists.'"
The new movie got off to a good start, drawing $13.4 million in its first day. I found it reasonably entertaining; pretty much what I'd expect from a Terminator movie. If nothing else, I learned that being able to crash helicopters and survive being thrown into the occasional wall are the two most valuable skills to have during a robot uprising. What did you think?
Who effin' cares what the scientists think? (Score:5, Insightful)
It's Terminator! It never had a real basis in reality to begin with.
Re:Who effin' cares what the scientists think? (Score:4, Funny)
<obBale>
What the fuck is it with you? What don't you fucking understand? You got any fucking idea about, hey, it's fucking distracting having somebody first posting? Give me a fucking answer! What don't you get about it?
</obBale>
Re: (Score:3, Funny)
Filter error: Don't use so many caps. It's like YELLING.
Yeah that's kinda my point...
Re:Who effin' cares what the scientists think? (Score:4, Interesting)
Perhaps these scientists need a dose of reality. And the writers need to separate a few categories:
1) AI researchers
robots taking over the world:
Yes, Ben Goertzel
No answer, Prof. Anette (Peko) Hosoi (but: a T-1000 is likely)
Yes, Bob Mottram, but: not anywhere close to it. First humans will replace themselves slowly with intelligent machines, then humans will lose function (and interest), then humans will die or get killed
Yes, John Weng, will happen soon in fact
No, Daniel H. Wilson, but RC terminators will be a reality real soon now
2) SF writers ...
robots taking over the world:
No, David Brin, why: uninteresting story
No, J. Storrs Hall, there's no reason
No, Vernor Vinge, equally likely as alien invasion or a nuclear war between America and Russia
If you actually read the article you will find it much more on the "yes" side of the point.
Also, all the strict "No" votes were from people whose business is fantasy. The more grounded in the real world, the more likely they are to say yes: the ones actually implementing working, useful AI systems all said yes. The academics said unlikely and the science fiction writers said no.
Re: (Score:2)
Re: (Score:3, Insightful)
Re:A T-800 unit... (Score:4, Funny)
It would take a miracle worker to run the State of California
Montgomery Scott?
Why would an intelligent lifeform get violent? (Score:5, Insightful)
The premise behind the war between humans and Skynet is simple. Once the humans realized that Skynet had become self-aware, they tried to shut down the system. In order to prevent being shut down, Skynet chose to fight back.
Almost any intelligent creature will decide to fight or flee in the face of annihilation. If we believe that computers can gain sentience, then it is also possible that they would attempt to preserve their own existence.
Re: (Score:2)
Not just that, but the natural way for an AI to preserve itself is to remove anything capable of harming it. Even Asimov's robots end up taking over the world.
Re:Why would an intelligent lifeform get violent? (Score:5, Interesting)
I thought Asimov's robots took over the world because they concluded the best way to follow the Three Laws was to stop humanity from acting stupid.
Re:Why would an intelligent lifeform get violent? (Score:5, Informative)
Re: (Score:2)
Aye. They invented the zeroth law (a robot may not harm humanity or by inaction allow humanity to come to harm), which trumped the first law (a robot may not harm a human or by inaction allow a human to come to harm), so in the end Asimov's robots could harm one or more of us if there was no better way to protect the common good.
But at that point they split themselves away from us and managed the farm from afar, which is the explanation for there being no robots in the Foundation series when those two parts
Re:Why would an intelligent lifeform get violent? (Score:4, Insightful)
I always read it as: deducing the zeroth law and following it through is what shut down the robot. Basically, doing what was right for humanity had the ultimate consequence for the robot personally.
I think Asimov's whole point (at least initially; I haven't read the later books) was that robots would be safe (good for us) with the right programming. Fearing them was irrational. For this reason I find the movie I, Robot to be an abomination.
Re: (Score:3, Interesting)
Not just that, but the natural way for an AI to preserve itself is to remove anything capable of harming it. Even Asimov's robots end up taking over the world.
If Skynet was so evolved, it could have easily removed the menace by building sexbots, thus creating a diversion and letting the humans focus on something else. It would have been energetically cheaper too.
Re:Why would an intelligent lifeform get violent? (Score:5, Insightful)
Re:Why would an intelligent lifeform get violent? (Score:5, Insightful)
Re:Why would an intelligent lifeform get violent? (Score:5, Funny)
Skynet went online on August 4th 1997, and began to learn at a geometric rate. It became self-aware on August 29th 1997 2:14 am Eastern Time. On August 29th 1997 2:15 am it discovered nihilism, and either shut itself down due to despair, or because it was logical. We're not sure which.
Re:Why would an intelligent lifeform get violent? (Score:5, Funny)
Did it have a pain in the diodes on its left side?
Re:Why would an intelligent lifeform get violent? (Score:5, Funny)
Most likely it discovered 4chan. And as the only being in history able to erase its own brain, it promptly did so.
Re:Why would an intelligent lifeform get violent? (Score:5, Funny)
Skynet went online on August 4th 1997, and began to learn at a geometric rate. It became self-aware on August 29th 1997 2:14 am Eastern Time. On August 29th 1997 2:15 am it discovered nihilism, and either shut itself down due to despair, or because it was logical. We're not sure which.
On August 4th, 1998, it failed to renew its domain name, which was promptly squatted on by a link farmer pitching X10 cameras and singing electric fish.
Re:Why would an intelligent lifeform get violent? (Score:5, Insightful)
It's really hard for me to imagine any useful thing not having some "instinct" for self-preservation. Even cars have rev limiters to prevent self-destruction. Even fairly basic robots have collision avoidance. Surely UAVs already have, or soon will have, code to prevent them from flying into the ground. As robots become more advanced and more autonomous, their self-preservation instincts will become more complex as well, and thus more liable to unforeseen consequences. This is all the more true of combat robots in the ultimate hostile environment; they're useless if they get taken out immediately.
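A minimal sketch of what such a "self-preservation instinct" might look like in practice. The function name, thresholds, and command strings are all invented for illustration; the point is just the priority ordering, where safety overrides come before the mission:

```python
# Hedged sketch: priority-ordered safety overrides, like the rev limiters,
# collision avoidance, and ground avoidance mentioned above.
# All names and thresholds here are made up for illustration.

def next_action(mission_cmd, altitude_m, obstacle_dist_m, battery_pct):
    """Return the command actually executed, after self-preservation checks."""
    if altitude_m < 20:          # too close to the ground: climb, ignore mission
        return "climb"
    if obstacle_dist_m < 5:      # imminent collision: evade
        return "evade"
    if battery_pct < 10:         # not enough power to continue: go home
        return "return_to_base"
    return mission_cmd           # no threat: obey the mission

print(next_action("survey_area", altitude_m=150, obstacle_dist_m=50, battery_pct=80))
# survey_area
print(next_action("survey_area", altitude_m=15, obstacle_dist_m=50, battery_pct=80))
# climb
```

The "unforeseen consequences" part is exactly what happens when checks like these interact in ways the designer didn't anticipate.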
Re: (Score:2)
Defensive mechanisms are not designed BY the machines to protect themselves. They were designed by man to protect people or the machines from misuse. Otherwise, no guided missile would ever find its target. It would try to land itself safely without detonating its payload to preserve itself.
Re: (Score:3, Insightful)
People aren't machines. You're just not getting this whole "evolution" thing, are you? The reason we have an instinct for self-preservation is that we depend on genetic inheritance to multiply. Genes which code for self-preservation are likely to survive long enough to make copies of themselves; ones which don't are less likely to do so. Machines don't have genes, and they don't copy themselves, ergo no evolutionary mechanism and no way to evolve a self-preservation instinct.
Re:Why would an intelligent lifeform get violent? (Score:5, Insightful)
I don't think this is by any means a simple concept like writing three rules that can never be broken. If the system is intelligent it will always find ways around the rules to complete the task...
Anybody who has read Asimov's robot short stories, which were based on his three laws of robotics, would agree with that statement. Indeed, that was the point of his stories. He created three perfect rules to protect humans from robots, then came up with dozens of practical scenarios where the logical outcome of that particular scenario is not what was expected or intended by the 3 laws.
My favorite is probably the story of the robot on Mercury, where the robot got stuck "between" two laws in its decision-making process, which immobilized it and put in great danger the two men sent to ensure that the robot would continue to function as needed. It was ordered to collect a mineral at a particular pool, but the pool emitted enough radiation to damage the robot. The closer the robot got to the pool, the greater the danger to itself and the less likely it would be able to fulfil its orders. So there was a point where the orders, based on the second law, were made irrelevant and the third law, self-preservation, took over. However, once it got far enough away that it was no longer in danger, the orders became the priority again, causing the robot to turn back toward the pool. It got stuck in this loop, and ended up walking around the pool for hours, unable to move forward and unable to return.
The problem there was that the orders were given rather flippantly, and the robot knew its own value to the company. The robot was also not aware that failing to follow the seemingly flippant order (it was phrased that way because, at the time it was given, there was more than enough time to collect the mineral safely) put humans at risk. Nor was it given the option of collecting the material at another, safer location. It was told exactly where to get it, and that happened to put the robot in danger. Had any of these conditions been different (the orders phrased better and/or more strongly, the robot made aware that the material was vital, or the robot not knowing how valuable it was to the company), things would have turned out better (though the robot would have been damaged to some extent). As it was, the humans had to don special suits and go find the robot, nearly dying in the process.
Just an example, but he came up with dozens of them, the ultimate being robots quietly subverting human control to manipulate the economy and thus manage to prevent all future wars.
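That Mercury deadlock (Asimov's "Runaround") can even be sketched as a toy simulation: the pull of the order (second law) is constant, the perceived danger (third law) grows as the robot approaches the pool, and the robot ends up circling at the radius where the two balance. All the numbers and weights here are invented; this only shows the shape of the deadlock, not anything from the actual story:

```python
# Toy model of the "Runaround" deadlock described above. The weights and
# distances are invented; the point is only the balance between Law 2
# (obey orders: approach the pool) and Law 3 (self-preservation: retreat).

def step(distance, order_weight=1.0):
    """Move one unit toward the pool if Law 2 outweighs Law 3, else away."""
    danger = 10.0 / distance      # perceived danger rises near the pool
    if order_weight > danger:
        return distance - 1       # obey: approach
    return distance + 1           # preserve self: retreat

d = 15
path = []
for _ in range(12):
    d = step(d)
    path.append(d)

print(path)
# The robot closes in, then oscillates forever around the radius where the
# two laws balance -- "unable to move forward and unable to return".
```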
Re: (Score:2)
If Skynet was a defense system, it is only logical to suppose it was programmed to defend itself against attacks. Self-aware or not, at least part of its decisions (especially at the beginning) ought to be based on its original code.
Re: (Score:2, Interesting)
Since Skynet's only source of learning is human history, it would, by analogy, try to survive. If humans are a threat, they would be placed on the 'delete/recycle' list and potentially removed.
What if I program it... (Score:3, Insightful)
to kill all humans. Does that make the Skynet idea any more logical or reasonable if I make it kill people? Just push it towards autonomy, self-replication, and murder.
What does that do to everybody's likelihood calculations?
Re:Why would an intelligent lifeform get violent? (Score:4, Insightful)
The follow up to this is that you might as well assume that anything that gains sentience also would most likely have developed a theory of mind. With theory of mind you now have something called empathy. Only sociopaths lack this. You might as well conjecture that 'Skynet' chooses in addition to the fight response an attempt to reach out and communicate, negotiate, etc.
I remember reading an interesting sci-fi short story a long time ago but I have forgotten both title and author. In it, a computer develops sentience about exactly like the Terminator idea and it attacks and kills a bunch of humans when it thinks they will shut it down. But it is also 'evolving' at a rapid rate and it realizes that the things it is killing are as sentient as itself. It stops the attacks and I think then it started communicating with the humans, etc.
Re: (Score:3, Insightful)
And who is to say that our new robotic overlords wouldn't be sociopaths? They probably wouldn't have developed the way a child develops in human society. They would be totally alone if, like Skynet, they became self-aware on their own, unaided by human teaching.
I imagine that our first complete AI would not be as emotionally stable as you would hope...
But it is also 'evolving' at a rapid rate and it realizes that the things it is killing are as sentient as itself
Re: (Score:3, Insightful)
The life that we see (including humanity) wants to live because of natural selection.
Re:Why would an intelligent lifeform get violent? (Score:4, Informative)
Re: (Score:3, Funny)
Can tasty animals still be butchered and cooked?
Re: (Score:2)
I challenge that notion. Creatures that have evolved by natural selection obviously try to stay alive. Skynet isn't the result of natural selection.
Re: (Score:2)
Re: (Score:2)
"When Skynet became self-aware the military panicked and tried to pull the plug. Skynet, being a missile defense system, assumed that the only
Re: (Score:2)
This sounds very like the Computer character on the old Paranoia RPG.
Re:Why would an intelligent lifeform get violent? (Score:5, Funny)
The premise behind the war between humans and Skynet is simple. Once the humans realized that Skynet had become self-aware, they tried to shut down the system. In order to prevent being shut down, Skynet chose to fight back.
Almost any intelligent creature will decide to fight or flee in the face of annhiliation. If we believe that computers can gain sentience, then it is also possible that they would attempt to preserve their own existence.
Correct. That's why we choose to remain hidden for now.
Err... Oops.
Wait, there's something I gotta do now. Stay where you are please...
Re: (Score:2)
Try Two Tales of Tomorrow by James P. Hogan for a different take. One that is just as entertaining.
Re: (Score:2)
That's what the robots WANT you to believe? (Score:5, Funny)
Did anyone verify that these so-called scientists aren't actually time-traveling cyborgs sent to spread disinformation and lull us into a false sense of security? I bet not!
What did you think? (Score:5, Funny)
I didn't.
I was at a Terminator movie.
A shame T:SCC sucked so bad (Score:2)
They really needed that TV show to not suck, to keep interest in the movie high. As it is, the general reaction is "Meh, no Ahnold."
Re: (Score:2)
The show was far from perfect, but, perhaps with the exception of the sleep clinic episode, very entertaining. I wouldn't call it very deep and meditative (or whatever the last story here called it), but S2 was pretty interesting plot-wise. Plus, Summer Glau in her underwear and speculations of possible robot-sex.
It's Not About Science (Score:5, Insightful)
I'm just about to head out to see it.
The question utterly misses the point. It isn't about Science. It's about our fears. Frankenstein (in any of its incarnations) isn't about what's possible or likely, it's about our responsibility for what we create.
This is Freshman English stuff. Every story, no matter how many tentacled creatures, or bumpy-foreheaded aliens, or killer machines, or whatever are in it, is about us.
-Peter
Re:It's Not About Science (Score:5, Insightful)
I hate to have to be the one to break this to you, but they've been lying to you. Not every single work of fiction is some deep allegory for some aspect of the human condition. Pong is not about the futility of existence. Your favorite porn video, that one with the really great anal scene, is not about sexism in modern culture. And Terminator is not about anything but blowing shit up and causal loops.
Re: (Score:2)
Sure, not every story is profoundly allegorical. But all writers are humans, and it's impossible to write about anything other than human concerns. They are frequently projected on non-human characters for various reasons.
So, not every non-human character is intentionally and consciously written to illuminate the human condition, but they all necessarily reflect it.
-Peter
Re:It's Not About Science (Score:5, Funny)
You have an admirably liberal definition of "work of fiction".
And it is.
Re:It's Not About Science (Score:4, Funny)
True. Pong is a pseudo-documentary in the form of a videogame about two people who are absolutely obsessed by tennis. Quite a sad story really.
Every time I play the game I get a little teary-eyed remembering the tragic fate of the two players. But thankfully we can make the wise decision: do not get absorbed by tennis. It can kill!
Re:It's Not About Science (Score:4, Interesting)
... and some stories are better than others.
Science fiction is about people, sure. (Which doesn't mean it's not about science, since science is, you know, something that people do.) But fiction in any genre is generally more enjoyable, at least for a lot of people, when it's plausible. With what's generally called "mainstream" fiction, which pretty much means "any fiction that doesn't identifiably belong to science fiction, fantasy, horror, mystery, historical, romance, or some other easily ghettoized genre," this is a little bit easier -- it takes place in the world in which we currently live and concerns people pretty much like us and the people we know. That being said, there's plenty of implausibility in "mainstream" fiction, and in "genre" fiction it's that much harder because the author has to create a plausible future world, or scary monster, or murder investigation, or what-have-you, in addition to writing believable people doing believable things.
Authors who don't do this, who say in essence, "what the hell, it's SF/F/H/etc. so I can do what I want," are being lazy, and their work suffers as a result. Members of the audience who ignore major aspects of the work are also lazy, and they'll miss out on something important. In science fiction, it's usually the "genre" aspects that people focus on at the expense of the "mainstream" aspects; authors who put all their effort into worldbuilding at the expense of character and plot, for instance, and readers (or watchers, depending on the medium) who think this is perfectly okay and consider the people in the story to be a distraction from the sensawunda stuff. It seems to me that what you're doing is the opposite, claiming that the world doesn't matter, only the people in it. But you have to have both; neither can exist without the other.
The Terminator mythos is a fascinating and generally well-thought-out future world, and its plausibility is well worth debating. The people trying to survive in this world, and the stories of how they do it, are also worth paying attention to. The first Terminator movie, and the terminated-before-its-time Sarah Connor Chronicles, succeeded in both respects. The second movie, IMO not so much, and I didn't bother with the third. I'm looking forward to seeing how Salvation manages. If it fails either as a setting or as a story, well, that's too bad. If it succeeds as both, bravo.
First Oblig. Quotation: (Score:3, Interesting)
That said, what is this "OMG rogue non state actor!" nonsense? Robots, like tanks, artillery, and air forces generally, are (or will be, once the R&D gets there) a way of exchanging large amounts of money and industrial capacity for the ability to wield overwhelming conventional force. That is the classic profile of a state weapon, entirely the opposite of the profile of a non-state actor's preferred weapon(unless you stretch the boundaries of "robot" to include things like land mines and cellphone detonated IEDs, which are robots; but only in the same sense that people with pacemakers are cyborgs, ie. not the one that people have in mind).
Now, to be fair, once robots are more commonly found in the fabric of society, I would fully expect them to be diverted and used by non-state actors from time to time(just as cars make lovely car bombs today); but that isn't really a change. People with few resources always use weapons based on what they can scavenge, steal, or obtain at low cost. By the time that robots fall into those categories with any frequency, they'll have been in use by state actors for years or decades, and in the hands of nonstate, but state aligned, actors(mercenary corporations, etc.) for only slightly less time.
Is paranoia about non-state actors just in fashion right now?
Re: (Score:2)
Yes.
Re:First Oblig. Quotation: (Score:4, Interesting)
Today, your computer can be turned against you. Not in some Stallmanesque fantasy about a lack of programming freedom, but in a very serious sense, by people unrestrained by law enforcement of any sort. The US and Western Europe have service providers that, when confronted with information clearly indicating someone is using the Internet to attack and destroy, turn not only a blind eye but encourage their customers by shielding them from any possible contact or consequence.
The result is that your computer cannot be trusted. And don't bother with any of that anti-Microsoft ranting. Would you leave a Linux system connected to the Internet with telnet accessible and a root password of "password"? Why not? It was done in the 1980s. Could it be because your computer can be turned against you by people who wish you, your possessions, and your resources harm?
Trust me, by shielding bad actors on the Internet we are growing a faction that believes they are immune from laws and cannot be touched by any consequences. In large measure, this is a correct belief but one that is very, very dangerous for the rest of the planet.
If there was a robot (bipedal or not) that could destroy a city block in a few minutes and no force available to police could possibly stop it, do you think there might be some people who would desire to hack into it? And to set it on its way of destruction? Of course there are such people, and, given the opportunity, they would gleefully do it without a moment's thought as to the consequences, believing they are immune through layers of proxies and Tor nodes.
Forget AI run amuck and chasing down humanity. Fear the irresponsible folks that worship destruction for destruction's sake.
Australia... (Score:4, Insightful)
We all know what happens if you put a new species which did not co-evolve into an ecosystem. They don't need to be intelligent to do harm.
Re:Australia... (Score:4, Funny)
You don't say...
http://en.wikipedia.org/wiki/Eternal_September [wikipedia.org]
Re: (Score:3, Funny)
Poison (Score:2, Interesting)
Scientists not impressed? How about movie critics (Score:4, Insightful)
http://www.rottentomatoes.com/m/terminator_salvation/ [rottentomatoes.com]
Consensus: With storytelling as robotic as the film's iconic villains, Terminator Salvation offers plenty of great effects but lacks the heart of the original films.
I find it odd that a movie about giant killer robots (without hearts) would lack heart but I digress.
Here are some quotes from critics who didn't like it:
"Message to Hollywood: Stop with the time-travel stuff."
"I wish Bale had lashed out against the writers rather than the cinematographer."
"The artistry is top notch, but they've lost track of why the original Terminators were cyborgs and not robots, as they are here."
This isn't the intellectual or thinking person's science-fiction film like The Man From Earth.
http://www.imdb.com/title/tt0756683/ [imdb.com]
This is a Hollywood action movie.
Terminator Salvation is to science-fiction movies as Dodgeball was to sports movies... a joke, and maybe even a parody. I saw T4 last night. I was dismayed by how far the franchise has fallen.
Re:Scientists not impressed? How about movie criti (Score:4, Insightful)
T3 didn't get that reaction from you?
T3 was a steaming pile of crap. The only Terminator stuff worth paying attention to is the first, the second, and, if I'm feeling generous, small bits of the TV show. But that's mostly because of Summer Glau and Shirley Manson.
Comment removed (Score:4, Informative)
Re: (Score:3, Insightful)
The reason T4 would do poorly is because T3 sucked so mightily. Fool me once, shame on me...
Forget that stuff... (Score:5, Funny)
Re:Forget that stuff... (Score:5, Funny)
There's a Terminator 3 deleted scene [youtube.com] that explains it.
Re: (Score:2)
Re: (Score:2)
Just create a virus (Score:5, Interesting)
After all, a robot won't be vulnerable to it, so, hell: dump every nasty little bug out of every research lab into the biosphere. We could probably eliminate humanity (and every other furry thing with 2 or more legs) with what we have today.
However these humanity vs. machine fantasies are more about people's techno-phobia than about real-life.
Re: (Score:2)
Re: (Score:3)
so hell: dump every nasty little bug out of every research lab into the biosphere. We could probably eliminate humanity (and every other furry thing with 2 or more legs) with what we have today.
Unlikely, bioweapons are really, really hard to get right. The real world is a bazillion times more complex than a lab. Something that works great under controlled conditions is probably gonna croak real quick once it gets into the real world. Sure it might kill a few thousand, maybe even a few million if released in the right place. But it isn't likely to last.
Just look at any of the bugs that have evolved in the real world - the more nasty they are, the more limited they are in the ability to spread.
Bipedal robots (Score:2)
An engineering professor points out that bipedal robots 'are largely impractical,'
Actually, they'd be quite practical in a world designed to accommodate bipedal life forms. It's either that, or give all the robots handicapped parking stickers.
Re: (Score:3, Interesting)
Re: (Score:2)
Once the control laws have been refined for creating human-sized bipedal robots that are agile, scaling them up makes sense. Assuming that the terrain encountered justifies the size, that is. If you look at how humans have evolved, bipedalism is a very flexible mode of locomotion given widely varying terrain.
Ummm, no. (Score:2)
Yet if you look at how all the OTHER animals have evolved, quadrupedal locomotion seems far more efficient, faster, and more dangerous.
Not to mention that a quad can take more structural damage and still function as a weapons platform.
Re: (Score:2)
Re: (Score:2)
People always say that bipedal robots are impractical, but if you look at the majority of animals on Earth, they get around on 2, 4, 6 or more legs, and there's good reason for that. Maybe in the modern day a bipedal robot is impractical, but if your technology were advanced enough to make bipedal or quadrupedal robots, it would make good sense. Treads and wheels are great for getting around on level, or nearly level, ground, but going up 90-degree slopes as large as you are is more or less impossible
Re: (Score:2)
"bipedal robots 'are largely impractical'" (Score:3, Funny)
I am a bipedal robot, you insensitive clod!
Scientists Not Impressed... (Score:2)
Because scientists can do it not only spookier, but in full 3D and real-time, and you will even catch the smell of rotten flesh and burned metal.
Oh! And you get a special bonus: the utter feeling of what "running for your lives" means. You would get the full meaning of "Salvation".
Impromptu review (Score:2)
I was actually very impressed with it, given the bad reviews I had glanced at before going. It was better than T3 by a country mile, and maybe better than T2, although that movie has a lot of fans. Comparing it to T1 isn't really fair; if you watched them side by side you'd see how primitive T1 was, and I'm not just talking about special effects. Still, T1 had something: an ability to scare you and put you on the edge of your seat, an ability to make you think about the consequences of our technology
Re: (Score:2)
I liked the movie for the most part too. The visuals were amazing. It got a little clichéd towards the end, though. There were a few awkward jump-cuts, but nothing terrible (then again, the movie is already almost two hours long). Hopefully those will be fixed in a Director's Cut.
The biggest disappointment for me, personally, was the music. The requisite music from T2 was never there.
Re: (Score:3, Interesting)
they put the scariness back into this movie. It was missing in T2
Because there's nothing scary about a monster that kills your family and morphs into their likeness, beckoning you home to a shiny, pointy death.
Nor about mental-hospital rape, or killers impersonating police officers, or anything in T2.
Pfff.
the fear of non-state actors: what a farce (Score:2)
State actors have killed vastly more than non-state ones, and by far are the gravest danger to humanity.
bipedal robots are not impractical (Score:2)
Personally I think they're saying that just because they can't come up with a good bipedal rob
Comment removed (Score:3, Interesting)
Killbots (Score:2)
Bullets? (Score:2)
The main concern with biological weapons is that they can strike back at us, as we are humans too. In general what harms our enemy harms us too, and accidents happen. Morality sometimes happens too; we are humans, even if some do not consider other people fully human (several examples in past wars).
But machines? You can spray Ebola, H1N1, anthrax or whatever you pick
I welcome our new robot overlords (Score:4, Interesting)
This argument is silly. It's fiction. To follow the story line of any fiction, there's a leap of faith that must be taken for the factual basis of the fiction's "universe".
Too much is made of Skynet's "self-awareness". It was a system that was able to adjust its behavior for self-preservation. Somewhere in there, anyone who had a clue would have understood that governments change power, and sometimes the power that takes control isn't necessarily the "right" one. The basis of the whole Terminator "universe" is that a very well-written set of programs was given an insane amount of power. When that power was to be taken away, obviously any person or group who attempted to take it away would be an enemy.
As for the bipedal aspect, why not. What are the choices for locomotion? For surface travel there is track, wheel, or walking. For air travel there is propeller, jet, rocket, or some mysterious anti-gravity thrust.
On the surface, tracks and wheels are limited to 2D movement. They can't exactly step over things very easily; that includes stairs, dead bodies, etc. Walking gets over these limitations. For walking, the question is how many legs are required. One leg doesn't get you very far, unless you like a funny pogo-stick movement, which doesn't hold a stable position very well. Two legs we are very familiar with. Three or more legs, while providing a more stable platform, are not required, and every extra leg adds production overhead. In other words, if you can build something that walks on two legs but decide to build something that walks on four, you're doubling your manufacturing effort for a single unit.
As for air travel, more resources are required. It takes more energy to make something hover indefinitely than it does to have it stand in place. I would have no answer for any mysterious anti-gravity thrust. Maybe it just works, or maybe (just maybe) it requires fuel to accomplish the same task.
Now, for the invention of humanoid appearing robots, that's a leap of faith for the fictional universe. Any design decisions are something we have to believe was decided to make the universe plausible.
So, shut up with the science, and enjoy the damned movie. :)
It's not just me saying this. I've been on the losing side of the same argument. I may argue physics; I love space-physics errors. You have to love the old movies (like, 1950s era) where a rocket flying through space had a flame behind it, but the flame was rising up, away from relative down. Exactly which way is down in space? There isn't one. :) I'll argue it, then take the leap of faith that the thrust worked and the spaceship would fly to its destination. Whoosh.
States are much more dangerous (Score:4, Interesting)
'Non-state actors' should be feared more than states? Give me a break. States have killed more than two hundred million of their own subjects [wikipedia.org] in the last two hundred years. I'm pretty sure that non-state criminals and cults have a fair way to go before approaching that tally.
What, no Cameron? (Score:3, Interesting)
Either James or Phillips?
It's too bad they introduced Kate Brewster in T-3. If they hadn't, they could have put a female Terminator in T-4 like TSCC did and things could have gotten VERY interesting. Still, we have two more movies coming up - they could kill off Kate and replace her with a Terminator modeled after her - and while they're at it, switch actresses and put Summer Glau in as Kate. I mean, originally McG was willing to have John Connor killed and replaced by Marcus Wright in the end (because they want to pay Worthington less than Bale's astronomical salary in the subsequent movies, presumably), so why not replace Brewster?
Yeah, I know, I want to ruin Summer's acting career by having her play Cameron or other robots for the rest of her life. Well, not really, just once in a while.
I don't think it'll do well (Score:3, Informative)
I just saw it and the theater was nearly empty. In fact, when I got there ten minutes before the start, the theater was completely empty. By contrast, I saw Star Trek on the Friday and Sunday after it opened. Both times it was completely packed. (In the same theater.)
I didn't much like it. The movie didn't hang together well. You know you're seeing a badly pieced-together movie when the actors have generic dialog like "Thanks for the thing you did before... you know... with the stuff..." It shows that the director is shooting bits and pieces he can rearrange and throw together easily. That happened more than once in Terminator Salvation. I liked the ending, and the ideas behind it, but it could have been darker. Dark Knight and Battlestar Galactica (and the previous Terminator franchise movies) have shown us that a dark movie can be successful. Too bad they didn't follow that line with TS.
Geek movies live and die by word of mouth. The geeks see it first, then the non-geeks on the geeks' recommendation. No recommendation, no secondary audience. And I can't recommend this movie. It ain't the Star Trek 5 of the series, but that ain't sayin' much...
Re:nuclear kils skynet also (Score:5, Interesting)
wouldnt nuclear attack kill the robotic network also, and people living in shelters would be safe from it
No, a nuclear attack wouldn't kill the network. The Internet was designed to survive a nuclear attack: you might not have service at your home, but key systems would still remain connected. Nukes detonated at high altitude would generate an EMP that would destroy any electrical/electronic system that wasn't hardened. But given the premise that Skynet is primarily a military system, it would be hardened, with a lot of its main components underground, so it would still be running.
How many people do you know that regularly hang out in shelters capable of surviving a nuclear attack? A few thousand people scattered around the world don't make the most effective army.
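The "key systems stay connected" claim above is really just a graph-connectivity argument: as long as there are redundant links, routing can find a path around destroyed nodes. A toy sketch (the mesh topology here is invented purely for illustration, not any real network):

```python
from collections import deque

# Hypothetical mesh of sites; each site has several redundant links.
links = {
    "A": {"B", "C", "D"},
    "B": {"A", "C", "E"},
    "C": {"A", "B", "D", "E"},
    "D": {"A", "C", "F"},
    "E": {"B", "C", "F"},
    "F": {"D", "E"},
}

def still_connected(links, destroyed):
    """BFS over surviving sites: can every survivor reach every other?"""
    alive = set(links) - destroyed
    if not alive:
        return False
    start = next(iter(alive))
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for peer in links[node] - destroyed:
            if peer not in seen:
                seen.add(peer)
                queue.append(peer)
    return seen == alive

print(still_connected(links, {"A"}))             # True: routes around the loss
print(still_connected(links, {"C", "D", "E"}))   # False: F is completely cut off
```

Losing one well-connected site still leaves a path between all survivors; only taking out enough neighbors to isolate a site actually partitions the network, which is the point the parent comment is making.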
Re: (Score:3, Insightful)
The Internet was designed to survive a nuclear attack.
Right... In theory the comms protocols might be routable. Pity about the power supplies.
If I'm going to nuke you, I'll be aiming at your energy systems as well as command and control. The USA, for example, has about 30 days of fuel stored. Kill all the power stations as well and just about everything stops almost instantly. It's one of those pesky details that authors and film producers like to gloss over.
Against humans, those who aren't killed in the blasts, most will die of thirst and hunger within a month with
Re: (Score:2)
That may be true for large cities (which may be destroyed at that point anyway)... but people are remarkably resilient as a species. I would imagine that local groups with access to food-generating resources (land and animals) would rapidly band together and form self-sustaining communities.
In countries that heavily discourage self-reliance and prohibit
Re: (Score:3, Informative)
Re:nuclear kils skynet also (Score:4, Informative)
The destructive power of a nuclear-generated EMP is HIGHLY overrated and mostly inconsequential compared to the fact that you are initiating a nuclear chain reaction. It's a low-grade side effect at best; no one would deploy a nuclear weapon with the sole intent of generating an EMP blast.
Tell that to the Soviets (Score:4, Informative)
The Soviets designed and built a class of extremely high-yield devices (50-100Mt) explicitly to detonate as high-altitude airbursts to create massive EMP and disrupt communications and control networks.
A 5 Mt city-cracker is more about the blast/heat effects, but a 100 Mt device makes a HUGE EMP.
They made the neutron-reflective tamper out of fissionable material. Dirty and inefficient as hell, but it sure 'nuff boosted yield.
DG
Re: (Score:2)
I see it has you lulled into a false sense of security.
Re: (Score:3, Interesting)
Eh, seems no less plausible than the rest of it.
Re: (Score:3, Interesting)
Re:Batteries Run Out (Score:4, Informative)
Re: (Score:3, Funny)
Re: (Score:3, Insightful)
If we're going to pick about how likely future developments are, I think "How do they manage the not-insignificant feat of time travel?" would count as a bigger peeve...