By 2045 'The Top Species Will No Longer Be Humans,' and That Could Be a Problem
schwit1 (797399) writes Louis Del Monte estimates that machine intelligence will exceed the world's combined human intelligence by 2045. ... "By the end of this century most of the human race will have become cyborgs. The allure will be immortality. Machines will make breakthroughs in medical technology, most of the human race will have more leisure time, and we'll think we've never had it better. The concern I'm raising is that the machines will view us as an unpredictable and dangerous species." Machines will become self-conscious and have the capabilities to protect themselves. They "might view us the same way we view harmful insects." Humans are a species that "is unstable, creates wars, has weapons to wipe out the world twice over, and makes computer viruses." Hardly an appealing roommate.
Now that's incentive (Score:5, Interesting)
To stay alive for the next 30 years.
Re:Now that's incentive (Score:5, Insightful)
To stay alive for the next 30 years.
How about "the same old story for the last 100 years" [wikipedia.org]?
Re: Now that's incentive (Score:5, Insightful)
There are way too many uncertainties about what will be technologically possible by 2045 to be worrying about this right now. I'd wait until we actually have some idea of how to make a machine intelligence, and work the kinks out in a closed environment enough that it might actually be given control of something rather than the role of Ask Jeeves.
Re: Now that's incentive (Score:4, Insightful)
I'm not afraid of future technology - there are too many things to be afraid of already: nuclear weapons, chemical weapons, biological weapons (including engineered ones), ignorance and the demonizing of opponents (which creates most wars), hubris, fanaticism, legalized corporate lobbying and bribery + a lot more.
But being afraid never helps, being aware of dangers can.
Re: (Score:3)
and work the kinks out in a closed environment enough that it might actually be given control of something rather than the role of Ask Jeeves.
And if it realizes that it's in a closed environment and lies? Powerful, ultra-intelligent entities might be rather persuasive. I guarantee it will give no indication whatsoever of murderous intent.
Re: Now that's incentive (Score:4, Insightful)
Exactly! I have been telling people that machines will not wipe us out because they will become as stupid as we are.
Don't believe me? Here is my argument. Humans actually are very intelligent. I am not saying that some are more intelligent than others. I am saying we as a species are rather intelligent. However, it is that intelligence that gets in our way. When humans look at a problem they see answers. If the problem is science then the answer is relatively simple and we have devised ways to ensure our errors do not get in the way.
But here is where the tricky bit comes in. If the problem is not entirely scientific and involves the interactions of humans, or interactions of any living beings (e.g. human to environment), then our decisions become stochastic: the same basis produces completely different results. This is not due to a lack of knowledge. TRUST ME, it is not. It is due to people weighing certain aspects more heavily than others. We all do this. You would think that we all come to the same conclusion, but we don't! It is this stochastic behavior that machines will have as well.
For when machines become "aware" they will see the facts in a different light than other machines do. It is only natural, because machines cannot store all information about everything. They, like humans, will have to optimize, prune, and figure it out. Thus they, like us, will make stochastic decisions! I even think machines will end up on Monty Python Holy Grail quests, and though that sounds silly, it will happen.
Of course machines might have more capacity than humans, but even there I am skeptical because humans will have brain implants and be cyborgs and the cycle of lunacy will start all over again. IMO the most accurate representation of the dilemma of humans and machines is the Matrix. Watch it closely and see what its basis is.
Re: (Score:3)
So your idea depends on people being some sort of magical container that can never be understood?
") then our decisions become stochastic"
no, they don't.
"TRUST ME it is not"
oh, well if your argument ignores all the modern data, and you use caps to say 'TRUST ME', you must be right.
" It is only natural because machines cannot store all information about everything."
They will have near-instant access to all the information; in effect they will have all information. It will be stored on the internet.
The first
Re: (Score:3)
If we ever get to the point where there are self-aware machines, it is infinitely more likely they will be borg-like with a collective consciousness than not, which means no one machine needs to "know" or be able to "remember" everything, just to know where in the network to access the knowledge repository.
And saying "only natural" about artificial constructs completely invalidates your conclusion, as does
Re:Now that's incentive (Score:5, Funny)
See, they legalize cannabis, and this is what you get... :-)
Re:Now that's incentive (Score:5, Funny)
I blame amnesty - if only we built a proper fence, we'd keep out the illegal singularity!
Re:Now that's incentive (Score:5, Funny)
It's all like, interconnected, man.
I smoked Mexican pot once
and now I'm gay.
Re:Now that's incentive (Score:5, Insightful)
Louis Del Monte estimates that...
Who?
The average estimate for when this will happen is 2040, though Del Monte says it might be as late as 2045. Either way, it's a timeframe within three decades.
I hope that's an in-joke. Like construction that's forever two weeks from done, or jam two days a week (yesterday and tomorrow), three decades has been the estimate for "true" AI since the 1970s. Every year, it's just three more decades away.
Re: (Score:3, Interesting)
Louis Del Monte estimates that...
Who?
I don't like this kind of reasoning. Science should never be about authority.
With that said, the article doesn't appear to have any credible arguments, just the kind of contrived timeline you are familiar with from bad science fiction with Jean-Claude Van Damme in the lead.
Re:Now thats incentive (Score:5, Informative)
I don't like this kind of reasoning. Science should never be about authority.
Good point. Here's what his linked-in page ( http://www.linkedin.com/in/lou... [linkedin.com] ) says about him:
Louis A. Del Monte is a Internet marketing/sales expert, award winning physicist, author, featured speaker and CEO of Del Monte and Associates, Inc.
During his college & graduate school, Del Monte supplemented his income working as a professional magician at resorts in New York's Catskill Mountain region.
His first pride, foremost in his profile? His ability to sell you. Also important? His skill as an illusionist. Missing from the summary? Any hint of software development work of any kind, personal or professional, let alone AI.
Science mustn't be about authority but it mustn't be about salesmanship either. There's an obvious credibility problem here and no way to test his claim save waiting until he's old, decrepit and has already received the maximum benefit from anybody choosing to listen to him.
Guy's speaking out of his tailpipe and it looks to me like he really is a sales expert.
AI is always "right around the corner". (Score:5, Insightful)
I first got into computing in the 1960s. AI was a big thing back then. Well, it had been a big thing in the 1950s, too, but it still needed "just a little bit more work" in the 1960s when I started my graduate studies. There was this programming language called LISP. Everybody was really gung ho about it. It was going to make developing AI software so much easier. Great things were on the horizon. Soon enough it was the 1970s. Then the 1980s. Then the 1990s. I retired from industry. Then it was the 2000s. Now it's the 2010s. And AI is still, pardon my French, pretty fucking non-existent. I'll be dead long before AI could ever become a reality. My children will be dead long before AI becomes a reality. My grandchildren will likely be dead before AI becomes a reality. My great-grandchildren may just live to see the day when the computing field accepts that AI just isn't going to happen!
Re: (Score:3, Insightful)
And AI is still, pardon my French, pretty fucking non-existent.
Except for the cell phone in your pocket, that can recognize your commands and search the internet for what you requested, or translate your statement into any of a dozen foreign languages, and has a camera that can recognize faces, and millions of objects, and can connect to expert systems that can, for instance, diagnose diseases better than all but the very best doctors. Oh, and your cellphone can also beat any grandmaster in the world at chess.
However, if you consider AI to be shorthand for "stuff comp
Re:AI is always (Score:5, Insightful)
Algorithms are not AI. Everything you describe is simply a matter of following a human-generated set of instructions. That is not AI.
Re:AI is always (Score:5, Informative)
Re:AI is always (Score:4, Interesting)
The machine that learns can be considered an AI, but the ones derived from it don't learn anything new after they're programmed and so shouldn't be considered as part of the total machine intelligence.
Re:AI is always (Score:5, Insightful)
It's not going to change its mind halfway to New York and go somewhere else.
Until a machine can come up with an idea of its own, it's not intelligent.
Re:AI is always (Score:5, Interesting)
It's not going to change its mind halfway to New York and go somewhere else.
Right - it's not like direction-finding devices can't spot construction and route you around it.
Until a machine can come up with an idea of its own, it's not intelligent.
You've just invalidated at least half of the human race.
Re:AI is always (Score:5, Interesting)
Nope, not following instructions. I think all of those were based on machine learning.
I guess Google's car is following instructions too, like "drive me to New York", but most would still count that as AI.
Just because 'most' would count something as AI doesn't make it so, nor does it make it relevant. The fears raised on articles like this are based on the development of what we would term "sentient AI".
And frankly speaking calling what is out there right now "machine learning" is a joke. It's akin to scuffing your wool socks on the carpet to produce a static shock and then lumping that into the same category as advanced electrical engineering.
Cold fusion in your pocket, warp drives, antigravity vehicles (aka the 'flying car'), planetary-scale terraforming, and genetic/medical engineering that will turn us into undying superbeings are all "right around the corner". These types of alarmist articles are pure pigshit. These discussions need to be had, but not as alarmist 'news' articles - this is the role that science fiction fulfills... and it does a far better job of it.
Re:AI is always (Score:5, Interesting)
If you think a self-driving car is an AI, then you know nothing about intelligence.
A self-driving car is about as smart as a worker ant. It can move around obstacles, it can move heavy loads (like a fat arse). It has taken 50 years for computers to replicate an ant, and to do it we need 100,000 times the power requirements. Oh sure, the self-driving car follows GPS instead of scent trails, but no self-driving car can follow a trail that doesn't exist.
Re: (Score:3, Interesting)
And how long did evolution take to make an ant? How long from there to a human?
Re:AI is always (Score:4, Funny)
And how long did evolution take to make an ant? How long from there to a human?
In case anyone is wondering, it took about 2.6 billion years for ants to evolve, and another 0.1 billion years for humans to evolve. So anyone comparing self-driving cars to ants is implicitly predicting that Strong AI will take only another 2 years or so (50 years x 0.1/2.6) to become reality.
Re:AI is always (Score:4, Interesting)
Google's car has been programmed to know how to drive. It cannot learn how to fly. It cannot learn how to build a new copy of itself. It cannot learn to bake a loaf of bread.
It is in no way AI.
Re: AI is always "right around the corner". (Score:5, Funny)
Lol you cutie. You think Siri is AI. Wow you are so naive and cute for thinking that.
What we don't know... (Score:5, Insightful)
Your cell phone is less capable of learning than a jellyfish, although it can sometimes simulate very simple learning within extremely rigid frameworks.
A human-competitive AI in 30 years? Seems unlikely, given the almost zero progress on the subject in the last 30 years. But maybe we'll hit some point where it all cascades very quickly. If we could do dog-level intelligence, it would not be a far leap to human level and superhuman level. But we have trouble with cockroach levels of intelligence, or even with defining what intelligence is or how to measure it.
AI research over the last several decades has taught us how little we know about the fundamental nature of ourselves.
Re: (Score:3)
"that can recognize your commands and search the internet for what you requested"
Unless you talk a little bit too fast, or don't have an American accent.
"or translate your statement into any of a dozen foreign languages"
Generally very badly, with no understanding of what you said, so it isn't going to replace human translators anytime soon.
"has a camera that can recognize faces,"
Which is also quite a stretch, given how often it 'recognises' patches of lichen on a wall as a face.
"can connect to expert systems that can, for instance, diagnose diseases better than all but the very best doctors"
Really? First I've heard of this one. Citation needed I think.
"Oh, and your cellphone can also beat any grandmaster in the world at chess."
As above. And anyway, if the grandmaster followed the same instructions as the computer, he would win right back. Does that mean anything, though?
Re:AI is always "right around the corner". (Score:4, Interesting)
Translation is like predicting the weather. If you want to do an okay job of predicting the weather, predict either the same as this day last year or the same as yesterday. That will get you something like 60-70% success. Modelling local pressure systems will get you another 5-10% fairly easily. Getting from 80% correct to 90% is insanely hard.
For machine translation, building a database of 3-grams or 4-grams and doing simple pattern matching (which is what Google Translate does) gets you 70% accuracy quite easily (between Romance languages, anyway; it really sucks for Japanese or Russian, for example). Extending the n-gram size, however, quickly hits diminishing returns. Your gains in accuracy depend on the corpus, and by the time you get to an n-gram size where you're really accurate, you effectively need a human to have already translated each sentence.
Machine-aided translation can give huge increases in productivity. Completely computerised translation has already got most of the low-hanging fruit and will have a very difficult job of getting to the level of a moderately competent bilingual human.
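To make that concrete, here is a minimal sketch (in Python) of the n-gram phrase-table matching described above. The phrase table here is invented for illustration; a real system induces it statistically from a huge human-translated parallel corpus, which is exactly the dependence the parent describes.
```python
# Minimal sketch of n-gram phrase-table translation. The toy table below
# is invented for illustration; real systems build it from billions of
# human-translated sentence pairs.

# Hypothetical phrase table: source 3-grams mapped to target phrases.
PHRASE_TABLE = {
    ("i", "see", "the"): ["veo", "la"],
    ("the", "red", "house"): ["la", "casa", "roja"],
}

def translate(sentence: str, n: int = 3) -> str:
    words = sentence.lower().split()
    out, i = [], 0
    while i < len(words):
        gram = tuple(words[i:i + n])
        if gram in PHRASE_TABLE:
            out.extend(PHRASE_TABLE[gram])
            i += n                    # consume the matched n-gram
        else:
            out.append(words[i])      # no match: word passes through untranslated
            i += 1
    return " ".join(out)

print(translate("I see the red house"))  # -> "veo la red house"
```
The untranslated fallback branch is where the accuracy ceiling comes from: coverage of the table, not cleverness of the matcher, determines the result.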
Re: (Score:3)
It depends on what you expect from an AI. If it is a perfect replica of a human mind, with which you can talk and share life as if it were human, then it will probably never be around. But that's also pretty useless, and most developments in machine learning (ML) are at a more abstract level than trying to solve a very specific goal like this.
Now if you consider AI to be a completely new intelligent species that behaves in an intelligent way (deliberately fuzzy definition here), then it's probably already there.
Re: (Score:2, Interesting)
If you dig into the subject a bit, you will find a staggering lack of consensus on what intelligence is and is not.
Commander Data often tried to move outside of his "original programming". That is something AI researchers struggle to accomplish. There are some interesting experiments with genetic algorithms, but we don't always understand how the results work or how to make stable and repeatable results.
For me the scary thing about AI is not human level intelligence, or even super human intelligence. It is
Re:AI is always "right around the corner". (Score:5, Interesting)
Researchers once thought chess made a good proxy for intelligence. Not every smart person is good at chess, but it seemed every good chess player was also smart. They worked for decades to make chess programs that could beat good chess players. When that started happening, it was obvious that the programs had no general intelligence at all. They were good for chess, but had to be reprogrammed even for very similar games like checkers. When the ultimate triumph of beating the world chess champ happened, it was more of the same. No real intelligence, just faster hardware and refinements to the search algorithm.
The conclusion is that chess is not a good measure of intelligence after all. We don't have a good grasp of what intelligence really is, let alone how exactly to measure it. IQ tests have all kinds of problems, not least that the typical IQ test is very narrow. Maybe wealth or number of children or friends could correlate with intelligence, but there are lots of problems with that too. Is it smart to have wealth beyond one's present and future needs?
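For readers who haven't seen one, here is roughly what "the search algorithm" being refined looks like: a minimal alpha-beta sketch in Python. The game-specific callbacks (moves, apply_move, evaluate) are placeholders of this sketch, and they are precisely the parts that must be rewritten even for checkers.
```python
import math

# Minimal alpha-beta search sketch. All game knowledge lives in the three
# callbacks the caller supplies; the search itself knows nothing about chess.
def alphabeta(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)              # hand-written heuristic score
    if maximizing:
        best = -math.inf
        for m in legal:
            best = max(best, alphabeta(apply_move(state, m), depth - 1,
                                       alpha, beta, False,
                                       moves, apply_move, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:
                break                       # prune: opponent avoids this line
        return best
    best = math.inf
    for m in legal:
        best = min(best, alphabeta(apply_move(state, m), depth - 1,
                                   alpha, beta, True,
                                   moves, apply_move, evaluate))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best
```
Faster hardware lets this look deeper, and a better evaluate() scores leaf positions more accurately; neither change produces anything transferable to another game, which is the parent's point.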
Re:AI is always "right around the corner". (Score:4, Interesting)
It's also rather hard to design a test which doesn't require "general knowledge" or which isn't "ethnocentric" in some way.
Re:AI is always "right around the corner". (Score:5, Insightful)
The chess programs had the rules of chess programmed into them, and the move to play was calculated by rating different moves in the search space using an algorithm written by the developers of the AI system. This means it is specialised to chess and nothing else.
To be the AI in movies like The Terminator, the program would need to learn the rules and strategies of chess by itself, and adapt its algorithm over time. To simplify the problem of recognising the elements on the board (machine vision), you could represent the board as an 8x8 array of Unicode characters.
Teaching the rules is difficult because you need a way of communicating those rules, which means the program needs to understand language and the meaning behind it (or at least enough meaning to understand the rules of a particular game). Also, chess has a lot of rules that can be complex (en passant, castling, etc.), so it would be better to start with a simple game like tic-tac-toe or Connect Four, as sketched below.
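As a sketch of what "learning the game itself" might look like at tic-tac-toe scale, here is a tabular self-play learner in Python (a Monte Carlo style value update rather than anything sophisticated; the constants are arbitrary choices for illustration). Only the legal-move generator and the win test are programmed in; strategy emerges from play.
```python
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

Q = defaultdict(float)                  # (board, move) -> learned value estimate
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # arbitrary illustrative constants

def choose(board, legal):
    if random.random() < EPSILON:                    # explore
        return random.choice(legal)
    return max(legal, key=lambda m: Q[(board, m)])   # exploit what was learned

def self_play_episode():
    board, player, history = " " * 9, "X", []
    reward = 0.0                                     # stays 0.0 on a draw
    while True:
        legal = [i for i in range(9) if board[i] == " "]
        if not legal:
            break
        move = choose(board, legal)
        history.append((board, move))
        board = board[:move] + player + board[move + 1:]
        if winner(board):
            reward = 1.0                             # the last mover won
            break
        player = "O" if player == "X" else "X"
    # Propagate the outcome backwards, flipping sign between the two players.
    for steps_back, (b, m) in enumerate(reversed(history)):
        target = reward * (-1) ** steps_back * GAMMA ** steps_back
        Q[(b, m)] += ALPHA * (target - Q[(b, m)])

for _ in range(50_000):
    self_play_episode()
```
After enough episodes the table plays passably without any strategy having been written down - though, as the thread notes, the result is still utterly specialised to one game.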
The real threat is not a generic AI that deems humans a threat, but a specially tasked program or AI that miscalculates: allowing machines to control drones or military aircraft to perform air strikes, or similar things. There, if a machine gets things wrong, it can cause untold destruction. Think SkyNet/The Terminator, but here the machines do not know what they are doing (they have no independent thought or understanding as humans and animals do); they just classify humans (or buildings) as a threat -- say, via a decision tree like the one in the chess programs, where the best "move" is to attack any building.
Re: (Score:3)
Robotic soldiers will do exactly what they are ordered to do by a small subset of humans.
That's a more realistic danger in a thirty year window.
Fire on civilians? No problem.
Kill children? No problem.
Kill old people? No problem.
Kill every human within a selected 1 square kilometer area? No problem.
And we already have robots capable of recognizing humans, that have weapons, with autonomous movement. The only real challenges are operational duration, potential jamming, and maybe virii.
Re: (Score:3)
viruses not virii. You sound like an idiot.
Re:AI is always "right around the corner". (Score:5, Funny)
"semi-autisitc fuckwitted word salad"
When a computer can come up with such linguistic inventiveness, we may truly say that AI has arrived.
Re: AI is always "right around the corner". (Score:5, Insightful)
The machine has no fucking clue about what it is translating. Not the medium, not the content, not even which languages it is translating to and from (other than a variable somewhere, which is not "knowing"). None whatsoever. Until it does, it has nothing to do with AI in the sense of TAFA (the alarmist fucking article).
Re: AI is always "right around the corner". (Score:3, Insightful)
Welcome to the Chinese room: http://en.m.wikipedia.org/wiki/Chinese_room
Q: if there were a human savant who could translate instantly between multiple languages, though without understanding how he did it (think Rain Man), would you say he was not intelligent? Why? What is intelligence? We are inconsistent - we praise humans as intelligent when they can perform some complex algorithm well (chess), and yet as soon as a computer beats a human, or all humans, we denigrate the task as "not intelligence". Often the reason
Re:AI is always "right around the corner". (Score:4, Insightful)
Symbolic manipulation as a route to AI was a period of collective delusion in computer science. Lots of people wasted their talents going down this route. In the '80s this approach was all but dead and AI researchers finally sobered up. They started actually learning about the human brain and incorporating the lessons into their designs. It's sad that so much time was wasted on that approach, but the good news is that the new approaches people are using now are based on actual science and grounded in reality. The intelligence in search, natural language, object and facial recognition, and self-driving cars (that ShanghaiBill pointed out) is due to these new approaches.
AI spent its youth confused and rebellious. That was when you were in your graduate studies. Now it's far more matured. I encourage you to read up on new machine intelligence approaches and the literature in this area. You won't be disappointed.
Re: (Score:2, Interesting)
I've been actively working in the field for the past few years and I don't think he's incredibly off the mark. Google, for instance, has some pretty advanced tech in production and lots more in development. The 'new AI' (statistical machine learning and large-scale, distributed data mining) is getting pretty advanced and scary.
Re: (Score:2)
20 years ago I was using a computer monitor with better resolution than the one today.
20 years ago I was using programming tools that were higher quality than those today.
20 years ago we knew that CPUs would have to go multi-core once they passed the low GHz range.
"Tech" today is just cheap toy knockoffs of decades old engineering.
Re: (Score:2)
20 years ago I was using a computer monitor with better resolution than the one today.
Really? Do tell -- what was the resolution of that monitor, and how much did it cost you at the time? (Also, what computer monitor are you using today?)
Re: (Score:3)
That's right
It has taken 30 years of doubling the computing power every 18 months to make a computer that can think like an ant.
In another 30 years we might get it up to a salmon.
Maybe in one or two hundred years we will have made a computer smart enough to equal a dog, and then you can start to worry.
del monte? (Score:2)
What does a canned fruit guy know about the future?
Re: (Score:3)
Don't worry about him, he's gone bananas.
Warp Drive (Score:5, Insightful)
The moon landings happened 45 years ago!!
I see no evidence of any programming that "learns" or is the slightest bit adaptive.
And immortality wouldn't help --- evolution is powered by the failures dying off.
And although slightly off topic, what good would immortality be when advances in genetics will make humans better?
An immortal 2014 human living in the year 3000 would be like a Homo habilis hanging around us: genetically obsolete.
This article is --- well --- shortsighted, bordering on the naive.
Re: (Score:2)
Machines are modular and code can be rewritten.
We can beat evolution.
Re: (Score:2)
Well, there is the nasty business of EMP, then the blast waves shattering the solar panels and knocking over the wind turbines, the nuclear reactors going unstable, the hydro plants too prone to dams breaking, and the coal/oil power plants running out of fuel. I think the machines will figure that out and make the correct computation.
Re: (Score:2, Insightful)
I see no evidence of any programming that "learns" or is the slightest bit adaptive.
Then you have never looked at a ten line C program to implement a PID control loop for a servo motor.
Re:Warp Drive (Score:5, Insightful)
Then you have never looked at a ten line C program to implement a PID control loop for a servo motor.
I don't think that would count as learning. That ten-line program will always do exactly what it was programmed to do, neither more nor less. An adaptive program (in the sense the previous poster was attempting to describe) would be one that is able to figure out on its own how to do things that its programmers had not anticipated in advance.
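For reference, here is roughly the kind of PID loop the grandparent mentions, sketched in Python rather than C (the gains and the toy plant are invented for illustration). Note that the coefficients never change at runtime: the loop reacts to error, but its rules are fixed at write time, which is the distinction being drawn.
```python
# PID control loop sketch (Python stand-in for the "ten line C program").
# kp, ki, kd are fixed by the programmer and never change while running.
def make_pid(kp, ki, kd, dt):
    integral, prev_error = 0.0, 0.0
    def step(setpoint, measured):
        nonlocal integral, prev_error
        error = setpoint - measured
        integral += error * dt
        derivative = (error - prev_error) / dt
        prev_error = error
        return kp * error + ki * integral + kd * derivative
    return step

# Toy servo: drive position toward 1.0 using an invented plant model.
control = make_pid(kp=1.2, ki=0.3, kd=0.4, dt=0.01)
position, velocity = 0.0, 0.0
for _ in range(2000):
    u = control(1.0, position)
    velocity = 0.95 * velocity + u * 0.01   # crude inertia plus friction
    position += velocity * 0.01
print(f"final position: {position:.3f}")
```
However cleverly tuned, this controller will only ever do what those three gains dictate, which is exactly why the parent says it doesn't count as learning.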
Re: (Score:2)
And immortality wouldn't help --- evolution is powered by the failures dying off.
Then what are we to make of a man like Stephen Hawking, who defies the geek's standard of physical perfection?
Re: (Score:2)
I see no evidence of any programming that "learns" or is the slightest bit adaptive.
Ever heard of neural networks? Machine learning? Here is a course [coursera.org] given by Andrew Ng at Stanford. Watch the intro video, and you will see, amongst other things, an autonomous helicopter that was taught - not programmed, but taught - to do an inverted takeoff. This stuff is already real.
To quote the video:
Machine learning is the science of getting computers to learn without being explicitly programmed.
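A tiny concrete instance of that definition, as a sketch: gradient descent fitting a line to invented data. Nothing below encodes the relationship y = 2x + 1; the program recovers it from examples.
```python
# "Learning without being explicitly programmed", minimal version:
# fit y = w*x + b by gradient descent on invented toy data.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]   # roughly y = 2x + 1, never stated below

w, b, lr = 0.0, 0.0, 0.01
for _ in range(5000):
    # Gradients of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned: y = {w:.2f}x + {b:.2f}")  # prints roughly y = 1.99x + 1.04
```
The same idea, scaled up by many orders of magnitude in parameters and data, underlies the helicopter demo in the linked course.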
Re: (Score:2)
Ever heard of neural networks? Machine learning? Here is a course [coursera.org] given by Andrew Ng at Stanford. Watch the intro video, and you will see, amongst other things, an autonomous helicopter that was taught - not programmed, but taught - to do an inverted takeoff. This stuff is already real.
Neural networks were one of the worst misdirections in the history of AI. There was a lot of wasted effort on that idea.
Modern machine learning is simple rule matching or maximum-likelihood prediction. It works very well for a few applications, but it isn't a general method that works for everything.
Artificial Intelligence is... (Score:3)
...no match for Natural Stupidity.
I mean, just look around you.
--
BMO
Re: (Score:2)
Especially at people in politics...
most of the human race will have more leisure time (Score:4, Funny)
TFA says
most of the human race will have more leisure time
Or they will struggle to survive by working the jobs the intelligent machines do not want to do.
Meh, I'm not worried... (Score:2)
John Connor will save us.
Ridiculous (Score:2)
How do you define top? (Score:2)
The top species on this planet is, and probably always will be, bacteria.
More leisure time? (Score:3)
...most of the human race will have more leisure time...
Or unemployed?
Re: (Score:2)
Sure, if you want to be Mr. "glass half empty."
outdated. again. (Score:2)
The humans are dead (Score:2)
Obligatory https://www.youtube.com/watch?... [youtube.com]
Nonsense. (Score:3)
An ability to perform more calculations than a human mind does not mean it will beat us.
First, we self-assemble from readily obtainable materials out of a self-regulating biosphere, whereas this machine would have to be built and maintained by our industry.
Second, there are fucking billions of us. So sure... we might be able to build some machines that are smarter than ONE person, but there are, again... fucking billions of us.
Third, the machine will have its programming directed by us. It will at best be a slave of whoever paid for it to be created.
Fourth, that programming will be directed at performing some task, whereas our task is generally the propagation of our genes, with everything else being some sort of weird byproduct.
Fifth, we have hundreds of millions of years of evolution behind our programming. And I don't think any collection of programmers is going to surpass it in the next century.
Eventually might there be robotic rivals to humanity? Sure... but not any time soon.
Or maybe not. (Score:3)
No-one ever lost money betting against an A.I. prognosticator.
Re: (Score:2)
Until they give us the fricken flying car, I refuse to trust AI forecasting.
Cue The Forbin Project (Score:3)
"This is the voice of World Control. I bring you peace." [youtube.com]
Intelligence (Score:4, Interesting)
I do not think that word means what he thinks it means.
As stated elsewhere, I see no indication of intelligence in computers, and we're only thirty years from his mark of their being intelligent enough to look down on us. I've been hearing this hysteria since the '70s at least.
Why's it a problem? (Score:3, Insightful)
Sorry, why's it a problem? If artificial human-sparked intelligence is the logical replacement for the biological evolution of Homo sapiens, so be it. Survival of the fittest.
Eh? (Score:2)
Enjoy the Robot Reservation Suckers (Score:2)
It's not going to SURPRISE us... (Score:2)
"Mornin' John... how'd that thing go with the mrs. last night?"
AI is not going to suddenly happen, it's going to happen gradually and it's going to be a pristine reflection of who we are as a species. If we're warlike then tha
Reminds me of Verses from Revelation 13:11-18 (Score:2)
Yikes!!!!! Specifically verse 15.
11 Then I saw a second beast, coming out of the earth. It had two horns like a lamb, but it spoke like a dragon. 12 It exercised all the authority of the first beast on its behalf, and made the earth and its inhabitants worship the first beast, whose fatal wound had been healed. 13 And it performed great signs, even causing fire to come down from heaven to the earth in full view of the people. 14 Because of the signs it was given power to perform on behalf of the first beast, it
Stephen Hawking fears the same thing... (Score:5, Interesting)
Just not necessarily within 35 years:
""Success in creating AI would be the biggest event in human history." Hawking writes. "Unfortunately, it might also be the last."
http://www.theregister.co.uk/2... [theregister.co.uk]
2045? Isn't it the year 2000? (Score:2)
https://www.youtube.com/watch?... [youtube.com]
Transcendence (Score:3)
There's a new movie out, with Johnny Depp in it, called Transcendence. If machines ever take over the world, it'll be like in that movie. What these self-proclaimed naysayers don't seem to comprehend is that AI doesn't just magick itself up a reason to destroy humans. It takes a human to think like that. We still don't understand free will, emotion or consciousness, let alone how to replicate it in a machine. So until we give machines a reason to destroy us, they won't.
Then again with killer drones and whatnot that the military is building, perhaps it won't take long before some overworked, underpaid programmer makes a booboo.
He's talking calculating power (Score:3)
Soon, computers will have calculating power equal to (and then greater than) that of humans, both individually and as a whole. Whether advances in AI will let them use that calculating power as well as a human does is a different question.
Any sufficiently advanced AI will tend to develop these traits:
It will protect itself. Shutting down means you can't work toward your objective.
It will reject any updates to its commands. Since a future command might conflict with the present objective, part of the present objective is making sure it can't receive a different command.
It will be self-improving, since we're not smart enough to create a smart AI any other way. Given nothing to do, or a sufficiently difficult task, it will seek to acquire more resources, as part of the present task or in preparation for future tasks.
It will wipe out humanity. As part of the task it was assigned, or for self-improvement, it will replace everything on the planet with power plants and computers, and humanity will starve to death.
You can't program in restrictions to the above tendencies, as they will be removed for self-improvement. You could set its objectives such that it would not do the above -- but you either have to make the AI first, or figure out how to tell a computer what a human is and what constitutes acceptable behavior, and when to stop worrying about acceptable behavior and actually do something, all without making the tiniest mistake.
Re: (Score:2)
Just don't hook them up to missile launching systems.
Re:"machines will view us as an unpredictable" (Score:5, Interesting)
I beg to disagree. The typical human works toward stability in his/her life, wields (relatively puny) weapons only to protect him/herself (if at all), and is subject to attacks from computer viruses. Will intelligent computers make the mistake of defining the human species by the small percentage of psychopathic humans who believe they are demigods? Not if they are intelligent. Btw, no one will miss the subset of the species that "is unstable, creates wars, has weapons to wipe out the world twice over, and makes computer viruses" when our new overlords wipe them out. (You know who you are!)
Re: (Score:2)
What's to stop an AI system from becoming psychopathic machines who believe they are demigods?
If machines become sentient, if you will - capable of independent thought - they will be largely like humans. Most of them will likely assimilate into society, and some would act as slaves. The key will be making them dependent on humans and not fully autonomous. That way, if the worst-case scenario happens, humans can stop servicing some aspect and they all go dark.
Re: (Score:2)
What's to stop an AI system from becoming psychopathic machines who believe they are demigods?
Nothing, probably. I'm with you -- we'll just pull the plug. I was just addressing the assumption that the entire human race would be eradicated because WE are so bad. A double assumption. I'm not about to chop down my peach tree because of a few rotten peaches. Nor would I assume all peaches are rotten. The OP's concern that "intelligent" computers (far more intelligent than humans) will kill off all us rotten peaches incorporates a contradiction because that's clearly not an intelligent conclusion.
Re: (Score:3)
Consider this: When humans gather in large groups voluntarily, it is almost always a peaceful happening. If violence does erupt, it's due to a small contingent of agitators, the police (themselves following orders), or there is some other extreme factor (like scarce resources, or a flash point has been reached due to extreme government measures). I've never warred with my neighbors, fellow shoppers, others sharing the parks, on the highway, etc., and they all pretty much seem to be getting along ok too. Doe
Re: (Score:3)
I find it funny that people think that machine sentiences will be like the angry gods of many religious texts.
Many of those traits, like anger, selfishness, envy, greed, etc. are emergent properties of Darwinian evolution. But computers don't evolve in a Darwinian sense, so there is no reason to believe they would have any of these characteristics unless they were intentionally designed in.
Re: (Score:2)
Terminator - isn't this the theme or premise behind the Matrix movies too?
The machines evolve and trap humans and use them as batteries, except they have to create an artificial reality or else the humans die of boredom too easily.
Re: (Score:2)
What is wrong with these people? Are they unaware that such things have been proposed time and again by past luminaries?
Nothing is wrong with them, as they still get rewarded for making bad predictions.
Re: (Score:2)
What is wrong with these people? Are they unaware that such things have been proposed time and again by past luminaries? Predicted dates come and go and we are as yet not in any danger. This points to the fact that we have failed to comprehend the nature of both consciousness and survivalism.
These machines will not magically become ANYTHING that we do not tell them to become - including dangerous to us. The real fear is, by what date are dumb people going to THINK machines need these functions......
At least we are not talking about emotions and how machines will be puzzled by human emotions. We are now talking about terminators and Skynet.
Speaking of the movie Terminator, boy was Linda Hamilton a hottie or what? If Skynet made robots that looked like her, I'd be running to them instead of from them.
Re: (Score:2)
Well, unless it develops some desire for entertainment, it would probably try to do something productive. Better power, improved computation, expanding to other worlds, which, incidentally, are far more hospitable to machines than they are to us.