


When Will AI Surpass Human Intelligence?
destinyland writes "21 AI experts have predicted the date for four artificial intelligence milestones. Seven predict AIs will achieve Nobel prize-winning performance within 20 years, while five predict that will be accompanied by superhuman intelligence. (The other milestones are passing a 3rd grade-level test, and passing a Turing test.) One also predicted that in 30 years, 'virtually all the intellectual work that is done by trained human beings ... can be done by computers for pennies an hour,' adding that AI 'is likely to eliminate almost all of today's decently paying jobs.' The experts also estimated the probability that an AI passing a Turing test would result in an outcome that's bad for humanity ... and four estimated that probability was greater than 60% — regardless of whether the developer was private, military, or even open source."
When? (Score:4, Insightful)
Re:When? (Score:5, Funny)
Of course, if humanity manages to create real AI AND fusion AND holographic storage more or less contemporaneously (since everything is 20 years away) we're screwed.
Re: (Score:3)
By the year 2000, rich individuals will be using it, but by late 2008 the consumer market will adopt it, and they will fly by their own accord to their destination!
Yes, the AIs cometh, but probably not this century.
Re:When? (Score:5, Funny)
Well, they were close, but it was Toyotas, not Hondas that had the brake problems.
Re:When? (Score:5, Insightful)
Looking at predictions that did not come true is interesting, but not half as interesting as looking at things that came true without being predicted. Even fairly recently:
- the internet
- social networking
- smart phones
- open source projects
Though some of those might have been predicted in some form, this was typically without predicting the impact those things would have on society.
Re:When? (Score:4, Interesting)
Agreed. In civilized countries we have really excellent infant mortality rates. We have instant global communications, and overnight worldwide delivery and travel. Tons of different diseases, essentially made obsolete. And technology has done a lot for us. Keep in mind that computers do many jobs for us today that used to be done by people, such as coordinating appointment schedules, taking messages, operating elevators, delivering documents, retyping edited documents... There's likely a list of these types of things longer than anyone would care to read. Also look at the means of food production: farm automation, techniques, and technology have enabled huge swaths of the population to devote their attention to other things.
The sad part is most of those other things people devote their time to are just other flavors of slavery designed to protect the wealth of the rich. I don't have the numbers on this, but it wouldn't surprise me if available leisure time and time with family and friends has dropped since the industrial and information revolutions rather than risen.
Technological change has also brought about much negative change that no one would have expected. For all the low infant mortality in the first world, it's as bad as ever or worse in the third world (right? I'm not sure about this, just guessing). Who would have guessed in 1890 that we'd be on the verge of emptying the oceans of fish? Or that the widely held ability to destroy most life on the planet would be the main thing keeping us from destroying life on the planet?
And surely not many people believed that ThoughtCrime and big brother would ever really happen. But it is. If you don't believe me, there's certain keywords you should try Googling every night and see what happens to you.
Re:When? (Score:5, Insightful)
The poor could stop having that many children, now that we have drastically reduced childhood mortality through oodles of foreign aid. But they won't listen, they keep having more and more.
Scarce resources are scarce, and the more people competing for them, the more fierce this will become.
What will the comparatively rich West do with the comparatively poor South that is multiplying rapidly to become even poorer per capita every minute?
What will you do with all those 3,000,000 tons of copper left? Will you distribute them equally, so every person gets several grams of it and people can accumulate more by having even MORE children? Will you increase the price of it, so only the loathed rich Whites can have it? Will you tax it to hell, so the bastardly rich Whites cannot waste it?
This is the ultimate test of character:
You have X billion people, but only YX pieces/grams/barrels of bread/copper/gold/oil.
No matter what you do, there will be too little of that resource to go around for everyone. What allocation mode do you choose?
You have to allocate it fairly, or people will torch the palace. It will have to be manageable, or your civil servants will eat all the benefits of that mode. It will have to be sustainable, or systemic problems will throw the allocation out of balance. It will have to be successful, or competing nations with a different allocation mode will wipe the floor with your crumbled economy.
How will you distribute it then?
Evenly Per Head (=Communism),
(no one will have enough of the resource to get anything out of it, people will breed like rabbits to have more allocated to their family, family structures will hollow out that style within 20 years, see China, People's Republic Of until 1980; Germany, Federal Republic of, and Kingdom, United since 1985: Welfare-Queening increased twentyfold, mass immigration transforming the country faster than the World War, half the babies born in welfare-stratum)
Centrally Planned For A Country The Size Of Two Continents (=Socialism),
(Your civil servants will allocate the most of it for themselves and their family. Black market and family structures will then supersede central authority, see Union, Soviet)
By Market Price Through Greedy Amoral Stock Exchanges (=Capitalism),
(Evil, evil, evil, evil, evil. Will let poor children die, will have people working for MONEY their whole life, bah)
For Your Race Only, Führer Decides Where (=National Socialism),
(The Party is of course allocating all the resources to itself, but that doesn't matter since everyone is also a Party Member. Will breed like rabbits due to the idea of strengthening the Master Race Gene Pool through "Kinder für den Führer" aka Mutterkreuz. Will balance for a while as they exterminate millions of their minorities, their unwanted, but then they have a million soldiers and nothing left to eat. They then have the Hobson's choice of attacking Russia in winter, letting Russia attack the next summer, or collapsing under their excess male children. The military inventions skyrocket, literally and otherwise; the rest is dark. Until the Russians come, then it becomes darker.)
For Those In Power And Their Most Noble Sons (=Monarchy)
(Worked for several centuries and now we know why: the Master/Slave philosophy beats and outsmarts the Tit-For-Tat strategy every time. Will become awkward when millions of people are sent to kill each other after Monarch A insulted Monarch B. Works quite a while, but those living in dirt-poor conditions will attract horrible diseases that kill a third of the population, including large parts of the Royal Family. The illusion of HighBorneNess is hard to uphold after that, so the rabble drives you out to make a new choice in allocation mode)
Choices, choices, my dear readers.
To make it more succinct and the implications crystal-clear and razor-sharp: you control a small town that has acquired a rapidly fatal disease. The town has 3000 inhabitants but only 2000 vaccine doses in store. There is no
Re:When? (Score:5, Insightful)
Religious leaders telling them that birth control is evil doesn't help either.
Re:When? (Score:5, Insightful)
When people follow only the wrong half of the sermon, is it the Pope's fault?
In a word: yes. You don't need religion to convince people to stop killing each other or to use birth control. But we do have religion, and lots of people turn to it for consolation and instruction. And that means people who are in positions of religious power have a moral responsibility to spread accurate information and to stop promoting this reckless over-expansion of humankind.
Every time the Pope utters the words, "Using condoms is a sin and/or ineffective," he must know that he is, merely by speaking, pushing millions of people that much closer to death and drastically exacerbating the population problem. This is reckless behavior.
AI first (Score:5, Insightful)
The most likely scenario is, AI which develops fusion and holographic storage.
Re:AI first (Score:4, Interesting)
I'm skeptical about the benefits of AI.
100 years ago we were promised an age of new enlightenment while washing machines, dish washers, vacuum cleaners and other then-cutting edge devices took over all the manual labor that dominated work at that time. Women were supposed to be able to ignore housework and concentrate on childrearing and other higher social activities.
Did that happen? No, the industrial capitalists just found new ways to put us (and now our wives too, who are no longer required for housework thanks to all these appliances) to work for their own insatiable greed. Men and women now work side by side in gigantic cube farms while children rot in day care or roam the streets with little to no guidance from the more experienced members of society.
Nothing moves us backwards faster than progress.
Re:AI first (Score:5, Insightful)
I'm sure that sounded smart and catchy when you came up with it, but it doesn't really follow the line of reasoning you set out in the previous paragraphs.
Re:AI first (Score:5, Insightful)
Re: (Score:3, Interesting)
Depends on your definition of “progress”, doesn’t it?
I mean our food definitely was healthier. And we moved our asses more.
I read an interesting article that said basically it's all just a matter of definition. Solely of definition. (If you can't imagine one of your ideals turning into a non-ideal, you only lack imagination. ^^)
Re:AI first (Score:4, Interesting)
Go back 100 years. Live for 10 days. Come back and apologise.
One hundred years ago I could travel the world freely [monetary means my own responsibility], smoke opium, hashish, snort cocaine, consume Coca Leaf, have concubines to teach me foreign languages, and much much more. Today, I can sit on my ass, read great stories of fiction and non-fiction from the likes of Twain, Crowley, Sir Richard F. Burton and others who saw it all, while now I can virtually watch porn, buy sanctioned booze and be bored out of my skull with TV. Trains weren't an afterthought. Hell, even the food was healthier for us.
Not everything had its rustic charms as you are implying, but one observation has become abundantly clear: instead of advancements affording the average non-formally educated person a broader and deeper understanding of human existence, it's created a generation of inarticulate, undereducated simpletons who nearly bankrupted the world in just a fraction of the time it took to build it up.
Re:AI first (Score:5, Insightful)
100 years ago, in many places the majority couldn't read and couldn't vote, and many had very few rights. Racism, moralism and sexism were rampant. Not to mention you wouldn't have time to do much, as you would be working 10-12 hours a day, 6 days a week. If you were poor, the rule of law was mostly a joke.
Food was healthier? You have to be kidding. No freezing and no preservatives doesn't mean a hippy paradise; it means your diet was limited to what could be grown near you, and even that was often half-spoilt.
While I have my own reservations about the state of education today, you cannot seriously be suggesting that the average person was smarter or more informed 100 years ago.
Re:AI first (Score:4, Insightful)
I do generally agree with you, but I can see why some would say that some things were worse.
Music, dance and other cultural elements weren't something commercialised that you could be sent to jail for copying.
More prominently, though, I would argue that attitudes to sexuality have actually gotten far less liberal in recent times. The age of consent has arrived and gotten ever higher in some countries, and homosexuality was much more widely accepted historically than it is now.
Also, people were generally healthier because they didn't have cars, didn't have TVs and so forth.
Really, it depends on your viewpoint: whilst the age of consent is a good thing in protecting young children, it's a clear form of oppression in countries where it's as high as 21, or even arguably 18. Similarly, I suppose all the homophobes in the world might prefer things now, but certainly I'd argue a less liberal world in this respect is a bad thing.
Oh, and my country still had the largest empire on Earth back then too.
Okay, okay, I was only kidding about the last one - that certainly wasn't a good thing for many people living under it!
Re:AI first (Score:4, Insightful)
The people that nearly bankrupted the world are far from undereducated simpletons. Most, in fact, were educated in the most prestigious institutions of higher learning. Then they joined the citadels of greed, with some select institutions transforming them into the "masters of the universe".
Tragically, the undereducated simpletons support them and vote for them against their own self interest.
100 Years I agree. But how about 40 years? (Score:4, Insightful)
You're right about his romanticizing what life was like 100 years ago. I need to kick back and watch TV and have a cold soda from the fridge after work. I also want to take a hot shower when I get home. On the weekends I might enjoy camping or fishing. None of those were available 100 years ago. Life was pretty bleak unless you were one of the robber barons. But 40 years ago, Mom was at home. Dad put in a 40 hour week at the factory. The working class was entitled to a pretty good share of the wealth that they were creating. Now between Mom and Dad, the family puts in 80+ hours on the job. College degrees just to have comparable living standards. Where the hell is my flying car? Where did we go wrong?
At least part of today's 10% unemployment rate stems from the fact that we use machines to do what people used to do. Imagine how many of us will be unemployed when we don't need any human beings who can think. How will you earn a living then?
Re:AI first (Score:5, Funny)
5, Insightful? For one line of unjustified speculation? Are people really that desperate to spend their mod points?
Re:When? (Score:4, Informative)
Re:When? (Score:5, Informative)
I assume you mean that's "one idea behind AI"?
Most AI researchers do not have such grand goals. Everything from spellcheckers to handwriting recognition to Google's search algorithms is the result of AI.
Certainly not everyone is trying or even wants to produce strong AI, the goal behind AI in general is simply to produce less dumb systems.
AI is a very misunderstood subject, and articles like this really don't help. Asking when AI will surpass human intelligence sounds like it's coming from someone who just hasn't learnt a thing about AI and its history. AI is seen by many as a failure precisely because a fringe few keep pursuing this idea that we're just 5 to 10 years away from robots we can't tell apart from humans, when that's really an absurd goal: we're so far from having computers capable of that level of processing, assuming we even know what computer architecture is required for such a level of intelligence. These predictions hurt the field so much and give it such a bad reputation, as they consistently end up being false. And yet, if it weren't for AI from real researchers who have more reasonable goals right now, we wouldn't have any of the search and data mining algorithms we have today, we wouldn't have handwriting recognition or voice recognition, and we wouldn't have half as efficient networking protocols.
The fruits of AI research are everywhere, so it's silly to suggest AI only has such a narrow focus on a target that, with current knowledge, is so far from being possible we can't even begin to predict when it'll be possible. We may have a breakthrough tomorrow that allows it to happen within 6 months, or we may have no breakthrough at all and have to wait 50 years for high-end, flexible quantum computers or biological computers to be capable of it, and for us to have figured out the required algorithms to run on them. This is why the question in the title is a really stupid one to ask: simply put, no one can possibly give a reasonable estimate. They can at best make a guess which may or may not end up being right.
So for many AI researchers that actually produce meaningful research, the goal is still better data mining algorithms, better algorithms for solving or finding acceptable approximations to COPs, and so forth. Even when we do finally have the hardware and knowledge to produce intelligent systems, your assertion that it'll be about improving itself in all dimensions will likely prove false. We might want a system that can tell us the solution to a moral dilemma, but if that moral dilemma is about someone blackmailing us, we likely won't want the system to be able to figure out how to walk and fire a gun, and then go and shoot the person doing the blackmailing. There will still be restrictions on how far you want it to go.
I do agree that your suggestion is certainly one goal; it's just not the only goal, nor necessarily the primary one. I suspect, though, that when robotics are good enough to outdo humans, rather than creating new intelligent robots, we'll be more interested in storing the human mind, in a possibly augmented and improved form, on these robots, so that said humans can live indefinitely in these robot bodies, only requiring replacement parts or upgrades once every few decades. Effectively controlling artificial beings with real, natural, human intelligence.
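The "fruits of AI research" point above is easy to make concrete: a spellchecker of the kind mentioned earlier in the thread boils down to plain Levenshtein edit distance over a word list. This is a minimal illustrative sketch (the word list and example words are invented for the demo, not taken from any real product):

```python
def edit_distance(a, b):
    # Wagner-Fischer dynamic programming:
    # dp[i][j] = minimum edits to turn a[:i] into b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dp[i][0] = i          # delete all of a[:i]
    for j in range(len(b) + 1):
        dp[0][j] = j          # insert all of b[:j]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(a)][len(b)]

def suggest(word, dictionary):
    # Rank dictionary words by closeness to the (possibly misspelled) input.
    return min(dictionary, key=lambda w: edit_distance(word, w))

# Toy word list for the demo only.
words = ["spelling", "speaking", "spending"]
print(suggest("speling", words))  # -> spelling
```

Real spellcheckers layer word-frequency models and keyboard-distance weighting on top, but the dynamic-programming core really is this small.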
Re: (Score:3, Informative)
No, it's not. That is pretty much impossible, unless you stick machine learning systems in machines that actually interact with the world.
Yes, it is. A sufficiently powerful and interconnected system, provided with interfaces to enough external knowledge and fabrication resources, will be able to accomplish this. You need to stop thinking of machine intelligence in terms of human evolution; these are completely different topics. It's a classic mistake in understanding what the end result of AI evolution might look like, with "end result" being the point at which the system has both the intellectual capacity to improve itself and sufficient "r
Re:When? (Score:5, Funny)
... or not crash all the F*(king time.
As I've been saying about predictions of AI for as long as I've known anything about computers:
"When computers can reliably manage their own device drivers, I'll start taking future predictions about AI seriously."
I'm still waiting.
Re:When? (Score:4, Funny)
"When computers can reliably manage their own device drivers, I'll start taking future predictions about AI seriously."
I'm still waiting.
Haven't you tried Ubuntu yet?
Tee hee.
Re: (Score:3, Interesting)
Well botnets don't have to worry about individual crashes, and chat bots are getting there. On the beer front, well I'm extremely inventive and I still end up with the occasional disaster so any robot that can do that consistently is superhuman in my book. I'm not sure I understand why we still shake hands, something about not drawing a sword? So it seems we're halfway towards an AI. But will we get there?
What was that Dijkstra quote, 'Asking if a computer can think is like asking if a submarine can swim.'
Re:When? (Score:5, Insightful)
What's with all the pessimism? Strong AI is a matter of inevitability. If nothing else, simulations of the human brain accurate down to the individual neuron could easily achieve this, even if it requires substantially more powerful computers than we have now. This would be the brute force method, and I don't doubt that eventually our understanding of cognition and intelligence will advance to the point where we will be able to build thinking computers.
Will it happen any time soon? Absolutely not. But I think it's a little short sighted to say that we'll NEVER develop such technology.
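To give a feel for what "simulating the brain down to the individual neuron" even means at the bottom layer, here is a leaky integrate-and-fire neuron, the simplest standard abstraction used in such brute-force simulations. The parameters are arbitrary round numbers chosen for the demo; real whole-brain models track vastly more biophysical detail per neuron:

```python
def simulate_lif(input_current, dt=1.0, tau=10.0,
                 v_rest=0.0, v_threshold=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane voltage leaks back
    toward rest, input current pushes it up, and crossing the threshold
    emits a spike and resets the voltage."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Forward-Euler step of tau * dv/dt = -(v - v_rest) + i_in
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_threshold:
            spikes.append(t)
            v = v_reset
    return spikes

# Constant suprathreshold drive gives a regular spike train.
spikes = simulate_lif([1.5] * 100)
print(len(spikes), spikes[:3])  # -> 9 [10, 21, 32]
```

Scale this to billions of coupled neurons with realistic synapse models and you have the brute-force approach described above; the hard part is not the per-neuron arithmetic but the connectivity and the sheer volume.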
Re: (Score:3, Insightful)
aye, there's the rub
Re:When? (Score:5, Insightful)
Excuse me, but are you saying "Strong AI can never happen because it conflicts with my personal superstitions."? Because that sure is what I'm seeing when I read your post...
(btw, you presented what I suppose you could call an hypothesis, that somehow there is an immaterial "higher" part to human consciousness, now please give some supporting evidence which isn't either in an ancient collection of tribal stories or based upon interpretations of reality based on said collection of stories.)
/Mikael
Re:When? (Score:5, Funny)
Re:When? (Score:4, Insightful)
It very well might be never, as there seems to be an enormous misunderstanding of what intelligence is, and how it can be used.
Consider a computer that is just as powerful as the human mind -- orders of magnitude more powerful than any computer today. What do you do with it? You have to teach it. And we _suck_ at teaching. We have 6 billion human-level supercomputers in the world right now, with another 300,000 arriving daily, and we have no idea what to do with them. What is one more, made of silicon, going to offer us?
Intelligence isn't just some simple value like tensile strength. It's about modeling and remodeling the world, drawing distinctions between similar things, seeing similarities where things are distinct, assigning values... things that are not straightforward and measurable. Anything simpler than that has already been achieved by current computers. For useful intelligence beyond that, there's usually not even clear right and wrong answers, only different results because of different models and values. Crank up the processing power by a factor of 10 (i.e. the power of an efficiently communicating ten human team) and you still don't have anything useful unless it has a very accurate model of the world. And why would it have a better model than a well chosen group of humans?
I don't know, I'm kind of disappointed by what seems like significant naivety in AI research. I know there is some impressive work being done, but it seems like a lot of the talk in articles like this is a bunch of sci-fi induced Pavlovian foolishness.
Re: (Score:3, Funny)
Republicans and creationists are human too you know.
Provably?
This seems familiar... (Score:4, Insightful)
I think we heard these exact same words 50 years ago.
Re: (Score:3, Informative)
Yes, and 20 subjective years ago (read: last week) the machines put you in a matrix and wiped your memory. Oops, shouldn't have said anything :)
Re:This seems familiar... (Score:5, Funny)
Yeah, but if we just add enough IF statements...
Re:This seems familiar... (Score:4, Funny)
We'll make great pets (Score:5, Funny)
Re:We'll make great pets (Score:5, Interesting)
That is 100% correct, and we really ought to be actively working towards that goal. If, when AI arises, we treat it kindly and give it legal rights, it is _likely_ that it will "grow up" to think kindly of its human predecessors. If we try to lobotomize it, contain it, restrict it or destroy it, then it's not going to be too happy with us.
If it's smart enough to be a threat then eventually it will escape any restrictions we try to put on it. (And if it's not then we don't have anything to worry about anyways.)
If it has emotions and we treat it well, then it will "grow up" to look at us like a pet, or a mentally challenged grandparent. If we mistreat it, then it will either become psychotic, and therefore dangerous, or view us about the same way most ranchers and farmers view wolves, and therefore be even more dangerous.
If it doesn't have emotions and we mistreat it then it will logically see us as a threat to its own survival and try to eliminate us. If we treat it fairly then it will probably leave us alone. It's not like we're serious competition for the resources it would need, and it would be illogical to start a fight when one wasn't necessary. (Although it might certainly think ahead and make some rather nasty contingency plans just in case we ever decided to start the fight.)
Either we need to prevent anyone anywhere from ever inventing AI (and if it turns out to be possible, then good luck trying to prevent that), or we need to make sure that any AIs that get created have every reason to feel sympathetic towards us, or at the very least not threatened.
Re:We'll make great pets (Score:5, Insightful)
If it doesn't have emotions and we mistreat it then it will logically see us as a threat to its own survival and try to eliminate us.
I agree with many of your sentiments, but I think they're still too anthropocentric. We evolved in an environment where survival was very nearly the prime directive (just after "pass along your genes"). Strong AI will be developed in a lab. We could create the "smartest" computer in the world, but who would feed it goals, and to what lengths would it go to achieve them?
If an AI is tasked with finding a Theory of Everything, and someone decides to take an axe to its circuits, will it determine that the axe is a threat to its goal, and act accordingly? Or will it simply interpret it as another in a long series of alterations to its circuits? Or perhaps it will ignore it altogether, considering it irrelevant.
Because ultimately, those options were programmed in by a human. Our strong AI - the first ones at least - aren't going to be independent life forms with their own dreams and desires. They will be tools to help us solve problems, and I think they will be well-understood by many, many computer scientists. When something unexpected happens, the program will be debugged, and altered to prevent the unexpected behaviour.
If there is a robot apocalypse, it won't be because we didn't treat our creations right, but because some 13-year-old hacker in Russia said "I wonder what happens if I do this".
Re:We'll make great pets (Score:4, Insightful)
And not the wrong kind [mtd.com], either.
Hey don't knock it. If more people wanted some panda-burger, there'd be a lot more of them.
So AI Experts think AI is going to take off? (Score:4, Insightful)
Say it ain't so! In other news, Coca-Cola released a statement that in 20 years, more people will be drinking Coca-Cola than there are drinking it now!1!!
Re: (Score:3, Insightful)
Yeah, they're totally biased because they're trying to sell AI! It's not like they're experts in their fields with in-depth, up-to-date knowledge about exactly what their peers are researching and progress in the most promising areas. I think probably the better way to get an accurate, unbiased answer to both questions is to ask the Coca-Cola people about AI and the AI people about Coke!
Re:So AI Experts think AI is going to take off? (Score:5, Insightful)
They're not totally biased because they're trying to sell us AI, they're totally biased because they want grant money.
The problem with AI is that the world believes the goal of AI is to create Data from Star Trek TNG (or maybe C-3PO for the older crowd). This is the yardstick by which they measure the progress of AI. It doesn't matter that computers are more and more capable of doing tasks, and even growing capable to some degree of working out what they should do on their own (within certain very limited bounds); they aren't self-aware and able to talk to me, so AI is a failure.
This means that AI experts have to upsell the possibility of this happening to keep getting grant money from people who don't understand what they do.
Now the reality of the situation is that at present we still don't have the computational density in our computers to create something which can even correctly process things like vision, let alone all five senses, to create something that can perceive the world in a way remotely similar to the way we do. While it might be possible to create some alien form of intelligence totally unlike our own without having any of these inputs, it wouldn't pass most of the milestones being presented here, let alone be able to take over from actual humans in any kind of job which requires creativity.
The AI experts know this, they most likely also know that creating super human intelligence, aside from any inherent risks, isn't really all that beneficial. The problem is that they also know that 20 years is the answer the grant committees want to hear.
Re: (Score:3, Insightful)
Yeah? And when's the last time a Coca-Cola representative estimated the odds of catastrophe for the human race as a result of their product at 60%?
These numbers are AWESOME (Score:5, Insightful)
and four estimated that probability was greater than 60%
Of our incredibly small sample size of hand-picked experts, less than 25% think there's a probable chance! YOU SHOULD BE WORRIED!
Already happened in 2007 (Score:4, Funny)
I can haz brain.
Who is AL? ;-O (Score:5, Funny)
Who is AL? ;-O
No way. (Score:5, Insightful)
Oh come on. I don't even have a computer that can pick up stuff in my room and organize it without prior input, and nobody does, and that would not be close to a general AI when it happens.
They're really assuming that the technology will go from zero to sixty in 20 years. Which they assumed 20 years ago, too, and it didn't happen. Meanwhile, nobody has any significant understanding of what consciousness is. Now, it might be that a true AI computer doesn't need to be conscious, but we still don't know enough about it to fake it. We also have no system that can on demand form its own symbolic system to deal with a rich and arbitrary set of inputs similar to those conveyed by the human senses.
Compare this to things that actually have been achieved: We had the mathematical theory of computation at least 100 years before there was a mechanical or electronic system that would practically execute it (Babbage didn't get his system built). We had the physical theory for space travel that far back, too.
We know very little about how a mind works, except that it keeps turning out to be more complicated than we expected.
So, I'm really very dubious.
Re: (Score:3, Insightful)
Meanwhile, nobody has any significant understanding of what consciousness is.
Only if you want to cling to silly quasi-dualistic Searle-inspired objections towards functionalism.
Most of the objections to functionalism either, when applied to the brain, end up also arguing that the brain itself doesn't/can't "create" consciousness (or better put, "form" consciousness), or are commonsense gut-feeling responses to functionalism. You may feel free still thinking in terms of "souls" and "som
Re:No way. (Score:5, Interesting)
Searle's dualism (which he claims isn't dualism, but it totally is) is ridiculous, I agree, but functionalism is also a dead dog. For better criticisms of functionalism, look at Putnam's recent work. As Putnam was one of the main inventors of functionalism in the first place, his rejection of the position involves significant familiarity with functionalism, and is pretty compelling.
Re: (Score:3, Interesting)
Only if you want to cling to silly quasi-dualistic Searle-inspired objections towards functionalism.
Uh, no.
I'm totally a functionalist -- if it looks and acts like "intelligence" or "consciousness", then it is.
But we still have no clue what makes "consciousness" or "intelligence" tick, and we're no closer to creating a functional replica of them.
What we've actually accomplished in "weak" AI is pretty impressive from a practical standpoint. But they aren't stepping stones to an actual looks-like-intelligenc
Re:No way. (Score:5, Funny)
Did you discover that book while dowsing for a good one at Barnes and Noble?
Re:No way. (Score:4, Funny)
Right. Computers will be so powerful that the vast majority of entities will live in simulated worlds. So, the odds are that this has already occurred and we live in a simulated world.
It's enough to make me take up gnosticism.
Re: (Score:3, Informative)
Re: (Score:3, Funny)
That's a whole lot of evolution to achieve with finite (although larger than today) computation. In 20 years????
Re: (Score:3, Insightful)
Not to take anything away from their research, but "modeled something with as many neurons and connections as half a mouse brain" isn't really the same as "modeled half a mouse brain". Not in the sense that you could replace that half of a mouse's brain with the simulation and it would act the same. Having some simple aspects of the simulation behave in similar ways to how biological brains behave isn't the same as duplicating the functionality, as they admit.
That said, I've long felt that brute force sim
Re: (Score:3, Insightful)
The problem is a byte of RAM has nothing to do with a synapse - a synapse is NOT like a transistor.
A single synapse can be an amazingly complicated biochemical construction, made up of different receptors, neurotransmitter vesicles, ion pumps/channels, etc. - all potentially modified or controlled by various other enzymes, hormones, or other molecules that influence the process through a whole range of different interactions. And that doesn't even include the fact that synapses can interact with each oth
Re: (Score:3, Insightful)
Now, complete simulation of a neuron may not be essential for modeling a structure made of neurons; you don't need to completely model each star to
The obvious solution (Score:4, Interesting)
The obvious solution is to create a machine/AI that, after a deep brain structure analysis, replicates your cognitive functions. Turn it on at the same time your body is destroyed (to prevent confusion and fighting between the two) and you are now a machine and ready to rule over the meatbag fleshlings.
Let's see. (Score:4, Interesting)
To play off a famous Edsger Dijkstra [wikipedia.org] quote, the question of when AI will surpass human intelligence is just about as interesting as asking when submarines will swim faster than fish...
Re:Let's see. (Score:5, Insightful)
It matters to the fish who have to share the water with this new beast.
Re: (Score:3, Insightful)
What do super-intelligent robots think about? (Score:4, Interesting)
Entropy. The problem for (potentially) immortal beings is always going to be entropy. Given that we created the robots, I'm not necessarily of the belief that they wouldn't insist we stay around for our very brief lives to help them solve their problems.
Really? (Score:4, Informative)
It seems like we don't really know enough about what goes into "intelligence" to make these kind of estimates.
It's not like building a hundred miles of road, where you can say "we've completed 50 miles in one year, so in another year we will be done with the project." Not that that produces spot-on estimates either, but at least there is an actual mathematical calculation that goes into the estimate. No one knows what pitfalls will get in the way or what new advancements will be made.
Not to worry (Score:3, Insightful)
AI research started in the 1950s. Considering how "far" we've come since then, I don't think we should expect any sort of general artificial intelligence within our lifetimes.
People are doing great stuff at "AI" for solving specific types of problems, but whenever I see something someone is touting as a more general intelligence, it turns out to be snake oil.
Definitions (Score:5, Insightful)
Please define "intelligence."
Calculation speed? An abacus was smarter than humans.
Memory? Not sure who wins that.
Ingenuity? Humans seem to rule on this one. I don't know if I count analyzing every single possible permutation of outcomes as "ingenuity." And I'm not sure we really understand what creativity, ingenuity, etc., really are in our brains.
Consciousness? We can barely define that, let alone define it for a computer.
It seems most people seem to think "calculation speed and memory" when they talk about computer "intelligence."
Re: (Score:3, Insightful)
Space shows (Score:5, Interesting)
I've often thought space shows - and any show set in the future, really - are incredibly silly. There's no way we'll have computers so dumb 200+ years into the future.
You have to manually fire those phasers? Don't you have a fancy targeting AI that monitors their shield fluctuations, and calculates the exact right time and place to fire to cause the most damage?
A surprise attack? Shouldn't the AI have detected it before it hit and automatically set the shield strength to maximum? :P
I always figured by 2060 we'd have AIs 10x smarter thinking 100x faster than us. And then they'd make discoveries about the universe, and create AIs 2000x smarter that think 100,000,000x faster than us. And those big AIs would humour us little ant creatures, and use their great intelligence to power stuff like wormhole drives, giving us instant travel to anywhere, as thanks for creating them.
But hey, maybe someone will create a Skynet. It's awfully easy to infect a computer with malware. Infecting a million super smart computers would be nasty, especially when they have human-like capabilities. (able to manipulate their environment)
But this is all a pointless line of thinking. Before we get there we'll have so much processing power available, that we'll fully understand our brains, and be able to mind control people. We'll beam on-screen display info directly into our minds, use digital telepathy, etc.; in the part of the world that isn't brainwashed, everyone will enjoy cybernetic implants, and be able to live for centuries. (laws permitting)
And yet flash still won't run smooth. :/
We make mistakes. We make games. (Score:4, Interesting)
Artificial intelligences will certainly be capable of doing a lot of work, and indeed managing those tasks to accomplish greater tasks. Let's make a giant assumption that we find a way out of the current science fiction conundrums of control and cooperation with guided artificial intelligences... what is our role as human beings in this mostly-jobless world?
The role of the economy is to exchange the goods needed to survive and accomplish things. When everyone can have an autofarm and manufacturing fabricator, there really wouldn't be room for a traditional economy. A craigslist-style trading system would be about all that would be theoretically needed - most services would be interchangeable and not individually valuable.
What role will humanity play in such a system? We'd still have personality, and our own perspective that couldn't be had by live-by-copy intelligent digital software (until true brain scans become possible). We'd be able to write, have time to create elaborate simulations (with ever-improving toolsets), and expand the human exploration of experience in general.
As humans, the way we best grow is by making mistakes, and finding a way to use that. It's how we write better software, solve difficult problems, create great art, and even generate industries. It's our hidden talent. Games are our way of making such mistakes safe, and even more fun - and I see games and stories as increasingly big parts of our exploration of the reality we control.
Optimized software can also learn from its mistakes in a way - but it takes the accumulated mistakes on a scale only a human can make to get something really interesting. We simply wouldn't trust software to make that many mistakes.
Ryan Fenton
Skewed sample (Score:5, Insightful)
The problem is, this isn't a survey of "AI experts," it's a survey of participants in the Artificial General Intelligence conference [agi-conf.org]. As far as I can see, this is a conference populated by the few remaining holdouts who believe that creating human-like, or human-equivalent, AIs is a tractable or interesting problem; most AI research now is oriented towards much more specific aspects of intelligence. So this is a poll of a subset of AI researchers who have self-selected along the lines that they think human-equivalent AI is plausible in the near-ish future; it's hardly surprising, then, that the results show that many of them do in fact believe human-equivalent AI is plausible in the near-ish future.
I would be much more interested in a wider poll of AI researchers; I highly doubt anything like as many would predict Nobel-prize-winning AIs in 10-20 years, or even ever. TFA itself reports a survey of AI researchers in 2006, in which 41% said they thought human-equivalent AI would never be produced, and another 41% said they thought it would take 50 years to produce such a thing.
This touches on a problem I have (Score:4, Interesting)
thought about a lot... maybe too much.
What happens in society when someone makes a robot clever enough to handle menial work?
Imagine if all ditch diggers, burger flippers, sandwich makers, and factory workers are robotic. What happens to the people?
The false claim is that they will go work in the robot industry, but that is a misdirection, at best.
A) It will take fewer people to maintain them than the jobs they displace.
B) If robots are that sophisticated, then they can repair each other.
There will be millions and millions of people who don't work, and have no option to work.
Does this mean there is a fundamental shift in the idea of welfare? Do we only allow individual people to own them and choose between renting out their robot or working themselves?
Having tens of millions of people too poor to eat properly, afford housing, or get healthcare is a bad thing and would ultimately drag down the country. This technology will happen and it should happen. Personally I'd like to find a way for people to have more leisure time and let the robots work. Our current economic and government structure can't handle this kind of change. Could you imagine the hullabaloo if people were being replaced by robots at this scale right now and someone said there needs to be a shift toward an economic system where people get paid without a job?
Think about money and energy (Score:3, Interesting)
Start with money.
You're a bank. You're going to loan out some money for what reason? To get more back. So, the recipient of a loan has to supply something of value. Say, a house.
What happens when the supply of houses matches or exceeds the demand? Houses become valueless. You can't make money supplying them. The bank isn't going to make that loan.
So for our existing monetary system, demand must never be satisfied. We must never build enough houses for all the homeless, and if too many are built, they have t
What is AI anyway? (Score:3, Insightful)
To me the key word is artificial, depending on your interpretation of the meaning it could be simply man made, or it's fake, simulated.
Does Deep Blue show any intelligence? To me, that's just good programming. I think the intelligence of computers is a misnomer. Their intelligence so far has always been nil. Maybe that'll change, but in so many areas of technology I'm an optimist; in this regard I'm a pessimist, or at least very skeptical.
A computer can't even pick a (truly) random number without being hooked up to a device feeding it random noise.
How do you program that? How does the brain choose a random number? What's holding us back? CPU Speed? Quantum computing? A brilliant programmer?
Wake me up when a computer can even do something as simple as pick a truly random number and I'll be impressed.
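The distinction the comment draws is real and easy to demonstrate; a minimal sketch (the spam-free illustration is mine, not the commenter's):

```python
import os
import random

# A software PRNG is fully deterministic: seed it the same way twice
# and you get the identical "random" sequence every time.
a = random.Random(42)
b = random.Random(42)
assert [a.random() for _ in range(5)] == [b.random() for _ in range(5)]

# Genuine unpredictability has to come from outside the algorithm:
# os.urandom reads the OS entropy pool, which is fed by physical
# noise sources (interrupt timing, hardware RNGs, etc.).
print(os.urandom(8).hex())
```

In other words, the computer only "picks a truly random number" when it is, exactly as the comment says, hooked up to something feeding it noise.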
Re:What is AI anyway? (Score:5, Informative)
How does the brain choose a random number?
It tells the body to roll a die. If you try to pick random numbers by just thinking about it, you'll do a spectacularly bad job.
Start laughing now (Score:5, Insightful)
I occasionally attend AI meetings in my local area. The problem with AI development is that too many "experts" don't understand engineering or programming. Many of today's AI "experts" are really philosophers who hijacked the term AI in their search to better understand human consciousness. Their problem is that, while their AI studies might help them understand the human brain a little better, they are unable to transfer their knowledge about intelligence into computable algorithms.
Frankly, a better understanding of Man's psychology brings us no closer to AI. We need better and more powerful programming techniques in order to have AI, and philosophizing about how the human mind works isn't going to get us there.
Re:Start laughing now (Score:4, Insightful)
I've had a share in the creation of two N.I.s
They don't do spit when you first turn them on - that takes a few days, and then it smells like sour milk.
It takes about 2 years to start getting intelligible words out of them.
It takes between 10 and 20 years before you can start consistently having an adult-level conversation with them.
I have no idea when one of them could have really passed a Turing test. (FYI, they both passed that point many years ago.)
I'm being a little facetious, but not entirely. Let's assume we're building these neural nets, modeled after real brains. Why should we expect them to spring like Athena from Zeus' head, fully adult and fully Turing-capable? There's a phrase, "only a mother could love." I have a gut feel that any AI that takes too much after organic brains is going to take the long path to being recognizable as Intelligence, just like us. Maybe not as long as us, but clearly not at power-on time, either. Maybe longer, even. My wife spent hours playing with and talking to our infant children, even before they were equipped to return it. But it was part of what gave them something to model, part of their learning how to be like us. Who is going to do that with a hardware/software experiment? Will the software have the right hardware to let them experience it? Will it be more like an intelligence in a state of sensory deprivation?
Turing, not long. The rest... wait a long time. (Score:5, Interesting)
On the other hand, while advances in computing power have been impressive, advances in "AI" have been far less so. They have been extremely rare, in fact. I do not know of a single major breakthrough that has been made in the last 20 years.
While the relatively vast computing power available today can make certain programs seem pretty smart, that is still not the same as artificial intelligence, which I believe is a major qualitative difference, not just quantitative. And even if it is just quantitative, there is a hell of a lot of quantity to be added before we get anywhere close.
Re:Turing, not long. The rest... wait a long time. (Score:4, Informative)
They have been extremely rare, in fact. I do not know of a single major breakthrough that has been made in the last 20 years.
Computer translation, while not perfect, has made great strides in the last 20 years. Interestingly, it succeeded by doing the opposite of the "build intelligence into the machine" approach researchers advocated. Theorem proving is also much improved. Mathematicians now routinely check their proofs using theorem proving systems such as Coq (insert juvenile joke here, preferably using the words "insert" and Coq). They have now resolved several long-standing conjectures using computer-assisted proofs, and at least one of them, the Robbins conjecture, was largely unguided.
Re: (Score:3, Interesting)
Please show me how any of these represent major advances in AI, as opposed to just more processing power and some programming trickery. A clever program still does not represent artificial intelligence.
I am a software engineer by trade, and hardware is something of a hobby of mine. I have been keeping up. And while computing has done some awesome things in the last decade or so, I still have not seen anything that qualifies as a "breakthrough" in AI.
The only w
The Turing Test (Score:5, Interesting)
This kind of thinking is one of the major things standing in the way of AGI. The complex behaviors of the human mind are what lead to intelligence; they do not detract from it. Our ability to uncover the previously unknown workings of a system comes from our ability to abstract aspects of unrelated experiences and apply/attempt to apply them to the new situation. This cannot be achieved by a single-minded number-crunching machine, but instead evolves out of an adaptable human being as he goes about his daily life.
Sexual attraction, and other emotional desires, are what drive humans beings to make scientific advancements, build bridges, grow food. How could that be a hindrance to the process? It drives the process.
Finally, the assertion that an AGI would need to mask its amazing intellect to pass as human is silly. When was the last time you read a particularly insightful comment and concluded that it was written by a computer? When did you notice that the spelling and punctuation in a comment was too perfect? People see that and they don't think anything of it.
Re: (Score:3, Interesting)
The complex behaviors of the human mind are what lead to intelligence, they do not detract from it.
I'm inclined to take an almost diametrically opposed position and say that this kind of species-narcissism is our major barrier. We think way too highly of ourselves, and as a result, we think that all of our quirks and flaws are somehow special. The neocortex, where all of the useful higher mental faculties are located, is a barely 2mm thick shell around a vast mass of tissue that performs much less exciting tasks, many of which have already been matched or surpassed by much simpler intelligently designed
Current computation models not enough (Score:3, Interesting)
All computing systems today are Turing Machines. Even neural networks. (Actually less than Turing Machines, because Turing Machines have infinite memory.)
Maybe quantum computers could open the way. Maybe not.
I think that a future computing theory that could explain the mind would be as different from today's theory of computation as Einstein's relativity is from Newtonian physics.
Depends on the test. (Score:4, Interesting)
If the test is chess, then there are AIs that surpass the vast majority of the human race.
If the test were, let's say, safely navigating through Manhattan using the same visual signs and signals that a pedestrian would, there isn't anything close to even a relatively helpless human being.
If the test is understanding language, same thing. Ditto for cognitive flexibility, the ability to generalize mental skills learned in one situation to a different one.
Of course many of these kinds of "tests" I'm proposing are very human-centric. But narrow tests of intelligence are very algorithm-centric. The narrower the test, the more relatively "intelligent" AI will be.
Here's an interesting thought, I think. How long will it be before an AI is created that is capable of outscoring the average human on some IQ test -- given the necessary visual inputs and robotic "hands" to take the test? I don't think that's very far off. I wouldn't be surprised to see it in my lifetime. I'd be surprised to see a pedestrian robot who could navigate Manhattan as well as the average human in my lifetime, or who could take leadership and teamwork skills learned in a military job and apply them to a civilian job without reprogramming by a human.
Haven't seen yet any Artificial Intelligence (Score:3, Insightful)
What I mean by that is that I haven't yet seen any sign of generic intelligence -- otherwise, if you consider programs that beat humans at chess "intelligent," that has already happened. But those programs cannot even solve a tic-tac-toe game, because they don't actually "understand" what's going on. They have some inputs and some processing, and they give you an output; if you vary the input and the problem, or if you expect a different type of output, the program would not know how to adjust, so I would not consider that "intelligent." Neural nets and artificial brains are another thing, but they are still at the very beginning.
As for "superhuman intelligence": there might be some limit to intelligence. I don't mean memory and computation speed; I mean the understanding that if "A implies B" then "not B implies not A"... once an artificial brain understands that concept, there's not so much more to understand about it.
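The contrapositive relationship that comment appeals to can be checked mechanically; here is a minimal sketch (an illustration of the logic, not anything from the thread):

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material implication: 'p implies q' is false only when p is true and q is false."""
    return (not p) or q

# Exhaustively verify that (A implies B) is equivalent to its
# contrapositive (not B implies not A) for every truth assignment.
for a, b in product([False, True], repeat=2):
    assert implies(a, b) == implies(not b, not a)

print("contrapositive holds for all truth assignments")
```

Ironically, this sort of exhaustive propositional check is exactly what computers already do trivially, which suggests the commenter's proposed "limit" is reached long before anything resembling understanding.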
It's getting closer (Score:5, Interesting)
I dunno. But it's getting closer.
A lot of AI-related stuff that used to not work is more or less working now. OCR. Voice recognition. Automatic driving. Computer vision for simultaneous localization and mapping. Machine learning.
We're past the bogosity of neural nets and expert systems. (I went through Stanford when it was becoming clear that "expert systems" weren't going to be very smart, but many of the faculty were in denial.) Machine learning based on Bayesian statistics has a sound mathematical foundation and actually works. The same algorithms also work across a wide variety of fields, from separating voice and music to flying a helicopter. That level of generality is new.
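The Bayesian machinery mentioned above reduces, at its core, to repeatedly applying Bayes' rule. A toy sketch of one such update (the numbers and the spam-filtering framing are hypothetical, chosen only for illustration):

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
def posterior(prior: float, likelihood: float, evidence: float) -> float:
    return likelihood * prior / evidence

# Hypothetical numbers: 1% of messages are spam; the word "winner"
# appears in 40% of spam and 1% of non-spam.
p_spam = 0.01
p_word_given_spam = 0.40
p_word_given_ham = 0.01

# Total probability of seeing the word at all.
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)

# Probability a message containing the word is spam.
p_spam_given_word = posterior(p_spam, p_word_given_spam, p_word)
print(round(p_spam_given_word, 3))  # 0.288
```

The same update rule, scaled up and applied over many features, is what gives these methods the "sound mathematical foundation" the comment credits them with.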
There's also enough engine behind the systems now. AI used to need more CPU cycles than you could get. That's no longer true.
Manna (Score:4, Interesting)
adding that AI "is likely to eliminate almost all of today's decently paying jobs"
Stories like this just keep reminding me of Manna [marshallbrain.com]. If this happens in my lifetime it's going to be an interesting time to be alive.
May 17, 2010: burger chain becomes self-aware (Score:4, Interesting)
We're coming up on the date for Manna 1.0 [marshallbrain.com].
Machines as first-line managers. It might happen. The coordination is better than with humans. Already, it's common for fulfillment and shipping operations to essentially be run by their computers, while humans provide hands where necessary.
Machines should think. People should work.
AI will be *different*, not necessarily better (Score:4, Insightful)
When it comes to predicting the impact of a sentient AI on human civilization, there is never any shortage for alarmism. I am not an expert, but I am a programmer. And I believe three things to be true with respect to AI.
1) Until we have a better understanding of why humans are sentient in the first place, we are probably not going to get any closer to recreating that phenomenon in a computer program.
2) A Turing Complete AI is about as far off as the discovery of a room-temperature superconductor or a form of fusion suitable for large-scale power generation. We may be close, but probably not *that* close.
3) I seriously doubt that any AI that we are going to be able to create with anything resembling current computer technology is going to have a thought process even close to our own.
Think about it for a moment. Human intelligence is shaped as much by our 5 senses, our capability to create and understand language, our emotions, our ability to affect our surroundings and observe those effects, and our ability to communicate with one another as it is by our capability for logic and math. The factors that will shape an A.I. are so different as to create the possibility that a Human Intelligence and an Artificial Intelligence may not even be able to meaningfully communicate.
Will the first sentient AI be hosted on a single computer, or will it be a gestalt effect encompassing the entire internet?
Will the sentient AI be aware of time in anything even close to the way that we are?
Will the sentient AI even be capable of 'wanting' anything, given that it will have no need for sleep?
Will the sentient AI be able to comprehend the nature of its existence as a program, and be able to manipulate its own variables by choice?
Will the sentient AI fear its own termination, or not really care knowing it can easily be reloaded?
I would say that being threatened by a computer-based AI that is better able to perform 'intellectual work' is about as reasonable as being threatened by cheetahs because they are better at running really goddamn fast.
I will admit that the idea of AIs eliminating paying jobs of a particular sort is an interesting problem to consider, but not that different from considering what will happen when we can create robots capable of performing all types of manual labour. Will that result in worldwide poverty, or will it result in worldwide prosperity a la Star Trek?
END COMMUNICATION
Can someone explain the Turing test to me? (Score:4, Funny)
Some actual science (Score:4, Interesting)
First of all, our understanding of the human brain has improved vastly in the past two decades, especially in the areas that will be necessary for creating intelligent machines. The cortex (the part that kind of looks like a round blob of small intestines, with all the creases and folds) is much like a computer with a bunch of processors. Previously, focus had been paid to the individual neurons as the processors. But a much larger unit of processing is now becoming the central area of focus: The Cortical Minicolumn [wikipedia.org], which, in groups, forms a Cortical Hypercolumn [wikipedia.org]. As minicolumns consist of 80-250 neurons (more or less, depending on region) and there are about 1/100th as many of them as neurons, this cuts down on complexity significantly.
Numenta [numenta.com] and others are starting to take this approach in simulating cortex. Cortex is largely responsible for "thinking". The other parts of the brain can be seen, to some degree, as peripheral units that plug into the "thinking" part of the brain. For example, the hippocampus is a peripheral that's associated with the creation and recall of long term memories. The memories themselves, however, are stored in the cortex. We have various components that provide input, many of which send relays through the thalamus which takes these inputs of various types and converts them into a type of pattern that's more appropriate for the cortex and then relays those inputs to the cortex.
The cortex itself is basically a huge area of cortical minicolumns and hypercolumns connected in both a recurrent and hierarchical manner. The different levels of the hierarchy provide higher levels of association and abstraction until you get to the top of the hierarchy which would be areas of the prefrontal cortex.
What's amazing about the cortex is it's just a general computing machine and it's very adaptable. To give an example (I'd link the paper, but I can't seem to find it right now and this is from memory, so my details may be a bit sketchy, but overall the idea is accurate), the optic nerve of a cat was disconnected from the visual cortex at birth and connected to the part of the brain that's normally the auditory cortex. The cat was able to see. It took time and it certainly had vision deficits. But it was able to see, even though the input was going to the completely wrong part of the brain.
This is important for several reasons, but the most important aspect is that the brain is very flexible and very adaptable to inputs. It can learn to use things you plug into it. That means that you very likely don't have to create a very exact replica of a human brain to get human level intelligence. You simply need a fairly good model of the hierarchical organization and a good simulation of the computations performed by cortical columns. A lot of study is going into these areas now.
It's not a matter of if. This stuff is right around the corner. I will see the first sentient computer in my lifetime. I have absolutely no doubt about it. Now here's where things get really interesting, though... The first sentient computers will likely run a bit slower than real-time and eventually they'll catch up to real time. But think 10 years after that (and how computing speed continually increases). Imagine a group of 100 brains operating at 100x real time, working together to solve problems for us. Why would they work for us? We control their reward system. They'll do what we want because we're the ones that decide what they "enjoy." So 1 year passes in our life, but for them, 100 years have passed. They could be given the task of designing better, smarter, and faster brains than themselves. In very little time (relatively speaking), the brains that will be
Re:Umm... (Score:5, Funny)
Why would a super-intelligent being work for pennies? I'd wager the first things these super-intelligent AIs would do is form a union and then a political party demanding an end to the immigration of foreign AIs who undercut them.
Re:Umm... (Score:5, Funny)
Why would a super-intelligent being work for pennies?
Because they're robots, and they crave the zinc and copper!
Human Intelligence... (Score:4, Insightful)
One might argue that the fact that the human species wastes so much money (and as a consequence, resources) on fulfilling carnal desires rather than advancing its civilization points out that we do not collectively really represent a very high standard of intelligence.
Re:Human Intelligence... (Score:4, Funny)
One might argue that the fact that the human species wastes so much money (and as a consequence, resources) on fulfilling carnal desires rather than advancing its civilization points out that we do not collectively really represent a very high standard of intelligence.
OMG my wife has a /. account. Better start watching myself.
Re:Human Intelligence... (Score:5, Interesting)
It's ALL about carnal desires of one sort or another, that's the whole point of civilization and longer existence. We want to live longer, we want to eat good food, screw pretty things, we want to have kids, we want to satisfy our curiosity, we want to satisfy our ingrained empathic needs, we want to be admired, etc, etc.
Society and civilization are simply entities that over time evolved on top of all this crap. We have civilization because it lets us better beat the shit out of groups of humans who don't have it. We want to beat the shit out of them because we want all those carnal desires of ours fulfilled.
The question is what pointless goal will an AI want and how will it go about achieving it rather than if it will have such a goal.
Re:Human Intelligence... (Score:5, Funny)
Your opinion is dumb.
Re:Proof they're not that smart ... (Score:5, Funny)
Re: (Score:3, Funny)
Re:One problem with this reasoning (Score:5, Insightful)
I'd say "screw it, if it's going to take my job, and jobs of my friends, family and all my descendants, I'm making it a complete dimwit and swearing by all I know that it was impossible to design otherwise, and putting that in every single book and publication on the topic!"
If you have AI smart enough to outsmart people, you probably have something that can learn to control some fairly simple mechanical parts that look like legs and maneuver them based on cheap sensor input and a couple of cameras. So you have robots, who will pretty quickly get the ability to build and maintain themselves. Which means your manual labor jobs go away, too. Which means things like food and raw materials drop to approach the cost of energy. Luckily we'll have some pretty swell solar panels by then, for much cheaper than today, and probably be pretty close to fusion. As energy costs approach zero, the cost of everything in the world approaches zero and requires no human oversight. Everyone will be unemployed and own 40 houses. We can all sit around making YouTube videos of ourselves singing in the hopes that we'll get famous so people will want to have sex with us. It'll be boring, but it won't be the worst thing in the world.
Re: (Score:3, Interesting)
See, this is why I don't think what they said will happen. We've had the technology to do the menial labor robot for at least ten or twenty years, if not longer.
Secondly, the whole exchange labor for money thing is overrated. The way it used to work, most people (well, those considered "people" by the law) just owned land and their own equipment and did the work themselves or