When Will AI Surpass Human Intelligence?

destinyland writes "21 AI experts have predicted the date for four artificial intelligence milestones. Seven predict AIs will achieve Nobel prize-winning performance within 20 years, while five predict that will be accompanied by superhuman intelligence. (The other milestones are passing a 3rd grade-level test, and passing a Turing test.) One also predicted that in 30 years, 'virtually all the intellectual work that is done by trained human beings ... can be done by computers for pennies an hour,' adding that AI 'is likely to eliminate almost all of today's decently paying jobs.' The experts also estimated the probability that an AI passing a Turing test would result in an outcome that's bad for humanity ... and four estimated that probability was greater than 60% — regardless of whether the developer was private, military, or even open source."

  • Really? (Score:4, Informative)

    by mosb1000 ( 710161 ) on Wednesday February 10, 2010 @07:53PM (#31092840)

    It seems like we don't really know enough about what goes into "intelligence" to make these kinds of estimates.

    It's not like building a hundred miles of road, where you can say "we've completed 50 miles in one year, so in another year we will be done with the project." Not that that produces spot-on estimates either, but at least an actual mathematical calculation goes into the estimate. No one knows what pitfalls will get in the way or what new advancements will be made.

  • by martin-boundary ( 547041 ) on Wednesday February 10, 2010 @07:58PM (#31092894)

    I think we heard these exact same words 50 years ago.

    Yes, and 20 subjective years ago (read: last week) the machines put you in a matrix and wiped your memory. Oops, shouldn't have said anything :)

  • by Daniel Dvorkin ( 106857 ) * on Wednesday February 10, 2010 @08:16PM (#31093120) Homepage Journal

    How does the brain choose a random number?

    It tells the body to roll a die. If you try to pick random numbers by just thinking about it, you'll do a spectacularly bad job.
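    A toy way to see this point in code: compare an unbiased die with a caricatured "human-like" picker. The bias weights below are purely made up for illustration, not measured from real people; the idea is just that uneven counts give a biased source away:

    ```python
    import random
    from collections import Counter

    def die_roll(rng):
        """An unbiased physical die: every face equally likely."""
        return rng.randint(1, 6)

    def humanlike_pick(rng):
        """A caricature of mental 'random' picking: hypothetical bias
        toward middle values. The weights are invented for illustration."""
        return rng.choices([1, 2, 3, 4, 5, 6],
                           weights=[5, 15, 25, 30, 15, 10])[0]

    rng = random.Random(0)
    die_counts = Counter(die_roll(rng) for _ in range(60000))
    human_counts = Counter(humanlike_pick(rng) for _ in range(60000))

    # The die's six counts cluster near 10000 each; the biased
    # picker's spread between most- and least-common faces is huge.
    die_spread = max(die_counts.values()) - min(die_counts.values())
    human_spread = max(human_counts.values()) - min(human_counts.values())
    print(die_spread < human_spread)
    ```

    Any simple frequency count like this exposes the bias, which is exactly why "just thinking of a number" makes for a poor random source.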

  • by Alomex ( 148003 ) on Wednesday February 10, 2010 @08:41PM (#31093396) Homepage

    They have been extremely rare, in fact. I do not know of a single major breakthrough that has been made in the last 20 years.

    Computer translation, while not perfect, has made great strides in the last 20 years. Interestingly, it succeeded by doing the opposite of the "build intelligence into the machine" approach researchers advocated. Theorem proving is also much improved. Mathematicians now routinely check their proofs using theorem-proving systems such as Coq (insert juvenile joke here, preferably using the words "insert" and Coq). Several long-standing conjectures have now been resolved using computer-assisted proofs, and at least one of them was largely unguided (the Robbins conjecture).
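    As a small taste of what a proof assistant actually checks, here is a minimal machine-verified proof sketched in Lean 4 (real formalizations, like the Robbins proof, are vastly larger; the theorem name here is arbitrary):

    ```lean
    -- A tiny machine-checked fact: addition on the naturals commutes.
    -- The kernel verifies every step of the induction; nothing is
    -- taken on faith.
    theorem my_add_comm (m n : Nat) : m + n = n + m := by
      induction n with
      | zero => simp
      | succ k ih => simp [Nat.add_succ, Nat.succ_add, ih]
    ```

    The point is that once the proof compiles, a very small trusted kernel has checked every inference, which is what lets mathematicians trust computer-assisted results.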

  • Re:When? (Score:4, Informative)

    by ( 1195047 ) on Wednesday February 10, 2010 @08:45PM (#31093442) Homepage Journal
    You're grossly missing the point. The idea behind AI is to create a system that is capable of improving itself in all dimensions (volume of knowledge, the rate at which it can acquire new knowledge, and its own underlying "hardware") without further human intervention.
  • Re:When? (Score:3, Informative)

    by ( 1195047 ) on Wednesday February 10, 2010 @10:51PM (#31094910) Homepage Journal

    No, it's not. That is pretty much impossible, unless you stick machine learning systems in machines that actually interact with the world.

    Yes, it is. A sufficiently powerful and interconnected system, provided with interfaces to enough external knowledge and fabrication resources, will be able to accomplish this. You need to stop thinking of machine intelligence in terms of human evolution; these are completely different topics. It's a classic mistake in understanding what the end result of AI evolution might look like, with "end result" being the point at which the system has both the intellectual capacity to improve itself and sufficient "real world" interfaces (in terms of acquisition of raw materials, fabrication facilities, etc.) to do so.

    You're correct in your perception that human programming is required to get it to the "kickstart" point, but further intervention will not be required after that. What this means for humanity is completely unpredictable, but given the accelerating pace of technological development it's probably only a matter of time before we outdo ourselves as a species. I don't find this as troubling as some folks might; nothing is forever in this universe. Like other blobs of matter floating around in the cosmos, we're here today... but who knows about tomorrow?

  • Re:No way. (Score:3, Informative)

    by Dr. Spork ( 142693 ) on Wednesday February 10, 2010 @11:29PM (#31095192)
    There is another great Bruce, the author Bruce Sterling, who gave a great speech on this topic; really, the best talk on the whole internet as far as I know. Here's a link to the .mp3. The title is "The Singularity: Your Future as a Black Hole." (There's also a video of this on FORA, but the sound really sucks and the excellent Q&A session is omitted.)
  • Re:Let's see. (Score:2, Informative)

    by hellop2 ( 1271166 ) on Thursday February 11, 2010 @12:51AM (#31095834)
    I believe the quote is: “The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.”

    You see, his point was that computers will never have human-like intelligence. Humans don't think in binary. Humanity exists in part because humans are able to forget... the fact that we all age evokes compassion. Children are special to humans because they cannot be easily reproduced. Computers can be mass-produced.

    These are some reasons why a computer will never have a need for intelligence in the way we think of it. Therefore computer "intelligence" is fundamentally different from human intelligence. It seems to me that you're missing Dijkstra's point altogether. It's not about speed. It's about applying a term incorrectly. The speed of submarines is interesting. They can go faster than fish. Dive deeper than fish. See underwater. But they don't "swim," because swimming is something that living things do. Likewise, computers could solve problems faster than humans. Recall more information. Derive better solutions. But "thinking" is something that living things do.

    Also, you didn't even link to the quote.
  • Re:When? (Score:5, Informative)

    by Xest ( 935314 ) on Thursday February 11, 2010 @05:45AM (#31097416)

    I assume you mean that's "one idea behind AI"?

    Most AI researchers do not have such grand goals. Everything from spellcheckers to handwriting recognition to Google's search algorithms is the result of AI.

    Certainly not everyone is trying, or even wants, to produce strong AI; the goal of AI in general is simply to produce less dumb systems.

    AI is a very misunderstood subject, and articles like this really don't help. Asking when AI will surpass human intelligence sounds like it's coming from someone who just hasn't learnt a thing about AI and its history. AI is seen by many as a failure precisely because a fringe few keep pursuing the idea that we're just 5 to 10 years away from robots we can't tell apart from humans, when that's an absurd goal: we're so far from having computers capable of that level of processing, assuming we even know what computer architecture such intelligence would require. These predictions hurt the field and give it a bad reputation, as they consistently end up being false. And yet, if it weren't for AI from real researchers with more reasonable goals, we wouldn't have any of the search and data mining algorithms we have today, we wouldn't have handwriting recognition or voice recognition, and we wouldn't have networking protocols half as efficient.

    The fruits of AI research are everywhere; it's silly to suggest AI has only such a narrow focus on a target that, with current knowledge, is so far from being possible we can't even begin to predict when it'll be possible. We may have a breakthrough tomorrow that allows it to happen within 6 months, or we may have no breakthrough at all and have to wait 50 years for high-end, flexible quantum computers or biological computers to be capable of it, and for us to have figured out the required algorithms to run on them. This is why the question in the title is a really stupid one to ask: simply put, no one can possibly give a reasonable estimate; at best they can make a guess which may or may not end up being right.

    So for many AI researchers who actually produce meaningful research, the goal is still better data mining algorithms, better algorithms for solving or finding acceptable approximations to constraint optimization problems (COPs), and so forth. Even when we do finally have the hardware and knowledge to produce intelligent systems, your assertion that it'll be about improving itself in all dimensions will likely prove false. We might want a system that can tell us the solution to a moral dilemma, but if that moral dilemma is about someone blackmailing us, we likely won't want the system to be able to figure out how to walk and fire a gun, then go and shoot the person doing the blackmailing. There will still be restrictions on how far you want it to go.
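    The "acceptable approximation" style of AI algorithm mentioned above can be illustrated with the classic min-conflicts local search on n-queens, a standard constraint-satisfaction heuristic (the function names and parameter values here are my own sketch, not from any particular library):

    ```python
    import random

    def min_conflicts_nqueens(n, max_steps=2000, seed=0):
        """Min-conflicts local search for n-queens.
        board[col] = row of the queen in that column."""
        rng = random.Random(seed)
        board = [rng.randrange(n) for _ in range(n)]

        def conflicts(col, row):
            # Queens in other columns attacking square (col, row).
            return sum(
                1 for c in range(n)
                if c != col and (board[c] == row
                                 or abs(board[c] - row) == abs(c - col)))

        for _ in range(max_steps):
            conflicted = [c for c in range(n) if conflicts(c, board[c]) > 0]
            if not conflicted:
                return board  # no attacking pairs left: a valid placement
            col = rng.choice(conflicted)
            # Move that queen to the least-conflicted row in its column.
            board[col] = min(range(n), key=lambda r: conflicts(col, r))
        return None  # stuck in a local minimum on this run

    def solve(n, tries=20):
        # Restart from fresh random boards until one run succeeds.
        for seed in range(tries):
            board = min_conflicts_nqueens(n, seed=seed)
            if board is not None:
                return board
        return None

    solution = solve(8)
    print(solution)
    ```

    Nothing here "understands" chess or queens; it just greedily reduces a conflict count, which is exactly the kind of useful-but-narrow system most AI research aims at.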

    I do agree that your suggestion is certainly one goal; it's just not the only goal, nor necessarily the primary one. I suspect, though, that when robotics are good enough to outdo humans, rather than creating new intelligent robots we'll be more interested in storing the human mind, in a possibly augmented and improved form, on these robots, so that said humans can live indefinitely in robot bodies, only requiring replacement parts or upgrades once every few decades. Effectively controlling artificial beings, with real, natural, human intelligence.

  • Re:AI first (Score:3, Informative)

    by twostix ( 1277166 ) on Thursday February 11, 2010 @08:18AM (#31098166)

    Erm, 100 years ago was 1910, not 1610, and it was hardly as uncivilised as you assume. The majority most definitely could read and write, and life wasn't fundamentally that different from what it is now. It was the middle of the industrial revolution, compulsory education had been around for a while, and creature comforts were starting to flood into the home.

    Second, food preservation has been around since the early 1600s; using glass jars to preserve fruit and veges has been done in the home since the mid 1800s. In fact, I have a food preservation boiler, passed down to me from my great-great-grandfather, that was made in 1890, and he was a poor-as-poor convict farmer who took up a 200-acre selection in the mountains when he was pardoned in 1850.

    Third, my family is spread across the world, and most of that movement started from about 1830. Poverty came in the Depression, but before that the "average person" was reasonably well off and could most certainly travel. Otherwise, just who exactly populated the US, Canada and Australia? The majority were certainly not European aristocrats. My great-great-grandmother, for example, brought herself over from Ireland in 1847. The travel cost wasn't the problem; the six months at sea was why people didn't often do it.

    You really should read a few proper history books and not simply assume that because you think something that it's true.

    In fact it's extremely chauvinistic that you just write the past off like that based on nothing but pure ignorance of your own past.

    So very very ignorant.
