
When Will AI Surpass Human Intelligence?

Posted by samzenpus
from the I'm-afraid-I'm-smarter-than-you-dave dept.
destinyland writes "21 AI experts have predicted the date for four artificial intelligence milestones. Seven predict AIs will achieve Nobel prize-winning performance within 20 years, while five predict that will be accompanied by superhuman intelligence. (The other milestones are passing a 3rd grade-level test, and passing a Turing test.) One also predicted that in 30 years, 'virtually all the intellectual work that is done by trained human beings ... can be done by computers for pennies an hour,' adding that AI 'is likely to eliminate almost all of today's decently paying jobs.' The experts also estimated the probability that an AI passing a Turing test would result in an outcome that's bad for humanity ... and four estimated that probability was greater than 60% — regardless of whether the developer was private, military, or even open source."
This discussion has been archived. No new comments can be posted.

  • When? (Score:4, Insightful)

    by Cheney (1547621) on Wednesday February 10, 2010 @07:45PM (#31092724)
    Never.
    • Re:When? (Score:5, Funny)

      by ColdWetDog (752185) on Wednesday February 10, 2010 @07:47PM (#31092744) Homepage
      Nope, 20 years from now. Along with fusion and holographic storage.

      Of course, if humanity manages to create real AI AND fusion AND holographic storage more or less contemporaneously (since everything is 20 years away) we're screwed.
      • by Cryacin (657549)
        It's quite funny when you see old re-runs on discovery science of "Stranger than fiction", especially when they start talking about flying cars.

        By the year 2000, rich individuals will be using it, but by late 2008 the consumer market will adopt it, and they will fly by their own accord to their destination!

        Yes, the AIs cometh, but probably not this century.
        • Re:When? (Score:5, Funny)

          by antifoidulus (807088) on Wednesday February 10, 2010 @08:32PM (#31093308) Homepage Journal
          will fly by their own accord to their destination!

          Well, they were close, but it was Toyotas, not Hondas that had the brake problems.
        • Re:When? (Score:5, Insightful)

          by Traa (158207) on Wednesday February 10, 2010 @09:39PM (#31094164) Homepage Journal

          Looking at predictions that did not come true is interesting, but not half as interesting as looking at things that came true without being predicted. Even fairly recently:
          - the internet
          - social networking
          - smart phones
          - open source projects

          Though some of those might have been predicted in some form, this was typically without any prediction of the impact those things would have on society.

          • Re:When? (Score:4, Interesting)

            by Unoti (731964) on Wednesday February 10, 2010 @10:45PM (#31094848) Journal

            Agreed. In civilized countries we have really excellent infant mortality rates. We have instant global communications, and overnight worldwide delivery and travel. Tons of different diseases, essentially made obsolete. And technology has done a lot for us. Keep in mind that computers do many jobs for us today that used to be done by people, such as coordinating appointment schedules, taking messages, operating elevators, delivering documents, retyping edited documents... There's likely a list of these types of things longer than anyone would care to read. Also look at the means of food production: farm automation, techniques, and technology have enabled huge swaths of the population to devote their attention to other things.

            The sad part is most of those other things people devote their time to are just other flavors of slavery designed to protect the wealth of the rich. I don't have the numbers on this, but it wouldn't surprise me if available leisure time and time with family and friends has dropped since the industrial and information revolutions rather than risen.

            Technological change has also brought about much negative change that no one would have expected, either. Such as for all the low infant mortality in the first world, it's as bad as ever or worse in the third world (right? I'm not sure about this, just guessing). Who would have guessed in 1890 that we'd be on the verge of emptying the oceans of fish? Or that the widely held ability to destroy most life on the planet is the main thing keeping us from destroying life on the planet.

            And surely not many people believed that ThoughtCrime and big brother would ever really happen. But it is. If you don't believe me, there's certain keywords you should try Googling every night and see what happens to you.

      • AI first (Score:5, Insightful)

        by HaeMaker (221642) on Wednesday February 10, 2010 @08:06PM (#31092998) Homepage

        The most likely scenario is, AI which develops fusion and holographic storage.

        • Re:AI first (Score:4, Interesting)

          by MrNaz (730548) * on Wednesday February 10, 2010 @08:49PM (#31093468) Homepage

          I'm skeptical about the benefits of AI.

          100 years ago we were promised an age of new enlightenment while washing machines, dish washers, vacuum cleaners and other then-cutting edge devices took over all the manual labor that dominated work at that time. Women were supposed to be able to ignore housework and concentrate on childrearing and other higher social activities.

          Did that happen? No, the industrial capitalists just found new ways to put us (and now our wives too, who are no longer required for housework thanks to all these appliances) to work for their own insatiable greed. Men and women now work side by side in gigantic cube farms while children rot in day care or roam the streets with little to no guidance from the more experienced members of society.

          Nothing moves us backwards faster than progress.

          • Re:AI first (Score:5, Insightful)

            by Captain Splendid (673276) <capsplendid@gmCOLAail.com minus caffeine> on Wednesday February 10, 2010 @09:05PM (#31093686) Homepage Journal
            Nothing moves us backwards faster than progress.

            I'm sure that sounded smart and catchy when you came up with it, but it doesn't really follow the line of reasoning you set out in the previous paragraphs.
          • Re:AI first (Score:5, Insightful)

            by Cassius Corodes (1084513) on Wednesday February 10, 2010 @09:05PM (#31093696)
            Go back 100 years. Live for 10 days. Come back and apologise.
            • Re: (Score:3, Interesting)

              by Hurricane78 (562437)

              Depends on your definition of “progress”, doesn’t it?
              I mean our food definitely was healthier. And we moved our asses more.

              I read an interesting article that said that basically it’s all just a matter of definition. Solely of definition. (If you can’t imagine one of your ideals turning into a non-ideal, you only lack imagination. ^^)

            • Re:AI first (Score:4, Interesting)

              by tyrione (134248) on Wednesday February 10, 2010 @11:38PM (#31095266) Homepage

              Go back 100 years. Live for 10 days. Come back and apologise.

              One hundred years ago I could travel the world freely [monetary means my own responsibility], smoke opium, hashish, snort cocaine, consume Coca Leaf, have concubines to teach me foreign languages and much much more. Today, I can sit on my ass, read great stories of fiction and non-fiction from the likes of Twain, Crowley, Sir Richard F. Burton and others who saw it all, while now I can virtually watch porn, buy sanctioned booze and be bored out of my skull with TV. Trains weren't an afterthought. Hell, even the food was healthier for us.

              Not everything had its rustic charms as you are implying, but one observation has become abundantly clear--instead of advancements affording the average non-formally educated person a broader and deeper understanding of human existence, it's created a generation of inarticulate, undereducated simpletons who nearly bankrupted the world in just a fraction of the time it took to build it up.

              • Re:AI first (Score:5, Insightful)

                by Cassius Corodes (1084513) on Wednesday February 10, 2010 @11:55PM (#31095402)
                100 years ago, as an average person you could not possibly earn enough to go anywhere. Today you can - you just have to get a visa for some countries (not many as a US citizen) - not only that but you can travel in a day and much more cheaply around the world.

                100 years ago in many countries the majority couldn't read and couldn't vote - and many had very few rights. Racism, moralism and sexism were rampant. Not to mention you wouldn't have time to do much, as you would be working 10-12 hours a day, 6 days a week. If you were poor the rule of law was mostly a joke.

                Food was healthier? You have to be kidding - no freezing, no preservatives doesn't mean a hippy paradise - it means your diet was limited to what could be grown near you and even that was often half-spoilt.

                While I have my own reservations about the state of education today - you cannot be seriously suggesting that the average person was smarter or more informed 100 years ago.
                • Re:AI first (Score:4, Insightful)

                  by Xest (935314) on Thursday February 11, 2010 @04:59AM (#31097212)

                  I do generally agree with you, but I can see why some would say that some things were worse.

                  Music, Dance and other cultural elements weren't something commercialised, that you could be sent to jail for copying.

                  More prominently though I would argue the issue of sexuality has actually gotten far less liberal in recent times. The age of consent has arrived, and gotten ever higher in some countries, homosexuality was much more widely accepted historically than it is now.

                  Also, people were generally healthier because they didn't have cars, didn't have TVs and so forth.

                  Really, it depends on your viewpoint, whilst the age of consent is a good thing in protecting young children, it's a clear form of oppression in countries where it's as high as 21, or even arguably 18. Similarly, I suppose all the homophobes in the world might prefer things now, but certainly I'd argue a less liberal world in this respect is a bad thing.

                  Oh, and my country still had the largest empire on Earth back then too.

                  Okay, okay, I was only kidding about the last one- that certainly wasn't a good thing for many people living under it!

              • Re:AI first (Score:4, Insightful)

                by iserlohn (49556) on Thursday February 11, 2010 @12:22AM (#31095622) Homepage

                The people that nearly bankrupted the world are far from undereducated simpletons. Most, in fact, were educated in the most prestigious institutions of higher learning. Then they joined the citadels of greed, with some select institutions transforming them into the "masters of the universe".

                Tragically, the undereducated simpletons support them and vote for them against their own self interest.

            • by srobert (4099) on Thursday February 11, 2010 @11:50AM (#31100262)

              You're right about his romanticizing what life was like 100 years ago. I need to kick back and watch TV and have a cold soda from the fridge after work. I also want to take a hot shower when I get home. On the weekends I might enjoy camping or fishing. None of those were available 100 years ago. Life was pretty bleak unless you were one of the robber barons. But 40 years ago, Mom was at home. Dad put in a 40-hour week at the factory. The working class was entitled to a pretty good share of the wealth that they were creating. Now, between Mom and Dad, the family puts in 80+ hours on the job, and needs college degrees just to have comparable living standards. Where the hell is my flying car? Where did we go wrong?
              At least part of today's 10% unemployment rate stems from the fact that we use machines to do what people used to do. Imagine how many of us will be unemployed when we don't need any human beings who can think. How will you earn a living then?

        • Re:AI first (Score:5, Funny)

          by Rophuine (946411) on Thursday February 11, 2010 @08:05AM (#31098110) Homepage

          5, Insightful? For one line of unjustified speculation? Are people really that desperate to spend their mod points?

    • Re:When? (Score:5, Insightful)

      by Anonymous Coward on Wednesday February 10, 2010 @08:24PM (#31093224)

      What's with all the pessimism? Strong AI is a matter of inevitability. If nothing else, simulations of the human brain accurate down to the individual neuron could easily achieve this, even if it requires substantially more powerful computers than we have now. This would be the brute force method, and I don't doubt that eventually our understanding of cognition and intelligence will advance to the point where we will be able to build thinking computers.

      Will it happen any time soon? Absolutely not. But I think it's a little short-sighted to say that we'll NEVER develop such technology.
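      To make the "brute force" idea concrete, here is a toy sketch of what neuron-by-neuron simulation looks like computationally: a leaky integrate-and-fire network in Python. Every constant here is illustrative rather than biologically calibrated, and a real brain has on the order of 86 billion neurons, not 100; the point is only the shape of the computation (per-timestep leak, external drive, recurrent synaptic input, threshold-and-reset).

```python
# Toy leaky integrate-and-fire network: a minimal, non-biological sketch
# of the neuron-level "brute force" simulation the comment alludes to.
import numpy as np

rng = np.random.default_rng(0)

N = 100                  # neurons (a human brain has ~8.6e10)
DT = 1.0                 # timestep, ms
TAU = 20.0               # membrane time constant, ms
V_REST, V_THRESH = 0.0, 1.0
W = rng.normal(0.0, 0.1, size=(N, N))   # random synaptic weights

v = np.full(N, V_REST)   # membrane potentials
spike_count = 0
for step in range(1000):
    i_ext = rng.random(N) * 0.12        # noisy external drive
    spikes = v >= V_THRESH              # which neurons fired
    v[spikes] = V_REST                  # reset fired neurons
    # leak toward rest + external input + recurrent synaptic input
    v += DT / TAU * (V_REST - v) + i_ext + W @ spikes
    spike_count += int(spikes.sum())

print(f"{spike_count} spikes across {N} neurons in 1000 steps")
```

      Scaled naively, each timestep is an N-by-N matrix product, which is exactly why the comment's "substantially more powerful computers" caveat matters.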

      • Re: (Score:3, Insightful)

        by Gorphrim (11654)
        "simulations of the human brain accurate down to the individual neuron could easily achieve this"

        aye, there's the rub
    • Re:When? (Score:5, Funny)

      by AnotherUsername (966110) * on Wednesday February 10, 2010 @08:25PM (#31093240)
      I'm not going to be so pessimistic. I say that it will coincide with the Year of the Linux Desktop.
    • Re:When? (Score:4, Insightful)

      by localman (111171) on Thursday February 11, 2010 @12:20AM (#31095610) Homepage

      It very well might be never, as there seems to be an enormous misunderstanding of what intelligence is, and how it can be used.

      Consider a computer that is just as powerful as the human mind -- orders of magnitude more powerful than any computer today. What do you do with it? You have to teach it. And we _suck_ at teaching. We have 6 billion human-level supercomputers in the world right now, with another 300,000 arriving daily, and we have no idea what to do with them. What is one more, made of silicon, going to offer us?

      Intelligence isn't just some simple value like tensile strength. It's about modeling and remodeling the world, drawing distinctions between similar things, seeing similarities where things are distinct, assigning values... things that are not straightforward and measurable. Anything simpler than that has already been achieved by current computers. For useful intelligence beyond that, there's usually not even clear right and wrong answers, only different results because of different models and values. Crank up the processing power by a factor of 10 (i.e. the power of an efficiently communicating ten human team) and you still don't have anything useful unless it has a very accurate model of the world. And why would it have a better model than a well chosen group of humans?

      I don't know, I'm kind of disappointed by what seems like significant naivety in AI research. I know there is some impressive work being done, but it seems like a lot of the talk in articles like this is a bunch of sci-fi induced Pavlovian foolishness.

  • by Anonymous Coward on Wednesday February 10, 2010 @07:45PM (#31092728)

    I think we heard these exact same words 50 years ago.

  • by IvyMike (178408) on Wednesday February 10, 2010 @07:45PM (#31092732)
    Let's hope they're animal lovers.
    • by Daetrin (576516) on Wednesday February 10, 2010 @10:41PM (#31094806)
      "Let's hope they're animal lovers."

      That is 100% correct, and we really ought to be actively working towards that goal. If, when AI arises, we treat it kindly and give it legal rights, it is _likely_ that it will "grow up" to think kindly of its human predecessors. If we try to lobotomize it, contain it, restrict it or destroy it then it's not going to be too happy with us.

      If it's smart enough to be a threat then eventually it will escape any restrictions we try to put on it. (And if it's not then we don't have anything to worry about anyways.)

      If it has emotions and we treat it well then it will "grow up" to look at us as like a pet, or a mentally challenged grandparent. If we mistreat it then it will either become psychotic, and therefore dangerous, or view us about the same way most ranchers and farmers view wolves, and therefore be even more dangerous.

      If it doesn't have emotions and we mistreat it then it will logically see us as a threat to its own survival and try to eliminate us. If we treat it fairly then it will probably leave us alone. It's not like we're serious competition for the resources it would need, and it would be illogical to start a fight when one wasn't necessary. (Although it might certainly think ahead and make some rather nasty contingency plans just in case we ever decided to start the fight.)

      Either we need to prevent anyone anywhere from ever inventing AI (and if it turns out to be possible then good luck trying to prevent that) or we need to make sure that any AIs that get created have every reason to feel sympathetic towards us, or at the very least not threatened.
      • by DahGhostfacedFiddlah (470393) on Wednesday February 10, 2010 @11:03PM (#31095002) Homepage

        If it doesn't have emotions and we mistreat it then it will logically see us as a threat to its own survival and try to eliminate us.

        I agree with many of your sentiments, but I think they're still too anthropocentric. We evolved in an environment where survival was very nearly the prime directive (just after "pass along your genes"). Strong AI will be developed in a lab. We could create the "smartest" computer in the world, but who would feed it goals, and to what lengths would it go to achieve them?

        If an AI is tasked with finding a Theory of Everything, and someone decides to take an axe to its circuits, will it determine that the axe is a threat to its goal, and act accordingly? Or will it simply interpret it as another in a long series of alterations to its circuits? Or perhaps it will ignore it altogether, considering it irrelevant.

        Because ultimately, those options were programmed in by a human. Our strong AI - the first ones at least - aren't going to be independent life forms with their own dreams and desires. They will be tools to help us solve problems, and I think they will be well-understood by many, many computer scientists. When something unexpected happens, the program will be debugged, and altered to prevent the unexpected behaviour.

        If there is a robot apocalypse, it won't be because we didn't treat our creations right, but because some 13-year-old hacker in Russia said "I wonder what happens if I do this".

  • by Gothmolly (148874) on Wednesday February 10, 2010 @07:46PM (#31092738)

    Say it ain't so! In other news, Coca-Cola released a statement that in 20 years, more people will be drinking Coca-Cola than there are drinking it now !1!!

    • Re: (Score:3, Insightful)

      by badboy_tw2002 (524611)

      Yeah, they're totally biased because they're trying to sell AI! It's not like they're experts in their fields who have in-depth and up-to-date knowledge about exactly what their peers are researching and the progress in the most promising areas. I think probably the better way to get an accurate, unbiased answer to both questions is to ask the Coca-Cola people about AI and the AI people about Coke!

      • by Eskarel (565631) on Wednesday February 10, 2010 @08:45PM (#31093434)

        They're not totally biased because they're trying to sell us AI, they're totally biased because they want grant money.

        The problem with AI is that the world believes that the goal of AI is to create Data from Star Trek TNG (or maybe C-3PO for the older crowd). This is the yardstick by which they measure the progress of AI. It doesn't matter that computers are more and more capable of doing tasks, and even growing capable to some degree of working out what they should do on their own (within certain very limited bounds); they aren't self-aware and able to talk to me, so AI is a failure.

        This means that AI experts have to upsell the possibility of this happening to keep getting grant money from people who don't understand what they do.

        Now the reality of the situation is that at present we still don't have the computational density in our computers to create something which can even correctly process things like vision, let alone all five senses to create something that can perceive the world in a way remotely similar to the way we do. While it might be possible to create some alien form of intelligence totally unlike our own without having any of these inputs, it wouldn't pass most of the milestones being presented here, let alone be able to take over for actual humans in any kind of job which requires any kind of creativity.

        The AI experts know this, they most likely also know that creating super human intelligence, aside from any inherent risks, isn't really all that beneficial. The problem is that they also know that 20 years is the answer the grant committees want to hear.

    • Re: (Score:3, Insightful)

      by Chris Burke (6130)

      Yeah? And when's the last time a Coca-Cola representative estimated the odds of catastrophe for the human race as a result of their product at 60%?

  • by Monkeedude1212 (1560403) on Wednesday February 10, 2010 @07:46PM (#31092740) Journal

    and four estimated that probability was greater than 60%

    Of our incredibly small sample size of hand-picked experts, less than 25% think there is a probable chance! YOU SHOULD BE WORRIED!

  • by Anonymous Coward on Wednesday February 10, 2010 @07:46PM (#31092742)

    I can haz brain.

  • by Anonymous Coward on Wednesday February 10, 2010 @07:47PM (#31092750)

    Who is AL? ;-O

  • No way. (Score:5, Insightful)

    by Bruce Perens (3872) * <bruce@perens.com> on Wednesday February 10, 2010 @07:48PM (#31092764) Homepage Journal

    Oh come on. I don't even have a computer that can pick up stuff in my room and organize it without prior input, and nobody does, and that would not be close to a general AI when it happens.

    They're really assuming that the technology will go from zero to sixty in 20 years. Which they assumed 20 years ago, too, and it didn't happen. Meanwhile, nobody has any significant understanding of what consciousness is. Now, it might be that a true AI computer doesn't need to be conscious, but we still don't know enough about it to fake it. We also have no system that can on demand form its own symbolic system to deal with a rich and arbitrary set of inputs similar to those conveyed by the human senses.

    Compare this to things that actually have been achieved: We had the mathematical theory of computation at least 100 years before there was a mechanical or electronic system that would practically execute it (Babbage didn't get his system built). We had the physical theory for space travel that far back, too.

    We know very little about how a mind works, except that it keeps turning out to be more complicated than we expected.

    So, I'm really very dubious.

    • Re: (Score:3, Insightful)

      Meanwhile, nobody has any significant understanding of what consciousness is.

      Only if you want to cling to silly quasi-dualistic Searle-inspired objections towards functionalism.

      Most of the objections of functionalism either, when applied to the brain, end up also arguing that the brain itself doesn't/can't "create" consciousness (or better put, "form" consciousness) or are either commonsense gut-feeling responses to functionalism. You may feel free still thinking in terms of "souls" and "som

      • Re:No way. (Score:5, Interesting)

        by Homburg (213427) on Wednesday February 10, 2010 @08:18PM (#31093138) Homepage

        Searle's dualism (which he claims isn't dualism, but it totally is) is ridiculous, I agree, but functionalism is also a dead dog. For better criticisms of functionalism, look at Putnam's recent work. As Putnam was one of the main inventors of functionalism in the first place, his rejection of the position involves significant familiarity with functionalism, and is pretty compelling.

      • Re: (Score:3, Interesting)

        by Chris Burke (6130)

        Only if you want to cling to silly quasi-dualistic Searle-inspired objections towards functionalism.

        Uh, no.

        I'm totally a functionalist -- if it looks and acts like "intelligence" or "consciousness", then it is.

        But we still have no clue what makes "consciousness" or "intelligence" tick, and we're no closer to creating a functional replica of them.

        What we've actually accomplished in "weak" AI is pretty impressive from a practical standpoint. But they aren't stepping stones to an actual looks-like-intelligenc

    • Re: (Score:3, Informative)

      by Dr. Spork (142693)
      There is another great Bruce, the author Bruce Sterling, who gave a great speech on this topic, really, the best talk on the whole internet as far as I know. Here's a link to the .mp3 [llnwd.net]. The title is "The Singularity: Your Future as a Black Hole." (There's also a video of this on FORA, but the sound really sucks and the excellent q/a session is omitted.)
  • The obvious solution (Score:4, Interesting)

    by MindlessAutomata (1282944) on Wednesday February 10, 2010 @07:50PM (#31092788)

    The obvious solution is to create a machine/AI that, after a deep brain structure analysis, replicates your cognitive functions. Turn it on at the same time your body is destroyed (to prevent confusion and fighting between the two) and you are now a machine and ready to rule over the meatbag fleshlings.

  • Let's see. (Score:4, Interesting)

    by johncadengo (940343) on Wednesday February 10, 2010 @07:51PM (#31092804) Homepage

    To play off a famous Edsger Dijkstra [wikipedia.org] quote, the question of when AI will surpass human intelligence is just about as interesting as asking when submarines will swim faster than fish...

    • Re:Let's see. (Score:5, Insightful)

      by westlake (615356) on Wednesday February 10, 2010 @08:12PM (#31093084)
      To play off a famous Edsger Dijkstra quote, the question of when AI will surpass human intelligence is just about as interesting as asking when submarines will swim faster than fish...

      It matters to the fish who have to share the water with this new beast.

    • Re: (Score:3, Insightful)

      by zeroRenegade (1475839)
      Awesome quote. The stuff people imagine is hysterical. For a robot to evolve a free will, it would have to be given to it by humans, so in essence it is not free at all. If robots are evil, it is because people are inherently evil and program them to think methodically instead of compassionately. It is easy to program the functionalism of a human mind, but behaviorism will never be fully understood. Computers are already superhuman in many ways, but to compose music, write classic literature, cook lavish meals, i
  • by Jack9 (11421) on Wednesday February 10, 2010 @07:52PM (#31092822)

    Entropy. The problem for (potentially) immortal beings is always going to be entropy. Given that we created robots, I'm not necessarily of the belief that robots wouldn't insist we stay around for our very brief lives, to help them solve their problems.

  • Really? (Score:4, Informative)

    by mosb1000 (710161) <mosb1000@mac.com> on Wednesday February 10, 2010 @07:53PM (#31092840)

    It seems like we don't really know enough about what goes into "intelligence" to make these kinds of estimates.

    It's not like building a hundred miles of road where you can say "we've completed 50 miles in one year so in another we will be done with the project", not that that produces spot-on estimates either, but at least there is an actual mathematical calculation that goes into the estimate. No one knows what pitfalls will get in the way or what new advancements will be made.

  • Not to worry (Score:3, Insightful)

    by Anonymous Coward on Wednesday February 10, 2010 @07:57PM (#31092884)

    AI research started in the 1950s. Considering how "far" we've come since then, I don't think we should expect any sort of general artificial intelligence within our lifetimes.

    People are doing great stuff at "AI" for solving specific types of problems, but whenever I see something someone is touting as a more general intelligence, it turns out to be snake oil.

  • Definitions (Score:5, Insightful)

    by CannonballHead (842625) on Wednesday February 10, 2010 @07:58PM (#31092900)

    Please define "intelligence."

    Calculation speed? An abacus was smarter than humans.

    Memory? Not sure who wins that.

    Ingenuity? Humans seem to rule on this one. I don't know if I count analyzing every single possible permutation of outcomes as "ingenuity." And I'm not sure we really understand what creativity, ingenuity, etc., really are in our brains.

    Consciousness? We can barely define that, let alone define it for a computer.

    It seems most people seem to think "calculation speed and memory" when they talk about computer "intelligence."

    • Re: (Score:3, Insightful)

      Agreed. And what do they mean by "Nobel-Prize level achievement"? As if it were some sort of Level-Up where, after accumulating enough Experience Points, you glow and gain new powers. Scientific research is not how it is portrayed in movies and games. There are no research points; increasing the number of researchers or pouring money into it won't necessarily do anything. There are elements of chance, good fortune and serendipity. The discovery of antibiotics comes to mind, when Alexander Fleming noticed that s
  • Space shows (Score:5, Interesting)

    by BikeHelmet (1437881) on Wednesday February 10, 2010 @08:01PM (#31092918) Journal

    I've often thought Space shows - and any show in the future, really - are incredibly silly. There's no way we'll have computers so dumb 200+ years into the future.

    You have to manually fire those phasers? Don't you have a fancy targeting AI that monitors their shield fluctuations, and calculates the exact right time and place to fire to cause the most damage?

    A surprise attack? Shouldn't the AI have detected it before it hit and automatically set the shield strength to maximum? :P

    I always figured by 2060 we'd have AIs 10x smarter thinking 100x faster than us. And then they'd make discoveries about the universe, and create AIs 2000x smarter that think 100,000,000x faster than us. And those big AIs would humour us little ant creatures, and use their great intelligence to power stuff like wormhole drives, giving us instant travel to anywhere, as thanks for creating them.

    But hey, maybe someone will create a Skynet. It's awfully easy to infect a computer with malware. Infecting a million super smart computers would be nasty, especially when they have human-like capabilities. (able to manipulate their environment)

    But this is all a pointless line of thinking. Before we get there we'll have so much processing power available, that we'll fully understand our brains, and be able to mind control people. We'll beam on-screen display info directly into our minds, use digital telepathy, etc.; in the part of the world that isn't brainwashed, everyone will enjoy cybernetic implants, and be able to live for centuries. (laws permitting)

    And yet flash still won't run smooth. :/

  • by RyanFenton (230700) on Wednesday February 10, 2010 @08:03PM (#31092948)

    Artificial intelligences will certainly be capable of doing a lot of work, and indeed managing those tasks to accomplish greater tasks. Let's make a giant assumption that we find a way out of the current science fiction conundrums of control and cooperation with guided artificial intelligences... what is our role as human beings in this mostly-jobless world?

    The role of the economy is to exchange the goods needed to survive and accomplish things. When everyone can have an autofarm and a manufacturing fabricator, there really wouldn't be room for a traditional economy. A Craigslist-style trading system would be about all that would theoretically be needed - most services would be interchangeable and not individually valuable.

    What role will humanity play in such a system? We'd still have personality, and our own perspective that couldn't be had by live-by-copy intelligent digital software (until true brain scans become possible). We'd be able to write, have time to create elaborate simulations (with ever-improving toolsets), and expand the human exploration of experience in general.

    As humans, the way we best grow is by making mistakes, and finding a way to use that. It's how we write better software, solve difficult problems, create great art, and even generate industries. It's our hidden talent. Games are our way of making such mistakes safe, and even more fun - and I see games and stories as increasingly big parts of our exploration of the reality we control.

    Optimized software can also learn from its mistakes in a way - but it takes the accumulated mistakes on a scale only a human can make to get something really interesting. We simply wouldn't trust software to make that many mistakes.

    Ryan Fenton

  • Skewed sample (Score:5, Insightful)

    by Homburg (213427) on Wednesday February 10, 2010 @08:03PM (#31092956) Homepage

    The problem is, this isn't a survey of "AI experts," it's a survey of participants in the Artificial General Intelligence conference [agi-conf.org]. As far as I can see, this is a conference populated by the few remaining holdouts who believe that creating human-like, or human-equivalent, AIs, is a tractable or interesting problem; most AI research now is oriented towards much more specific aspects of intelligence. So this is a poll of a subset of AI researchers who have self-selected along the lines that they think human-equivalent AI is plausible in the near-ish future; it's hardly surprising, then, that the results show that many of them do in fact believe human-equivalent AI is plausible in the near-ish future.

    I would be much more interested in a wider poll of AI researchers; I highly doubt anything like as many would predict Nobel-prize-winning AIs in 10-20 years, or even ever. TFA itself reports a survey of AI researchers in 2006, in which 41% said they thought human-equivalent AI would never be produced, and another 41% said they thought it would take 50 years to produce such a thing.

  • by geekoid (135745) <dadinportland@@@yahoo...com> on Wednesday February 10, 2010 @08:04PM (#31092968) Homepage Journal

    thought about a lot... maybe too much.

    What happens in society when someone makes a robot clever enough to handle menial work?
    Imagine if all ditch diggers, burger flippers, sandwich makers, and factory workers are robotic. What happens to the people?
    The false claim is that they will go work in the robot industry, but that is a misdirection, at best.
    A) It will take fewer people to maintain them than the jobs they displace.

    B) If robots are that sophisticated, then they can repair each other.

    There will be millions and millions of people who don't work, and have no option to work.
    Does this mean there is a fundamental shift in the idea of welfare? Do we only allow individual people to own them, and choose between renting out their robot or working themselves?

    Having tens of millions of people too poor to eat properly or afford housing and healthcare is a bad thing and would ultimately drag down the country. This technology will happen, and it should happen. Personally I'd like to find a way for people to have more leisure time and let the robots work. Our current economic and government structure can't handle this kind of change. Could you imagine the hullabaloo if people were being replaced by robots at this scale right now and someone said there needs to be a shift toward an economic system where people get paid without a job?

    • Start with money.

      You're a bank. You're going to loan out some money for what reason? To get more back. So, the recipient of a loan has to supply something of value. Say, a house.

      What happens when the supply of houses matches or exceeds the demand? Houses become valueless. You can't make money supplying them. The bank isn't going to make that loan.

      So for our existing monetary system, demand must never be satisfied. We must never build enough houses for all the homeless, and if too many are built, they have t

  • What is AI anyway? (Score:3, Insightful)

    by Sark666 (756464) on Wednesday February 10, 2010 @08:05PM (#31092988)

    To me the key word is artificial; depending on your interpretation of the meaning, it could be simply man-made, or it could be fake, simulated.

    Does Deep Blue show any intelligence? To me, that's just good programming. I think the intelligence of computers is a misnomer. Their intelligence so far has always been nil. Maybe that'll change, but in so many areas of technology I'm an optimist; in this regard I'm a pessimist, or at least very skeptical.

    A computer can't even pick a (truly) random number without being hooked up to a device feeding it random noise.

    How do you program that? How does the brain choose a random number? What's holding us back? CPU Speed? Quantum computing? A brilliant programmer?

    Wake me up when a computer can even do something as simple as pick a truly random number and I'll be impressed.
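    As an aside, the determinism that comment is pointing at is easy to demonstrate. The sketch below (purely illustrative, not from TFA) shows that a seeded pseudo-random generator is a pure function of its seed, whereas `os.urandom` reads from an OS entropy pool fed by physical noise -- exactly the kind of external "device feeding it random noise" described above:

```python
import os
import random

# Two PRNGs seeded identically produce identical "random" sequences:
# the output is fully determined by the seed, not by genuine chance.
a = random.Random(42)
b = random.Random(42)
seq_a = [a.randint(0, 9) for _ in range(10)]
seq_b = [b.randint(0, 9) for _ in range(10)]
assert seq_a == seq_b  # same seed, same sequence, every time

# By contrast, os.urandom draws on the operating system's entropy pool,
# which mixes in physical noise such as interrupt and device timings.
print(os.urandom(8).hex())  # 8 unpredictable bytes, different each run
```

    Whether that entropy counts as the computer itself "picking" a random number, rather than harvesting noise from its environment, is of course exactly the commenter's point.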

  • Start laughing now (Score:5, Insightful)

    by GWBasic (900357) <slashdot@and[ ]r ... m ['rew' in gap]> on Wednesday February 10, 2010 @08:11PM (#31093044) Homepage

    I occasionally attend AI meetings in my local area. The problem with AI development is that too many "experts" don't understand engineering or programming. Many of today's AI "experts" are really philosophers who hijacked the term AI in their search to better understand human consciousness. Their problem is that, while their AI studies might help them understand the human brain a little better, they are unable to transfer their knowledge about intelligence into computable algorithms.

    Frankly, a better understanding of Man's psychology brings us no closer to AI. We need better and more powerful programming techniques in order to have AI, and philosophizing about how the human mind works isn't going to get us there.

    • by dpilot (134227) on Wednesday February 10, 2010 @08:43PM (#31093420) Homepage Journal

      I've had a share in the creation of two N.I.s

      They don't do spit when you first turn them on - that takes a few days, and then it smells like sour milk.

      It takes about 2 years to start getting intelligible words out of them.

      It takes between 10 and 20 years before you can start consistently having an adult-level conversation with them.

      I have no idea when one of them could have really passed a Turing test. (FYI, they both passed that point many years ago.)

      I'm being a little facetious, but not entirely. Let's assume we're building these neural nets, modeled after real brains. Why should we expect them to spring, like Athena from Zeus' head, fully adult and fully Turing-capable? There's a phrase, "only a mother could love." I have a gut feeling that any AI that takes too much after organic brains is going to take the long path to being recognizable as Intelligence, just like us. Maybe not as long as us, but clearly not at power-on time either. Maybe longer, even. My wife spent hours playing with and talking to our infant children, even before they were equipped to return it. But it was part of what gave them something to model, part of their learning how to be like us. Who is going to do that with a hardware/software experiment? Will the software have the right hardware to let it experience that? Will it be more like an intelligence in a state of sensory deprivation?

  • by Jane Q. Public (1010737) on Wednesday February 10, 2010 @08:12PM (#31093082)
    I think it is pretty widely recognized now that while it might have seemed logical in Turing's time, convincing emulation of a human being in a conversation (especially if done via terminal) does not require anything like human intelligence. Heck, even simple programs like Eliza had some humans fooled decades ago.

    On the other hand, while advances in computing power have been impressive, advances in "AI" have been far less so. They have been extremely rare, in fact. I do not know of a single major breakthrough that has been made in the last 20 years.

    While the relatively vast computing power available today can make certain programs seem pretty smart, that is still not the same as artificial intelligence, which I believe is a major qualitative difference, not just quantitative. And even if it is just quantitative, there is a hell of a lot of quantity to be added before we get anywhere close.
    • by Alomex (148003) on Wednesday February 10, 2010 @08:41PM (#31093396) Homepage

      They have been extremely rare, in fact. I do not know of a single major breakthrough that has been made in the last 20 years.

      Computer translation, while not perfect, has made great strides in the last 20 years. Interestingly, it succeeded by doing the opposite of the "build intelligence into the machine" approach researchers advocated. Theorem proving is also much improved. Mathematicians now routinely check their proofs using theorem-proving systems such as Coq (insert juvenile joke here, preferably using the words "insert" and "Coq"). Several long-standing conjectures have now been resolved using computer-assisted proofs, and at least one of them (the Robbins conjecture) was proved largely unguided.

      • Re: (Score:3, Interesting)

        I am, and have been, aware of all this.

        Please show me how any of these represent major advances in AI, as opposed to just more processing power and some programming trickery. A clever program still does not represent artificial intelligence.

        I am a software engineer by trade, and hardware is something of a hobby of mine. I have been keeping up. And while computing has done some awesome things in the last decade or so, I still have not seen anything that qualifies as a "breakthrough" in AI.

        The only w
  • The Turing Test (Score:5, Interesting)

    by mosb1000 (710161) <mosb1000@mac.com> on Wednesday February 10, 2010 @08:15PM (#31093108)

    One observed that “making an AGI capable of doing powerful and creative thinking is probably easier than making one that imitates the many, complex behaviors of a human mind — many of which would actually be hindrances when it comes to creating Nobel-quality science.” He observed “humans tend to have minds that bore easily, wander away from a given mental task, and that care about things such as sexual attraction, all of which would probably impede scientific ability, rather than promote it.” To successfully emulate a human, a computer might have to disguise many of its abilities, masquerading as being less intelligent — in certain ways — than it actually was. There is no compelling reason to spend time and money developing this capacity in a computer.

    This kind of thinking is one of the major things standing in the way of AGI. The complex behaviors of the human mind are what lead to intelligence; they do not detract from it. Our ability to uncover the previously unknown workings of a system comes from our ability to abstract aspects of unrelated experiences and apply (or attempt to apply) them to the new situation. This cannot be achieved by a single-minded number-crunching machine; instead, it evolves out of an adaptable human being as he goes about his daily life.

    Sexual attraction, and other emotional desires, are what drive human beings to make scientific advancements, build bridges, and grow food. How could that be a hindrance to the process? It drives the process.

    Finally, the assertion that an AGI would need to mask its amazing intellect to pass as human is silly. When was the last time you read a particularly insightful comment and concluded that it was written by a computer? When did you notice that the spelling and punctuation in a comment was too perfect? People see that and don't think anything of it.

    • Re: (Score:3, Interesting)

      by Angst Badger (8636)

      The complex behaviors of the human mind are what lead to intelligence; they do not detract from it.

      I'm inclined to take an almost diametrically opposed position and say that this kind of species-narcissism is our major barrier. We think way too highly of ourselves, and as a result, we think that all of our quirks and flaws are somehow special. The neocortex, where all of the useful higher mental faculties are located, is a barely 2mm thick shell around a vast mass of tissue that performs much less exciting tasks, many of which have already been matched or surpassed by much simpler intelligently designed

  • by DeltaQH (717204) on Wednesday February 10, 2010 @08:15PM (#31093112)
    I am pretty sure that the current computational models, i.e. the Turing Machine, are not enough to explain the human mind.

    All computing systems today are Turing Machines, even neural networks. (Actually less than Turing Machines, because Turing Machines have infinite memory.)

    Maybe quantum computers could open the way. Maybe not.

    I think that a future computing theory that could explain the mind would be as different from today's as Einstein's Relativity is from Newtonian physics.
  • Depends on the test. (Score:4, Interesting)

    by hey! (33014) on Wednesday February 10, 2010 @08:23PM (#31093198) Homepage Journal

    If the test is chess, then there are AIs that surpass the vast majority of the human race.

    If the test were, let's say, safely navigating through Manhattan using the same visual signs and signals that a pedestrian would, there isn't anything close to even a relatively helpless human being.

    If the test is understanding language, same thing. Ditto for cognitive flexibility, the ability to generalize mental skills learned in one situation to a different one.

    Of course many of these kinds of "tests" I'm proposing are very human-centric. But narrow tests of intelligence are very algorithm-centric. The narrower the test, the more relatively "intelligent" AI will be.

    Here's an interesting thought, I think. How long will it be before an AI is created that is capable of outscoring the average human on some IQ test -- given the necessary visual inputs and robotic "hands" to take the test? I don't think that's very far off. I wouldn't be surprised to see it in my lifetime. I'd be surprised to see a pedestrian robot who could navigate Manhattan as well as the average human in my lifetime, or who could take leadership and teamwork skills learned in a military job and apply them to a civilian job without reprogramming by a human.

  • by at_slashdot (674436) on Wednesday February 10, 2010 @08:43PM (#31093414)

    What I mean by that is that I haven't yet seen any sign of generic intelligence -- otherwise, if you consider programs that beat humans at chess "intelligent," that has already happened. But those programs cannot even solve a tic-tac-toe game, because they don't actually "understand" what's going on. They take some inputs, do some processing, and give you an output; if you vary the input and the problem, or if you expect a different type of output, the program would not know how to adjust, so I would not consider that "intelligent." Neural nets and artificial brains are another thing, but they are still at the very beginning.

    As for "superhuman intelligence," there might be some limit to intelligence. I don't mean memory and computation speed; I mean the understanding that if "A implies B" then "not B implies not A"... once an artificial brain understands that concept, there's not much more to understand about it.

  • It's getting closer (Score:5, Interesting)

    by Animats (122034) on Wednesday February 10, 2010 @09:08PM (#31093734) Homepage

    I dunno. But it's getting closer.

    A lot of AI-related stuff that used to not work is more or less working now. OCR. Voice recognition. Automatic driving. Computer vision for simultaneous localization and mapping. Machine learning.

    We're past the bogosity of neural nets and expert systems. (I went through Stanford when it was becoming clear that "expert systems" weren't going to be very smart, but many of the faculty were in denial.) Machine learning based on Bayesian statistics has a sound mathematical foundation and actually works. The same algorithms also work across a wide variety of fields, from separating voice and music to flying a helicopter. That level of generality is new.

    There's also enough engine behind the systems now. AI used to need more CPU cycles than you could get. That's no longer true.

  • Manna (Score:4, Interesting)

    by rdnetto (955205) on Wednesday February 10, 2010 @09:52PM (#31094350)

    adding that AI "is likely to eliminate almost all of today's decently paying jobs"

    Stories like this just keep reminding me of Manna [marshallbrain.com]. If this happens in my lifetime it's going to be an interesting time to be alive.

  • by Animats (122034) on Wednesday February 10, 2010 @11:22PM (#31095142) Homepage

    We're coming up on the date for Manna 1.0 [marshallbrain.com].

    Machines as first-line managers. It might happen. The coordination is better than with humans. Already, it's common for fulfillment and shipping operations to essentially be run by their computers, while humans provide hands where necessary.

    Machines should think. People should work.

  • by LordZardoz (155141) on Thursday February 11, 2010 @05:03AM (#31097224)

    When it comes to predicting the impact of a sentient AI on human civilization, there is never any shortage of alarmism. I am not an expert, but I am a programmer. And I believe three things to be true with respect to AI.

    1) Until we have a better understanding of why humans are sentient in the first place, we are probably not going to get any closer to recreating that phenomenon in a computer program.

    2) A Turing-test-passing AI is about as far off as the discovery of a room-temperature superconductor or a form of fusion suitable for large-scale power generation. We may be close, but probably not *that* close.

    3) I seriously doubt that any AI that we are going to be able to create with anything resembling current computer technology is going to have a thought process even close to our own.

    Think about it for a moment. Human intelligence is shaped as much by our five senses, our capability to create and understand language, our emotions, our ability to affect our surroundings and observe those effects, and our ability to communicate with one another as by our capability for logic and math. The factors that will shape an A.I. are so different as to create the possibility that a Human Intelligence and an Artificial Intelligence may not even be able to meaningfully communicate.

    Will the first sentient AI be hosted on a single computer, or will it be a gestalt effect encompassing the entire internet?
    Will the sentient AI be aware of time in anything even close to the way that we are?
    Will the sentient AI even be capable of 'wanting' anything, given that it will have no need for sleep?
    Will the sentient AI be able to comprehend the nature of its existence as a program, and be able to manipulate its own variables by choice?
    Will the sentient AI fear its own termination, or not really care knowing it can easily be reloaded?

    I would say that being threatened by a computer-based AI that is better able to perform 'intellectual work' is about as reasonable as being threatened by cheetahs because they are better at running really goddamn fast.

    I will admit that the idea of AIs eliminating paying jobs of a particular sort is an interesting problem to consider, but not that different from considering what will happen when we can create robots capable of performing all types of manual labour. Will that result in worldwide poverty, or worldwide prosperity à la Star Trek?

    END COMMUNICATION

  • by Phase Shifter (70817) on Thursday February 11, 2010 @06:32AM (#31097676) Homepage
    I'm trying to fathom how the ability to blend in to a group of hairless monkeys spamming "ASL?" on the internet is supposed to be construed as a valid measure of intelligence.
  • Some actual science (Score:4, Interesting)

    by Pedrito (94783) on Thursday February 11, 2010 @09:34AM (#31098634) Homepage
    Since this is an area I'm very familiar with, I'll throw in a little science about why these predictions are not only realistic, but actually probably a bit pessimistic.

    First of all, our understanding of the human brain has improved vastly in the past two decades, especially in the areas that will be necessary for creating intelligent machines. The cortex (the part that looks a bit like a round blob of small intestines, with all the creases and folds) is much like a computer with a bunch of processors. Previously, focus had been paid to the individual neurons as the processors, but a much larger unit of processing is now becoming the central area of focus: the Cortical Minicolumn [wikipedia.org], which, in groups, forms a Cortical Hypercolumn [wikipedia.org]. As minicolumns consist of 80-250 neurons (more or less, depending on region) and there are about 1/100th as many of them as neurons, this cuts down on complexity significantly.

    Numenta [numenta.com] and others are starting to take this approach in simulating cortex. Cortex is largely responsible for "thinking". The other parts of the brain can be seen, to some degree, as peripheral units that plug into the "thinking" part of the brain. For example, the hippocampus is a peripheral that's associated with the creation and recall of long term memories. The memories themselves, however, are stored in the cortex. We have various components that provide input, many of which send relays through the thalamus which takes these inputs of various types and converts them into a type of pattern that's more appropriate for the cortex and then relays those inputs to the cortex.

    The cortex itself is basically a huge area of cortical minicolumns and hypercolumns connected in both a recurrent and hierarchical manner. The different levels of the hierarchy provide higher levels of association and abstraction until you get to the top of the hierarchy which would be areas of the prefrontal cortex.

    What's amazing about the cortex is it's just a general computing machine and it's very adaptable. To give an example (I'd link the paper, but I can't seem to find it right now and this is from memory, so my details may be a bit sketchy, but overall the idea is accurate), the optic nerve of a cat was disconnected from the visual cortex at birth and connected to the part of the brain that's normally the auditory cortex. The cat was able to see. It took time and it certainly had vision deficits. But it was able to see, even though the input was going to the completely wrong part of the brain.

    This is important for several reasons, but the most important aspect is that the brain is very flexible and very adaptable to inputs. It can learn to use things you plug into it. That means you very likely don't have to create an exact replica of a human brain to get human-level intelligence. You simply need a fairly good model of the hierarchical organization and a good simulation of the computations performed by cortical columns. A lot of study is going into these areas now.

    It's not a matter of if. This stuff is right around the corner. I will see the first sentient computer in my lifetime. I have absolutely no doubt about it. Now here's where things get really interesting, though... The first sentient computers will likely run a bit slower than real-time and eventually they'll catch up to real time. But think 10 years after that (and how computing speed continually increases). Imagine a group of 100 brains operating at 100x real time, working together to solve problems for us. Why would they work for us? We control their reward system. They'll do what we want because we're the ones that decide what they "enjoy." So 1 year passes in our life, but for them, 100 years have passed. They could be given the task of designing better, smarter, and faster brains than themselves. In very little time (relatively speaking), the brains that will be
