Robotics Hardware Science

When Will AI Surpass Human Intelligence? 979

destinyland writes "21 AI experts have predicted the date for four artificial intelligence milestones. Seven predict AIs will achieve Nobel prize-winning performance within 20 years, while five predict that will be accompanied by superhuman intelligence. (The other milestones are passing a 3rd grade-level test, and passing a Turing test.) One also predicted that in 30 years, 'virtually all the intellectual work that is done by trained human beings ... can be done by computers for pennies an hour,' adding that AI 'is likely to eliminate almost all of today's decently paying jobs.' The experts also estimated the probability that an AI passing a Turing test would result in an outcome that's bad for humanity ... and four estimated that probability was greater than 60% — regardless of whether the developer was private, military, or even open source."
This discussion has been archived. No new comments can be posted.

When Will AI Surpass Human Intelligence?

Comments Filter:
  • When? (Score:4, Insightful)

    by Cheney ( 1547621 ) on Wednesday February 10, 2010 @07:45PM (#31092724)
    Never.
  • by Anonymous Coward on Wednesday February 10, 2010 @07:45PM (#31092728)

    I think we heard these exact same words 50 years ago.

  • by Gothmolly ( 148874 ) on Wednesday February 10, 2010 @07:46PM (#31092738)

    Say it ain't so! In other news, Coca-Cola released a statement that in 20 years, more people will be drinking Coca-Cola than there are drinking it now !1!!

  • by Monkeedude1212 ( 1560403 ) on Wednesday February 10, 2010 @07:46PM (#31092740) Journal

    and four estimated that probability was greater than 60%

    Of our incredibly small sample size of hand-picked experts, less than 25% think there is a probable chance! YOU SHOULD BE WORRIED!

  • No way. (Score:5, Insightful)

    by Bruce Perens ( 3872 ) * <bruce@perens.com> on Wednesday February 10, 2010 @07:48PM (#31092764) Homepage Journal

    Oh come on. I don't even have a computer that can pick up the stuff in my room and organize it without prior input; nobody does, and even that wouldn't be close to a general AI when it happens.

    They're really assuming that the technology will go from zero to sixty in 20 years. Which they assumed 20 years ago, too, and it didn't happen. Meanwhile, nobody has any significant understanding of what consciousness is. Now, it might be that a true AI computer doesn't need to be conscious, but we still don't know enough about it to fake it. We also have no system that can on demand form its own symbolic system to deal with a rich and arbitrary set of inputs similar to those conveyed by the human senses.

    Compare this to things that actually have been achieved: We had the mathematical theory of computation at least 100 years before there was a mechanical or electronic system that would practically execute it (Babbage didn't get his system built). We had the physical theory for space travel that far back, too.

    We know very little about how a mind works, except that it keeps turning out to be more complicated than we expected.

    So, I'm really very dubious.

  • by brunes69 ( 86786 ) <[slashdot] [at] [keirstead.org]> on Wednesday February 10, 2010 @07:54PM (#31092842)

    One might argue that the fact that the human species wastes so much money (and, as a consequence, resources) on fulfilling carnal desires rather than advancing its civilization suggests that we do not collectively represent a very high standard of intelligence.

  • Re:No way. (Score:3, Insightful)

    by MindlessAutomata ( 1282944 ) on Wednesday February 10, 2010 @07:56PM (#31092872)

    Meanwhile, nobody has any significant understanding of what consciousness is.

    Only if you want to cling to silly quasi-dualistic Searle-inspired objections towards functionalism.

    Most objections to functionalism, when applied to the brain, either end up arguing that the brain itself doesn't/can't "create" consciousness (or better put, "form" consciousness), or are just commonsense gut-feeling responses to functionalism. You may feel free to keep thinking in terms of "souls" and "something more to humanity than just the flesh and neural machinery."

    The consciousness "debate" will never be settled (or rather, widely agreed upon), because the answer just doesn't mesh intuitively with human introspection. Many people cling to the basic concept of "souls," at least on an intuitive level, which is why we have nonsense like Chalmers's p-zombies muddying up the discussion.

  • Re:I call FUD! (Score:2, Insightful)

    by garyisabusyguy ( 732330 ) on Wednesday February 10, 2010 @07:56PM (#31092880)

    I am certain that another group of 'experts' said the same thing in 1980

  • Not to worry (Score:3, Insightful)

    by Anonymous Coward on Wednesday February 10, 2010 @07:57PM (#31092884)

    AI research started in the 1950s. Considering how "far" we've come since then, I don't think we should expect any sort of general artificial intelligence within our lifetimes.

    People are doing great stuff at "AI" for solving specific types of problems, but whenever I see something someone is touting as a more general intelligence, it turns out to be snake oil.

  • Life after AI (Score:1, Insightful)

    by TwiztidK ( 1723954 ) on Wednesday February 10, 2010 @07:58PM (#31092896)
    When the computers are doing all of the intellectual work, what will people do? I doubt that factory jobs would be prevalent, as the employees would be replaced by robots. Will we simply laze about all day posting on Slashdot? Or will our robot overlords kill all of us? It seems like the easy solution would be not to develop advanced AI; it's not going to develop itself... yet.
  • Definitions (Score:5, Insightful)

    by CannonballHead ( 842625 ) on Wednesday February 10, 2010 @07:58PM (#31092900)

    Please define "intelligence."

    Calculation speed? An abacus was smarter than humans.

    Memory? Not sure who wins that.

    Ingenuity? Humans seem to rule on this one. I don't know if I count analyzing every single possible permutation of outcomes as "ingenuity." And I'm not sure we really understand what creativity, ingenuity, etc., really are in our brains.

    Consciousness? We can barely define that, let alone define it for a computer.

    It seems most people think "calculation speed and memory" when they talk about computer "intelligence."

  • Not serious (Score:1, Insightful)

    by Anonymous Coward on Wednesday February 10, 2010 @08:00PM (#31092908)

    Should we remember all of the unrealized promises of AI from the 1950's? What makes anyone believe in these baseless claims? If anything, in 20 years they'll give us a better spam filter. Give me a break..

  • Skewed sample (Score:5, Insightful)

    by Homburg ( 213427 ) on Wednesday February 10, 2010 @08:03PM (#31092956) Homepage

    The problem is, this isn't a survey of "AI experts," it's a survey of participants in the Artificial General Intelligence conference [agi-conf.org]. As far as I can see, this is a conference populated by the few remaining holdouts who believe that creating human-like, or human-equivalent, AIs is a tractable or interesting problem; most AI research now is oriented towards much more specific aspects of intelligence. So this is a poll of a subset of AI researchers who have self-selected along the lines that they think human-equivalent AI is plausible in the near-ish future; it's hardly surprising, then, that the results show that many of them do in fact believe human-equivalent AI is plausible in the near-ish future.

    I would be much more interested in a wider poll of AI researchers; I highly doubt anything like as many would predict Nobel-prize-winning AIs in 10-20 years, or even ever. TFA itself reports a survey of AI researchers in 2006, in which 41% said they thought human-equivalent AI would never be produced, and another 41% said they thought it would take 50 years to produce such a thing.

  • Re:I call FUD! (Score:2, Insightful)

    by sictransitgloriacfa ( 1739280 ) on Wednesday February 10, 2010 @08:04PM (#31092960)
    No mention, of course, of the new jobs it will make possible. How many web designers were there in 1990? How many airline pilots in 1940?
  • What is AI anyway? (Score:3, Insightful)

    by Sark666 ( 756464 ) on Wednesday February 10, 2010 @08:05PM (#31092988)

    To me the key word is artificial: depending on your interpretation, it could mean simply man-made, or it could mean fake, simulated.

    Does Deep Blue show any intelligence? To me, that's just good programming. I think the intelligence of computers is a misnomer. Their intelligence so far has always been nil. Maybe that'll change; in most areas of technology I'm an optimist, but in this regard I'm a pessimist, or at least very skeptical.

    A computer can't even pick a (truly) random number without being hooked up to a device feeding it random noise.

    How do you program that? How does the brain choose a random number? What's holding us back? CPU Speed? Quantum computing? A brilliant programmer?

    Wake me up when a computer can even do something as simple as pick a truly random number and I'll be impressed.
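
    For what it's worth, here's a minimal sketch of that distinction (my own illustration, not from the article): a seeded pseudorandom generator is fully deterministic and just replays the same sequence, while os.urandom asks the operating system's entropy pool, which is fed by physical noise, the "device" the parent mentions.

    ```python
    # Pseudorandomness is deterministic: the same seed replays the same "choices".
    import os
    import random

    rng1 = random.Random(42)
    rng2 = random.Random(42)
    print([rng1.randint(0, 9) for _ in range(5)])
    print([rng2.randint(0, 9) for _ in range(5)])  # identical sequence

    # os.urandom draws on the OS entropy pool (interrupt timings, hardware
    # noise): the closest a stock PC gets to a "truly" random pick.
    print(os.urandom(4).hex())
    ```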

  • AI first (Score:5, Insightful)

    by HaeMaker ( 221642 ) on Wednesday February 10, 2010 @08:06PM (#31092998) Homepage

    The most likely scenario is AI that develops fusion and holographic storage.

  • by Anonymous Coward on Wednesday February 10, 2010 @08:09PM (#31093020)

    When will AI be able to write original jokes that can make people laugh? And how about scripting a funny TV commercial?

  • by CharlyFoxtrot ( 1607527 ) on Wednesday February 10, 2010 @08:10PM (#31093028)

    And not the wrong kind [mtd.com], either.

    Hey don't knock it. If more people wanted some panda-burger, there'd be a lot more of them.

  • by CosmeticLobotamy ( 155360 ) on Wednesday February 10, 2010 @08:10PM (#31093036)

    I'd say "screw it, if it's going to take my job, and jobs of my friends, family and all my descendants, I'm making it a complete dimwit and swearing by all I know that it was impossible to design otherwise, and putting that in every single book and publication on the topic!"

    If you have AI smart enough to outsmart people, you probably have something that can learn to control some fairly simple mechanical parts that look like legs and maneuver them based on cheap sensor input and a couple of cameras. So you have robots, who will pretty quickly get the ability to build and maintain themselves. Which means your manual labor jobs go away, too. Which means things like food and raw materials drop to approach the cost of energy. Luckily we'll have some pretty swell solar panels by then, for much cheaper than today, and probably be pretty close to fusion. As energy costs approach zero, the cost of everything in the world approaches zero and requires no human oversight. Everyone will be unemployed and own 40 houses. We can all sit around making YouTube videos of ourselves singing in the hopes that we'll get famous so people will want to have sex with us. It'll be boring, but it won't be the worst thing in the world.

  • Start laughing now (Score:5, Insightful)

    by GWBasic ( 900357 ) <{moc.uaednorwerdna} {ta} {todhsals}> on Wednesday February 10, 2010 @08:11PM (#31093044) Homepage

    I occasionally attend AI meetings in my local area. The problem with AI development is that too many "experts" don't understand engineering or programming. Many of today's AI "experts" are really philosophers who hijacked the term AI in their search to better understand human consciousness. Their problem is that, while their AI studies might help them understand the human brain a little better, they are unable to transfer their knowledge about intelligence into computable algorithms.

    Frankly, a better understanding of Man's psychology brings us no closer to AI. We need better and more powerful programming techniques in order to have AI, and philosophizing about how the human mind works isn't going to get us there.

  • by badboy_tw2002 ( 524611 ) on Wednesday February 10, 2010 @08:12PM (#31093072)

    Yeah, they're totally biased because they're trying to sell AI! It's not like they're experts in their fields with in-depth, up-to-date knowledge of exactly what their peers are researching and of progress in the most promising areas. I think the better way to get an accurate, unbiased answer to both questions is to ask the Coca-Cola people about AI and the AI people about Coke!

  • Re:Definitions (Score:3, Insightful)

    by Chicken_Kickers ( 1062164 ) on Wednesday February 10, 2010 @08:12PM (#31093076)
    Agreed. And what do they mean by "Nobel-Prize level achievement"? As if it were some sort of level-up where, after accumulating enough experience points, you glow and gain new powers. Scientific research is not how it is portrayed in movies and games. There are no research points; increasing the number of researchers or pouring money into it won't necessarily do anything. There are elements of chance, good fortune and serendipity. The discovery of antibiotics comes to mind: Alexander Fleming noticed that some old cultures contaminated with the Penicillium fungus appeared to inhibit the growth of bacteria. Would a machine be able to make this discovery? There are historical forces, political factors, even the personalities of the researchers themselves. Machines can do all the tedious work, collect data and run analyses on it. But it still takes the human mind to make sense of the data and infer meanings and applications from it, sometimes far from the original project objectives.
  • Re:Let's see. (Score:5, Insightful)

    by westlake ( 615356 ) on Wednesday February 10, 2010 @08:12PM (#31093084)
    To play off a famous Edsger Dijkstra quote, the question of when AI will surpass human intelligence is just about as interesting as asking when submarines will swim faster than fish...

    It matters to the fish who have to share the water with this new beast.

  • Re:When? (Score:5, Insightful)

    by Anonymous Coward on Wednesday February 10, 2010 @08:24PM (#31093224)

    What's with all the pessimism? Strong AI is a matter of inevitability. If nothing else, simulations of the human brain accurate down to the individual neuron could easily achieve this, even if it requires substantially more powerful computers than we have now. This would be the brute force method, and I don't doubt that eventually our understanding of cognition and intelligence will advance to the point where we will be able to build thinking computers.

    Will it happen any time soon? Absolutely not. But I think it's a little short sighted to say that we'll NEVER develop such technology.
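
    To make the brute-force idea concrete, here is a toy sketch of the kind of unit such a simulation would replicate billions of times: a leaky integrate-and-fire neuron, one of the simplest standard point-neuron models. (The constants are illustrative assumptions; serious whole-brain proposals use far richer models.)

    ```python
    # Minimal leaky integrate-and-fire neuron. A brute-force brain simulation
    # would step ~10^11 of these (plus synapses) in parallel. Illustrative only.
    def simulate_lif(input_current, dt=1.0, tau=20.0,
                     v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
        """Return spike times (ms) given one input current value per time step."""
        v = v_rest
        spikes = []
        for step, i_in in enumerate(input_current):
            # Membrane potential leaks toward rest while integrating input.
            v += dt * (-(v - v_rest) + i_in) / tau
            if v >= v_thresh:          # threshold crossed: spike and reset
                spikes.append(step * dt)
                v = v_reset
        return spikes

    print(simulate_lif([20.0] * 200))  # steady drive -> a regular spike train
    ```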

  • Re:No way. (Score:1, Insightful)

    by Anonymous Coward on Wednesday February 10, 2010 @08:32PM (#31093310)

    While I agree that we've moved beyond quasi-dualistic anti-functionalism arguments, I think that the question of "What is first-person conscious experience, and what is needed to generate it?" is still a worthwhile question, and one that I feel like functionalism sidesteps rather than approaches.

    Flesh-plus-neural-machinery or not, it seems we still don't have a good idea why consciousness does the things that it does, nor do we have a good working definition of what it is, and both of those seem like important parts of figuring out whether things other than humans (or, at the extreme, yourself) are conscious or not, and by extension whether external actors are human-level or not.

  • by at_slashdot ( 674436 ) on Wednesday February 10, 2010 @08:43PM (#31093414)

    What I mean by that is that I haven't yet seen any sign of generic intelligence; otherwise, if you consider programs that beat humans at chess "intelligent," that has already happened. But those programs cannot even solve a tic-tac-toe game, because they don't actually "understand" what's going on. They have some inputs, some processing, and they give you an output; if you vary the input and the problem, or if you expect a different type of output, the program would not know how to adjust, therefore I would not consider it "intelligent." Neural nets and artificial brains are another thing, but they are still at the very beginning.

    "superhuman intelligence" there might be some limit to intelligence, I don't mean memory and computation speed, I mean the understanding that if "A implies B" then "non B implies non A"... once an artificial brain understands that concept there's not so much more to understand about it.

  • by dpilot ( 134227 ) on Wednesday February 10, 2010 @08:43PM (#31093420) Homepage Journal

    I've had a share in the creation of two N.I.s

    They don't do spit when you first turn them on - that takes a few days, and then it smells like sour milk.

    It takes about 2 years to start getting intelligible words out of them.

    It takes between 10 and 20 years before you can start consistently having an adult-level conversation with them.

    I have no idea when one of them could have really passed a Turing test. (FYI, they both passed that point many years ago.)

    I'm being a little facetious, but not entirely. Let's assume we're building these neural nets, modeled after real brains. Why should we expect them to spring like Athena from Zeus' head, fully adult and fully Turing-capable? There's a phrase, "only a mother could love." I have a gut feeling that any AI that takes too much after organic brains is going to take the long path to being recognizable as intelligence, just like us. Maybe not as long as us, but clearly not at power-on time, either. Maybe longer, even. My wife spent hours playing with and talking to our infant children, even before they were equipped to return it. But it was part of what gave them something to model, part of their learning how to be like us. Who is going to do that with a hardware/software experiment? Will the software have the right hardware to let it experience that? Will it be more like an intelligence in a state of sensory deprivation?

  • by Eskarel ( 565631 ) on Wednesday February 10, 2010 @08:45PM (#31093434)

    They're not totally biased because they're trying to sell us AI, they're totally biased because they want grant money.

    The problem with AI is that the world believes the goal of AI is to create Data from Star Trek TNG (or maybe C-3PO for the older crowd). This is the yardstick by which they measure the progress of AI. It doesn't matter that computers are more and more capable of doing tasks, and even growing capable to some degree of working out what they should do on their own (within certain very limited bounds); they aren't self-aware and able to talk to me, so AI is a failure.

    This means that AI experts have to upsell the possibility of this happening to keep getting grant money from people who don't understand what they do.

    Now the reality of the situation is that at present we still don't have the computational density in our computers to create something which can even correctly process things like vision, let alone all five senses to create something that can perceive the world in a way remotely similar to the way we do. While it might be possible to create some alien form of intelligence totally unlike our own without having any of these inputs, it wouldn't pass most of the milestones being presented here, let alone be able to take over for actual humans in any kind of job which requires any kind of creativity.

    The AI experts know this, they most likely also know that creating super human intelligence, aside from any inherent risks, isn't really all that beneficial. The problem is that they also know that 20 years is the answer the grant committees want to hear.

  • same old... (Score:1, Insightful)

    by metageek ( 466836 ) on Wednesday February 10, 2010 @08:46PM (#31093450)

    I think the OP found some news piece from the 1960s and decided to recycle it...

    If I were to make such a prediction, I would say 60 years; then even if it didn't happen, I would not be around to be shamed.

    AI working? Nooooo! (and I'm in a machine learning research group)

  • Re:AI first (Score:5, Insightful)

    by Captain Splendid ( 673276 ) <capsplendid@nOsPam.gmail.com> on Wednesday February 10, 2010 @09:05PM (#31093686) Homepage Journal
    Nothing moves us backwards faster than progress.

    I'm sure that sounded smart and catchy when you came up with it, but it doesn't really follow the line of reasoning you set out in the previous paragraphs.
  • Re:AI first (Score:5, Insightful)

    by Cassius Corodes ( 1084513 ) on Wednesday February 10, 2010 @09:05PM (#31093696)
    Go back 100 years. Live for 10 days. Come back and apologise.
  • by Chris Burke ( 6130 ) on Wednesday February 10, 2010 @09:13PM (#31093816) Homepage

    Yeah? And when's the last time a Coca-Cola representative estimated the odds of catastrophe for the human race as a result of their product at 60%?

  • Re:Let's see. (Score:3, Insightful)

    by zeroRenegade ( 1475839 ) on Wednesday February 10, 2010 @09:14PM (#31093820)
    Awesome quote. The stuff people imagine is hysterical. For a robot to have free will, that will must be given to it by humans, so in essence it is not free at all. If robots are evil, it is because people are inherently evil and program them to think methodically instead of compassionately. It is easy to program the functionality of a human mind, but its behavior will never be fully understood. Computers are already superhuman in many ways, but composing music, writing classic literature, cooking lavish meals: it will never ever happen. Keep dreaming, dreamers. Any creativity a robot contains would have come from our own instruction.
  • Re:No way. (Score:3, Insightful)

    by Chris Burke ( 6130 ) on Wednesday February 10, 2010 @09:22PM (#31093940) Homepage

    Not to take anything away from their research, but "modeled something with as many neurons and connections as half a mouse brain" isn't really the same as "modeled half a mouse brain". Not in the sense that you could replace that half of a mouse's brain with the simulation and it would act the same. Having some simple aspects of the simulation behave in similar ways to how biological brains behave isn't the same as duplicating the functionality, as they admit.

    That said, I've long felt that brute force simulation of human brains is our best bet for actually achieving AI, since progress is so abysmal on the algorithms side. But there's more to it than just taking that mouse-brain-sized neural network and waiting for Moore's Law to scale it up to human size.

  • Re:When? (Score:5, Insightful)

    by Traa ( 158207 ) on Wednesday February 10, 2010 @09:39PM (#31094164) Homepage Journal

    Looking at predictions that did not come true is interesting, but not half as interesting as looking at things that came true without being predicted. Even fairly recently:
    - the internet
    - social networking
    - smart phones
    - open source projects

    Though some of those might have been predicted in some form, it was typically without any prediction of the impact those things would have on society.

  • by wjc_25 ( 1686272 ) on Wednesday February 10, 2010 @09:55PM (#31094386)
    I would argue that their bias is a little more subtle. Yes, they want grant money - who can blame them? - but on a deeper, perhaps unconscious level, they want to be important. Everyone does. And what makes an AI expert important? The idea that AI is going to take over the world, cause a huge impact, etc. So we have this idea that AI will be the equal of the human brain some time soon even though the neuroscientists (for all their talk, which has similar motivation) still don't understand quite how the brain works.

    It's all well and good to say the brain is a finite object that can be emulated. But a fruit fly is also a finite object, and one that's a hell of a lot smaller than a brain, and we're far from emulating one of those.
  • Re:No way. (Score:3, Insightful)

    by Dahamma ( 304068 ) on Wednesday February 10, 2010 @10:21PM (#31094626)

    The problem is a byte of RAM has nothing to do with a synapse - a synapse is NOT like a transistor.

    A single synapse can be an amazingly complicated biochemical construction, made up of different receptors, neurotransmitter vesicles, ion pumps/channels, etc., all potentially modified or controlled by various other enzymes, hormones, or other molecules that influence the process through a whole range of different interactions. And that doesn't even include the fact that synapses can interact with each other in various ways as well; the structure is critical, and not representable in a *byte*.

    It could require megabytes or more to model each synapse. That's exabytes (or more?) of data. That's a good 100 years of capacity doubling every 18 months. A bit further out than "pretty soon".
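
    A back-of-envelope version of that estimate (every constant below is a rough order-of-magnitude assumption, and the final figure swings by decades depending on the baseline and per-synapse cost you pick):

    ```python
    # Rough arithmetic behind the storage claim above. All constants are
    # common order-of-magnitude guesses, not measurements.
    import math

    neurons = 1e11              # ~10^11 neurons in a human brain
    synapses_per_neuron = 1e4   # ~10^4 synapses per neuron
    bytes_per_synapse = 1e6     # the "megabytes or more" figure above

    total = neurons * synapses_per_neuron * bytes_per_synapse
    print(f"{total:.0e} bytes")          # 1e+21: about a thousand exabytes

    # Doublings needed starting from a 1 TB disk, one doubling per 18 months:
    doublings = math.log2(total / 1e12)
    print(f"{doublings:.0f} doublings ~ {1.5 * doublings:.0f} years")
    ```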

  • Re:No way. (Score:3, Insightful)

    by Toonol ( 1057698 ) on Wednesday February 10, 2010 @10:28PM (#31094690)
    Right. I don't think we COULD model a single synapse accurately right now. I doubt we have a good enough understanding. A little while ago I was reading a theory that there are quantum effects that play into some of the interactions. I don't know if that's true, but it certainly COULD be. If it exists, there's no reason evolution wouldn't make use of it.

    Now, complete simulation of a neuron may not be essential for modeling a structure made of neurons; you don't need to completely model each star to model the evolution of a galaxy, for instance. Still, I would bet that a brain involves far more, and more subtle, interactions than any galaxy.
  • by DahGhostfacedFiddlah ( 470393 ) on Wednesday February 10, 2010 @11:03PM (#31095002)

    If it doesn't have emotions and we mistreat it then it will logically see us as a threat to its own survival and try to eliminate us.

    I agree with many of your sentiments, but I think they're still too anthropocentric. We evolved in an environment where survival was very nearly the prime directive (just after "pass along your genes"). Strong AI will be developed in a lab. We could create the "smartest" computer in the world, but who would feed it goals, and what lengths would it go to to achieve those goals?

    If an AI is tasked with finding a Theory of Everything, and someone decides to take an axe to its circuits, will it determine that the axe is a threat to its goal, and act accordingly? Or will it simply interpret it as another in a long series of alterations to its circuits? Or perhaps it will ignore it altogether, considering it irrelevant.

    Because ultimately, those options were programmed in by a human. Our strong AIs, the first ones at least, aren't going to be independent life forms with their own dreams and desires. They will be tools to help us solve problems, and I think they will be well understood by many, many computer scientists. When something unexpected happens, the program will be debugged and altered to prevent the unexpected behaviour.

    If there is a robot apocalypse, it won't be because we didn't treat our creations right, but because some 13-year-old hacker in Russia said "I wonder what happens if I do this".

  • Re:Space shows (Score:1, Insightful)

    by Anonymous Coward on Wednesday February 10, 2010 @11:05PM (#31095024)

    Do you humor chimpanzees for leading to our evolutionary branch? Sure we keep a few in zoos, but what about all the others whose habitat we regularly destroy for our own selfish needs? Do you think an AI would act differently? Is there some threshold of intelligence where suddenly you care about humoring ants, or does nearly everyone not pay them any attention and try to eradicate them if they get in the way or become pests?

  • Re:When? (Score:3, Insightful)

    by Gorphrim ( 11654 ) on Wednesday February 10, 2010 @11:41PM (#31095298)
    "simulations of the human brain accurate down to the individual neuron could easily achieve this"

    aye, there's the rub
  • Re:AI first (Score:5, Insightful)

    by Cassius Corodes ( 1084513 ) on Wednesday February 10, 2010 @11:55PM (#31095402)
    100 years ago, as an average person you could not possibly earn enough to go anywhere. Today you can; you just have to get a visa for some countries (not many as a US citizen), and not only that, but you can travel around the world in a day and much more cheaply.

    100 years ago, in many countries the majority couldn't read or vote, and many had very few rights. Racism, moralism and sexism were rampant. Not to mention you wouldn't have time to do much, as you would be working 10-12 hours a day, 6 days a week. If you were poor, the rule of law was mostly a joke.

    Food was healthier? You have to be kidding. No freezing and no preservatives doesn't mean a hippy paradise; it means your diet was limited to what could be grown near you, and even that was often half-spoilt.

    While I have my own reservations about the state of education today, you cannot seriously be suggesting that the average person was smarter or more informed 100 years ago.
  • Re:AI first (Score:3, Insightful)

    by Cassius Corodes ( 1084513 ) on Thursday February 11, 2010 @12:00AM (#31095448)
    I have to assume you have not read a lot of history to say things like this. I cannot think of one area where we were more advanced 100 years ago. People often idealise what life was like in the past because they have trouble imagining life without all the things we take for granted today. If you went back in time 100 years you would feel so out of place as if you were from a different planet.
  • Re:AI first (Score:1, Insightful)

    by Anonymous Coward on Thursday February 11, 2010 @12:04AM (#31095474)

    Actually, it does.

  • Re:When? (Score:4, Insightful)

    by localman ( 111171 ) on Thursday February 11, 2010 @12:20AM (#31095610) Homepage

    It very well might be never, as there seems to be an enormous misunderstanding of what intelligence is, and how it can be used.

    Consider a computer that is just as powerful as the human mind -- orders of magnitude more powerful than any computer today. What do you do with it? You have to teach it. And we _suck_ at teaching. We have 6 billion human-level supercomputers in the world right now, with another 300,000 arriving daily, and we have no idea what to do with them. What is one more, made of silicon, going to offer us?

    Intelligence isn't just some simple value like tensile strength. It's about modeling and remodeling the world, drawing distinctions between similar things, seeing similarities where things are distinct, assigning values... things that are not straightforward and measurable. Anything simpler than that has already been achieved by current computers. For useful intelligence beyond that, there's usually not even clear right and wrong answers, only different results because of different models and values. Crank up the processing power by a factor of 10 (i.e. the power of an efficiently communicating ten human team) and you still don't have anything useful unless it has a very accurate model of the world. And why would it have a better model than a well chosen group of humans?

    I don't know, I'm kind of disappointed by what seems like significant naivety in AI research. I know there is some impressive work being done, but it seems like a lot of the talk in articles like this is a bunch of sci-fi induced Pavlovian foolishness.

  • Re:AI first (Score:4, Insightful)

    by iserlohn ( 49556 ) on Thursday February 11, 2010 @12:22AM (#31095622) Homepage

    The people who nearly bankrupted the world are far from undereducated simpletons. Most, in fact, were educated at the most prestigious institutions of higher learning. Then they joined the citadels of greed, with some select institutions transforming them into the "masters of the universe".

    Tragically, the undereducated simpletons support them and vote for them against their own self interest.

  • Re:Let's see. (Score:3, Insightful)

    by Idiomatick ( 976696 ) on Thursday February 11, 2010 @03:33AM (#31096790)
    The point was that subs move faster than fish but they don't swim. Similarly, computers do tons and tons of things faster than people but don't think. (That's how I read it, anyway.)
  • Re:When? (Score:5, Insightful)

    by mikael_j ( 106439 ) on Thursday February 11, 2010 @03:34AM (#31096800)

    Excuse me, but are you saying "Strong AI can never happen because it conflicts with my personal superstitions."? Because that sure is what I'm seeing when I read your post...

    (btw, you presented what I suppose you could call a hypothesis, that somehow there is an immaterial "higher" part to human consciousness; now please give some supporting evidence which isn't either in an ancient collection of tribal stories or based upon interpretations of reality based on said collection of stories.)

    /Mikael

  • Re:When? (Score:5, Insightful)

    by phoenix321 ( 734987 ) * on Thursday February 11, 2010 @04:49AM (#31097186)

    The poor could stop having that many children, now that we have drastically reduced childhood mortality through oodles of foreign aid. But they won't listen, they keep having more and more.

    Scarce resources are scarce, and the more people competing for them, the more fierce this will become.

    What will the comparatively rich West do with the comparatively poor South that is multiplying rapidly to become even poorer per capita every minute?

    What will you do with all those 3,000,000 tons of copper left? Will you distribute them equally, so every person gets several grams of it and people can accumulate more by having even MORE children? Will you increase the price of it, so only the loathed rich Whites can have it? Will you tax it to hell, so the bastardly rich Whites cannot waste it?

    This is the ultimate test of character:

    You have X billion people, but only YX pieces/grams/barrels of bread/copper/gold/oil.

    No matter what you do, it will be too few of that resource to make do for everyone. What allocation mode do you choose?
    You have to allocate it fairly, or people will torch the palace. It will have to be manageable, or your civil servants will eat all the benefits of that mode. It will have to be sustainable, or systemic problems will bring the allocation out of balance. It will have to be successful, or competing nations with a different allocation mode will wipe the floor with your crumbled economy.

    How will you distribute it then?

    Evenly Per Head (=Communism),
    (no one will have enough of the resource to get anything out of it, people will breed like rabbits to have more allocated to their family, family structures will hollow out that style within 20 years, see China, People's Republic Of until 1980; Germany, Federal Republic of, and Kingdom, United since 1985: Welfare-Queening increased twentyfold, mass immigration transforming the country faster than the World War, half the babies born in welfare-stratum)

    Centrally Planned For A Country The Size Of Two Continents (=Socialism),
    (Your civil servants will allocate the most of it for themselves and their family. Black market and family structures will then supersede central authority, see Union, Soviet)

    By Market Price Through Greedy Amoral Stock Exchanges (=Capitalism),
    (Evil, evil, evil, evil, evil. Will let poor children die, will have people working for MONEY their whole life, bah)

    For Your Race Only, Führer Decides Where (=National Socialism),
    (The Party is of course allocating all the resources to itself, but that doesn't matter since everyone is also a Party Member. Will breed like rabbits due to the idea of strengthening the Master Race gene pool through "Kinder für den Führer" aka Mutterkreuz. Will balance for a while as they exterminate millions of their minorities, their unwanted, but then they have a million soldiers and nothing left to eat. Has then the Hobson's choice of attacking Russia in winter, letting Russia attack the next summer, or collapsing under their excess male children. The military inventions skyrocket, literally and otherwise; the rest is dark. Until the Russians come, then it becomes darker.)

    For Those In Power And Their Most Noble Sons (=Monarchy)
    (Worked for several centuries, and now we know why: the Master/Slave philosophy beats and outsmarts the Tit-for-Tat strategy every time. Will become awkward when millions of people are sent to kill each other after Monarch A insulted Monarch B. Works quite a while, but those living in dirt-poor conditions will attract horrible diseases that kill a third of the population, including large parts of the Royal Family. The illusion of high-borne-ness is hard to uphold after that, so the rabble drives you out to make a new choice of allocation mode.)

    Choices, choices, my dear readers.

    To make it more succinct and the implications crystal-clear and razor-sharp: you control a small town that has acquired a rapidly fatal disease. The town has 3000 inhabitants but only 2000 vaccine doses in store. There is no

  • Re:AI first (Score:4, Insightful)

    by Xest ( 935314 ) on Thursday February 11, 2010 @04:59AM (#31097212)

    I do generally agree with you, but I can see why some would say that some things were worse.

    Music, dance and other cultural elements weren't commercialized things that you could be sent to jail for copying.

    More prominently, though, I would argue the issue of sexuality has actually become far less liberal in recent times. The age of consent has arrived and gotten ever higher in some countries, and homosexuality was much more widely accepted historically than it is now.

    Also, people were generally healthier because they didn't have cars, didn't have TVs and so forth.

    Really, it depends on your viewpoint, whilst the age of consent is a good thing in protecting young children, it's a clear form of oppression in countries where it's as high as 21, or even arguably 18. Similarly, I suppose all the homophobes in the world might prefer things now, but certainly I'd argue a less liberal world in this respect is a bad thing.

    Oh, and my country still had the largest empire on Earth back then too.

    Okay, okay, I was only kidding about the last one; that certainly wasn't a good thing for many people living under it!

  • by LordZardoz ( 155141 ) on Thursday February 11, 2010 @05:03AM (#31097224)

    When it comes to predicting the impact of a sentient AI on human civilization, there is never any shortage of alarmism. I am not an expert, but I am a programmer. And I believe three things to be true with respect to AI.

    1) Until we have a better understanding of why humans are sentient in the first place, we are probably not going to get any closer to recreating that phenomenon in a computer program.

    2) An AI that can pass the Turing test is about as far off as the discovery of a room-temperature superconductor or a form of fusion suitable for large-scale power generation. We may be close, but probably not *that* close.

    3) I seriously doubt that any AI that we are going to be able to create with anything resembling current computer technology is going to have a thought process even close to our own.

    Think about it for a moment. Human intelligence is shaped as much by our five senses, our capability to create and understand language, our emotions, our ability to affect our surroundings and observe those effects, and our ability to communicate with one another as it is by our capability for logic and math. The factors that will shape an A.I. are so different as to create the possibility that a human intelligence and an artificial intelligence may not even be able to meaningfully communicate.

    Will the first sentient AI be hosted on a single computer, or will it be a gestalt effect encompassing the entire internet?
    Will the sentient AI be aware of time in anything even close to the way that we are?
    Will the sentient AI even be capable of 'wanting' anything, given that it will have no need for sleep?
    Will the sentient AI be able to comprehend the nature of its existence as a program, and be able to manipulate its own variables by choice?
    Will the sentient AI fear its own termination, or not really care knowing it can easily be reloaded?

    I would say that being threatened by a computer-based AI that is better able to perform 'intellectual work' is about as reasonable as being threatened by cheetahs because they are better at running really goddamn fast.

    I will admit that the idea of AIs eliminating paying jobs of a particular sort is an interesting problem to consider, but not that different from considering what will happen when we can create robots capable of performing all types of manual labour. Will that result in worldwide poverty, or will it result in worldwide prosperity à la Star Trek?

    END COMMUNICATION

  • spooky prescient (Score:3, Insightful)

    by epine ( 68316 ) on Thursday February 11, 2010 @05:12AM (#31097256)

    Which three men in a tub assumed 20 years ago, too, and it didn't happen.

    The first rule of thumb is never to believe a prediction by anyone who writes grant applications for a livelihood, which covers most living scientists.

    Computers will acquire a patchwork of amazing abilities over the next three decades. I'm not sure it's particularly useful to measure this against a three year old. Right now we're further along on "fly airplane" than "tie shoes". If there was a Turing test to declare whether a task is simple or not, humans would fail.

    A Google data center with 100,000 CPU nodes is already pretty far up the cognitive scale, but it's not a form of cognition we've bothered to define as such. The most important intelligence will be assisted intelligence: what humans accomplish in collaboration with their tools. The tools will become increasingly amazing, at first on a patchwork basis, and then the seams will become increasingly unclear.

    Right now social networking sites predict what we might find interesting on fairly trivial low-dimensional criteria. Netflix must be the all-time champion of the drunken I-fought-with-my-wife-tonight 1-5 rating. Could the data set possibly be less rich or more corrupt? And already we squeeze something out. Just wait until the computers know everything about us and the ability of the computer/network to anticipate our cognitive whims becomes spooky prescient.

    On another front, some of the fruits of neurology are now coming on line. I have no idea whether this stuff works or not. Typical how we trip over our own shoelaces, trying to get speech recognition to work *before* mastering auditory grouping, which strikes me as far more fundamental.

    From Audience [audience.com] based on research by Lloyd Watts

    Audience is the first company to deliver a commercial product based on the science of [a]uditory [s]cene [a]nalysis, which entails the grouping of components in a complex mixture of sound into sources. Just as the human auditory system can readily ignore background noises while focusing on a voice of interest, [our stuff achieves] noise suppression up to 30 dB for both stationary and non-stationary noise sources to provide [adjective of awesomeness] voice quality within even the [pertinent superlative].

  • by mcvos ( 645701 ) on Thursday February 11, 2010 @06:19AM (#31097588)

    Just going to point out that we already have holographic storage. There are just no commercial products that do it yet. Contrast with fusion and Turing-test-passing AI.

    We already have fusion. It just costs more energy than it generates. It's just a matter of increasing the efficiency a bit more. Contrast that with Turing test passing.

  • Re:When? (Score:5, Insightful)

    by icebraining ( 1313345 ) on Thursday February 11, 2010 @06:31AM (#31097670) Homepage

    The poor could stop having that many children, now that we have drastically reduced childhood mortality through oodles of foreign aid. But they won't listen, they keep having more and more.

    Religious leaders telling them that birth control is evil doesn't help either.

  • by Alomex ( 148003 ) on Thursday February 11, 2010 @06:34AM (#31097688) Homepage

    What else do you expect a computer to do if not execute an algorithm? It's a *computer*. It *computes*. The question is whether a large enough collection of such simple algorithms interacting with each other will create the illusion of intelligence. When it comes to chess the answer is yes, so much so that Kasparov insisted Deep Blue was being fed moves by a human.

    You speak of a mythical "intelligence" when for all we know we ourselves might well be a collection of simple algorithms being executed in our heads.
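
    As a toy illustration of that point, a few dozen lines of plain minimax search, with no "understanding" anywhere in them, already play perfect tic-tac-toe; a chess engine is the same idea plus evaluation heuristics, pruning, and a great deal of hardware. (A sketch of the general technique, obviously not Deep Blue's actual code:)

    ```python
    # Plain minimax: a simple mechanical algorithm whose play looks "intelligent".
    WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
                 (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

    def winner(board):
        for a, b, c in WIN_LINES:
            if board[a] != " " and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def minimax(board, player):
        """Return (score, best_move); 'X' maximizes, 'O' minimizes."""
        w = winner(board)
        if w is not None:
            return (1 if w == "X" else -1), None
        moves = [i for i, sq in enumerate(board) if sq == " "]
        if not moves:
            return 0, None  # draw
        best_score, best_move = None, None
        for m in moves:
            board[m] = player
            score, _ = minimax(board, "O" if player == "X" else "X")
            board[m] = " "
            if best_score is None or \
               (score > best_score if player == "X" else score < best_score):
                best_score, best_move = score, m
        return best_score, best_move

    print(minimax(list(" " * 9), "X"))  # (0, 0): perfect play from empty is a draw
    ```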

  • Re:No way. (Score:3, Insightful)

    by Yvanhoe ( 564877 ) on Thursday February 11, 2010 @07:06AM (#31097858) Journal
    The technology doesn't start from zero. Just look at how:
    - Google translates web pages and corrects erroneous entries
    - Microsoft Word spots grammatical mistakes
    - theorem provers are used routinely
    - package managers keep sets of interacting packages at compatible versions

    As for what consciousness is, the progress made has been overwhelming, but the media don't like this kind of deep issue and don't run many articles on what we know about it (mainly: it is a psychological construct, nothing more, as can be seen through its many dysfunctions. No magic there, sorry).
    Our understanding and mimicking of the processes of learning, of visual conceptualization, of spatial sense, of semantic links, all keep getting better. Granted, it went slower than expected, but it is steady progress, and the presence of a threshold where it becomes exponentially faster, once you can "make" exponentially more "minds" to work on the problem, seems quite logical.
  • Re:AI first (Score:3, Insightful)

    by Xest ( 935314 ) on Thursday February 11, 2010 @07:39AM (#31098026)

    "How could you be sent to jail for copying, if you couldn't copy it?"

    Really? You can't understand how a performance might be copied without some kind of equipment to do it for you?

    "Yeah, before when girls were married to whoever their parents chose was extremely liberal."

    What has that got to do with sexuality? That's entirely irrelevant to anything I said.

    "And homosexuality? Tell me again why Oscar Wilde was imprisoned? Or are you talking about before, when the inquisition merely burned them?"

    Try looking further back, or to different regions. In fact, homosexuality was deeply rooted in Roman culture for example.

    "Are you kidding me? Do you know what the average life expectancy was, back then? Or the child mortality rates? Life expectancy in the US has risen more than 25 years during the 20th Century!"

    You don't really seem to understand the relevance of medicine here. People now are certainly not healthier; we have a far higher proportion of people with obesity, asthma and so forth. All that has changed is that we've gotten better at keeping the unhealthy alive, and that doesn't mean that people are more healthy. Better medicine increasing survivability does not imply that people are healthier; it just means it's easier to survive when unhealthy.

    I'm not sure why you seem to have taken so much offence to my post, I was merely pointing out that not everything is better, it clearly isn't, particularly in an era where the effects of overpopulation are becoming quite clear in many different ways from severe deforestation, over-fishing, pollution of vital water sources and so on. I would still rather live in the modern world personally, but that doesn't mean I'm unable to recognise that not everything is better.

  • by Theovon ( 109752 ) on Thursday February 11, 2010 @09:57AM (#31098896)

    When will AI surpass human intelligence? As soon as we figure out how to do artificial intelligence the way popular culture conceives of it.

    There are two main areas of AI research, as I see it:

    (1) Engineered intelligence. These systems learn, but they learn in carefully controlled structures, like Markov models and mapping functions in genetic algorithms. (A toy sketch of this kind of structure follows below.)

    (2) Emergent intelligence. These are based on evolving systems of simpler structures, like neural nets, and those little cooperating robots you keep hearing about. In some ways, since the intelligent behavior evolved over time, this is more akin to natural intelligence than artificial intelligence.
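
    To make (1) concrete, here is a toy example of that kind of carefully controlled structure: a first-order Markov chain over words, "trained" by simple counting. (An illustrative sketch of the general idea, not any particular research system.)

    ```python
    # A first-order Markov chain over words, learned by counting transitions.
    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat the cat ate the rat".split()

    # word -> list of observed next words (duplicates encode frequency)
    transitions = defaultdict(list)
    for cur, nxt in zip(corpus, corpus[1:]):
        transitions[cur].append(nxt)

    def generate(start, length=8, seed=0):
        random.seed(seed)
        out = [start]
        while len(out) < length and transitions[out[-1]]:
            out.append(random.choice(transitions[out[-1]]))
        return " ".join(out)

    print(generate("the"))  # e.g. "the cat sat on the mat the cat"
    ```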

    Neither group has really accomplished a hell of a lot. Speech recognition and computer vision still suck ass. Group (1) has been dominant since the idea of AI was developed, and frankly, they're not a millimeter closer to understanding how to build up an intelligent system out of parts you fully understand. Group (2) is making some progress, but then they're left with a system they don't understand, because they didn't engineer it.

    Dorks like Kurzweil seem to think that as soon as we can fit as much compute power into one chip as we GUESS is in the brain, we'll magically get sentient robots. That's bullshit. We need software systems that learn and adapt, and we just haven't figured out how to make those.

  • Re:When? (Score:5, Insightful)

    by monoqlith ( 610041 ) on Thursday February 11, 2010 @10:04AM (#31098974)

    When people follow only the wrong half of the sermon, is it the Pope's fault?

    In a word: yes. You don't need religion to convince people to stop killing each other or to use birth control. But we do have religion, and lots of people turn to it for consolation and instruction. And that means people who are in positions of religious power have a moral responsibility to spread accurate information and to stop promoting this reckless over-expansion of humankind.

    Every time the Pope utters the words, "Using condoms is a sin and/or ineffective," he must know that he is, merely by speaking, pushing millions of people that much closer to death and drastically exacerbating the population problem. This is reckless behavior.

  • by srobert ( 4099 ) on Thursday February 11, 2010 @11:50AM (#31100262)

    You're right about his romanticizing what life was like 100 years ago. I need to kick back and watch TV and have a cold soda from the fridge after work. I also want to take a hot shower when I get home. On the weekends I might enjoy camping or fishing. None of those were available 100 years ago. Life was pretty bleak unless you were one of the robber barons. But 40 years ago, Mom was at home. Dad put in a 40-hour week at the factory. The working class was entitled to a pretty good share of the wealth that it was creating. Now, between Mom and Dad, the family puts in 80+ hours on the job, with college degrees, just to have comparable living standards. Where the hell is my flying car? Where did we go wrong?
    At least part of today's 10% unemployment rate stems from the fact that we use machines to do what people used to do. Imagine how many of us will be unemployed when we don't need any human beings who can think. How will you earn a living then?

  • Re:No way. (Score:2, Insightful)

    by Wandering Idiot ( 563842 ) on Friday February 12, 2010 @05:51AM (#31110882)
    I'm tired of this constant internet trope that the matter of existence is solved and after we die that's all there is.

    It's not really an "Internet trope" so much as the normal modern scientific materialist viewpoint.


    That regularly repeated idea is as much a matter of faith as a heaven populated by buxom virgins waiting on you hand and foot unto eternity.

    Not really. It's not an absolute certainty, because those don't exist (ironically definitive statement), but it seems the most likely explanation of what happens at death. The brain is clearly linked to the phenomenon of "mind," as shown by vast amounts of evidence, and in fact the mind appears to be wholly dependent on the brain. There is no evidence that the mind can exist outside the physical substrate of the brain, although theoretically it should be possible to transfer or recreate it in a different physical substrate that has the same structure.


    Our CURRENT understanding of physics and thermodynamics allows for the possibility of the recombination of matter to form this very existence again at some time in the very distant future.

    If by this existence you mean this planet and its people, it's true that due to quantum uncertainty it would technically be possible for our world to be recreated exactly as it is now, memories and all, by random chance, but it is also vastly improbable. As in, it has little chance of happening before the heat death of the universe. At least under the Copenhagen interpretation of quantum mechanics; under the Everett-DeWitt "many-worlds" version it might be more reasonable, but I lack the technical knowledge to say for sure.

    I find it interesting that you're not advocating dualism per se, just a materialistic version of reincarnation that would be no different from creating a clone of someone and somehow copying over their memories. I don't think having a doppelganger far in the future is really most people's conception of an "afterlife", though- for one thing, it still takes place in the same physical universe, and there's no continuity between the two beings. (If you're deriving this from Nietzsche's concept of the eternal recurrence, it's my understanding that it was more of a philosophical illustration, not a serious scientific theory)


    Given a long enough time frame the probability of any allowable combination of matter occurring approaches 1. Denying the possibility of some existence after death denies the observed universe and is far less scientific a position than its proponents pretend.

    I think the question here would be whether it's really the same existence, as the original consciousness would have no bearing on the recreated one aside from coincidental similarity in your scenario.


    So stop it with this fatalistic nihilism you dopes.

    Why? Said "fatalistic nihilism", as you label the reasonable idea that once our bodies stop functioning our consciousness ceases permanently (reasonable given that our consciousness can be made to cease temporarily by something as minor as a knock on the head) seems like the best explanation at present. And who are you calling a dope, dummy?


    Just because the superstitions of the past were wrong doesn't mean some of the transcendent ideas of humanity are by extension wrong.

    You do realize you haven't actually proposed anything transcendent, right? This universe being a computer simulation and our consciousnesses being allowed to exist in another part of the simulation after "death" would be closer.


    And in the event that a many-worlds interpretation of quantum physics is an appropriate framework: The only branches of existence we as individuals will be aware of are those wherein we are conscious. If there exists a probability that we can remain conscious in some fashion it follows that the only worlds we will be aware of are the ones in which we are conscious. The
