
Artificial Intelligence at Human Level by 2029?

Gerard Boyers writes "Some members of the US National Academy of Engineering have predicted that Artificial Intelligence will reach the level of humans in around 20 years. Ray Kurzweil leads the charge: 'We will have both the hardware and the software to achieve human level artificial intelligence with the broad suppleness of human intelligence, including our emotional intelligence, by 2029. We're already a human-machine civilization; we use our technology to expand our physical and mental horizons, and this will be a further extension of that. We'll have intelligent nanobots go into our brains through the capillaries and interact directly with our biological neurons.' Mr Kurzweil, a gentleman we've discussed previously, is one of 18 influential thinkers chosen by the US National Academy of Engineering to identify the great technological challenges facing humanity in the 21st century. The experts include Google founder Larry Page and genome pioneer Dr Craig Venter."
  • Oblig. (Score:5, Funny)

    by Anonymous Coward on Saturday February 16, 2008 @11:27PM (#22449964)
    I for one welcome our broadly supple, emotionally intelligent overlords.
  • No chance (Score:4, Insightful)

    by Kjella ( 173770 ) on Saturday February 16, 2008 @11:29PM (#22449976) Homepage
    I mean, it could happen, but this is so far from the current state of the art that I think we're talking 50-100 years forward in time. We have the brute power of computers, but nowhere near the sophistication in software or neural interfaces to do anything like this.
  • Hrmmmm (Score:3, Interesting)

    by BWJones ( 18351 ) * on Saturday February 16, 2008 @11:31PM (#22449982) Homepage Journal
    I'll be meeting with Kurzweil in April.... Speaking as a neuroscientist who is doing complex neural reconstructions, I think he's off his timeline by at least two decades. Note that we (scientists) have yet to really reconstruct an actual neural system outside of an invertebrate and are finding that the model diagrams grossly under-predict the actual complexity present.

    • Re:Hrmmmm (Score:5, Insightful)

      by DynaSoar ( 714234 ) on Sunday February 17, 2008 @12:01AM (#22450202) Journal
      And as a cognitive neuroscientist, I say he's off the mark entirely. As per Minsky, a fish swims under water; would you say a submarine swims?

      What exactly is the "level of humans"? Passing the Turing test? (Fatally flawed because it's not double blind, btw.) Part of human intelligence includes affective input; are we to expect intelligence to be like human intelligence because it includes artificial emotions, or are we supposed to accept a new definition of intelligence without affective input? Surely they're not going to wave the "consciousness" flag. Well, Kurzweil might. Venter might follow that flag because he doesn't know better and he's as big a media hog as Kurzweil.

      I think it's a silly pursuit. Why hobble a perfectly good computer by making it pretend to be something that runs on an entirely different basis? We should concentrate on making computers be the best computers and leave being human to the billions of us who do it without massive hardware.
      • Re: (Score:3, Insightful)

        by Jeff DeMaagd ( 2015 )
        We should concentrate on making computers be the best computers and leave being human to the billions of us who do it without massive hardware.

        The thing is, Kurzweil is trying to achieve immortality, which is pretty much predicated on the ability to simulate his brain. I don't know if that's coloring his predictions or not, and it really doesn't say anything about whether there can be a machine that can do a full scan of an entire human brain. I don't know if he'll live that long. He'll be over 80 years o
      • Re: (Score:3, Interesting)

        by risk one ( 1013529 )

        You're misquoting Edsger Dijkstra. He said: "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." I'm not sure Minsky would agree.

        The way I interpret Dijkstra here is that he meant that when a submarine starts to look sufficiently like a fish, we will call its actions 'swimming'. When it has the exact same range and 'functional ability' as a fish, but moves by spinning its rotors, we don't call it swimming. Thus the human criterion for intelli

    • Re: (Score:3, Funny)

      by timeOday ( 582209 )
      Please do not take this personally, but I don't think neuroscience is particularly important to AI. Yes, biology is horribly complex. But airplanes surpassed birds long ago, even though airplanes are much simpler and not particularly bio-inspired. Granted, birds still surpass airplanes in a few important ways (they forage for energy, procreate, and are self-healing... far beyond what we can fabricate in those respects) but airplanes sure are useful anyways. I don't think human-identical AI would have mu
      • Re:Hrmmmm (Score:5, Interesting)

        by LurkerXXX ( 667952 ) on Sunday February 17, 2008 @12:42AM (#22450372)
        What aircraft corner as fast as barn swallows?

        There are still many things we can learn from biology that can be translated to machines. The translations don't have to be 1:1 for us to make use of them. The way birds as well as insects use different surface shapes during wing beats has translated into changes in some aircraft designs. They weren't directly incorporated the same way, but they taught us important lessons that we could then implement in different ways but with a similar outcome.

        I think Neuroscience does have a lot to teach us about how to do AI.
        • Re:Hrmmmm (Score:4, Insightful)

          by Flicker ( 4495 ) on Sunday February 17, 2008 @01:11AM (#22450524)
          How many barn swallows can fly at 40,000 ft? Just what are you comparing?
        • by YetAnotherBob ( 988800 ) on Sunday February 17, 2008 @11:46AM (#22453548)
          Any aircraft the size of a barn swallow.

          Your question displays a lack of understanding. Not of biology, but of physics. The square-cube law, specifically. Aircraft don't corner as fast as small birds. The reason isn't any magic of biology; it's simple momentum.

          The larger any object is, the more it weighs. Make it twice as big and it weighs eight times as much, and packs eight times as much momentum. A large bird doesn't turn as fast as a small bird. The same is true of planes. The same is true of ships. A bus won't corner as fast as a sports car either.

          A typical aircraft is a hundred times bigger than a swallow. It's a million times heavier. It packs a million times the momentum. It's not that the swallow's design is better, or that there is some biological magic. It's just a question of size. It's true the other way too. A mosquito can turn a lot quicker than a barn swallow. Barn swallows catch mosquitoes because they can fly faster. Guess what: the aircraft you were so dismissive of can fly a lot faster than that barn swallow too. Visit a large airport. Swallows get killed by aircraft every day. They can't get out of the way in time. A barn swallow as large as a chicken would be ripped apart by the stresses if it were able to corner as fast as a real barn swallow. That's the real reason that chickens don't turn well in flight. (Yes, chickens can fly for short distances.) Momentum.
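
          If you want to check the arithmetic, a few lines of Python will do. The scale factors below are round illustrative numbers, not measurements:

            # Square-cube law: mass (and momentum at equal speed) grows as the
            # cube of linear size. Illustrative scale factors only.
            for k, label in [(2, "twice as big"), (100, "airliner scale")]:
                print(f"{label}: {k}x linear size -> {k ** 3:,}x mass and momentum")
            # twice as big: 2x linear size -> 8x mass and momentum
            # airliner scale: 100x linear size -> 1,000,000x mass and momentum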

          Your problem appears to be that you just don't understand scale. It is a wonderful thing when you do: you start seeing the reasons for all kinds of things, all around you.

          So, yes, we should study biology. But we should also remember the physics. The tricks the mosquito uses just won't work for a passenger jet. Nor will the barn swallow's turns be good for the passengers on that jumbo jet. Still, some things will be useful. We just don't know what. Who would have thought that studying a shark's skin would help racing yachts? Personally, I hope that we get a lot of surprises. That's where the fun in science is.

          I don't expect AI research to give us human-type intelligence in a machine. Ever. That doesn't mean we shouldn't try. We don't know what we will get, or what it will make possible. We can't know before the fact. Studying birds didn't give us aircraft that can corner in a second or two; it did give us jumbo jets that can take us halfway around the world in an easy chair. That took a lot of other things too.

          The Wright brothers succeeded where Lilienthal failed. Not because they understood birds better, but because in the meantime the internal combustion engine was developed. AI will be the same. Right now, we don't even know what we need in order to make this work. There will be surprises.
    • I agree... (Score:5, Insightful)

      by Jane Q. Public ( 1010737 ) on Sunday February 17, 2008 @12:24AM (#22450284)
      As a party "outside" the field but interested, I agree with all of you here so far, except that of course you disagree on timelines. :o)

      "Artificial Intelligence" in the last few decades has been a model of failure. The greatest hope during that time, neural nets, have gone virtually nowhere. Yes, they are good at learning, but they have only been good at learning exactly what they are taught, and not at all at putting it all together. Until something like that can be achieved (a "meta-awareness" of the data), they will remain little more than automated libraries. And of course at this time we have no idea how to achieve that.

      "Genetic algorithms" have enormous potential for solving problems. Just for example, recently a genetic algorithm improved on something that humans had not improved in over 40 years... the Quicksort algorithm. We now have an improved Quicksort that is only marginally larger in code size, but runs consistently faster on datasets that are appropriate for Quicksort in the first place.

      But genetic algorithms are not intelligent, either. In fact, they are something of the opposite: they must be carefully designed for very specific purposes, require constant supervision, and achieve their results through the application of "brute force" (i.e., pure trial and error).
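
      To illustrate that "brute force" flavor, here is a toy genetic algorithm in Python, solving the classic OneMax problem (drive a bit string toward all 1s). This is just a sketch of the general technique, not the quicksort experiment mentioned above. Note how the fitness function and every operator have to be hand-designed for one narrow goal:

        import random

        def fitness(bits):
            return sum(bits)  # count of 1-bits: the designer-chosen objective

        def evolve(n_bits=32, pop_size=50, generations=200, p_mut=0.02):
            pop = [[random.randint(0, 1) for _ in range(n_bits)]
                   for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness, reverse=True)
                parents = pop[:pop_size // 2]            # selection
                children = []
                while len(children) < pop_size:
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, n_bits)    # one-point crossover
                    child = a[:cut] + b[cut:]
                    child = [bit ^ (random.random() < p_mut)  # mutation
                             for bit in child]
                    children.append(child)
                pop = children
            return max(pop, key=fitness)

        print(fitness(evolve()), "of 32 bits set")

      Nothing in there understands sorting, or bits, or anything else: it is pure supervised trial and error toward a goal a human wrote down.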

      I will start believing that something like this will happen in the near future only when I see something that actually impresses me in terms of some kind of autonomous intelligence... even a little bit. So far, no go. Even those devices that were touted as being "as intelligent as a cockroach" are not. If one actually were, I might be marginally impressed.
    • Blue Brain Project (Score:3, Interesting)

      by vikstar ( 615372 )
      The Blue Brain project [bluebrain.epfl.ch] is already simulating a cluster of 10,000 neurons known as a neocortical column. Although quite good already (in terms of biological realism), their simulation model is still incomplete, with a few more years' work needed to get the neurons working like they do in real life. With more computational power to increase the neuron count, and with better models, they will one day be able to simulate an entire mammalian brain.
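
      For a feel of what neuron-level simulation means, here is a toy leaky integrate-and-fire neuron in Python. This is drastically simpler than the compartmental models Blue Brain uses, and every constant below is illustrative, but it shows the basic shape of the computation:

        # Toy leaky integrate-and-fire neuron. Membrane potential leaks toward
        # rest, is driven by an input current, and fires when it crosses a
        # threshold. All constants are made up for illustration.
        def simulate(i_in=1.5, dt=0.1, steps=1000,
                     tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
            v = v_rest
            spike_times = []
            for step in range(steps):
                v += dt * (-(v - v_rest) + i_in) / tau
                if v >= v_thresh:            # threshold crossed: spike and reset
                    spike_times.append(step * dt)
                    v = v_reset
            return spike_times

        print(len(simulate()), "spikes in 100 ms")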
      • Re: (Score:3, Informative)

        by BWJones ( 18351 ) *
        Although quite good already (in terms of biological realism),

        While this project is verrry cool, they are not even remotely close to biological realism. Sorry...

        their simulation model is still incomplete, with a few more years' work needed to get the neurons working like they do in real life.

        That is just it. We are finding that real biological systems, from complete neural reconstructions, are far more complex, with many more participating "classes" of neurons and much more in the way of nested and recurrent collateral co
    • Re: (Score:3, Insightful)

      by podperson ( 592944 )
      I also think that we're unlikely to equal human intelligence except as a curiosity long after we've obtained the necessary technology. Instead, we'll produce AIs with wildly different abilities from humans (far better in some things, such as arithmetic, or remembering large slabs of data, and probably worse in others). Calibrating an AI to be "equal" to a human will be a completely separate and not especially useful endeavor, and it will be something tinkerers do later.

      And I suspect that the necessary insig
  • Exponential AI? (Score:5, Interesting)

    by TheGoodSteven ( 1178459 ) on Saturday February 16, 2008 @11:31PM (#22449984)
    If artificial intelligence ever gets to the point where it is greater than humans, won't it be capable of producing even better AI, which would in turn create even better AI, and so on? If AI does reach the level of human intelligence, and eventually surpasses it, can we expect an explosion in technology and other sciences as a result?
    • Re:Exponential AI? (Score:5, Informative)

      by psykocrime ( 61037 ) <mindcrime&cpphacker,co,uk> on Saturday February 16, 2008 @11:34PM (#22450026) Homepage Journal
      If artificial intelligence ever gets to the point where it is greater than humans, won't it be capable of producing even better AI, which would in turn create even better AI, and so on? If AI does reach the level of human intelligence, and eventually surpasses it, can we expect an explosion in technology and other sciences as a result?

      That's the popular hypothesis [wikipedia.org].
    • Re:Exponential AI? (Score:4, Interesting)

      by wkitchen ( 581276 ) on Sunday February 17, 2008 @12:05AM (#22450216)
      This positive feedback effect happens to a considerable extent even without machines that have superintelligence, or even what we'd usually consider intelligence at all. It's happening right now, and has been happening for as long as humans have been making tools. Every generation of technology allows us to build better tools, which in turn helps us develop more sophisticated technology. A great example from fairly recent history, and one that is still ongoing, is the development of CAD/CAM/CAE tools, particularly those used for the design of electronic hardware (schematic capture, PCB, HDLs, programmable logic compilers, etc.), and the parallel development of software development tools. Once computers became good enough to make usable development tools, those tools helped greatly with the creation of more sophisticated computer technology, which in turn supported better development tools.

      Superintelligence may speed this up, but the effect is quite dramatic already.
  • by Bill, Shooter of Bul ( 629286 ) on Saturday February 16, 2008 @11:32PM (#22449994) Journal
    The farther out you make a projection, the less likely it is to be true. With this one in particular, I just don't see it being a focus of research. Yes, we will have increased levels of intelligence in cars, toasters, and ballpoint pens, but that intelligence will be in a supporting role to make the devices more useful to us. There isn't a need for a human-like intelligence inside a computer. We have enough of those inside human bodies.

    Also, I will not be ingesting nanobots to interact with my neurons; I'll be injecting them into my enemies to disrupt their thinking. Or possibly just threatening to do so, to extract large sums of money from various governmental organisations.
    • by Jugalator ( 259273 ) on Sunday February 17, 2008 @12:15AM (#22450256) Journal

      There isn't a need for a human like intelligence inside a computer.
      And even if there were (and I think this is key to the fallacy in this prediction), we wouldn't have the theories backing the hardware. We will most likely get some super-fast hardware within these years, but what's much less certain is whether AI theories will have advanced enough by then, and whether the architecture will be naturally parallelized enough to take advantage of them. Because while we don't know much about how the human brain reasons, we do know that to do it at a temperature as low as 37 degrees Celsius, in an area as small as our cranium (it's pretty damn amazing when you consider this!), it needs to be massively parallelized. And, again, we don't really even have the theories yet. We don't know how the software should best be written.

      That's why, even in this day and age of 2008, we're essentially running chatbots based on Eliza from 1966. Sure, there have been refinements and the new ones are slightly better, but not by much in the grand scheme of things. A sign of this problem is that they give their answers to your questions in a fraction of a second. That's not because they're amazingly well programmed; it's because the algorithms are still way too simple and based on theories from the sixties.
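
      For anyone who hasn't seen how little machinery is behind such chatbots, here is a toy Eliza-style responder in Python. It is a sketch of the general pattern-matching idea only, not Weizenbaum's actual 1966 script:

        # Toy Eliza-style responder: keyword patterns plus canned templates.
        import re

        RULES = [
            (r"\bi am (.*)", "Why do you say you are {0}?"),
            (r"\bi feel (.*)", "How long have you felt {0}?"),
            (r"\bbecause (.*)", "Is that the real reason?"),
            (r".*", "Tell me more."),
        ]

        def reply(text):
            text = text.lower().rstrip(".!?")
            for pattern, template in RULES:
                match = re.search(pattern, text)
                if match:
                    return template.format(*match.groups())

        print(reply("I am worried about AI predictions."))
        # -> Why do you say you are worried about ai predictions?

      Of course it answers in a fraction of a second; there is nothing there to take any longer.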

      If the AI researchers are claiming "oh, but we aren't there yet because we haven't got hardware nearly good enough yet", why aren't we even halfway there, with far more clever software than chatbots working on a reply to a single question for an hour? Sure, that would be impractical, but we don't even have software for this that works within the boundaries of our current CPUs.

      So at this point, if we made a leap to 2029 right now, all we'd get would be super-fast Elizas (I'm restricting my AI talk to "general AI" now, not heuristic antispam algorithms, where the algorithms are very well understood and don't form a hurdle). The million-dollar question here is: will we, before 2029, have made breakthroughs in understanding how the human brain reasons, along with constructing the machines (biological or not, as necessary) to approximate its structure and form the foundation on which the software can be built?

      I mean, we can talk about traditional transistor-based hardware all day and how fast it will be, but it will be near meaningless if we don't have the theories in place.
    • Re: (Score:3, Insightful)

      Cars and toasters are NOT "intelligent"!! Not even to a small degree. Just plain... not.

      Yes, they do more things that we have pre-programmed into them. But that is a far cry from "intelligence". In reality, they are no more intelligent than an old player piano, which could do hundreds of thousands of different actions (multiple combinations of the 88 keys, plus 3 pedals), based on simple holes in paper. Well, we have managed to stuff more of those "holes" (instructions) into microchips, and so on, but b
    • Projection length (Score:4, Insightful)

      by Myria ( 562655 ) on Sunday February 17, 2008 @01:13AM (#22450534)

      The farther out you make a projection, the less likely it is to be true.

      I predict that the Sun will become a white dwarf within 10,000,000,000 years. Predicting 10 billion years instead of 5 billion years actually makes it more likely to be true.
  • by Yartrebo ( 690383 ) on Saturday February 16, 2008 @11:32PM (#22450000)
    How are we so sure that advances in computers will continue at such a rapid pace? Computer miniaturization is hitting against fundamental quantum-mechanical limits, and it's crazy to expect 2008-2028 to have progress quite as rapid as 1988-2008.

    Short of major breakthroughs on the software end, I don't expect AI to be able to pass a generalized Turing Test anytime soon, and I'm pretty certain the hardware end isn't going to advance enough to brute-force our way through.
  • by RyanFenton ( 230700 ) on Saturday February 16, 2008 @11:33PM (#22450008)

    Artificial intelligence would be a nice tool to reach toward, or to use to understand ourselves... but rare is the circumstance that demands, or is worth the risks involved in, making a truly intelligent agent.

    The real implication to me is that it will be possible to have machines capable of running the same 'software' that runs in our own minds. To be able to 'back up' people's states and memories, and all the implications behind that.

    Artificial intelligence is a nice goal to reach for, but it is nothing compared to the siren's call of memories being able to survive the traditional end of existence: cellular death.

    Ryan Fenton
    • Re: (Score:2, Insightful)

      It would come in pretty handy for space exploration.
    • Re: (Score:3, Insightful)

      by Bob3141592 ( 225638 )

      The real implication to me is that it will be possible to have machines capable of running the same 'software' that runs in our own minds. To be able to 'back up' people's states and memories, and all the implications behind that.

      That presumes you can understand how human thought is made. It presumes real human intelligence can be modeled and implemented by a digital process, which may not be possible. I doubt that even quantum digital computers could do it. It might be possible in the future to simulate our neural machinery without really knowing how it works, a high-fidelity digital form of a completely analog process, but then you couldn't know what to expect as the result. The way the program was coded and the inputs given it w

  • For over 40 years, the field of AI has been *littered* with predictions of the type: "We will be able to mimic human levels of xxx" (substitute for xxx any of the following: contextual understanding, reasoning, speech, vision, non-clumsy motoric ability).

    So far _not one_ of those claims has come true, with the possible exception of the much-vaunted "robotic snake".

    So ... I'd say: fewer claims, fewer predictions, and more work. Let me know when you've got anything worthwhile to show.

    Not to be outdon

    • by timeOday ( 582209 ) on Sunday February 17, 2008 @12:24AM (#22450286)

      So ... I'd say: fewer claims, fewer predictions, and more work. Let me know when you've got anything worthwhile to show.
      Has it occurred to you that all of us already work, to some extent, at the direction of computers? Think of the tens of thousands of pilots and flight attendants... what city they sleep in, and who they work with, is dictated by a computer that makes computations which cannot fit inside the human mind. An airline could not long survive without automated scheduling.

      Next consider the stock market. Many trades are now automated, meaning computers are deciding which companies have how much money. That ultimately influences where you live and work, and the management culture of the company you work for.

      We are already living well above the standard that could be maintained without computers to make decisions for us. Of course as humans we will always take the credit and say the machines are "just" doing what we told them, but the fact is we could not carry out these computations manually in time for them to be useful.

      • by golodh ( 893453 )
        @timeOday

        Has it occurred to you that all of us already work, to some extent, at the direction of computers? Think of the tens of thousands of pilots and flight attendants... what city they sleep in, and who they work with, is dictated by a computer that makes computations which cannot fit inside the human mind. An airline could not long survive without automated scheduling. Next consider the stock market. Many trades are now automated, meaning computers are deciding which companies have how much money.

  • by zappepcs ( 820751 ) on Saturday February 16, 2008 @11:34PM (#22450024) Journal
    Good news: This could herald a lot of good stuff: increased unemployment, greater reliance on computers, newer divides in the class strata of society, further confusion over what authority is and who controls it, as well as greater largess in the well-meaning 'we are here to help' phrase department.

    Bad news: After reviewing the latest in the US political scene, getting machines smarter than humans isn't going to take as much as we thought. My toaster almost qualifies now. 'You have to be smarter than the door' insults are no longer funny. Geeks will no longer be lonely. Women will have an entire new group of things to compete with. If you think math is hard now, wait till your microwave tells you that you paid too much for groceries, or that you really aren't saving money in a 2-for-1 sale on things you don't need. Married men will now be the third smartest things in their own homes, but will never need a doctor (bad news for doctors), since when a man opens his mouth at home to say anything, there will now be a wife AND a toaster to tell him what is wrong with him.

    Oh god, this list goes on and on.
  • by httpcolonslashslash ( 874042 ) on Saturday February 16, 2008 @11:35PM (#22450028)
    As soon as they make robots that can have sex like humans...what's the point in inventing anything else? All scientists will be busy "researching" their robots.
  • 2029? (Score:2, Insightful)

    by olrik666 ( 574545 )


    Just in time for AI to help me drive my new fusion-powered flying car!

    O.
  • wrong (Score:5, Insightful)

    by j0nb0y ( 107699 ) <jonboy300NO@SPAMyahoo.com> on Saturday February 16, 2008 @11:35PM (#22450034) Homepage
    He obviously hasn't been paying attention to AI developments. The story of AI is largely a story of failure. There have been many dead ends and unfulfilled predictions. This will be another inaccurate prediction.

    Computers can't even defeat humans at Go, and Go is a closed system. We are not twenty years away from a human level of machine intelligence. We may not even be *200 years* away from a human level of machine intelligence. The technology just isn't here yet. It's not even on the horizon. It's nonexistent.

    We may break through the barrier someday, and I certainly believe the research is worthwhile, for what we have learned. Right now, however, computers are good in some areas and humans are good in others. We should spend more research dollars trying to find ways for humans and computers to efficiently work together.
    • Re: (Score:3, Informative)

      Excuse me... who are you? You're saying RAY KURZWEIL hasn't been paying attention to AI developments? And you're modded insightful?

      http://en.wikipedia.org/wiki/Ray_kurzweil

      "Everybody promises that AI will hit super-human intelligence at 20XX and it hasn't happened yet! It never will!" ... well guess what? It'll be the last invention anybody ever has to make. Great organizations like the Singularity Institute http://en.wikipedia.org/wiki/Singularity_Institute [wikipedia.org] really shouldn't be scraping along on such poor

  • We aren't even close to the processing power of the human brain.

    I'll make a prediction of my own: this guy is after funding.

    • Re: (Score:3, Interesting)

      by bnenning ( 58349 )
      We aren't even close to the processing power of the human brain.

      We aren't that far off. Estimates for the computational power of the human brain are around 10**16 operations per second. Supercomputers today do roughly 10**14, and Moore's law increases the exponent by 1 every 5 years. Even if we have to simulate the brain's neurons by brute force and the simulation has 99% overhead, we'll be there in 20 years (assuming Moore's law doesn't hit physical limits).
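
      The back-of-envelope version in Python, where every number is an assumption from the paragraph above, not a measured fact:

        import math

        brain_ops = 1e16                  # rough estimate for the brain
        current_ops = 1e14                # rough 2008 supercomputer figure
        overhead = 100                    # 99% simulation overhead = 100x raw ops
        growth_per_year = 10 ** (1 / 5)   # Moore's law read as 10x every 5 years

        shortfall = brain_ops * overhead / current_ops        # 10,000x short
        years = math.log(shortfall, growth_per_year)
        print(f"~{years:.0f} years")      # ~20 years, matching the estimate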
      • Re: (Score:3, Insightful)

        by shura57 ( 727404 ) *
        You can similarly compare the temperature of the human brain and observe that machines long ago surpassed it. Does that make machines smarter? I don't think so.

        The brain is so insanely parallel, and neurons are not just digital gates; they are more like computers in themselves. The machines of today are a far cry from the brain in how they are built. But sure, you can compare them by some meaningless parameter and say that we're close. How about clock frequency: neurons are 1 kHz devices, and modern CPU
  • I have little doubt we already have the components necessary to simulate a human-like brain, one way or another, right now. But that's not enough. You need to know how to put it together, how to set it up to be educated somewhat like a human, how to get it some semblance of human-like sensory input (at least for the vision/hearing centers, if you're interested in either of those things), and then you need to train it for years and years. So 21 years off is too optimistic, I think, by at least an order of magn
  • Don't do it! (Score:4, Insightful)

    by magarity ( 164372 ) on Saturday February 16, 2008 @11:37PM (#22450054)
    (Most) people can go out and get more education to advance from a menial job to a more skilled one when it is taken over by a robot, but wtf do we do if the machines are as smart as we are? Who is going to hire people to do even the most advanced thinking jobs when a machine that works for electricity 24/7 can do it? This kind of thing will bring on the Luddite revolution in a hurry.
  • Retarded (Score:2, Insightful)

    by mosb1000 ( 710161 )
    I think these nonsense predictions are best described as retarded. You can't predict something that is beyond our current technological capability, since it depends on breakthroughs that are impossible to predict. These breakthroughs could come tomorrow, or they could never come at all. I don't know why I'm posting this. Even talking about this fantastic nonsense is a waste of time.
  • Until we figure out how a water buffalo can be an individual at one spatial scale, and part of a herd as a texture at another scale... just in vision... we won't have smart computers.
  • by flyneye ( 84093 ) on Saturday February 16, 2008 @11:42PM (#22450088) Homepage
    " Artificial Intelligence will reach the level of humans"
    Buddy, I've been around more than four decades. I've yet to see more than a superficial level of intelligence in humans.
    Send your coders back to the drawing board with a loftier goal.

  • by denoir ( 960304 ) on Saturday February 16, 2008 @11:43PM (#22450094)
    It is not too much of an overstatement to say that the field of AI has not significantly progressed since the 1980s. The advancements have been largely superficial, with better and more efficient algorithms being created, but without any major insights, much less a road map for the future. While methods that originated in AI research are more common in real-world applications, the research and development of new concepts has come to a grinding halt - not that it was ever a question of smooth, continuous progress.

    It might seem like the lack of AI development is a temporary problem, and altogether a peripheral issue. It is, however, neither: it is a fundamental problem, and it affects all software development.

    Early in the history of computing, software and hardware development progressed at a similar pace. Today there is a giant and growing gap between the rate of hardware improvements and software improvements. As most people involved in the study of software engineering are aware, software development is in a deep crisis.

    The problem can be summarized in one word: complexity. The approach to building software has largely been based on traditional engineering principles and approaches. Traditional engineering projects never reached the level of complexity that software projects have. As it turns out, humans are not very good at handling and predicting complex systems.

    A good example of the problems facing software developers is Microsoft's new operating system, Windows Vista. It took half a decade to build and cost nearly 10 billion dollars. At two orders of magnitude higher cost than the previous incarnation, it featured relatively minor improvements - almost every radical new feature (such as a new file system) that was originally planned was abandoned. The reason for this is that the complexity of the code base had become unmanageable. Adequate testing and quality assurance proved impossible, and the development cycle became painfully slow. Not even Microsoft, with its virtually unlimited resources, could handle it.

    At this point, it is important to note that this remains an unsolved problem. It would not have been solved by a better-structured development process, or directly by better computer hardware. The number of free variables in such a system is simply too great for them to be handled manually. A structured process and standardized information-transfer protocols won't do much good either. Complexity is not just a quantitative problem; at a certain level you get emergent phenomena in the system.

    Sadly, artificial intelligence research, which is supposed to be the vanguard of software development, is facing the same problems. Although complexity is not (yet) the primary problem there, manual design has proved very inefficient. While there are clever ideas that move the field forward on occasion, there is nothing to match the relentless progress of computer hardware. There exists no systematic recipe for progress.

    Software engineering is intelligent design, and AI is no exception. The fundamental idea persists that it takes a clever mind to produce a good design. The view that it takes a very intelligent thing to design a less intelligent thing is deeply entrenched on every level. This clearly pre-Darwinian view of design isn't based on some form of dogma, but on a pragmatism and common sense that aren't challenged where they should be. While intelligent design was a good approach while software was trivial enough to be manageable, it should have become blindingly obvious that it was an untenable approach in the long run. There are approaches that work at the meta level - neural networks, genetic algorithms, etc. - but they are thoroughly insufficient. All these algorithms are still the results of intelligent design.

    So what Darwinian lessons should we have learned?

    We have learned that a simple, dumb optimization algorithm can produce very clever designs. The important insight is that intelligence can be traded for time. In a short in

    • by denoir ( 960304 ) on Sunday February 17, 2008 @12:34AM (#22450332)
      This is a sort of continuation of the parent post.

      The comedian Emo Philips once remarked that "I used to think my brain was the most important organ in my body until I realized what was telling me this."

      We have a tendency to use human intelligence as a benchmark and as the ultimate example of intelligence. There is a mystery surrounding consciousness, and many people, including prominent thinkers such as Roger Penrose, ardently try to keep it that way.

      Given, however, what we actually know through biological research about the brain and its evolution, there is essentially no justification for attributing mystical properties to our data-processing wetware. Steadily, with the increasing capabilities of brain scanning, we have been developing functional models describing many parts of the brain. For other parts that still need more investigation, we do at least have a picture, even if a rough one.

      The sacred consciousness has not been left untouched by this research. Although we are far from a final understanding, we have a fairly good idea, backed by solid empirical evidence, that consciousness is a post-processing effect rather than the first cause of decision. The degree of desperation can be seen in attempts to explain away the delay between conscious response and the activation of other parts of the brain. Penrose, for instance, suggests that yes, there is an average 500 ms delay, but that it is compensated by quantum effects that are time-symmetric - that the brain actually sees into the future, which is then delayed to create a real-time decision process. While this is rejected as absurd by a majority of neuroscientists and physicists, it is a good example of how passionately some people feel about the role of the brain. It is, however, painfully clear that just as we were forced to abandon an Earth-centered universe, we need to abandon the myth of the special place of human consciousness. The important point here is that once we rid ourselves of the self-imposed veil of mystery around human intelligence, we can have a sober view of what artificial intelligence could be. The brain developed through an evolutionary optimization process, and while it got a lot of benefits, it has taken the full blow of the limitations and problems of this process and its context.

      Evolution through natural selection is far from the best optimization method imaginable. One major problem with it is that it is a so-called "greedy" algorithm: it has no look-ahead or planning capability. Every improvement, every payoff, needs to be immediate. This creates systems that carry a lot of historical baggage - an improvement isn't made as a stand-alone feature but as a continuation of the previous state. It is not a coincidence that a brain cell is a cell like any other, nucleus and all. Nor is it a cell because a cell is the optimal structure for information processing. It was what could be done by modifying the existing wetware. It is not hard to imagine how that structure could be improved upon if it were not limited by the biological building blocks available to the genetic machinery.
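
      A toy hill-climber makes the "greedy" point concrete: accept a change only if the payoff is immediate, and you get stuck on the first local peak. The landscape below is made up purely for illustration:

        import random

        def landscape(x):
            # local peak at x=2 (height 3), much higher peak at x=8 (height 10)
            return 3 - (x - 2) ** 2 if x < 5 else 10 - (x - 8) ** 2

        x = 0.0
        for _ in range(10000):
            step = random.uniform(-0.2, 0.2)
            if landscape(x + step) > landscape(x):   # immediate payoff only
                x += step
        print(f"stuck near x={x:.1f}, height={landscape(x):.1f}")  # ~x=2.0

      The climber never crosses the valley to the higher peak, because doing so would require accepting temporarily worse positions.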

      Another point worth making is that our brains are not optimized for the modern type of information processing that humans engage in - such as writing software, for instance. Humans have changed little in the last 50,000 years in terms of intellectual capacity, but our societies have changed greatly. Our technological progress is a side effect of capabilities we evolved because they increased survivability when we roamed the plains of Africa in small family hunter-gatherer groups. To assume the resulting information processing system (the brain) would be the optimal solution for anything else is not justifiable.

      There has been, since the 1950s, ongoing research into creating biologically inspired computer algorithms and methods. Some of the research has been very successful, with simplified models that actually did do something useful (artificial neural networks, for instance). Progress has however been agonizi

      • Re: (Score:3, Interesting)

        by hawkfish ( 8978 )

        Penrose, for instance, suggests that yes, there is an average 500 ms delay, but that it is compensated by quantum effects that are time-symmetric - that the brain actually sees into the future, which is then delayed to create a real-time decision process. While this is rejected as absurd by a majority of neuroscientists and physicists, it is a good example of how passionately some people feel about the role of the brain.

        On the other hand, Dean Radin (while barking mad in some ways) has done an experiment that su

    • by doug141 ( 863552 ) on Sunday February 17, 2008 @01:00AM (#22450464)
      The Singularity Is Near has a rebuttal to your first paragraph. Any successful part of AI research spins off into its own well-functioning discipline... optical character recognition, dictation software, text-to-speech, etc... they were sci-fi "AI" in 1980 and now they are working technologies. AI research is the umbrella under which only the unsolved problems still lie, and thus it always looks unfinished.
  • Predictions like this have been made in the past, and they didn't even come close. This one is no different. The bottom line is that humans process some information in a non-representational way, while computers must operate representationally. So even if the computational theory of mind is true, a microchip can't mimic it. Hubert Dreyfus has written a great deal on this topic, and provides extremely compelling arguments as to why we'll never have human-type AI. Of course, AI can do a lot of "smart" things and be extremel
    • Re:Don't think so (Score:5, Insightful)

      by bnenning ( 58349 ) on Sunday February 17, 2008 @12:36AM (#22450348)
      Predictions like this have been made in the past, and they didn't even come close. This one is no different.

      The difference is that in 20 years we may have sufficiently powerful hardware that the software can be "dumb", that is, just simulating the entire physical brain.

      The bottom line is that humans process some information in a non-representational way, while computers must operate representationally.

      What prevents a computer from emulating this "non-representational" processing? Or is the human brain not subject to the laws of physics?
  • by The One and Only ( 691315 ) * <[ten.hclewlihp] [ta] [lihp]> on Saturday February 16, 2008 @11:51PM (#22450150) Homepage
    It's one thing to predict when a building project will be finished, or when we'll reach a certain level of raw processing power, because these things proceed by predictable means. But strong AI requires us to make theoretical advances. Theoretical advances don't proceed like a building project: someone has to have a clever idea, fully develop and understand it himself, and convince others of it. And it won't occur to someone all at once, so we'll need incremental advances, all of which will happen unpredictably.
  • by glwtta ( 532858 ) on Sunday February 17, 2008 @01:35AM (#22450666) Homepage
    At least not yet. I can't believe that the sort of bullshit that Ray Kurzweil keeps peddling gets taken so seriously.

    There is a lot of talk about computers surpassing, or not surpassing, humans at various tasks - does it not bother anyone that computers don't actually possess any intelligence? By any definition of intelligence you'd like? Every problem that a computer can "solve" is in reality solved by a human using that computer as a tool. I feel like I'm losing my mind reading these discussions. Did I miss something? Has someone actually produced a sentient machine? You'd think I would have seen that in the papers!

    What's the point of projecting that A will surpass B in X if the current level of X possessed by A is zero? There seems to be an underlying assumption that merely increasing the complexity of a computational device will somehow automatically produce intelligence. "If only we could wire together a billion Deep Blues," the argument seems to go, "it would surpass human intelligence." By that logic, if computers are more complex than cars, does wiring together a billion cars produce a computer?

    Repeat after me - The current state of the art in artificial intelligence research is: fuck all. We have not produced any artificial intelligence. We have not begun to approach the problems which would allow us to start on the road to producing artificial intelligence.

    Before you can create something that surpasses human levels of intelligence, one would think you'd need to be able to precisely define and quantify human intelligence. Unless I missed something else fairly major, that has not been done by anyone yet.
  • by freedom_india ( 780002 ) on Sunday February 17, 2008 @03:55AM (#22451376) Homepage Journal

    We will have both the hardware and the software to achieve human level artificial intelligence
    What he means is that, with the steadily declining level of human intelligence over the past five decades, as depicted in http://www.fourmilab.ch/documents/IQ/1950-2050/ [fourmilab.ch], by the year 2029 human intelligence will meet machine AI, which will remain as constant as always and will continue to ask "Do you want to quit? Yes/No" every time I quit Word.

    Maybe that's why Google is hoarding all the remaining three-digit IQ scores, so that there is no shortage of IQ.

    In other news, lots of flying chairs were heard swishing around Microsoft's Redmond campus when the CEO heard Google was cornering the market on human IQs.

    Abrams starts a new Serial: LOST IQ.
  • by melted ( 227442 ) on Sunday February 17, 2008 @04:38AM (#22451574) Homepage
    And I work on AI and machine learning day in and day out. I'd put the goalpost at 50 years, and that's an optimistic estimate. There are scant few research centers that do "general AI" research. Even fewer actually talk to neuroscientists, thus dismissing one viable (though extremely complex and costly) avenue of research. The fact remains, however, that at this point we don't have the required sophistication in any of the areas that would presumably be required to build a "thinking" machine. We can't process human language well enough (and therefore speech recognition and textual information sources are pretty much useless); we can't process visual information well enough either (segmentation, recognition, prediction, handling a continuous visual stream); we don't know the cognitive mechanisms below high-level abstract reasoning, and even at a high level our abilities are weak (try to build a classifier that will recognize sarcasm, for example); and finally, even if we could do all that, we wouldn't be able to store the resulting data efficiently enough (in terms of required space and retrieval speed), because we have no idea how to do it.

    That said, a lot of stuff can happen in 50 years, and I bet that once some of the major problems get solved, there will be an insane stream of money pouring into this field to accelerate the research. Just imagine the benefits an "omniscient" AI trader would bring to a bank. The question is, do we want this to happen? This will be far more disruptive a technology than anything you've ever seen.
  • My gut feeling... (Score:3, Interesting)

    by CTachyon ( 412849 ) <chronos AT chronos-tachyon DOT net> on Sunday February 17, 2008 @11:51AM (#22453584) Homepage

    Warning: rambling post ahead.

    My gut feeling is that, from strictly a hardware perspective, we're already capable of building a human-level AI. The problem is that, from a software perspective, we've focused too much on approaches that will never work.

    As far as I'm concerned, the #1 problem is the Big Damn Database approach, which is basically a cargo cult [wikipedia.org] in disguise. Though expert systems are useful in their niches, "1. Expert system 2. ??? 3. AI!" is not a workable roadmap to the future. I'm certain that it's far easier to start with an ignorant AI and teach it a pile of facts than it is to start with a pile of facts and teach it to develop a personality.
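
    For reference, an expert system in miniature is just facts plus if-then rules fired to a fixed point. A toy Python sketch (the facts and rules here are made up for illustration):

      # Minimal forward-chaining "expert system": fire rules until nothing
      # new can be concluded. However many facts you pile in, nothing
      # resembling a personality falls out of this loop.
      facts = {"has_feathers", "lays_eggs"}
      rules = [
          ({"has_feathers"}, "is_bird"),
          ({"is_bird", "lays_eggs"}, "builds_nest"),
      ]

      changed = True
      while changed:
          changed = False
          for conditions, conclusion in rules:
              if conditions <= facts and conclusion not in facts:
                  facts.add(conclusion)
                  changed = True
      print(facts)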

    The #2 problem is the Down To The Synapse approach. This, unlike BDD, could quite possibly create "A"I if given enough hardware. But I think that, while DTTS will lead to a better understanding of medicine, it won't advance the AI field. It won't lead to an improved understanding of how human cognition works — it certainly won't teach us anything we didn't already know from Phineas Gage [wikipedia.org] and company [ebonmusings.org].

    Even if we go to all the trouble of developing a supercomputer capable of DTTS emulation of a human brain — so what? If we ask this emulated AI to compute 2+2, millions of simulated synapses will fire, trillions of transistors will flip states, and phenomenal amounts of electricity will pour into the supercomputer, just for the AI to give the very same answer that a simple circuit of a few dozen transistors could've produced in a tiny fraction of the time, using the amount of electricity stored on your fingertip when you rub your shoes on the carpet in winter. And that's not even a Strong AI question. That's not to say that working DTTS won't be profound in some sense, but we know we can build it better, yet we won't have the faintest idea of where to go next.

    That brings me to my core idea — goals first, emotions [mit.edu] close behind. Anyone who's pondered the "is/ought" problem in philosophy already knows the truth of this, even if they don't know they know the truth of it. The people building cockroach robots were on the right track all along; they're just thinking too small. MIT's Kismet [mit.edu], for instance, gives an idea of where AI needs to head.

    That said, I think building a full-on robot like Kismet is premature. A robot requires an enormous number of systems to process sensory data, and those processing systems are largely peripheral to the core idea of AI. If we had an AI already, we could put the AI in the robot, try a few things, and ask the AI what works best. So, ideally, I think we need to look at a pure software approach to AI before we go off building robot bodies for them to inhabit.

    And how to do that? I think Electric Funstuff [gamespy.com]'s Sim-hilarities [electricfunstuff.com] captures the essence of that. If we give AIs a virtual world to live in — say, an MMO — then that removes a lot of the need for divining meaning from sensory input, allowing a sharper focus on the "intelligence" aspect of AI. Start with that, grow from there, and I can definitely see human-level AI by 2029.

"If it ain't broke, don't fix it." - Bert Lantz

Working...