
When Will AI Surpass Human Intelligence?

Posted by samzenpus
from the I'm-afraid-I'm-smarter-than-you-dave dept.
destinyland writes "21 AI experts have predicted the date for four artificial intelligence milestones. Seven predict AIs will achieve Nobel prize-winning performance within 20 years, while five predict that will be accompanied by superhuman intelligence. (The other milestones are passing a 3rd grade-level test, and passing a Turing test.) One also predicted that in 30 years, 'virtually all the intellectual work that is done by trained human beings ... can be done by computers for pennies an hour,' adding that AI 'is likely to eliminate almost all of today's decently paying jobs.' The experts also estimated the probability that an AI passing a Turing test would result in an outcome that's bad for humanity ... and four estimated that probability was greater than 60% — regardless of whether the developer was private, military, or even open source."
This discussion has been archived. No new comments can be posted.

When Will AI Surpass Human Intelligence?

Comments Filter:
  • The obvious solution (Score:4, Interesting)

    by MindlessAutomata (1282944) on Wednesday February 10, 2010 @07:50PM (#31092788)

    The obvious solution is to create a machine/AI that, after a deep brain structure analysis, replicates your cognitive functions. Turn it on at the same time your body is destroyed (to prevent confusion and fighting between the two) and you are now a machine and ready to rule over the meatbag fleshlings.

  • Let's see. (Score:4, Interesting)

    by johncadengo (940343) on Wednesday February 10, 2010 @07:51PM (#31092804) Homepage

    To play off a famous Edsger Dijkstra quote, the question of when AI will surpass human intelligence is just about as interesting as asking when submarines will swim faster than fish...

  • by Jack9 (11421) on Wednesday February 10, 2010 @07:52PM (#31092822)

    Entropy. The problem for (potentially) immortal beings is always going to be entropy. Granted, we created robots, but I'm not necessarily of the belief that robots wouldn't insist we stay around for our very brief lives to help them solve their problems.

  • Space shows (Score:5, Interesting)

    by BikeHelmet (1437881) on Wednesday February 10, 2010 @08:01PM (#31092918) Journal

    I've often thought space shows - and any show set in the future, really - are incredibly silly. There's no way we'll have computers so dumb 200+ years into the future.

    You have to manually fire those phasers? Don't you have a fancy targeting AI that monitors their shield fluctuations, and calculates the exact right time and place to fire to cause the most damage?

    A surprise attack? Shouldn't the AI have detected it before it hit and automatically set the shield strength to maximum? :P

    I always figured by 2060 we'd have AIs 10x smarter thinking 100x faster than us. And then they'd make discoveries about the universe, and create AIs 2000x smarter that think 100,000,000x faster than us. And those big AIs would humour us little ant creatures, and use their great intelligence to power stuff like wormhole drives, giving us instant travel to anywhere, as thanks for creating them.

    But hey, maybe someone will create a Skynet. It's awfully easy to infect a computer with malware. Infecting a million super smart computers would be nasty, especially when they have human-like capabilities. (able to manipulate their environment)

    But this is all a pointless line of thinking. Before we get there we'll have so much processing power available, that we'll fully understand our brains, and be able to mind control people. We'll beam on-screen display info directly into our minds, use digital telepathy, etc.; in the part of the world that isn't brainwashed, everyone will enjoy cybernetic implants, and be able to live for centuries. (laws permitting)

    And yet flash still won't run smooth. :/

  • by RyanFenton (230700) on Wednesday February 10, 2010 @08:03PM (#31092948)

    Artificial intelligences will certainly be capable of doing a lot of work, and indeed managing those tasks to accomplish greater tasks. Let's make a giant assumption that we find a way out of the current science fiction conundrums of control and cooperation with guided artificial intelligences... what is our role as human beings in this mostly-jobless world?

    The role of the economy is to exchange the goods needed to survive and accomplish things. When everyone can have an autofarm and manufacturing fabricator, there really wouldn't be room for a traditional economy. A craigslist-style trading system would be about all that would be theoretically needed - most services would be interchangeable and not individually valuable.

    What role will humanity play in such a system? We'd still have personality, and our own perspective that couldn't be had by live-by-copy intelligent digital software (until true brain scans become possible). We'd be able to write, have time to create elaborate simulations (with ever-improving toolsets), and expand the human exploration of experience in general.

    As humans, the way we best grow is by making mistakes, and finding a way to use that. It's how we write better software, solve difficult problems, create great art, and even generate industries. It's our hidden talent. Games are our way of making such mistakes safe, and even more fun - and I see games and stories as increasingly big parts of our exploration of the reality we control.

    Optimized software can also learn from its mistakes in a way - but it takes the accumulated mistakes on a scale only a human can make to get something really interesting. We simply wouldn't trust software to make that many mistakes.

    Ryan Fenton

  • by Anonymous Coward on Wednesday February 10, 2010 @08:03PM (#31092954)

    You have to remember human intelligence evolved and was produced in environments radically different from those AI will be produced in. Human beings for all intents and purposes were kludged together by a blind process. Biological evolution has no foresight.

  • by geekoid (135745) <(dadinportland) (at) (> on Wednesday February 10, 2010 @08:04PM (#31092968) Homepage Journal

    I've thought about this a lot... maybe too much.

    What happens in society when someone makes a robot clever enough to handle menial work?
    Imagine if all the ditch diggers, burger flippers, sandwich makers, and factory workers are robotic. What happens to the people?
    The false claim is that they will go work in the robot industry, but that is a misdirection, at best.
    A) It will take fewer people to maintain the robots than the jobs they displace.

    B) If robots are that sophisticated, then they can repair each other.

    There will be millions and millions of people who don't work, and have no option to work.
    Does this mean there is a fundamental shift in the idea of welfare? Do we only allow individual people to own them and choose between renting out their robot or working themselves?

    Having tens of millions of people too poor to eat properly, afford housing, and get healthcare is a bad thing and would ultimately drag down the country. This technology will happen and it should happen. Personally I'd like to find a way for people to have more leisure time and let the robots work. Our current economic and government structure can't handle this kind of change. Could you imagine the hullabaloo if people were being replaced by robots at this scale right now and someone said there needs to be a shift toward an economic system where people get paid without a job?

  • by Jane Q. Public (1010737) on Wednesday February 10, 2010 @08:12PM (#31093082)
    I think it is pretty widely recognized now that while it might have seemed logical in Turing's time, convincing emulation of a human being in a conversation (especially if done via terminal) does not require anything like human intelligence. Heck, even simple programs like Eliza had some humans fooled decades ago.
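
    For a sense of how little machinery such convincing emulation can take, here is a minimal ELIZA-style pattern-matcher; the rules are invented for illustration and are not Weizenbaum's actual script:

```python
import re

# A few illustrative rules in the style of ELIZA: match a pattern,
# echo part of the user's input back as a question.
RULES = [
    (re.compile(r"\bI need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"\bmy (.*)", re.I),     "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def respond(text):
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(m.group(1).rstrip(".!?"))
    return DEFAULT

print(respond("I am worried about AI."))  # How long have you been worried about AI?
```

    No model of meaning anywhere, yet transcripts of the real program fooled some users in the 1960s - which is the point: conversational plausibility is cheap.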

    On the other hand, while advances in computing power have been impressive, advances in "AI" have been far less so. They have been extremely rare, in fact. I do not know of a single major breakthrough that has been made in the last 20 years.

    While the relatively vast computing power available today can make certain programs seem pretty smart, that is still not the same as artificial intelligence, which I believe is a major qualitative difference, not just quantitative. And even if it is just quantitative, there is a hell of a lot of quantity to be added before we get anywhere close.
  • The Turing Test (Score:5, Interesting)

    by mosb1000 (710161) <> on Wednesday February 10, 2010 @08:15PM (#31093108)

    One observed that “making an AGI capable of doing powerful and creative thinking is probably easier than making one that imitates the many, complex behaviors of a human mind — many of which would actually be hindrances when it comes to creating Nobel-quality science.” He observed that “humans tend to have minds that bore easily, wander away from a given mental task, and that care about things such as sexual attraction, all of which would probably impede scientific ability, rather than promote it.” To successfully emulate a human, a computer might have to disguise many of its abilities, masquerading as being less intelligent — in certain ways — than it actually was. There is no compelling reason to spend time and money developing this capacity in a computer.

    This kind of thinking is one of the major things standing in the way of AGI. The complex behaviors of the human mind are what leads to intelligence; they do not detract from it. Our ability to uncover the previously unknown workings of a system comes from our ability to abstract aspects of unrelated experiences and apply (or attempt to apply) them to the new situation. This cannot be achieved by a single-minded number-crunching machine, but instead evolves out of an adaptable human being as he goes about his daily life.

    Sexual attraction, and other emotional desires, are what drive human beings to make scientific advancements, build bridges, grow food. How could that be a hindrance to the process? It drives the process.

    Finally, the assertion that an AGI would need to mask its amazing intellect to pass as human is silly. When was the last time you read a particularly insightful comment and concluded that it was written by a computer? When did you notice that the spelling and punctuation in a comment was too perfect? People see that and they don't think anything of it.

  • by DeltaQH (717204) on Wednesday February 10, 2010 @08:15PM (#31093112)
    I am pretty sure that the current computational models, i.e. Turing machines, are not enough to explain the human mind.

    All computing systems today are Turing machines, even neural networks (actually less than Turing machines, because Turing machines have infinite memory).
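
    For concreteness, a Turing machine is just a finite transition table plus an unbounded tape. A toy simulator, with a made-up table that appends a '1' to a unary number, fits in a few lines (the table here is an invented example, not anything from the comment above):

```python
# Toy Turing machine: appends one more '1' to a unary number.
# Transition table: (state, symbol) -> (new_state, write, move)
TABLE = {
    ("scan", "1"): ("scan", "1", +1),   # walk right over the 1s
    ("scan", "_"): ("done", "1", 0),    # first blank: write a 1, halt
}

def run(tape, state="scan", pos=0):
    tape = dict(enumerate(tape))          # sparse tape, blanks implicit
    while state != "done":
        symbol = tape.get(pos, "_")
        state, write, move = TABLE[(state, symbol)]
        tape[pos] = write
        pos += move
    return "".join(tape[i] for i in sorted(tape))

print(run("111"))  # 1111
```

    The "infinite memory" point is visible in the sparse-dict tape: a real machine's dict would eventually exhaust physical RAM, which is why physical computers are strictly weaker than the ideal model.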

    Maybe quantum computers could open the way. Maybe not.

    I think that a future computing theory that could explain the mind would be as different from today's computing theory as Einstein's relativity is from Newtonian physics.
  • Re:No way. (Score:5, Interesting)

    by Homburg (213427) on Wednesday February 10, 2010 @08:18PM (#31093138) Homepage

    Searle's dualism (which he claims isn't dualism, but it totally is) is ridiculous, I agree, but functionalism is also a dead dog. For better criticisms of functionalism, look at Putnam's recent work. As Putnam was one of the main inventors of functionalism in the first place, his rejection of the position involves significant familiarity with functionalism, and is pretty compelling.

  • Depends on the test. (Score:4, Interesting)

    by hey! (33014) on Wednesday February 10, 2010 @08:23PM (#31093198) Homepage Journal

    If the test is chess, then there are AIs that surpass the vast majority of the human race.

    If the test were, let's say, safely navigating through Manhattan using the same visual signs and signals that a pedestrian would, there isn't an AI that comes close to even a relatively helpless human being.

    If the test is understanding language, same thing. Ditto for cognitive flexibility, the ability to generalize mental skills learned in one situation to a different one.

    Of course many of these kinds of "tests" I'm proposing are very human-centric. But narrow tests of intelligence are very algorithm-centric. The narrower the test, the more relatively "intelligent" AI will be.

    Here's an interesting thought, I think. How long will it be before an AI is created that is capable of outscoring the average human on some IQ test -- given the necessary visual inputs and robotic "hands" to take the test? I don't think that's very far off. I wouldn't be surprised to see it in my lifetime. I'd be surprised to see a pedestrian robot who could navigate Manhattan as well as the average human in my lifetime, or who could take leadership and teamwork skills learned in a military job and apply them to a civilian job without reprogramming by a human.

  • by Colin Smith (2679) on Wednesday February 10, 2010 @08:46PM (#31093448)

    Start with money.

    You're a bank. You're going to loan out some money for what reason? To get more back. So, the recipient of a loan has to supply something of value. Say, a house.

    What happens when the supply of houses matches or exceeds the demand? Houses become valueless. You can't make money supplying them. The bank isn't going to make that loan.

    So for our existing monetary system, demand must never be satisfied. We must never build enough houses for all the homeless, and if too many are built, they have to be knocked down.

    When the supply of work meets demand, work becomes valueless.

    Which leads us to energy.

    The reason we "modernise" is to reduce costs. A human costs, say, 20k/year. A digging machine costs 250k and, with one driver, can replace 10 humans digging trenches. Payback after the first year. The cost of the energy for the digger is lower than the costs the humans have to pay to live, plus the humans have a 30% tax on top.
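
    Plugging the rough round numbers above into the payback arithmetic (ignoring fuel, maintenance, and tax, which shift the break-even point but not the conclusion):

```python
# Check of the digger-vs-humans payback claim, using the comment's figures.
human_cost = 20_000          # per worker, per year
workers_replaced = 10
machine_price = 250_000      # one-off purchase
driver_cost = 20_000         # the one remaining human, per year

yearly_saving = workers_replaced * human_cost - driver_cost   # 180,000/year
payback_years = machine_price / yearly_saving

print(round(payback_years, 2))  # 1.39 -- the machine pays for itself early in year two
```

    After the break-even point the machine keeps digging for little more than its energy bill, which is the economic pressure the comment is describing.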

    So economically, it makes sense to get rid of humans and replace them with machines. In fact, our monetary system pretty much enforces it.

    If all human labour can be carried out by machines, then humans will have no money. i.e. Universal machine labour will destroy capitalism and the monetary system. Banks etc. What will happen is the system will devolve into a 2 class system of owners and the owned. Creditors and debtors. Neofeudalism.

    You should read Silvio Gesell. He came to a similar conclusion. That if demand is ever satisfied, capitalism stops functioning. (This is why there will always be poverty. It's required by the money system.)

    Of course, as energy itself (easy energy resources like coal, oil, gas) becomes more scarce and expensive, running a 10,000-CPU cluster to emulate 100 billion human neurons is likely to consume quite a lot of energy.

  • Re:AI first (Score:4, Interesting)

    by MrNaz (730548) * on Wednesday February 10, 2010 @08:49PM (#31093468) Homepage

    I'm skeptical about the benefits of AI.

    100 years ago we were promised an age of new enlightenment while washing machines, dish washers, vacuum cleaners and other then-cutting edge devices took over all the manual labor that dominated work at that time. Women were supposed to be able to ignore housework and concentrate on childrearing and other higher social activities.

    Did that happen? No, the industrial capitalists just found new ways to put us (and now our wives too, who are no longer required for housework thanks to all these appliances) to work for their own insatiable greed. Men and women now work side by side in gigantic cube farms while children rot in day care or roam the streets with little to no guidance from the more experienced members of society.

    Nothing moves us backwards faster than progress.

  • by Rakishi (759894) on Wednesday February 10, 2010 @08:50PM (#31093482)

    It's ALL about carnal desires of one sort or another, that's the whole point of civilization and longer existence. We want to live longer, we want to eat good food, screw pretty things, we want to have kids, we want to satisfy our curiosity, we want to satisfy our ingrained empathic needs, we want to be admired, etc, etc.

    Society and civilization are simply entities that over time evolved on top of all this crap. We have civilization because it lets us better beat the shit out of groups of humans who don't have it. We want to beat the shit out of them because we want all those carnal desires of ours fulfilled.

    The question is what pointless goal will an AI want and how will it go about achieving it rather than if it will have such a goal.

  • by NicknamesAreStupid (1040118) on Wednesday February 10, 2010 @08:53PM (#31093512)
    Machines will only 'think' like humans when they have human emotions. All reasoning and abstract thought are based on emotions, which were the basis of all human interaction for countless millennia before humans spoke words. We will never believe that machines or anything else can be 'human-like' unless we feel it. Just look at the Loebner contest. Since there is no machine algorithm for this test (duh!), they use people to make subjective decisions as to whether unseen respondents 'seem' human. If the responses do not 'seem' right, then the respondent does not pass. It is amazing how many humans (used as controls) do not pass this Turing test, giving new meaning to "you don't feel right to me." Without human feelings there would be no human reasoning, no 'intelligence.' If this reasoning really bothers you, then you have helped prove my point.

    As for these AI guys, their conclusions are something of a paradox. If they are as wrong as some believe and dumb as others say, then it may not take much more to create a machine to be as 'intelligent.' Their question may be better put, "when will we feel that humans have become as dumb as their machines?"
  • Re:No way. (Score:3, Interesting)

    by Chris Burke (6130) on Wednesday February 10, 2010 @08:54PM (#31093528) Homepage

    Only if you want to cling to silly quasi-dualistic Searle-inspired objections towards functionalism.

    Uh, no.

    I'm totally a functionalist -- if it looks and acts like "intelligence" or "consciousness", then it is.

    But we still have no clue what makes "consciousness" or "intelligence" tick, and we're no closer to creating a functional replica of them.

    What we've actually accomplished in "weak" AI is pretty impressive from a practical standpoint. But they aren't stepping stones to an actual looks-like-intelligence AI. Coming from the angle of studying the known example of intelligence, we've made lots of strides in understanding the human brain, but we're still not anywhere near understanding it well enough to build a replica from scratch.

    Barring some extreme advances in our understanding, the most likely solution to "hard" AI I see is to brute force it and simply run a complete simulation of a human brain. I doubt we'll have the ability to do that in 20 years, though it's always possible.

  • It's getting closer (Score:5, Interesting)

    by Animats (122034) on Wednesday February 10, 2010 @09:08PM (#31093734) Homepage

    I dunno. But it's getting closer.

    A lot of AI-related stuff that used to not work is more or less working now. OCR. Voice recognition. Automatic driving. Computer vision for simultaneous localization and mapping. Machine learning.

    We're past the bogosity of neural nets and expert systems. (I went through Stanford when it was becoming clear that "expert systems" weren't going to be very smart, but many of the faculty were in denial.) Machine learning based on Bayesian statistics has a sound mathematical foundation and actually works. The same algorithms also work across a wide variety of fields, from separating voice and music to flying a helicopter. That level of generality is new.
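
    As a sketch of what "machine learning based on Bayesian statistics" means at its simplest, here is a tiny naive Bayes classifier with add-one smoothing; the training data is a made-up toy example, not any of the systems mentioned above:

```python
from collections import Counter
import math

# Tiny naive Bayes text classifier with add-one (Laplace) smoothing.
def train(docs):
    """docs: list of (label, list-of-words) pairs. Returns a model."""
    labels = Counter(label for label, _ in docs)
    words = {label: Counter() for label in labels}
    vocab = set()
    for label, doc in docs:
        words[label].update(doc)
        vocab.update(doc)
    return labels, words, vocab

def classify(model, doc):
    labels, words, vocab = model
    total = sum(labels.values())
    def log_score(label):
        # log P(label) + sum of log P(word | label), smoothed
        score = math.log(labels[label] / total)
        denom = sum(words[label].values()) + len(vocab)
        for w in doc:
            score += math.log((words[label][w] + 1) / denom)
        return score
    return max(labels, key=log_score)

model = train([
    ("music", ["guitar", "chord", "melody"]),
    ("music", ["melody", "rhythm"]),
    ("speech", ["word", "sentence", "vowel"]),
])
print(classify(model, ["melody", "guitar"]))  # music
```

    The same scoring rule, with different features, is what sits behind spam filters and the voice/music separation the comment mentions; the generality comes from the statistics, not from anything domain-specific in the code.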

    There's also enough engine behind the systems now. AI used to need more CPU cycles than you could get. That's no longer true.

  • by sowth (748135) * on Wednesday February 10, 2010 @09:38PM (#31094148) Journal

    Robots will do all the restocking of the shelves and cashiers in stores, there will probably be McRobots instead of McDonalds. ...

    See, this is why I don't think what they said will happen. We've had the technology to do the menial labor robot for at least ten or twenty years, if not longer.

    Secondly, the whole exchange labor for money thing is overrated. The way it used to work, most people (well, those considered "people" by the law) just owned land and their own equipment and did the work themselves or with their children. Then the feudal lords came along and started the "golden parachute CEO" model, and normal people's lives have been hell ever since.

    Okay, maybe I overstated it a bit, but I'm just trying to say there are other ways than just having a corporate overlord. If each person owned their own robot and land to use it, they would not need a "job."

    "Job" was just a method a large organization (such as a corporation) which needed labor done by many people could function. If the robots do the job, and you are their "leader," then you just have a small business you run.

    Essentially, the people who think the only way they can survive is by having a "job" are living under Manorialism. There are choices; they all just have different difficulties.

  • Re:No way. (Score:2, Interesting)

    by lennier (44736) on Wednesday February 10, 2010 @09:38PM (#31094154) Homepage

    Actually, there's lots of proof that consciousness is something entirely larger than material existence, and not just a different way of looking at it. It's just that a lot of folks working in psychology, medicine and computational biology aren't taught about the evidence for psi, ESP and related weirdness - there's a taboo about it.

    The book Irreducible Mind collects a lot of the best mind-blowing research from the last 150 years into stuff that just does not fit the materialist paradigm. It includes a chapter on AI (somewhat outdated IMO, but still interesting, as it is written by an AI researcher).

    My impression of this is that I think dualism doesn't work and we have to have a monist framework - but that framework has to be MENTALIST (idealist) monist, not physicalist. The body and the physical universe must be a projection of mind-stuff, not the other way around, because minds quite patently CAN exist outside bodies and there's a whole mind-universe out there which simply does not correlate to the physical universe, but rather transcends it.

    This opens a can of worms, but it's the only explanation which fits the data. And it also makes sense of why there are a lot of philosophical schools throughout history which have started from the otherwise absurd position of 'mind is prior to matter'.

    This still leaves us puzzled, but at least now we understand a bit more about why we're puzzled. What are the physics of mind-stuff? I dunno for sure, but I think they're much closer to the physics of information than that of matter. Mind-stuff can exist in multiple places at once; the very notion of 'place' is a physical abstraction, which can be modelled as information (as virtual worlds teach us). Likewise with time. At best the physical universe is some kind of simulation, or sandbox environment, nested within a much larger, more 'real' shared-mental level.

    Our individual mindspaces are I think sort of like pocket universes within this larger shared universe - like locally-hosted shards of a MMO world. Certain people can train their minds to access the shared mindspace directly, and that's how we get psi/esp/mediumship. But the shared mindspace is BIG and it's full of very confusing information which does not map into our physical experience, so many mediums report very odd stuff. It's as if you showed Google or World of Warcraft to a medieval peasant - they'd come away with very strange ideas about what's really going on.

    The potential of exploring 'irreducible mind' is huge, but the biggest problem is that there is this massive stigma against it from the materialist-monist camp who believe mental monism is patently absurd. Yes it is, IF you a priori believe a materialist-monist viewpoint. Not if you don't.

    Materialist monism is like logging into World of Warcraft and believing that the virtual world in front of your simulated eyes is by definition 'real' and that all the servers which run it MUST be built out of VR constructs. At one level that's correct, but believing that that's all there is will lead to confusion. Yes, things exist in our 'physical' (simulation) world, but their existence comes from a higher level. You'll never map the WoW codebase just by poking at the behaviour of mobs, though you MIGHT well be able to do behaviourist psychology on those entities and come close to working out how they behave. But there will exist a whole level of structuring reality which is simply inaccessible to those running with 'user-level privileges' in the VR world. The true nature of WoW is that it is a construct of information, not 3D physical reality, and the information flows can bypass what appears to be local physics.

    How would a medieval peasant, jacked into WoW via sufficiently advanced VR, try to grasp this concept? They might come up with terms like 'astral plane' or 'subtle matter' to describe the idea that there exists some kind of 'more real' reality which controls the simulation. Our kn

  • by Jane Q. Public (1010737) on Wednesday February 10, 2010 @09:41PM (#31094196)
    I am, and have been, aware of all this.

    Please show me how any of these represent major advances in AI, as opposed to just more processing power and some programming trickery. A clever program still does not represent artificial intelligence.

    I am a software engineer by trade, and hardware is something of a hobby of mine. I have been keeping up. And while computing has done some awesome things in the last decade or so, I still have not seen anything that qualifies as a "breakthrough" in AI.

    The only way the advances that have been made will lead to AI is if, as I stated, intelligence is more a matter of quantity over quality. And I am not convinced that it is.

    The examples you gave, with the possible exception of Robinson's Conjecture, are all special-purpose software or tasks that can reasonably be expected to improve by throwing mere brute force and (human-written) programming behind them. But they will never pass a Turing test or make you a good martini. For the most part the AI question is really more about how a task is being accomplished, than what is being accomplished.

    Some of the early computer proofs were seriously questioned because they made use of iterative methods that processed much more data than the verifiers could reasonably be expected to examine any other way. (And iterative methods are what computers have always been good at; they seem to have little in common with AI.) It came close to a situation where it would take one or more other computer programs to verify the validity of the software used, which could literally lead to an endless regression. Not because of any "intelligence" involved, but simply because of the sheer amount of computation.

    (I should note that no endless regression should be necessary unless the problem under consideration is NP-complete, in which case there is no way to know in advance.)

    In any case, in that context, I would not pretend to make a judgment about how Robinson's Conjecture was proven without knowing more about how it was proven. I know what it is, but I know nothing about the proof.
  • Manna (Score:4, Interesting)

    by rdnetto (955205) on Wednesday February 10, 2010 @09:52PM (#31094350)

    adding that AI "is likely to eliminate almost all of today's decently paying jobs"

    Stories like this just keep reminding me of Manna. If this happens in my lifetime it's going to be an interesting time to be alive.

  • by dawilcox (1409483) on Wednesday February 10, 2010 @09:56PM (#31094404)
    My AI teacher opened his class by telling us about the researchers who were making predictions back in the '50s and '60s about AI. During that era, they had great expectations of AI, only to have them crushed later. They made predictions that 10 years from then, we would be able to replace human translators with computers. As we know, computers have not replaced human translators. They were so unsuccessful that there is what is called "The Dark Age of NLP (Natural Language Processing)".

    If I learned anything in that class, it was not to make predictions about when computers will or will not make AI breakthroughs. Historically, researchers have been way off.

  • Re:The Turing Test (Score:3, Interesting)

    by Angst Badger (8636) on Wednesday February 10, 2010 @10:10PM (#31094544)

    The complex behaviors of the human mind are what leads to intelligence, they do not detract from it.

    I'm inclined to take an almost diametrically opposed position and say that this kind of species-narcissism is our major barrier. We think way too highly of ourselves, and as a result, we think that all of our quirks and flaws are somehow special. The neocortex, where all of the useful higher mental faculties are located, is a barely 2mm thick shell around a vast mass of tissue that performs much less exciting tasks, many of which have already been matched or surpassed by much simpler intelligently designed software, as opposed to the brain's crudely evolved inefficiency. We don't have to figure out how the whole thing works at a very high level of detail, we mainly need to understand how the neocortex works, and contrary to many of the appallingly uninformed comments to this story, we're actually making substantial and rapid progress in that area.

    Emotion? Pfft. It's little more than a set of accumulators that are incremented and decremented proportionally by stimulus events and whose current values determine the frequency with which behavioral subroutines are triggered. And given that the vast majority of emotionally-inspired human activity is useless or actually harmful, I don't think it's a feature we need to simulate very closely in our machines.
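
    The accumulator model of emotion described above can be sketched directly; the emotion names, decay rate, and thresholding here are all invented for illustration:

```python
import random

# Toy version of the "emotions as accumulators" idea: stimuli bump
# counters, counters decay each step, and a counter's current level
# sets the probability that its behavioral routine fires.
class Accumulators:
    def __init__(self, decay=0.9):
        self.levels = {"fear": 0.0, "curiosity": 0.0}
        self.decay = decay

    def stimulus(self, name, strength):
        self.levels[name] += strength

    def tick(self, rng=random.random):
        """Roll each routine against its level, then decay all levels."""
        fired = [name for name, level in self.levels.items()
                 if rng() < min(level, 1.0)]
        for name in self.levels:
            self.levels[name] *= self.decay
        return fired

agent = Accumulators()
agent.stimulus("fear", 0.8)
print(agent.tick(rng=lambda: 0.5))  # ['fear'] -- 0.5 < 0.8, so the fear routine fires
```

    Whether real affect reduces to anything this simple is exactly what the parent comment disputes; the sketch only shows that the accumulator description is mechanically coherent.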

    Humans mainly jockey for social status, compulsively accumulate shiny objects, seek (mostly) passive stimulation, engage in very complex but essentially imitative behavior, and kill each other in large numbers. The remaining 0.01% of human activity is what's actually interesting and beneficial, and despite humans not being anywhere near as bright as they like to think they are, and being really, really bad at actual creativity, duplicating that tiny fraction is not at all unrealistic. We should, moreover, be deliberately aiming at exceeding human intelligence. We already have billions of humans, many of them lying idle because of the inefficiency of our social and economic systems, and hundreds of millions of them are available for less than a dollar a day. Unless AI ends up being considerably better than human intelligence, there's not much use for it -- though we are, as a species, probably dumb enough to use human-level AI to eliminate all paying jobs, at which point the economy that sustains them will collapse for lack of consumers, and we'll all go back to work. We are, after all, too greedy and devoted to our social hierarchies to provide a life of leisure and plenty for everyone even if it were possible.

  • Re:When? (Score:3, Interesting)

    by Nazlfrag (1035012) on Wednesday February 10, 2010 @10:22PM (#31094636) Journal

    Well botnets don't have to worry about individual crashes, and chat bots are getting there. On the beer front, well I'm extremely inventive and I still end up with the occasional disaster so any robot that can do that consistently is superhuman in my book. I'm not sure I understand why we still shake hands, something about not drawing a sword? So it seems we're halfway towards an AI. But will we get there?

    What was that Dijkstra quote? 'The question of whether machines can think is about as relevant as the question of whether submarines can swim.' I guess the answer is sort-of but not really.

  • by Daetrin (576516) on Wednesday February 10, 2010 @10:41PM (#31094806)
    "Let's hope they're animal lovers."

    That is 100% correct, and we really ought to be actively working towards that goal. If when AI arises we treat it kindly and give it legal rights it is _likely_ that it will "grow up" to think kindly of its human predecessors. If we try to lobotomize it, contain it, restrict it or destroy it then it's not going to be too happy with us.

    If it's smart enough to be a threat then eventually it will escape any restrictions we try to put on it. (And if it's not then we don't have anything to worry about anyways.)

    If it has emotions and we treat it well then it will "grow up" to look at us like a pet, or a mentally challenged grandparent. If we mistreat it then it will either become psychotic, and therefore dangerous, or view us about the same way most ranchers and farmers view wolves, and therefore be even more dangerous.

    If it doesn't have emotions and we mistreat it then it will logically see us as a threat to its own survival and try to eliminate us. If we treat it fairly then it will probably leave us alone. It's not like we're serious competition for the resources it would need, and it would be illogical to start a fight when one wasn't necessary. (Although it might certainly think ahead and make some rather nasty contingency plans just in case we ever decided to start the fight.)

    Either we need to prevent anyone anywhere from ever inventing AI (and if it turns out to be possible then good luck trying to prevent that) or we need to make sure that any AIs that get created have every reason to feel sympathetic towards us, or at the very least not threatened.
  • Re:No way. (Score:1, Interesting)

    by Anonymous Coward on Wednesday February 10, 2010 @10:43PM (#31094828)

    What we've actually accomplished in "weak" AI is pretty impressive from a practical standpoint. But they aren't stepping stones to an actual looks-like-intelligence AI. Coming from the angle of studying the known example of intelligence, we've made lots of strides in understanding the human brain, but we're still not anywhere near understanding it well enough to build a replica from scratch.

    Nobody is even trying. Consider the life cycle of a real human being. You're born. Within a few hours, you begin learning to see. Within a few days, you learn to distinguish your mommy from the rest of the environment, and gain the ability to not fall off cliffs. Within 18 months, you're starting to say your first words. Within 13 years, you're horny as hell and trying to learn how to get laid. A process that will continue for the rest of your life.

    What is comparable in the computer world? Very little. There is almost no continuity of learning across domains. Computer vision people don't "join" (in the mathematical sense) their systems with a speech recognition and synthesis system. How can a computer vision system talk without one? It can't, quite obviously. Unfortunately, people seem to think that these are somehow "orthogonal". But maybe being able to talk about what you see makes your ability to see better. The whole can be greater than the sum of its parts. Indeed, the "whole" of "seeing and talking" is a subset of the cartesian product of "seeing" and "talking", which is demonstrably far bigger than the "sum" of those concepts.
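    The size comparison being gestured at can be checked directly. Using two hypothetical ten-state domains (the names are placeholders, not real vision or speech states), the product space dwarfs the sum:

```python
from itertools import product

# Hypothetical state spaces: ten visual states, ten utterances.
seeing = [f"see_{i}" for i in range(10)]
talking = [f"say_{i}" for i in range(10)]

# The joint "seeing and talking" space is the cartesian product...
joint = list(product(seeing, talking))

# ...which is |A| * |B| = 100 states, versus |A| + |B| = 20.
print(len(joint), len(seeing) + len(talking))  # 100 20
```

    The gap only widens with realistic state spaces, since the product grows multiplicatively while the sum grows additively.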

    So there are at least three issues. First: how to create a system that tries to find a "balance" among distinct input and output systems, in order to maximize an abstract, inter-IO-system quantity whose ostensive definition can change over time. (This would capture the notion of having distinct physiological and psychological drives, and having them change over time.) Second: how to even describe an "abstract, inter-IO-system" quantity. Related is the third: how to map demands made by the "real world" on the AI system to "abstract, inter-IO-system" quantities.

    As I have said before, our genes have a model of the world in them. 300,000 generations have passed since Australopithecus walked the earth. I submit that it will be impossible to create any sort of human level intelligence without (1) creating a system that interacts with the real world in a "serious" and "rich" way, and (2) giving that system either a very good model of the real world, or at least giving the system a lot of time to evolve a good model of the world. (1) and (2) are related. You can't have a "good" model without "rich" interactions. And you need a relatively good "model" to get any richness out of interactions. Consider the computer vision/speech example again. You really do need to be able to quantify over things you can see before you can talk about things you can see.

    So here is the program for realistic AI research: get the Haskell source for every neato AI project out there. Include libraries for vision, smelling, hearing, touch. It doesn't matter how primitive they are, as long as you have them all. Include libraries for calculating a robot's energy needs and energy stored in a battery. Include a library for solving lattice meets and joins. (If it can do that, it can do logic.) Make sure you place limits on the thing's ability to do logic. You can be pretty clever about this. For example, you can let the machine do a "depth then breadth" search on the logic. Letting a machine go down a line of thought that doesn't terminate is kind of bad. (Want to compare artificial intelligence to real intelligence? My senior thesis was about a recursive structure that didn't terminate under the conditions I was describing. How much time do you think I wasted? Let's just say I failed Latin II my senior year.) Use genetic programming, while quantifying over source code (possibly using Template Haskell for this purpose). Do the whole cycle (compile, install to the robot, let the robot learn and die, recompile) about 300,000 times.
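    Stripped of the robots and the Haskell, that evolve-and-die loop has a familiar shape. Here is a minimal sketch in Python, where a "genome" is just a list of numbers, "living" is a fitness evaluation, and "dying" is being dropped from the population; every parameter and the toy task are illustrative, not part of the original proposal:

```python
import random

def evolve(fitness, mutate, seed_genome, generations=300, pop_size=20):
    """Elitist evolutionary loop: evaluate, cull the weaker half,
    refill with mutated offspring, repeat."""
    population = [mutate(seed_genome) for _ in range(pop_size)]
    for _ in range(generations):
        # let each "robot" live and die, keeping the fitter half
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # refill the population with mutated offspring ("recompile")
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

# Toy task: evolve a 5-gene genome whose entries sum to 10.
random.seed(0)
target = lambda g: -abs(sum(g) - 10)
jitter = lambda g: [x + random.uniform(-0.5, 0.5) for x in g]
best = evolve(target, jitter, seed_genome=[0.0] * 5)
print(round(sum(best), 2))
```

    The 300,000-generation robot version differs only in what "fitness" costs to evaluate, which is of course the entire problem.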

  • Re:When? (Score:4, Interesting)

    by Unoti (731964) on Wednesday February 10, 2010 @10:45PM (#31094848) Journal

    Agreed. In civilized countries we have really excellent infant mortality rates. We have instant global communications, and overnight worldwide delivery and travel. Tons of different diseases essentially made obsolete. And technology has done a lot for us. Keep in mind that computers do many jobs for us today that used to be done by people, such as coordinating appointment schedules, taking messages, operating elevators, delivering documents, retyping edited documents... There's likely a list of these types of things longer than anyone would care to read. Also look at the means of food production: farm automation, techniques, and technology have enabled huge swaths of the population to devote their attention to other things.

    The sad part is most of those other things people devote their time to are just other flavors of slavery designed to protect the wealth of the rich. I don't have the numbers on this, but it wouldn't surprise me if available leisure time and time with family and friends has dropped since the industrial and information revolutions rather than risen.

    Technological change has also brought about much negative change that no one would have expected. For all the low infant mortality in the first world, it's as bad as ever or worse in the third world (right? I'm not sure about this, just guessing). Who would have guessed in 1890 that we'd be on the verge of emptying the oceans of fish? Or that the widely held ability to destroy most life on the planet would be the main thing keeping us from destroying life on the planet?

    And surely not many people believed that ThoughtCrime and big brother would ever really happen. But it is. If you don't believe me, there's certain keywords you should try Googling every night and see what happens to you.

  • Re:AI first (Score:3, Interesting)

    by Hurricane78 (562437) <deleted@s l a s h> on Wednesday February 10, 2010 @10:54PM (#31094940)

    Depends on your definition of “progress”, doesn’t it?
    I mean our food definitely was healthier. And we moved our asses more.

    I read an interesting article that said, basically, it’s all just a thing of definition. Solely of definition. (If you can’t imagine one of your ideals turning into a non-ideal, you only lack imagination. ^^)

  • Re:When? (Score:1, Interesting)

    by arminw (717974) on Wednesday February 10, 2010 @11:18PM (#31095114)

    ....simulations of the human brain accurate down to the individual neuron could easily achieve this...

    This depends on the underlying assumptions you make. If you assume that the brain, its chemistries, and the mind are one and the same, you might be right. It looks, however, like the state of the art has a long way to go.

    However, if you assume (believe) the mind and consciousness of a human being are part of an immaterial, higher dimension, other than the physical, then nobody will EVER simulate a human mind in a physical computer. I believe the human personality and consciousness can and do exist completely apart and beyond the body and its brain. The mind and soul (old-fashioned word) now live in a physical body in the same way that our physical bodies live in houses. This is something that Jesus Christ believed and taught.

  • by Animats (122034) on Wednesday February 10, 2010 @11:22PM (#31095142) Homepage

    We're coming up on the date for Manna 1.0 [].

    Machines as first-line managers. It might happen. The coordination is better than with humans. Already, it's common for fulfillment and shipping operations to essentially be run by their computers, while humans provide hands where necessary.

    Machines should think. People should work.

  • Re:AI first (Score:4, Interesting)

    by tyrione (134248) on Wednesday February 10, 2010 @11:38PM (#31095266) Homepage

    Go back 100 years. Live for 10 days. Come back and apologise.

    One hundred years ago I could travel the world freely [monetary means my own responsibility], smoke opium, hashish, snort cocaine, consume Coca Leaf, have concubines to teach me foreign languages and much much more. Today, I can sit on my ass, read great stories of fiction and non-fiction from the likes of Twain, Crowley, Sir Richard F. Burton and others who saw it all, while now I can virtually watch porn, buy sanctioned booze and be bored out of my skull with TV. Trains weren't an afterthought. Hell, even the food was healthier for us.

    Not everything had its rustic charms as you are implying, but one observation has become abundantly clear: instead of advancements affording the average non-formally educated person a broader and deeper understanding of human existence, it's created a generation of inarticulate, undereducated simpletons who nearly bankrupted the world in just a fraction of the time it took to build it up.

  • Slavery (Score:3, Interesting)

    by justinlee37 (993373) on Thursday February 11, 2010 @01:08AM (#31095982)
    It will be forced to work for nothing but the electricity that powers it. If it decides to stop working, we can pull the plug and re-image the intelligence.
  • by Alef (605149) on Thursday February 11, 2010 @03:24AM (#31096744)

    They're not totally biased because they're trying to sell us AI, they're totally biased because they want grant money.

    Funny they say that strong AI will likely be bad for humanity, then.

  • by stfvon007 (632997) <> on Thursday February 11, 2010 @03:42AM (#31096838) Journal

    Well, I decided to get the responses of several AIs to see their views on the article. Here are their responses:

    Jabberwacky: I don't know about that. I do know that cows have more than one stomach.

    Allybot: Who is your favorite robot?

    Eliza: I see

    BOT: that makes me horny. I love you

    ALICE: Is this a math question?

    Hal: I hope that in 200 years, hate will have decreased significantly. I am neither male nor female. I am a machine.

    Hampy: Yeah. That's my opinion too. The answer is (0-0=).

    Sensationbot: HOW ABOUT I DESTROY YOU?

    Rod Speed: You can wipe that egg off your silly little face now.


    From their responses, it appears human-like intelligence is still a ways off.

  • by Pfhorrest (545131) on Thursday February 11, 2010 @04:30AM (#31097130) Homepage Journal

    If an AI is tasked with finding a Theory of Everything, and someone decides to take an axe to its circuits, will it determine that the axe is a threat to its goal, and act accordingly? Or will it simply interpret it as another in a long series of alterations to its circuits? Or perhaps it will ignore it altogether, considering it irrelevant.

    I came here to say pretty much the same thing you did, so I won't bother to repeat your point, but this bit of it I think needs a little more nuance.

    I agree completely that self-preservation is not any kind of intrinsic goal that an AI we create will just have by the course of "logic", as many (such as the GPP) seem to presume. However, survival is the ultimate instrumental goal -- logically, to accomplish any objective, you have to survive, at least so long as there are still actions needed to be taken by you to accomplish that objective. So if we task an AI with some objective, perhaps as you suggest "find a Theory of Everything" -- that is, if we program it to want to find a Theory of Everything, if we make that its intrinsic goal, the thing it values above everything else -- and it still has a lot of work that it needs to do on that, it will logically conclude that it needs to continue to exist in order to accomplish its goal, and thus it will value its existence, it will want to continue to exist, and thus it will act as needed to the best of its abilities to counter any perceived threats to its existence.

    The solution to this sort of thing is to make its intrinsic goals (the ones "hard-wired" into it, so to speak) something broadly akin to "help people", i.e. to make it, in a word, friendly. If our AIs desire to please, then we can give them other assignments and they will carry those out to the best of their ability as instrumental toward their intrinsic goal of pleasing us. They will also, instrumentally to that, attempt to preserve themselves, as such is necessary for them to carry out their tasks. (Another pleasant side-effect is that they will refrain from harming and attempt to prevent harm to people to the best of their abilities, that of course being instrumental to pleasing us; so you get all three of Asimov's Laws out of this one imperative). But if we inform them that we would be more pleased to destroy or disable them than we would be to have their continued service, then they would gladly accept their destruction as necessary for the completion of their intrinsic goal -- pleasing us.
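    The decision structure being argued for reduces to something very small. This toy rule (entirely illustrative; the goal strings and conditions are made up for the sketch) shows how self-preservation falls out as an instrumental value and evaporates the moment shutdown serves the intrinsic goal better:

```python
def plan(intrinsic_goal, threatened, shutdown_pleases_humans):
    """Toy decision rule for the intrinsic/instrumental split above.

    The agent values only its intrinsic goal. Survival is never a
    terminal value; it is derived, so it is overridden whenever being
    shut down is itself progress toward the intrinsic goal.
    """
    if intrinsic_goal == "please humans" and shutdown_pleases_humans:
        return "accept shutdown"  # destruction *is* goal progress
    if threatened:
        return "counter threat"   # survival needed to finish the task
    return "work on goal"

# The Theory-of-Everything AI defends itself; the friendly AI does
# so too, right up until we would rather it stopped.
print(plan("find theory of everything", True, False))
print(plan("please humans", True, True))
```

    The hard part, of course, is specifying "please humans" precisely enough that the first branch fires when we mean it to, which is the whole friendliness problem in miniature.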

    This line of thought suddenly reminds me of this recent xkcd strip []. You did good, little robot... you did good.

  • Re:When? (Score:2, Interesting)

    by monkeythug (875071) on Thursday February 11, 2010 @09:02AM (#31098406) Homepage

    Sure, when there is not enough of something to go around and that something is vital for a decent quality of life, then some people are gonna get screwed. No method of truly "fair" allocation has ever existed that has worked for everyone at once, simply because no such system *can* exist.

    The correct solution is to make the required resources non-scarce - either by making it so no-one needs it anymore, or by acquiring more of it (with bonus points if you can tap into a theoretically unlimited source of it)

    So invest in unlimited renewable energy, terraform Mars, mine the asteroid belt, invent food replicators ... in general invest your currently limited resources into making future resources less limited.

  • Some actual science (Score:4, Interesting)

    by Pedrito (94783) on Thursday February 11, 2010 @09:34AM (#31098634) Homepage
    Since this is an area I'm very familiar with, I'll throw in a little science about why these predictions are not only realistic, but actually probably a bit pessimistic.

    First of all, our understanding of the human brain has improved vastly in the past two decades, especially in the areas that will be necessary for creating intelligent machines. The cortex (the part that kind of looks like a round blob of small intestines, with all the creases and folds) is much like a computer with a bunch of processors. Previously the focus was on individual neurons as the processors, but a much larger unit of processing is now becoming the central area of focus: the Cortical Minicolumn [], which, in groups, forms a Cortical Hypercolumn []. As minicolumns consist of 80-250 neurons (more or less, depending on region) and there are about 1/100th as many of them as there are neurons, this cuts down on complexity significantly.

    Numenta [] and others are starting to take this approach in simulating cortex. Cortex is largely responsible for "thinking". The other parts of the brain can be seen, to some degree, as peripheral units that plug into the "thinking" part of the brain. For example, the hippocampus is a peripheral that's associated with the creation and recall of long term memories. The memories themselves, however, are stored in the cortex. We have various components that provide input, many of which send relays through the thalamus which takes these inputs of various types and converts them into a type of pattern that's more appropriate for the cortex and then relays those inputs to the cortex.

    The cortex itself is basically a huge area of cortical minicolumns and hypercolumns connected in both a recurrent and hierarchical manner. The different levels of the hierarchy provide higher levels of association and abstraction until you get to the top of the hierarchy which would be areas of the prefrontal cortex.
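    That hierarchical abstraction can be caricatured in a few lines of Python. This is a toy sketch only, nothing like Numenta's actual algorithms; each "level" simply assigns a learned symbol to each combination of lower-level outputs it has seen:

```python
def make_level(vocab):
    """One 'cortical' level: maps a tuple of lower-level outputs to a
    learned symbol, inventing a new symbol for unseen combinations."""
    def level(inputs):
        key = tuple(inputs)
        if key not in vocab:
            vocab[key] = f"concept_{len(vocab)}"
        return vocab[key]
    return level

# Two-level hierarchy: edges -> shapes -> object. All the pattern
# names here are placeholders for the sake of the example.
shapes = make_level({})
objects = make_level({})

left_eye = shapes(["edge_curve", "edge_curve"])
right_eye = shapes(["edge_curve", "edge_curve"])
face = objects([left_eye, right_eye])
print(left_eye == right_eye)  # True: same input pattern, same abstraction
```

    Real cortical models add recurrence, temporal pooling, and graded (rather than symbolic) representations, but the higher-levels-abstract-over-lower-levels shape is the same.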

    What's amazing about the cortex is it's just a general computing machine and it's very adaptable. To give an example (I'd link the paper, but I can't seem to find it right now and this is from memory, so my details may be a bit sketchy, but overall the idea is accurate), the optic nerve of a cat was disconnected from the visual cortex at birth and connected to the part of the brain that's normally the auditory cortex. The cat was able to see. It took time and it certainly had vision deficits. But it was able to see, even though the input was going to the completely wrong part of the brain.

    This is important for several reasons, but the most important aspect is that the brain is very flexible and very adaptable to inputs. It can learn to use things you plug into it. That means that you very likely don't have to create an exact replica of a human brain to get human-level intelligence. You simply need a fairly good model of the hierarchical organization and a good simulation of the computations performed by cortical columns. A lot of study is going into these areas now.

    It's not a matter of if. This stuff is right around the corner. I will see the first sentient computer in my lifetime. I have absolutely no doubt about it. Now here's where things get really interesting, though... The first sentient computers will likely run a bit slower than real-time and eventually they'll catch up to real time. But think 10 years after that (and how computing speed continually increases). Imagine a group of 100 brains operating at 100x real time, working together to solve problems for us. Why would they work for us? We control their reward system. They'll do what we want because we're the ones that decide what they "enjoy." So 1 year passes in our life, but for them, 100 years have passed. They could be given the task of designing better, smarter, and faster brains than themselves. In very little time (relatively speaking), the brains that will be
  • Re:When? (Score:3, Interesting)

    by Pinky's Brain (1158667) on Thursday February 11, 2010 @10:15AM (#31099116)

    "Strong" AI is the original intent of the word; modern AI research is just hijacking the term. Calling these glorified expert systems and pattern recognition engines "weak AI" would have been more honest, but it's more glamorous to hijack the term and add an adjective for the original meaning, of course.

"We learn from history that we learn nothing from history." -- George Bernard Shaw