Artificial Intelligence at Human Level by 2029?
Gerard Boyers writes "Some members of the US National Academy of Engineering have predicted that Artificial Intelligence will reach the level of humans in around 20 years. Ray Kurzweil leads the charge: 'We will have both the hardware and the software to achieve human level artificial intelligence with the broad suppleness of human intelligence including our emotional intelligence by 2029. We're already a human machine civilization, we use our technology to expand our physical and mental horizons and this will be a further extension of that. We'll have intelligent nanobots go into our brains through the capillaries and interact directly with our biological neurons.' Mr Kurzweil is one of 18 influential thinkers, and a gentleman we've discussed previously. He was chosen to identify the great technological challenges facing humanity in the 21st century by the US National Academy of Engineering. The experts include Google founder Larry Page and genome pioneer Dr Craig Venter."
Oblig. (Score:5, Funny)
Re:Oblig. (Score:5, Insightful)
Speaking as an engineer and a (~40-year) programmer:
Odds are extremely good for beyond-human AI, given no restrictions on initial and early form factor. I say this because thus far we've discovered nothing whatsoever that is non-reproducible about the brain's structure and function; all that has to happen here is for that trend to continue. And given that nowhere in nature, at any scale remotely similar to the range that includes particles, cells and animals, have we discovered anything that appears to follow an unknowable set of rules, the odds of finding anything like that in the brain -- that is, something we can't simulate or emulate with 100% functional veracity -- are just about zero.
Odds are downright terrible for "intelligent nanobots." We might have hardware that can do what a cell can do -- hunt for (possibly a series of) chemical cues and latch on to them, then deliver the payload, perhaps repeatedly in the case of disease-fighting designs -- but putting intelligence into something at the nanoscale is a challenge of an entirely different sort, one we have not even begun to move down the road on. If this is to be accomplished, the intelligence won't be "in" the nanobot; it'll be a telepresence for an external unit (and we're nowhere down *that* road either -- nanoscale sensors and transceivers are the target, and we're more at the level of "Look, Martha, a GEAR! A Pseudo-Flagellum!")
The problem with hand-waving -- even when you're Ray Kurzweil, whom I respect enormously -- is that one wave out of many can include a technology that never develops, and your whole creation comes crashing down.
I love this discussion. :-)
Re:Oblig. (Score:4, Interesting)
Note that it is not necessary to build 'perfect robots'. People think, and yet they are not perfect. They make mistakes, yet navigate through life. So we do not have to make flawless logic brains. The way people work is that we try to find good if not optimal solutions to problems, but we do not always exhaustively search for the perfect solution. Thus many problems in life can be solved in different ways than you would expect. We do not have to build a machine that finds the optimal solution to a traveling salesman problem in order to make a system that can walk from the kitchen to the front door. It just has to be able to get there reasonably optimally. Also, we do not have to replicate the human brain in order to think much like a human, we merely have to come up with functional systems that can provide similar functions. For instance, the human brain has the amygdala, which can be likened to an interrupt controller for emotional responses. Well, that functionality can be done in a hardware-software system that reasons about priorities of tasks and goals depending on their current 'value' of urgency to the 'brain'.
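The "good enough beats optimal" point can be made concrete with a toy traveling-salesman sketch (the cities and the nearest-neighbour rule here are mine, purely for illustration): the greedy route costs a few lines and scales fine, while the provably optimal answer needs a factorial-time sweep.

```python
# Toy traveling-salesman comparison: a greedy "reasonably optimal"
# route vs. the exhaustively optimal one. The city layout is made up.
import itertools
import math

cities = [(0, 0), (3, 0), (3, 4), (0, 4), (1, 2)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order):
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def greedy_tour(start=0):
    # Nearest-neighbour heuristic: O(n^2) work, no optimality guarantee.
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        nearest = min(unvisited, key=lambda c: dist(cities[tour[-1]], cities[c]))
        tour.append(nearest)
        unvisited.remove(nearest)
    return tour

# Exhaustive search: optimal, but O(n!) -- hopeless beyond a dozen cities.
best = min(itertools.permutations(range(len(cities))), key=tour_length)

print(tour_length(greedy_tour()), tour_length(list(best)))
```

On this tiny instance the greedy tour happens to match the optimum; in general it is merely close, which is the point: close is enough to get from the kitchen to the front door.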
Many current researchers are missing the mark. For example, as good as it is, the widely-used AI textbook by Stuart Russell and Peter Norvig (who heads research at Google) has major omissions: it does not dwell on key things needed to bridge between AI and human psychology. Other things, like the OCC model of emotion used in AI, are incomplete and incorrect in parts. A new approach has been needed, and it's one I've been developing for decades in stealth mode. I'm writing a 5-volume book set on it; I want it to be the Knuth set of AI.
I'm in the process of patenting the mechanizations of my underlying technologies, and trying to cut deals with companies making multicore processors so their architectures support the thread swapping needed to make virtual neural nets practical. Once we get a 1024-simplified-core processor that supports virtual NNs, it'll be a lot easier to build a machine from many of these that does for NNs what disk swapping does for OSes than to build a billion-neuron hardwired machine. And it will be easier to do visual perception systems properly, too. So Ray is right. If I can drive certain companies to build the right silicon, we can get there by or before 2029. My current software does what I said, but it's too slow on current hardware. It needs new processors and new system architectures, and it will take 20 years to get the infrastructure all built up. But not a lot more.
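For readers wondering what "disk swapping for NNs" might even look like, here is a deliberately naive sketch of paging neuron state in and out of a fixed resident budget. It is my own toy illustration of the general idea, not the poster's (proprietary) design; the page sizes, FIFO eviction, and decay update are all invented for the example.

```python
# Naive "virtual neural net": ten times more neuron pages than the
# pretend core budget holds, paged in and out FIFO.
import random

random.seed(0)
PAGE_SIZE = 1000
N_PAGES = 10
RESIDENT_PAGES = 2   # only two pages "in core" at any moment

# Backing store ("disk"): activation state for every page.
backing_store = {p: [random.random() for _ in range(PAGE_SIZE)]
                 for p in range(N_PAGES)}
resident = {}        # page id -> activations currently swapped in

def touch(page):
    """Bring a page into the resident set, evicting FIFO if over budget."""
    if page not in resident:
        if len(resident) >= RESIDENT_PAGES:
            victim = next(iter(resident))                 # oldest resident page
            backing_store[victim] = resident.pop(victim)  # write it back
        resident[page] = backing_store[page]
    return resident[page]

# One update sweep over the whole net: decay every activation by 10%,
# touching one page at a time so residency never exceeds the budget.
for page in range(N_PAGES):
    resident[page] = [a * 0.9 for a in touch(page)]

# Flush the still-resident pages back to the store.
backing_store.update(resident)

print(len(resident), max(a for p in backing_store.values() for a in p))
```

The design choice being caricatured: the expensive resource (resident "core") is bounded and constant, while the net itself can grow arbitrarily in the cheap backing store, exactly as with OS virtual memory.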
Re:Oblig. (Score:5, Interesting)
That's a *huge* claim; if it is true, you have AI now. Because -- as I explained in a previous post in this thread -- speed is absolutely irrelevant. If you can demonstrate your claim that your software operates now, no matter *how* slowly it operates, you are at the end of your funding issues, not to mention any other issues you may face in life. Which -- to be frank -- is why I doubt your claim. At the point you explicitly claim to be at, I'd already own a mega-yacht and be pulling up next to a lot of potential love.
But good luck, and I really mean that. I'd much rather be wrong and see you bring this right to the table, even if you have completely blown the financial potentials of the development process.
Re:Oblig. (Score:4, Interesting)
I define consciousness and awareness within a pre-determined architecture, not entirely self-organizing from scratch. The visual front-end in particular is very rigid, but I think it is okay because there is little need for an organism to self-evolve completely new architectures, but rather to be able to run cascaded pattern-recognizers. See the work of Biederman for examples of this. The VFE feeds deeper processing doing cognition, and there is feedback from that to the VFE for training. Just as a baby learns to see, and recognize shapes, and build up from that. The front end is trained as the cognitive end learns and grows too.
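The "cascaded pattern-recognizers" idea can be caricatured as nothing more than composed stages, each feeding the next. The stages below are invented stand-ins (crude edge, part, and label steps), not Biederman's actual model or the poster's visual front-end:

```python
# Minimal cascade of pattern-recognizer stages: edge detector feeds a
# part grouper feeds a toy classifier. All thresholds are illustrative.

def edges(pixels):
    # Stage 1: crude "edge" map -- absolute differences of neighbours.
    return [abs(b - a) for a, b in zip(pixels, pixels[1:])]

def parts(edge_map, threshold=0.5):
    # Stage 2: keep positions of strong edges as candidate "parts".
    return [i for i, e in enumerate(edge_map) if e > threshold]

def label(part_ids):
    # Stage 3: toy "cognition" -- classify by how many parts were found.
    return "object" if len(part_ids) >= 2 else "background"

def cascade(pixels):
    return label(parts(edges(pixels)))

print(cascade([0, 0, 1, 1, 0, 0]))  # two sharp transitions -> "object"
print(cascade([0, 0, 0, 0]))        # flat input -> "background"
```

In a real system the deeper stage would also send training feedback to the earlier ones, as the comment describes; that loop is omitted here for brevity.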
AI does not spontaneously rise alone from massive databases, either. I view that approach as useless and a false trail. However, human intelligence does depend on belief systems and knowledge, and those continually grow as we mature from infancy. But to create the equivalent of an 18 year old, you have to have what amounts to 18 years of accumulation of knowledge about the world, and draw upon that. And there is a key but proprietary subtlety about that I'm devoting an entire volume to, that is the key to humanlike AI. That volume is essentially a doctoral thesis about consciousness reworked for use by a design staff. As for funding, no yachts yet. But I'm real, sane, and not a charlatan, and have explained my technology to my patent attorney. I expect to be hiring staff within two years. I posted on Slashdot not for glory but to counter all the nay-sayers who haven't a clue what is achievable.
Re: (Score:3, Insightful)
Usually when someone makes such fantastic claims, like being very close to cracking AI, or trying to become AI's Don Knuth, the person is either clearly trying to be ironic, or leaves the distinct impression of being a bit unhinged.
As you seem to be both sincere and making a lot of sense, I have a message for you:
Stop. Stop right now. If you do not, Skynet will destroy us all.
Thank you.
Re:Oblig. (Score:5, Funny)
Yes, but what do they mean by "human level intelligence", in particular, which human are we talking about? I mean, if "human level intelligence" means "as smart as George W. Bush", then I wouldn't trust that machine to handle my taxes, let alone any really critical tasks.
Re:Oblig. (Score:4, Insightful)
Whatever definition of intelligence you choose, it probably includes learning and reasoning components. We have some effective learning algorithms, provided your domain is very specific and you have boat loads of training data. We have next to no good reasoning algorithms. Complete search is a dead duck and incomplete search is not very reliable. Worse, search algorithms get seriously confused when the data base is inconsistent (humans are good at maintaining several incompatible world models simultaneously). And that's all before you consider that we have no psychological models of human reasoning that are anywhere near being specific enough to guide an implementation project (please don't mention "Society of Mind"). Finally, there is precious little funding out there for this kind of research, which is a shame, but there you go.
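To put numbers on why complete search is a dead duck (the branching factors and the states-per-second rate below are illustrative assumptions, not measurements of any particular reasoner):

```python
# State-space sizes for exhaustive ("complete") search: b**d states at
# branching factor b, depth d, swept at an assumed 1e9 states/second.
for b, d in [(2, 20), (10, 10), (35, 10)]:   # ~35 is often quoted for chess
    states = b ** d
    seconds = states / 1e9
    print(f"b={b:>2}, d={d:>2}: {states:.2e} states, {seconds:.2e} s to sweep")
```

Even a generous billion states per second leaves the chess-like case at millions of seconds per exhaustive sweep, which is why practical systems fall back on the unreliable incomplete searches the comment mentions.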
Re: (Score:3, Insightful)
That's probably because we have discovered little about the brain's structure and function.
> all that has to happen here is for that trend to continue
Well, the field of AI in the last 40 years has made practically zero progress towards human-like intelligence, but I agree with you -- the trend will likely continue.
Re: (Score:3, Insightful)
No, it is *probably not*. It *may* be, but since *nothing else* has presented us with that kind of problem, the odds of the brain doing so are pretty darned slim. You are postulating a heretofore never-achieved discovery in the course of determining how a mundane (by every indication) biological system, constrained as far as we know by the same physics and chemistry everything else is, operates. Considering the *f
Re: (Score:3, Interesting)
Re:Oblig. (Score:4, Insightful)
Odds are extremely good for beyond-human AI, given no restrictions on initial and early form factor. I say this because thus far we've discovered nothing whatsoever that is non-reproducible about the brain's structure and function; all that has to happen here is for that trend to continue. And given that nowhere in nature, at any scale remotely similar to the range that includes particles, cells and animals, have we discovered anything that appears to follow an unknowable set of rules, the odds of finding anything like that in the brain -- that is, something we can't simulate or emulate with 100% functional veracity -- are just about zero.
At present, hardware will crash if a few bits get in the wrong places or are stored incorrectly. One of the things about organic lifeforms is that our consciousness doesn't cease to exist if one of our neurons misfires; at worst we get a seizure or possibly a hallucination. Any machine that's going to surpass us would have to turn those wrong bits into something meaningful without human intervention, even if they are just unexpected rather than outright wrong.
It may very well be that computer technology will solve that problem, but quite a bit of what we are comes from these random misfirings and unpredictable unreliable results. Modeling what humans are presently like, or even modeling what humans are like at the point when this becomes realistic is far easier than creating something that will outdo us by intellect.
I'm somewhat skeptical when you say that there is nothing a person's brain can do which cannot be modeled by software. When it comes to talking, moving, building, following instructions and things of that nature, I see no reason why a machine couldn't be taught to do those things as well as we do. But when it comes to more subtle things -- things which require creativity, sometimes things which require a deliberate violation of typical common sense -- I'm skeptical that a machine could be taught to do so.
I'm especially skeptical considering that we don't even know most of the things which the human brain does, or how it does them. We know many things, and we know enough to greatly benefit ourselves, but there is still a fair amount we don't understand about the brain. It is not a simple organ to understand; just in the last 10 years, the amount of information gained about it is sufficient for me to suggest that you shouldn't claim that every part of the brain can be simulated.
I really don't want to suggest that it is impossible for us to create something that surpasses our own selves, but doing so would require things which we haven't even dreamed up yet. Teaching a computer AI to be capable of meaningful creativity isn't something which is yet even on the most distant horizon; none of the programming languages or toolkits available presently offer that sort of capability in anything resembling a reasonable number of lines of code.
Re: (Score:3, Insightful)
Re:Oblig. (Score:5, Interesting)
Maybe they aren't. But when you say a few centuries, I can't agree anymore. Let's imagine one century. Now we're hitting 1.12589991 × 10^15 times. A human brain is CERTAINLY within that complexity range. The caveat here is: can we maintain the doubling rate for a full century? Well... Ray thinks we'll do far better than that (his "law of accelerating returns"); I'm not convinced we'll even be able to sustain the rate -- I think honestly we're looking at a plateau maybe 10 or 20 years down the line, and we will look back at computing as an S-curve until the next big breakthrough, which nobody can predict. In my view the last couple of "next big breakthroughs" happened at convenient times to make it look like we weren't following an S-curve but were just getting sharper and sharper, and I don't see any reason why the next one should happen just as conveniently. But since it's unpredictable, I could be blindsided by it and it could happen next week.
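For anyone checking the parent's figure: 1.12589991 × 10^15 is exactly 2^50, i.e. fifty doublings over a century at one doubling every two years (the two-year cadence is my reading of the parent's assumption; Moore's own figure was 18-24 months).

```python
# Where 1.12589991e15 comes from: fifty doublings in a century,
# assuming one capability doubling every two years.
doublings = 100 // 2
factor = 2 ** doublings
print(factor)   # 1125899906842624 == 2**50, i.e. ~1.1259e15
```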
Language isn't far off at all, we just about have it already. Emotions are nebulous and some people will move the goalposts forever, while some may prematurely be convinced by a video game character. I'm not necessarily convinced they are the hardest part of this. I don't know how to make them, but I don't know how to do the rest of this too. I just often see emotion being listed as the be all and end all most difficult task and I've never seen any reason to believe that to be so.
Re: (Score:3, Interesting)
I made the same estimate myself a few years ago
Re: (Score:3, Insightful)
What I see happening is that the current kind of development will slow
Re:Oblig. (Score:5, Interesting)
Well, let's look at the rate of general progress in computing. In 1971, we were putting 2,300 transistors on a chip, running at a few hundred kHz. In a fairly smooth progression, we've gotten to 3 GHz, where we're likely to stay, and today we're at about two billion transistors on a chip, with no end in sight as to how far that can go. This is not Moore's law; Moore's law is about how many fit into a particular space, while this is about how many can be integrated into a functional unit. That's 36 years. Thirty-six years from now, that ability to "simulate a few cells" should grow, just in the *normal* scheme of things, into an ability to simulate a billion or so cells without any trouble. But there's more to this. Not everything in a cell needs to be simulated; for instance, metabolic processes such as waste generation and removal don't, nor do breakdown, aging, impacts by free radicals, all of that. Part of what needs to be done between here and the goal is to streamline the simulation so that it is operating in the zone of mentation and not biological imperatives. I suspect, and yes indeed this is just my opinion, that the simulation will be much easier when we understand just what it is we need to simulate.
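Running the comment's own numbers backward gives the implied doubling time (a back-of-the-envelope check, nothing more):

```python
# Implied doubling time from the figures above: 2,300 transistors in
# 1971 to roughly two billion per chip 36 years later.
import math

doublings = math.log2(2e9 / 2300)
years_per_doubling = 36 / doublings
print(round(doublings, 1), round(years_per_doubling, 2))  # ~19.7, ~1.8 years
```

About 1.8 years per doubling, which is close enough to the classic 18-24 month Moore cadence to make the 36-year extrapolation at least internally consistent.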
This all leaves out the issue of non-simulating intelligence, where the thinking is not patterned after human mechanisms; this could arise from evolutionary software or something along those lines. And of course, one of the reasons that all this is kind of a holy grail anyway, only the first intelligence is difficult; the second... Nth is just a matter of copying a machine state.
As for language, that's solved in the I/O sense -- synthesis and "listening" are both satisfactorily complete. Intelligent discussion can only be expected from an intelligent machine, so that's only as far away as machine intelligence is.
Small animals, I'm of the opinion, are a lot more intelligent than most people give them credit for. They just have a different intelligence. I am sure that we will go through the small animal level on the way to our level, and beyond; the thing is, if you can do the one, you can do the other. There's no indication of a significant difference in the wetware, there's just more of it and it is arranged somewhat differently. No reason to expect anything different from hardware designed to do the same job.
Why? Small animals do both. Those aren't even the hard things. The hard things are introspection and self-awareness. Those are the ones we have not even a theory for, today. In any case, your ideas are certainly in with a lot of good company; but not me. I think we're only one discovery - algorithmic in nature - from AI. Self-awareness may turn out to be a property that self-organizes and arises without any special prodding from us; that would be marvelous, not to mention fortuitous, but hardly impossible - again, that's how nature did it.
Here's why I think we're just an algorithm away. If you left a question that absolutely required intelligence on a counter, and went back to pick it up the next day, and the answer was there -- you would agree that an intelligence had answered the question. Whether a human answered it in one second or an AI answered it in 23 hours, it's still just as intelligent an answer when you pick it up. The point is that speed really isn't the issue. The issue is the process, that is, the algorithm. So it turns out that in terms of speed, number of transistors, etc., that's really not the limiting factor for developing intelligence.
Re:Oblig. (Score:5, Interesting)
http://vesicle.nsi.edu/users/izhikevich/human_brain_simulation/Blue_Brain.htm#Simulation%20of%20Large-Scale%20Brain%20Models [nsi.edu]
Simulation of 1 second took 50 days on a cluster of 27 PCs (~4.3M times slower than realtime). Eugene is a pretty smart guy, though not as prominent as Kurzweil. Here is Eugene's estimate for the AI timeline:
http://vesicle.nsi.edu/users/izhikevich/human_brain_simulation/why.htm [nsi.edu]
You may also want to google for Henry Markram and his current project.
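As a sanity check on the "~4.3M times slower" figure above (just unit arithmetic):

```python
# 50 days of wall-clock time per 1 simulated second of brain activity.
wall_seconds = 50 * 24 * 3600     # seconds in 50 days
slowdown = wall_seconds / 1       # per simulated second
print(slowdown)                   # 4320000.0 -> ~4.3 million x realtime
```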
Re:Oblig. (Score:4, Interesting)
Slight nitpick, currently we are at one billion transistors on a chip, not two, but that doesn't really change the point you are making.
A bigger issue that I have with what you've described is that simulating a brain is not the same as "solving" AI. The problem that Kurzweil has is that he refuses to accept that there is a difference. Sure, if they are the same then strong AI is inevitable and it's merely a question of building fast enough hardware. But why assume that they are the same thing?
Twenty years from now we may have hardware that can simulate an entire human brain, and yet we may be no closer to understanding how to solve the many problems in AI. The mental sleight-of-hand that Kurzweil applies to this argument is: once we can simulate a brain we have AI; therefore the AI can design the next generation; therefore we will reach the singularity. This argument is a logical fallacy because it assumes that being able to run the system and knowing how to design the system / how the system works are equivalent.
Everything that we know about complex and dynamic systems tells us that this is not so. Given a simulation of the brain, it is reasonable to assume that intelligence is the ultimate emergent property of the system. Understanding this property and how to refine it is the hardest problem that mankind has ever undertaken. Currently we don't really know how to pose the question, let alone how to arrive at an answer. To assume that some kind of standard engineering methodology will solve this in 20 years is wild speculation.
As always with AI, the hardware will be available but nobody yet knows if we can write the software to run on it.
Re: (Score:3, Interesting)
Do you realize how wrong that statement is? There is a tremendous difference in quality (not quantity) between a normal animal, like a young Chimpanzee, and a young human with about the same mental volume. (OK. Make it a young gorilla.) The difference is that humans are wired to communicate symbolically with GRAMMAR!! Chimpanzees can "sort of" learn grammar. But they don't b
Re:Oblig. (Score:4, Insightful)
That is incorrect. Language is the ability to communicate feelings, goals, results. It is not "speech." Some birds do indeed have the capability of speech, that is, they can make the same sounds we can, closely enough as to make no difference. Apes, however, have demonstrated actual communications using symbols, and even dogs have recently been found to have a consistent, though very small, vocabulary. Elephants and other animals have demonstrated the ability to think in the abstract (the "recognize one's self in the mirror and operate on the information thus provided experiments.) Lemurs use calls to communicate safety and status. Don't confuse the lack of vocal apparatus with an inability to communicate. They're not the same thing at all.
As for the rest, I think you've got it, essentially, but we disagree on scales. We'll see.
Er... no.
Language is much more than that: it is a system of symbols that can even be used to describe any other symbolic system, and which can be extended at need and at will; animal communication shows little or no indication of that.
Nobody in their right mind would deny that animals can communicate, and even that they can communicate very well.
However, that alone does not make them capable of using a language.
The cognitive leap a simple verbing of a noun requires is beyond any other animal.
Re:Oblig. (Score:5, Interesting)
We already know that it's possible to contain 100% of real-time human brain functions in a casing 10cm by 10cm by 10cm and weighing under five pounds. Now we have to build one from the ground up with potentially slower, yet better understood technology. The problem, unfortunately, isn't related to hardware. I have no doubt that processor power will soon be sufficient for our needs, but without software that can think on the level of a human, it's just another personal platform to play Duke Nukem Forever on.
Re:Oblig. (Score:5, Insightful)
Most of us know jack about the algorithms that allow us to catch a baseball in flight, yet we can still do it. Furthermore, a person from 10000 BC with no math at all by today's standards could do it just as well as we can. Implementing solutions does not always require a complete understanding of what you've done. You can even be wrong and it'll still work for other reasons. So hard-pegging this to what we "know" could be a severe error.
That's a very bold statement, especially since (a) that's the way nature does it for all its intelligences, high and low, so we know the process works in the general case, and (b) as you say, we don't know many things yet, so claiming that we "know" what won't work seems to be disingenuous or at the very least not well thought out.
I think it is important not to conflate the fact that we don't understand something with the idea that it will be difficult once figured out or discovered as a consequence of some fortuitous sequence of events. That's been shown again and again not to be the case. It *may* be so, but it is by no means certain to be so, and for that matter, it isn't indicated by the complexity of the brain's hardware. The brain is considerably more formidable as a mass of immensely complex moderated connectivity than it is as a collection of cellular-level mystery machines, and a good deal of the complexity at the cellular level is almost certainly irrelevant to the task of thought -- keeping the cell alive is probably in no way related to non-pathological mental operation, yet there's a lot of hardware and systems involved in the task.
Re: (Score:3, Insightful)
You (and most proponents of AI) have failed to answer any of the philosophical/me
Re: (Score:3, Interesting)
And no, simply copying the brain structure will not be the answer.
That's a very bold statement, especially since (a) that's the way nature does it for all its intelligences, high and low, so we know the process works in the general case, and (b) as you say, we don't know many things yet, so claiming that we "know" what won't work seems to be disingenuous or at the very least not well thought out.
Copying the structure of the brain in all particulars would produce a brain, and so we would still have failed at producing AI. The best we could claim is to have copied natural intelligence artificially.
I agree with your claim of disingenuity or at least a lack of forethought, though. In theory, if we understood the brain better, we would have a better understanding of the problem. We are currently finding quantum mechanisms for various things in our body (including smell and hearing!) and we still have
Re: (Score:3, Insightful)
Uh.. since we have no idea what the process is yet, this statement is meaningless. Therefore all you're making is a statement of optimism, and there's absolutely no basis for this. We have no idea what consciousness is, and can't define it outside of subjective internal experience. Therefore, there's no reason for the optimism shown both in the original article, and by all the people in here commentin
Re:Oblig. (Score:4, Insightful)
No, you have missed my point. An algorithm or algorithms is certainly required, and I never meant to imply otherwise. Human understanding of said algorithm(s), however, is explicitly not required. And there are many paths that lead to such a situation. Whether one of those will take us to a form of AI remains to be seen, which is what I was saying.
It is one thing to understand the mechanism required for operation -- it is quite another to understand the state it is in. I think you are confusing the latter with the former; the former is relatively trivial, and the latter is not required any more than a complete understanding of the state of everything involved at NASA is required in order to create, launch and recover the space shuttle. Complex systems are holistic, mostly co-operative combinations of subsystems, and as long as someone, somewhere, understands (or understood at one time, or possessed an adequate analogy to, or approximation of) the subsystems, or even the subsystems that make up the subsystems, that's sufficient to develop a fully functional macro system. And -- most importantly -- it only has to be done once, because of the unusual copyable nature of the result.
Re: (Score:3, Insightful)
On the other hand, maki
Re: (Score:3, Funny)
On the other hand, making a machine with human intelligence is (literally) as easy as making a baby
You need to be made to understand that we don't really "make" babies. All we really do is supply the raw materials to our prebuilt baby-making equipment and let them do the work. While we can currently observe pretty much the entire process (and observing the first part of the process is in fact one of the major drivers of the internet) we still can't mimic it. Get back to me when we can make a baby without using sperm, ovum, or womb.
Re: (Score:3, Insightful)
No it's not, it's a lack of theism. Many religious people seem to find it really difficult to get their head around. Religion and gods have absolutely nothing to do with our lives. We don't sit down every morning and pray to the void. We simply accept reality for what it is and don't see anything in our every day lives that needs a special explanation.
Re: (Score:3, Interesting)
I know this is very much off topic from the main point of the story, but I can't let this stay here unanswered.
I realize that those who are atheistic in natur
Re: (Score:3, Insightful)
clarification about atheism (Score:3, Insightful)
For one thing, it does not compete with religion, and many strongly religious people (in every major religious tradition) have the same humanistic convictions and take t
We have flying cars (Score:3, Insightful)
The thing is, we don't actually *want* flying cars. Ground transport is sufficient for most situations, and it's far more economical to cluster together long range transport.
No chance (Score:4, Insightful)
A stupid competition, and Kurzweil won? (Score:3, Interesting)
Not only is he saying that there will be artificial intelligence in only 21 years, but he is saying that the computers on which the new AI runs will be so small they can travel like cells in our bloodstream, and do useful work based on an extremely adva
Hrmmmm (Score:3, Interesting)
Re:Hrmmmm (Score:5, Insightful)
What exactly is the "level of humans"? Passing the Turing test? (Fatally flawed because it's not double blind, btw.) Part of human intelligence includes affective input; are we to expect intelligence to be like human intelligence because it includes artificial emotions, or are we supposed to accept a new definition of intelligence without affective input? Surely they're not going to wave the "consciousness" flag. Well, Kurzweil might. Venter might follow that flag because he doesn't know better and he's as big a media hog as Kurzweil.
I think it's a silly pursuit. Why hobble a perfectly good computer by making it pretend to be something that runs on an entirely different basis? We should concentrate on making computers be the best computers and leave being human to the billions of us who do it without massive hardware.
Re: (Score:3, Insightful)
The thing is, Kurzweil is trying to achieve immortality, which is pretty much predicated on the ability to simulate his brain. I don't know if that's coloring his predictions or not, and it really doesn't say anything about whether there can be a machine that can do a full scan of an entire human brain. I don't know if he'll live that long. He'll be over 80 years o
Re:Hrmmmm (Score:5, Funny)
Re: (Score:3, Interesting)
You're misquoting Edsger Dijkstra. He said: "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." I'm not sure Minsky would agree.
The way I interpret Dijkstra here, is that he meant when a submarine starts to look sufficiently like a fish, we will call its actions 'swimming'. When it has the exact same range and 'functional ability' as a fish, but moves by spinning its rotors, we don't call it swimming. Thus the human criterion for intelli
Re: (Score:3, Interesting)
Not only that: the GP (as many AI enthusiasts do) forgets that synaptic connections are electro-chemical, not purely electrical, and thus a whole new dimension of chemical communication enters the fray, complete with different functions of different neurotransmitters at different synapses of the same cell, which can alter the function of said cell both short-term and long-term.
The more fair comparison to a neuron is not that of a transistor or even a logic gate but to a whole complete embe
Re: (Score:3, Funny)
Re:Hrmmmm (Score:5, Interesting)
There are still many things we can learn from biology that can be translated to machines. The translations don't have to be 1:1 for us to make use of them. The way birds and insects exploit changing wing-surface shapes during wing beats has informed some aircraft designs. The ideas weren't incorporated directly, but they taught us important lessons that we could then implement in different ways with similar outcomes.
I think Neuroscience does have a lot to teach us about how to do AI.
Re:Hrmmmm (Score:4, Insightful)
What aircraft corner as fast as barn swallows? (Score:5, Insightful)
Your question displays a lack of understanding. Not of biology, but of physics. The square-cube law, specifically. Aircraft don't corner as fast as small birds. The reason isn't any magic of biology; it's simple momentum.
The larger any object is, the more it weighs. Make it twice as big and it weighs eight times as much, and packs eight times as much momentum. A large bird doesn't turn as fast as a small bird. The same is true of planes. The same is true of ships. A bus won't corner as fast as a sports car, either.
A typical aircraft is a hundred times bigger than a swallow. By the square-cube law, it's a million times heavier, and packs a million times the momentum. It's not that the swallow's design is better, or that there is some biological magic. It's just a question of size. It's true the other way, too. A mosquito can turn a lot quicker than a barn swallow. Barn swallows catch mosquitoes because they can fly faster. Guess what: the aircraft you were so dismissive of can fly a lot faster than that barn swallow, too. Visit a large airport. Swallows get killed by aircraft every day; they can't get out of the way in time. A barn swallow as large as a chicken would be ripped apart by the stresses if it could corner as fast as a real barn swallow. That's the real reason chickens don't turn well in flight. (Yes, chickens can fly for short distances.) Momentum.
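The scaling argument above is a two-line computation (the 100x swallow-to-aircraft size ratio is a rough illustrative figure, not a measured one):

```python
def mass_ratio(size_ratio):
    """Square-cube law: mass scales with volume, i.e. the cube of
    linear size -- and so does momentum at equal speed."""
    return size_ratio ** 3

# Double the size: eight times the mass (and momentum, at equal speed).
print(mass_ratio(2))    # 8

# An aircraft ~100x a swallow's linear size -> ~a million times the mass.
print(mass_ratio(100))  # 1000000
```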
Your problem appears to be that you just don't understand scale. It is a wonderful thing when you do: you start seeing the reasons for all kinds of things around us.
So, yes, we should study biology. But we should also remember the physics. The tricks the mosquito uses just won't work for a passenger jet, nor will the barn swallow's turns be good for the passengers on that jumbo jet. Still, some things will be useful; we just don't know what. Who would have thought that studying a shark's skin would help racing yachts? Personally, I hope we get a lot of surprises. That's where the fun in science is.
I don't expect AI research to give us human type intelligence in a machine. Ever. That doesn't mean we shouldn't try. We don't know what we will get, or what it will make possible. We can't know before the fact. Studying birds didn't give us aircraft that can corner in a second or two, it did give us jumbo jets that can take us half way around the world in an easy chair. That took a lot of other things too.
The Wright brothers succeeded where Lilienthal failed. Not because they understood birds better, but because in the meantime the internal combustion engine had been developed. AI will be the same. Right now, we don't even know what we need in order to make this work. There will be surprises.
I agree... (Score:5, Insightful)
"Artificial Intelligence" in the last few decades has been a model of failure. The greatest hope during that time, neural nets, have gone virtually nowhere. Yes, they are good at learning, but they have only been good at learning exactly what they are taught, and not at all at putting it all together. Until something like that can be achieved (a "meta-awareness" of the data), they will remain little more than automated libraries. And of course at this time we have no idea how to achieve that.
"Genetic algorithms" have enormous potential for solving problems. Just for example, recently a genetic algorithm improved on something that humans had not improved in over 40 years... the Quicksort algorithm. We now have an improved Quicksort that is only marginally larger in code size, but runs consistently faster on datasets that are appropriate for Quicksort in the first place.
But genetic algorithms are not intelligent, either. In fact, they are something of the opposite: they must be carefully designed for very specific purposes, require constant supervision, and achieve their results through the application of "brute force" (i.e., pure trial and error).
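As a concrete illustration of that "brute force" character, here is a minimal genetic-algorithm sketch that evolves a bit string toward a fixed target; the fitness function, population size, and mutation rate are all arbitrary toy choices, not taken from any real GA system:

```python
import random

TARGET = [1] * 20  # toy goal: the all-ones bit string

def fitness(genome):
    """Count the bits that match the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Flip each bit independently with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=50, generations=200):
    random.seed(0)  # deterministic run for the example
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 5]  # keep the fittest fifth
        pop = [mutate(random.choice(parents)) for _ in range(pop_size)]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Note how nothing here is intelligent: the algorithm just copies, mutates, and keeps whatever happens to score better.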
I will start believing that something like this will happen in the near future, only when I see something that actually impresses me in terms of some kind of autonomous intelligence... even a little bit. So far, no go. Even those devices that were touted as being "as intelligent as a cockroach" are not. If one actually were, I might be marginally impressed.
HAH... not there... (Score:3, Informative)
About a year ago, I found a link (from a reputable source, IIRC) to a site from a company that claimed to be doing significant work with genetic algorithms. As an example, they had a description (and even a graphic demo) of their modified quicksort vs. a regular quicksort. According to their lit., it showed marginal improvements over quicksort by ensuring (in some non-obvious way) that each element in the dataset was only
Blue Brain Project (Score:3, Interesting)
Re: (Score:3, Informative)
While this project is verrry cool, they are not even remotely close to biological realism. Sorry...
Their simulation model is still incomplete, with a few more years' work needed to get the neurons working as they do in real life.
That is just it. We are finding that real biological systems from complete neural reconstructions are far more complex with many more participating "classes" of neurons with much more in the way of nested and recurrent collateral co
Re: (Score:3, Insightful)
And I suspect that the necessary insig
Re:Hrmmmm (Score:5, Interesting)
One thing humans do that AI doesn't (do well) is automatically follow a few promising paths rather than examine the whole picture. As an example, it has been shown (sorry, no reference right now) that some chess grandmasters look at only a couple of candidate moves and then calculate all the possible combinations from there, rather than examining every possible move. This drastically speeds up the calculation, but it does miss moves that could be considered the "best". So while this act of "feeling" out the best move is a good approximation for humans, it isn't optimal or maximal play.
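The grandmaster strategy described above, examining only a few candidate moves deeply, is essentially a beam search. A toy sketch (the game interface -- `moves`, `apply_move`, `evaluate` -- is hypothetical, to be supplied by any concrete game):

```python
def beam_minimax(state, depth, moves, apply_move, evaluate, beam=3):
    """Search only the `beam` most promising moves at each node,
    ranked by a shallow static evaluation. Fast, but it can miss
    the objectively best line -- just like the grandmaster heuristic."""
    if depth == 0:
        return evaluate(state)
    # Keep only the top `beam` candidates by one-ply evaluation.
    candidates = sorted(moves(state),
                        key=lambda m: evaluate(apply_move(state, m)),
                        reverse=True)[:beam]
    if not candidates:
        return evaluate(state)
    # Negamax convention: score is always from the side to move.
    return max(-beam_minimax(apply_move(state, m), depth - 1,
                             moves, apply_move, evaluate, beam)
               for m in candidates)
```

With `beam` set to the full branching factor this degenerates into plain exhaustive minimax; shrinking it trades optimality for speed.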
As for the article, I don't agree with all of what he says (the idea of nanobots doing what Kurzweil says scares me and I doubt it will be legal to do this), but I do agree with the 2029 prediction, that is if proper resources are given to that particular problem. Replicating humans is a goal in AI for some researchers, but not all of them. Personally, I couldn't care less if there exists a robot that perfectly resembles a human, as long as there are intelligent computers systems that can do the problems that humans find hard (such as finding patterns in very large sets of data or solving complex mathematical equations).
*Technically, it isn't a low EV play if there is a high probability of the opponent folding. In which case, playing the highest EV play naturally involves bluffing if it can be assumed that the opponent will fold to a bet.
Exponential AI? (Score:5, Interesting)
Re:Exponential AI? (Score:5, Informative)
That's the popular hypothesis [wikipedia.org].
Re: (Score:3, Insightful)
Re:Exponential AI? (Score:4, Interesting)
Superintelligence may speed this up, but the effect is quite dramatic already.
20 years is too long to predict (Score:4, Insightful)
Also, I will not be ingesting nano bots to interact with my neurons, I'll be injecting them into my enemies to disrupt their thinking. Or possibly just threatening to do so to extract large sums of money from various governmental organisations.
Re:20 years is too long to predict (Score:5, Insightful)
That's why, even in this day and age of 2008, we're still essentially running chatbots based on 1966's Eliza. Sure, there have been refinements, and the new ones are slightly better, but not by much in the grand scheme. A sign of this problem is that they give their answers to your questions in a fraction of a second. That's not because they're amazingly well programmed; it's because the algorithms are still way too simple and based on theories from the sixties.
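For anyone who hasn't seen how little machinery is behind an Eliza-style chatbot, here is a minimal sketch in the same keyword-and-canned-response spirit (the patterns are illustrative, not Weizenbaum's originals):

```python
import re

# Each rule: a keyword pattern and a canned response template.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bmy (\w+)", re.I),  "Tell me more about your {0}."),
    (re.compile(r"\bcomputer", re.I),  "Do computers worry you?"),
]

def respond(utterance):
    """Return a canned response for the first matching keyword --
    no memory, no model of meaning, no continuity between turns."""
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(*m.groups())
    return "Please go on."

print(respond("I am worried about my computer"))
```

That's the whole trick, which is why the replies come back in a fraction of a second.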
If the AI researchers are claiming "Oh, but we aren't there yet because we haven't got nearly good enough hardware", why aren't we even halfway there, with software far cleverer than chatbots that works on a reply to a single question for an hour? Sure, that would be impractical, but we don't even have software that pushes the boundaries of our current CPUs.
So at this point, if we made a leap to 2029 right now, all we'd get would be super-fast Elizas (I'm restricting my AI talk to "general AI" here, not heuristic antispam algorithms, where the techniques are very well understood and don't form a hurdle). The million-dollar question is: will we, before 2029, have made breakthroughs in understanding how the human brain reasons, along with constructing the machines (biological or not, as necessary) that approximate its structure and form the foundation on which the software can be built?
I mean, we can talk traditional transistor-based hardware all day and how fast it will be, but it will be near meaningless if we don't have the theories in place.
Re: (Score:3, Insightful)
Yes, they do more things that we have pre-programmed into them. But that is a far cry from "intelligence". In reality, they are no more intelligent than an old player piano, which could perform hundreds of thousands of different actions (multiple combinations of the 88 keys, plus 3 pedals) based on simple holes in paper. We have managed to stuff more of those "holes" (instructions) into microchips, and so on, but b
Projection length (Score:4, Insightful)
I predict that the Sun will become a white dwarf within 10,000,000,000 years. Predicting 10 billion years instead of 5 billion years actually makes it more likely to be true.
Some major assumptions (Score:4, Interesting)
Short of major breakthroughs on the software end, I don't expect AI to be able to pass a generalized Turing Test anytime soon, and I'm pretty certain the hardware end isn't going to advance enough to brute-force our way through.
To heck with Artificial Intelligence! (Score:5, Insightful)
Artificial intelligence would be a nice tool to reach toward, or to use to understand ourselves, but rare is the circumstance that demands, or is worth the risks involved in, making a truly intelligent agent.
The real implication to me, is that it will be possible to have machines capable of running the same 'software' that runs in our own minds. To be able to 'back up' people's states and memories, and all the implications behind that.
Artificial intelligence is a nice goal to reach for, but it is nothing compared to the siren's call of memories being able to survive the traditional end of existence: cellular death.
Ryan Fenton
Re: (Score:2, Insightful)
Re: (Score:3, Insightful)
The real implication to me, is that it will be possible to have machines capable of running the same 'software' that runs in our own minds. To be able to 'back up' people's states and memories, and all the implications behind that.
That presumes you can understand how human thought is made. It presumes real human intelligence can be modeled and implemented by a digital process, which may not be possible; I doubt that even quantum digital computers could do it. It might be possible in the future to simulate our neural machinery without really knowing how it works, a high-fidelity digital form of a completely analog process, but then you couldn't know what to expect as the result. The way the program was coded and the inputs given it w
Re: (Score:3, Insightful)
Just as dead as the meat copy of you from 5 minutes ago? What magic makes your body 5 minutes in the future "you" instead of just a random copy? You do know that all your atoms get replaced every few years, and that when you sleep deeply or get put under general anesthesia almost all of your brain activity ceases, right? I have no problem with going to sleep at night and waking up in a slightly diff
Probably false alarm ... again (Score:2, Interesting)
So far _not one_ of those claims has come true, with the possible exception of the much-vaunted "robotic snake".
So ... I'd say: fewer claims, fewer predictions, and more work. Let me know when you've got anything worthwhile to show.
Not to be outdon
Re:Probably false alarm ... again (Score:5, Interesting)
Next consider the stock market. Many trades are now automated, meaning, computers are deciding which companies have how much money. That ultimately influences where you live and work, and the management culture of the company you work for.
We are already living well above the standard that could be maintained without computers making decisions for us. Of course, as humans we will always take the credit and say the machines are "just" doing what we told them, but the fact is we could not carry out these computations manually in time for them to be useful.
*sighs* What to say ... (Score:3, Informative)
This is good news and bad news... (Score:5, Funny)
Bad news: after reviewing the latest in the US political scene, getting machines smarter than humans isn't going to take as much as we thought. My toaster almost qualifies now. "You have to be smarter than the door" insults are no longer funny. Geeks will no longer be lonely. Women will have an entire new group of things to compete with. If you think math is hard now, wait till your microwave tells you that you paid too much for groceries or that you really aren't saving money in a 2-for-1 sale of things you don't need. Married men will now be the third-smartest things in their own homes, but will never need a doctor (bad news for doctors), since when a man opens his mouth at home to say anything there will now be a wife AND a toaster to tell him what is wrong with him.
oh god, this list goes on and on.
AI may not get that far (Score:3, Funny)
Re: (Score:2)
Piers Anthony had some interesting ideas on this
2029? (Score:2, Insightful)
Just in time for AI to help me drive my new fusion-powered flying car!
O.
wrong (Score:5, Insightful)
Computers can't even defeat humans at Go, and Go is a closed system. We are not twenty years away from a human level of machine intelligence. We may not even be *200 years* away from a human level of machine intelligence. The technology just isn't here yet. It's not even on the horizon. It's nonexistent.
We may break through the barrier someday, and I certainly believe the research is worthwhile, for what we have learned. Right now, however, computers are good in some areas and humans are good in others. We should spend more research dollars trying to find ways for humans and computers to efficiently work together.
Re: (Score:3, Informative)
http://en.wikipedia.org/wiki/Ray_kurzweil
"Everybody promises that AI will hit super-human intelligence at 20XX and it hasn't happened yet! It never will!" ... well guess what? It'll be the last invention anybody ever has to make. Great organizations like the Singularity Institute http://en.wikipedia.org/wiki/Singularity_Institute [wikipedia.org] really shouldn't be scraping along on such poor
nonsense (Score:2)
i'll make a prediction of my own - this guy is after funding.
Re: (Score:3, Interesting)
We aren't that far off. Estimates for the computational power of the human brain are around 10**16 operations per second. Supercomputers today do roughly 10**14, and Moore's Law increases the exponent by 1 every 5 years. Even if we have to simulate the brain's neurons by brute force and the simulation has 99% overhead, we'll be there in 20 years. (Assuming Moore's Law doesn't hit physical limits).
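The parent's arithmetic can be checked directly; note that every constant here (10**16 ops/s for the brain, 10**14 today, 99% overhead, one order of magnitude per five years) is the parent's assumption, not an established figure:

```python
import math

brain_ops = 1e16   # assumed raw ops/s of the human brain
today_ops = 1e14   # assumed current supercomputer throughput
overhead = 100     # 99% simulation overhead -> need 100x more ops

target = brain_ops * overhead      # effective ops/s required: 1e18
shortfall = target / today_ops     # factor still missing: 1e4
years = 5 * math.log10(shortfall)  # Moore's Law: one order of magnitude per 5 years

print(years)  # 20.0 -- matching the parent's "20 years"
```

The conclusion is exactly as robust as those four assumed constants, no more.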
Re: (Score:3, Insightful)
The brain is so insanely parallel and the neurons are not just digital gates, more like computers in themselves. The machines of today are a far cry from the brain in how they are built. But sure, you can compare them by some meaningless parameter to say that we're close. How about the clock frequency: neurons are 1kHz devices, and modern CPU
Garbage. (Score:2)
Don't do it! (Score:4, Insightful)
Retarded (Score:2, Insightful)
Quite simply BS (Score:2)
Whatever Could They Mean? (Score:5, Funny)
Buddy, I've been around more than four decades. I've yet to see more than a superficial level of intelligence in humans.
Send your coders back to the drawing board with a loftier goal.
The End of Intelligent Design (Score:5, Interesting)
It might seem like the lack of AI development is a temporary problem and altogether a peripheral issue. It is however neither - it is a fundamental problem and it affects all software development.
Early in the history of computing, software and hardware development progressed at a similar pace. Today there is a giant and growing gap between the rate of hardware improvements and the rate of software improvements. As most people who study the field of software engineering are aware, software development is in a deep crisis.
The problem can be summarized in one word: complexity. The approach to building software has largely been based on traditional engineering principles and approaches, but traditional engineering projects never reached the level of complexity that software projects have. As it turns out, humans are not very good at handling and predicting complex systems.
A good example of the problems facing software developers is Microsoft's new operating system Windows Vista. It took half a decade to build and cost nearly 10 billion dollars. At two orders of magnitude higher costs than the previous incarnation it featured relatively minor improvements - almost every single new radical feature (such as a new file system) that was originally planned was abandoned. The reason for this is that the complexity of the code base had become unmanageable. Adequate testing and quality assurance proved to be impossible and the development cycle became painfully slow. Not even Microsoft with its virtually unlimited resources could handle it.
At this point, it is important to note that this remains an unsolved problem. It would not have been solved by a better-structured development process or directly by better computer hardware. The number of free variables in such a system is simply too great to be handled manually. A structured process and standardized information-transfer protocols won't do much good either: complexity is not just a quantitative problem, and at a certain level you get emergent phenomena in the system.
Sadly, artificial intelligence research, which is supposed to be the vanguard of software development, is facing the same problems. Although complexity is not (yet) the primary problem there, manual design has proved very inefficient. While there are clever ideas that move the field forward on occasion, there is nothing to match the relentless progress of computer hardware. There exists no systematic recipe for progress.
Software engineering is intelligent design, and AI is no exception. The fundamental idea persists that it takes a clever mind to produce a good design; the view that it takes a very intelligent thing to design a less intelligent thing is deeply entrenched at every level. This clearly pre-Darwinian view of design isn't based on some form of dogma, but on a pragmatism and common sense that aren't challenged where they should be. While intelligent design was a good approach while software was trivial enough to be manageable, it should have become blindingly obvious that it is untenable in the long run. There are approaches that work at the meta level (neural networks, genetic algorithms, etc.), but they are thoroughly insufficient: all these algorithms are still themselves the results of intelligent design.
So what Darwinian lessons should we have learned?
We have learned that a simple, dumb optimization algorithm can produce very clever designs. The important insight is that intelligence can be traded for time. In a short in
The sacred brain and other myths (Score:5, Interesting)
The comedian Emo Philips once remarked that "I used to think my brain was the most important organ in my body until I realized what was telling me this."
We have a tendency to use human intelligence as a benchmark and as the ultimate example of intelligence. There is a mystery surrounding consciousness, and many people, including prominent philosophers such as Roger Penrose, ardently try to keep it that way.
Given, however, what we actually know through biological research about the brain and its evolution, there is essentially no justification for attributing mystical properties to our data-processing wetware. Steadily, with the increasing capabilities of brain scanning, we have been developing functional models for many parts of the brain. For other parts that still need more investigation we do have a picture, even if a rough one.
The sacred consciousness has not been untouched by this research. Although we are far from a final understanding, we have a fairly good idea, backed by solid empirical evidence, that consciousness is a post-processing effect rather than the first cause of decision. The degree of desperation can be seen in attempts to explain away the delay between conscious response and the activation of other parts of the brain. Penrose, for instance, suggests that yes, there is an average 500 ms delay, but that it is compensated by quantum effects that are time-symmetric: the brain actually sees into the future, which is then delayed to create a real-time decision process. While this is rejected as absurd by a majority of neuroscientists and physicists, it is a good example of how passionately some people feel about the role of the brain. It is, however, painfully clear that just as we were forced to abandon an Earth-centered universe, we need to abandon the myth of the special place of human consciousness. The important point here is that once we rid ourselves of the self-imposed veil of mystery around human intelligence, we can take a sober view of what artificial intelligence could be. The brain developed through an evolutionary optimization process, and while reaping a lot of benefits it has also taken the full blow of the limitations and problems of that process and its context.
Evolution through natural selection is far from the best optimization method imaginable. One major problem with it is that it is a so-called "greedy" algorithm: it has no look-ahead or planning capability. Every improvement, every payoff, needs to be immediate. This creates systems that carry a lot of historical baggage; an improvement isn't made as a stand-alone feature but as a continuation of the previous state. It is not a coincidence that a brain cell is a cell like any other, nucleus and all. Nor is it a cell because a cell is the optimal structure for information processing; it was what could be done by modifying the existing wetware. It is not hard to imagine how that structure could be improved upon if it were not limited to the biological building blocks available to the genetic machinery.
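The "greedy, no look-ahead" behavior described above can be sketched as simple hill climbing: every step must be an immediate improvement, so the search strands itself on the nearest local peak (the two-peak fitness landscape below is purely illustrative):

```python
def hill_climb(x, fitness, step=1, max_iters=1000):
    """Accept a neighbor only if it is an immediate improvement --
    no planning, no temporarily-worse moves, so local peaks trap us."""
    for _ in range(max_iters):
        neighbors = [x - step, x + step]
        best = max(neighbors, key=fitness)
        if fitness(best) <= fitness(x):
            return x  # stuck: every one-step change is worse
        x = best
    return x

# Two-peak landscape: small peak at x=10, tall peak at x=50,
# separated by a valley around x=20.
def fitness(x):
    return max(20 - abs(x - 10), 100 - 3 * abs(x - 50))

# Starting near the small peak, greedy search never crosses the valley:
print(hill_climb(0, fitness))   # 10, not the global optimum at 50
```

Biological evolution is vastly more elaborate than this, but it shares the core constraint: no payoff, no step.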
Another point worth making is that our brains are not optimized for the modern kinds of information processing humans now engage in, such as writing software. Humans have changed little in the last 50,000 years in terms of intellectual capacity, but our societies have changed greatly. Our technological progress is a side effect of capabilities we evolved because they increased survivability when we roamed the plains of Africa in small hunter-gatherer family groups. To assume the resulting information-processing system (the brain) would be the optimal solution for anything else is not justifiable.
There has been since the 1950's ongoing research to create biologically inspired computer algorithms and methods. Some of the research has been very successful with simplified models that actually did do something useful (artificial neural networks for instance). Progress has however been agonizi
Re: (Score:3, Interesting)
On the other hand, Dean Radin (while barking mad in some ways) has done an experiment that su
Kurzweil's rebuttal from his book... (Score:5, Insightful)
Re:The End of Intelligent Design (Score:4, Informative)
Can anyone name an important algorithm or representation from this decade?
There's been substantial progress in trainable computer vision systems in the last decade. Computer vision is finally starting to work on real-world scenes. SLAM algorithms work now. Texture matchers work. There really has been progress in those areas.
Don't think so (Score:2)
Re:Don't think so (Score:5, Insightful)
The difference is that in 20 years we may have sufficiently powerful hardware that the software can be "dumb", that is, just simulating the entire physical brain.
The bottom line is that humans process some information in a non-representational way, while computers must operate representationally.
What prevents a computer from emulating this "non-representational" processing? Or is the human brain not subject to the laws of physics?
Predictions are useless in this case (Score:4, Insightful)
There is no such thing as Artificial Intelligence (Score:5, Insightful)
There is a lot of talk about computers surpassing, or not surpassing, humans at various tasks - does it not bother anyone that computers don't actually possess any intelligence? By any definition of intelligence you'd like? Every problem that a computer can "solve" is in reality solved by a human using that computer as a tool. I feel like I'm losing my mind reading these discussions. Did I miss something? Has someone actually produced a sentient machine? You'd think I would have seen that in the papers!
What's the point of projecting that A will surpass B in X if the current level of X possessed by A is zero? There seems to be an underlying assumption that merely increasing the complexity of a computational device will somehow automatically produce intelligence. "If only we could wire together a billion Deep Blues," the argument seems to go "it would surpass human intelligence." By that logic, if computers are more complex than cars, does wiring together a billion cars produce a computer?
Repeat after me - The current state of the art in artificial intelligence research is: fuck all. We have not produced any artificial intelligence. We have not begun to approach the problems which would allow us to start on the road to producing artificial intelligence.
Before you can create something that surpasses human levels of intelligence, one would think you'd need to be able to precisely define and quantify human intelligence. Unless I missed something else fairly major, that has not been done by anyone yet.
Human AI meets machine intelligence (Score:3, Informative)
Maybe that's why Google is hoarding all the remaining three digit IQ scores so that there is no shortage of IQ.
In other news, lots of flying chairs were heard swishing around Redmond Campus at Microsoft when the CEO heard google was cornering the market on Human IQs.
Abrams starts a new Serial: LOST IQ.
He must know something I don't (Score:4, Insightful)
That said, a lot of stuff can happen in 50 years, and I bet that once some of the major problems get solved, there will be an insane stream of money pouring into this field to accelerate the research. Just imagine the benefits an "omniscient" AI trader would bring to a bank. The question is, do we want this to happen? This will be far more disruptive a technology than anything you've ever seen.
My gut feeling... (Score:3, Interesting)
Warning: rambling post ahead.
My gut feeling is that, from strictly a hardware perspective, we're already capable of building a human-level AI. The problem is that, from a software perspective, we've focused too much on approaches that will never work.
As far as I'm concerned, the #1 problem is the Big Damn Database approach, which is basically a cargo cult [wikipedia.org] in disguise. Though expert systems are useful in their niches, "1. Expert system 2. ??? 3. AI!" is not a workable roadmap to the future. I'm certain that it's far easier to start with an ignorant AI and teach it a pile of facts than it is to start with a pile of facts and teach it to develop a personality.
The #2 problem is the Down To The Synapse approach. This, unlike BDD, could quite possibly create "A"I if given enough hardware. But I think that, while DTTS will lead to a better understanding of medicine, it won't advance the AI field. It won't lead to an improved understanding of how human cognition works — it certainly won't teach us anything we didn't already know from Phineas Gage [wikipedia.org] and company [ebonmusings.org].
Even if we go to all the trouble of developing a supercomputer capable of DTTS emulation of a human brain — so what? If we ask this emulated AI to compute 2+2, millions of simulated synapses will fire, trillions of transistors will flip states, phenomenal amounts of electricity will pour into the supercomputer, just for the AI to give the very same answer that a simple circuit consisting of a few dozen transistors could've answered in a tiny fraction of the time, using the amount of electricity stored on your fingertip when you rub your shoes on the carpet during winter. And that's not even a Strong AI question. That's not to say that working DTTS won't be profound in some sense, but we know we can build it better, yet we won't have the faintest idea of where to go next.
That brings me to my core idea — goals first, emotions [mit.edu] close behind. Anyone who's pondered the "is/ought" problem in philosophy already knows the truth of this, even if they don't know they know the truth of it. The people building cockroach robots were on the right track all along; they're just thinking too small. MIT's Kismet [mit.edu], for instance, gives an idea of where AI needs to head.
That said, I think building a full-on robot like Kismet is premature. A robot requires an enormous number of systems to process sensory data, and those processing systems are largely peripheral to the core idea of AI. If we had an AI already, we could put the AI in the robot, try a few things, and ask the AI what works best. So, ideally, I think we need to look at a pure software approach to AI before we go off building robot bodies for them to inhabit.
And how to do that? I think Electric Funstuff [gamespy.com]'s Sim-hilarities [electricfunstuff.com] captures the essence of that. If we give AIs a virtual world to live in — say, an MMO — then that removes a lot of the need for divining meaning from sensory input, allowing a sharper focus on the "intelligence" aspect of AI. Start with that, grow from there, and I can definitely see human-level AI by 2029.
Re:Well I'm not holding my breath (Score:5, Insightful)
Eliza and the sad state of expert systems (Score:5, Interesting)
There's still no semblance of a short-term memory, even so much as continuity between responses. It always quickly becomes obvious that each response has been prepared verbatim beforehand by a human, that the system is still performing only a keyword-canned response routine, perhaps feeding in a few variable strings.
Today we have the same stone wheels we've had for decades, and the article suggests we'll have an internal combustion engine with antilock brakes and a hood ornament in another 20 years. We'll see.
Re: (Score:3, Insightful)
Consciousness is also a strange beast. What is consciousness?
How about world abandonment? (Score:4, Funny)
Seems to me that any crazy smart AI would just beam themselves out into space to avoid us and maybe watch us from a distance occasionally for amusement.
Think of it this way: when you see an anthill, it's rather curious for a while; then you get bored and go on your merry way. Unless, of course, you are a sociopath and want to destroy the anthill and all the ants for fighting with other ants, or you are insane and want to teach the ants to get along with other ants or with spiders, their mortal enemy, or perhaps you are psychotic and want to train the ants to do your bidding. More likely you would just leave and go on to something more interesting (unless you are not that intelligent to begin with).
I fail to understand why people seem to insist that any really smart AI would want to have anything to do with us except on an occasional basis. Humans and earth aren't really that important in the bigger scheme of things (just important to us humans of course) and we'd probably not have much in common with any really advanced AI anyhow.
If humans ever create such an AI, it would be like a bunch of ordinary joes giving birth to a super Einstein. Eventually the "kid" would stop listening to us and go do its own thing, which we would be too dumb to understand or appreciate, and occasionally we'd invite it to visit to help us fix the settings on our computer because we got it messed up. It would explain to us in excruciating detail how we were using the wrong type of computer and how we needed to get up to date on technology, and we'd just tell it a story about how it was in the old days; it would roll its virtual eyes, say thanks for the tip, and go back to its own business, of which we would be blissfully ignorant...
Just think about it for a second.
Re: (Score:3, Insightful)
And this isn't to completely mark as irrelevant anything that Minsky said about AI in the 1970s or what