Artificial Intelligence at Human Level by 2029?
Gerard Boyers writes "Some members of the US National Academy of Engineering have predicted that Artificial Intelligence will reach the level of humans in around 20 years. Ray Kurzweil leads the charge: 'We will have both the hardware and the software to achieve human level artificial intelligence with the broad suppleness of human intelligence including our emotional intelligence by 2029. We're already a human machine civilization, we use our technology to expand our physical and mental horizons and this will be a further extension of that. We'll have intelligent nanobots go into our brains through the capillaries and interact directly with our biological neurons.' Mr Kurzweil is one of 18 influential thinkers, and a gentleman we've discussed previously. He was chosen to identify the great technological challenges facing humanity in the 21st century by the US National Academy of Engineering. The experts include Google founder Larry Page and genome pioneer Dr Craig Venter."
No chance (Score:4, Insightful)
20 years is too long to predict (Score:4, Insightful)
Also, I will not be ingesting nanobots to interact with my neurons; I'll be injecting them into my enemies to disrupt their thinking. Or possibly just threatening to do so to extract large sums of money from various governmental organisations.
To heck with Artificial Intelligence! (Score:5, Insightful)
Artificial intelligence would be a nice tool to reach towards, or to use to understand ourselves... but rare is the circumstance that demands, or is worth the risks involved in, making a truly intelligent agent.
The real implication, to me, is that it will be possible to have machines capable of running the same 'software' that runs in our own minds: to 'back up' people's states and memories, and all the implications behind that.
Artificial intelligence is a nice goal to reach for, but it is nothing compared to the siren's call of memories being able to survive the traditional end of existence: cellular death.
Ryan Fenton
2029? (Score:2, Insightful)
Just in time for AI to help me drive my new fusion-powered flying car!
O.
wrong (Score:5, Insightful)
Computers can't even defeat humans at go, and go is a closed system. We are not twenty years away from a human level of machine intelligence. We may not even be *200 years* away from a human level of machine intelligence. The technology just isn't here yet. It's not even on the horizon. It's nonexistent.
We may break through the barrier someday, and I certainly believe the research is worthwhile, for what we have learned. Right now, however, computers are good in some areas and humans are good in others. We should spend more research dollars trying to find ways for humans and computers to efficiently work together.
Re:Well I'm not holding my breath (Score:5, Insightful)
Don't do it! (Score:4, Insightful)
Retarded (Score:2, Insightful)
Re:To heck with Artificial Intelligence! (Score:2, Insightful)
intelligence is overrated (Score:1, Insightful)
Haven't you noticed how self-professed smart people are generally the least functional folks (e.g., they lack EI or street smarts, or are just really dumb)?
I for one don't look forward to computers that act like Rain Man...
I don't think we (as a scientific collective) even know what intelligence is....
Predictions are useless in this case (Score:4, Insightful)
Re:AI may not get that far (Score:2, Insightful)
Re:Hrmmmm (Score:2, Insightful)
now: ai can beat a human at chess. humans designed and set up the game.
20 years from now: ai can autonomously walk up to a nearby human and ask them to play chess. if the invitation is not received well, the robot could make a convincing case as to why the human should change their mind; the robot could set up the board and initiate the game. in the middle of the game, the ai would have no way to predict, react to, or analyze after the fact an occurrence of unexpected human behavior in the form of violence, humor, insanity, irrational requests, casual misinformation, or conversation.
200 years from now: most of the above problems will be CRUDELY solved, but polish will still be lacking. ai will not be capable of higher abstract imagination.
we're about 500-1000 years from data's head. several thousand from his head and body.
the last 10% of the job takes 90% of the time.
e
Re:Hrmmmm (Score:5, Insightful)
What exactly is the "level of humans"? Passing the Turing test? (Fatally flawed because it's not double blind, btw.) Part of human intelligence includes affective input; are we to expect intelligence to be like human intelligence because it includes artificial emotions, or are we supposed to accept a new definition of intelligence without affective input? Surely they're not going to wave the "consciousness" flag. Well, Kurzweil might. Venter might follow that flag because he doesn't know better and he's as big a media hog as Kurzweil.
I think it's a silly pursuit. Why hobble a perfectly good computer by making it pretend to be something that runs on an entirely different basis? We should concentrate on making computers be the best computers and leave being human to the billions of us who do it without massive hardware.
Re:20 years is too long to predict (Score:5, Insightful)
That's why, even in this day and age of 2008, we're essentially running chatbots based on Eliza from 1966. Sure, there have been refinements and the new ones are slightly better, but not by much in the grand scheme. A sign of this problem is that they give their answers to your questions in a fraction of a second. That's not because they're amazingly well programmed; it's because the algorithms are still way too simple and based on theories from the sixties.
If the AI researchers are claiming "Oh, but we aren't there yet because we haven't got hardware nearly good enough yet", why aren't we even halfway there, with at least far cleverer software than chatbots, working on a reply to a single question for an hour? Sure, that would be impractical, but we don't even have software that pushes the boundaries of our current CPUs.
So at this point, if we made a leap to 2029 right now, all we'd get would be super-fast Elizas (I'm restricting my AI talk to "general AI" here, not heuristic antispam algorithms, where the algorithms are very well understood and don't form a hurdle). The million-dollar question here is: will we, before 2029, have made breakthroughs in understanding how the human brain reasons, along with constructing the machines (biological or not, as necessary) to approximate its structure and form the foundation on which the software can be built?
I mean, we can talk traditional transistor-based hardware all day and how fast it will be, but it will be near meaningless if we don't have the theories in place.
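The "still way too simple" point is easy to see in code: an Eliza-style bot is just an ordered list of regex rules with canned response templates. This toy sketch (the rules here are invented for illustration, not the 1966 originals) shows why such a bot can answer in a fraction of a second:

```python
import re

# A minimal Eliza-style chatbot: ordered (pattern, response-template) rules.
# A toy illustration of 1960s-era pattern matching, not the original program.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"(.*) mother(.*)", "Tell me more about your family."),
]

def respond(text):
    text = text.lower().strip(".!?")
    for pattern, template in RULES:
        m = re.match(pattern, text)
        if m:
            return template.format(*m.groups())
    return "Please tell me more."  # default deflection, the classic Eliza move

print(respond("I am worried about AI"))  # How long have you been worried about ai?
```

There is no model of the world anywhere in that loop, which is the poster's point: each reply costs one pass over a short rule list, so speed is a symptom of shallowness, not of good engineering.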
I agree... (Score:5, Insightful)
"Artificial Intelligence" in the last few decades has been a model of failure. The greatest hope during that time, neural nets, has gone virtually nowhere. Yes, they are good at learning, but they have only been good at learning exactly what they are taught, and not at all at putting it all together. Until something like that can be achieved (a "meta-awareness" of the data), they will remain little more than automated libraries. And of course at this time we have no idea how to achieve that.
"Genetic algorithms" have enormous potential for solving problems. Just for example, recently a genetic algorithm improved on something that humans had not improved in over 40 years... the Quicksort algorithm. We now have an improved Quicksort that is only marginally larger in code size, but runs consistently faster on datasets that are appropriate for Quicksort in the first place.
But genetic algorithms are not intelligent, either. In fact, they are something of the opposite: they must be carefully designed for very specific purposes, require constant supervision, and achieve their results through the application of "brute force" (i.e., pure trial and error).
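The "brute force" trial-and-error loop described above can be sketched with a toy genetic algorithm (this is a generic illustration on a made-up problem, not the quicksort experiment, whose details aren't given here). All parameters are arbitrary choices:

```python
import random

# Toy genetic algorithm (OneMax): evolve a bit string toward all 1s.
# Illustrates the generic select/crossover/mutate loop; every number
# here (population size, generations, etc.) is an arbitrary choice.
def evolve(length=20, pop_size=30, generations=200, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    fitness = lambda ind: sum(ind)          # objective: count of 1 bits
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == length:       # perfect individual found
            break
        parents = pop[: pop_size // 2]      # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)  # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(length)       # single-bit mutation
            child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(sum(best))  # typically reaches 20 (all ones) well within 200 generations
```

Note what the loop does not contain: any understanding of *why* a candidate is good. That is exactly the "carefully designed, constantly supervised, pure trial and error" character the comment describes.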
I will start believing that something like this will happen in the near future, only when I see something that actually impresses me in terms of some kind of autonomous intelligence... even a little bit. So far, no go. Even those devices that were touted as being "as intelligent as a cockroach" are not. If one actually were, I might be marginally impressed.
Re:20 years is too long to predict (Score:3, Insightful)
Yes, they do more things that we have pre-programmed into them. But that is a far cry from "intelligence". In reality, they are no more intelligent than an old player piano, which could perform hundreds of thousands of different actions (multiple combinations of the 88 keys, plus 3 pedals) based on simple holes in paper. We have managed to stuff more of those "holes" (instructions) into microchips, and so on, but the machines themselves are just as stupid as they have EVER been, including back in the stone age. No intelligence. At all. Not even a little.
Do not mistake complexity for intelligence. A certain amount of complexity might be necessary for intelligence to exist, but on the other hand, things can be enormously complex without the presence of ANY intelligence. Just look at Government, for example.
Re:Don't think so (Score:5, Insightful)
The difference is that in 20 years we may have sufficiently powerful hardware that the software can be "dumb", that is, just simulating the entire physical brain.
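To put a rough number on what "dumb" whole-brain simulation would cost, here is the usual back-of-envelope estimate (all figures below are common order-of-magnitude guesses, not claims made anywhere in this thread):

```python
# Back-of-envelope cost of brute-force brain simulation.
# All three figures are stock ballpark estimates, not measurements.
neurons = 1e11       # ~100 billion neurons
synapses = 1e4       # ~10,000 synapses per neuron
rate = 100.0         # ~100 Hz average update rate per synapse

events_per_second = neurons * synapses * rate
print(events_per_second)   # 1e17 synaptic events per second
```

Under those assumptions the simulation needs on the order of 10^17 updates per second, which is why "sufficiently powerful hardware" is doing all the work in this argument.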
The bottom line is that humans process some information in a non-representational way, while computers must operate representationally.
What prevents a computer from emulating this "non-representational" processing? Or is the human brain not subject to the laws of physics?
Hello? FDA anyone? (Score:2, Insightful)
Re:Don't do it! (Score:2, Insightful)
Re:Hrmmmm (Score:3, Insightful)
And I suspect that the necessary insights to produce human-like intelligence aren't going to be around for some time. We still have only a foggy idea of how a lot of human intelligence works in the existing hardware.
Re:Oblig. (Score:1, Insightful)
Re:nonsense (Score:3, Insightful)
The brain is so insanely parallel, and the neurons are not just digital gates; they're more like computers in themselves. The machines of today are a far cry from the brain in how they are built. But sure, you can compare them by some meaningless parameter to say that we're close. How about clock frequency: neurons are 1 kHz devices, and modern CPUs are in the GHz now...
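Putting rough numbers on that comparison (the neuron count and CPU rate below are stock ballpark figures; only the ~1 kHz firing rate comes from the comment above):

```python
# Aggregate event-rate comparison: massively parallel slow units vs.
# one fast serial unit. Figures are order-of-magnitude assumptions.
neurons = 1e11          # ~100 billion neurons (common rough estimate)
neuron_rate = 1e3       # ~1 kHz maximum firing rate per neuron
cpu_rate = 3e9          # a ~3 GHz core, one coarse "operation" per cycle

brain_events = neurons * neuron_rate   # ~1e14 events/s, all in parallel
print(brain_events / cpu_rate)         # tens of thousands of serial cores
```

The point of the exercise: comparing clock frequencies alone makes the CPU look a million times faster, while the aggregate event rate runs the other way, which is why single-parameter comparisons are meaningless.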
Kurzweil's rebuttal from his book... (Score:5, Insightful)
Re:To heck with Artificial Intelligence! (Score:3, Insightful)
Re:Hrmmmm (Score:4, Insightful)
Re:Hrmmmm (Score:3, Insightful)
The thing is, Kurzweil is trying to achieve immortality, which is pretty much predicated on the ability to simulate his brain. I don't know if that's coloring his predictions or not, and it really doesn't say anything about whether there can be a machine that can do a full scan of an entire human brain. I don't know if he'll live that long. He'll be over 80 years old at that time, and to be frank, I don't think he looks like a healthy 60-year-old now, despite his voracious vitamin intake.
Projection length (Score:4, Insightful)
I predict that the Sun will become a white dwarf within 10,000,000,000 years. Predicting 10 billion years instead of 5 billion years actually makes it more likely to be true.
Re:Oblig. (Score:5, Insightful)
Speaking as an engineer and a (~40-year) programmer:
Odds are extremely good for beyond-human AI, given no restrictions on initial and early form factor. I say this because, thus far, we've discovered nothing whatsoever that is non-reproducible about the brain's structure and function; all that has to happen here is for that trend to continue. And given that nowhere in nature, at any scale remotely similar to the range that includes particles, cells and animals, have we discovered anything that appears to follow an unknowable set of rules, the odds of finding anything like that in the brain (that is, something we can't simulate or emulate with 100% functional veracity) are just about zero.
Odds are downright terrible for "intelligent nanobots". We might have hardware that can do what a cell can do, that is, hunt for (possibly a series of) chemical cues and latch on to them, then deliver the payload, perhaps repeatedly in the case of disease-fighting designs. But putting intelligence into something on the nanoscale is a challenge of an entirely different sort, one we have not even begun to move down the road on. If this is to be accomplished, the intelligence won't be "in" the nanobot; it'll be a telepresence for an external unit. And we're nowhere down *that* road, either: nanoscale sensors and transceivers are the target, and we're more at the level of "Look, Martha, a GEAR! A Pseudo-Flagellum!"
The problem with hand-waving -- even when you're Ray Kurzweil, whom I respect enormously -- is that one wave out of many can include a technology that never develops, and your whole creation comes crashing down.
I love this discussion. :-)
Re:Exponential AI? (Score:3, Insightful)
There is no such thing as Artificial Intelligence (Score:5, Insightful)
There is a lot of talk about computers surpassing, or not surpassing, humans at various tasks - does it not bother anyone that computers don't actually possess any intelligence? By any definition of intelligence you'd like? Every problem that a computer can "solve" is in reality solved by a human using that computer as a tool. I feel like I'm losing my mind reading these discussions. Did I miss something? Has someone actually produced a sentient machine? You'd think I would have seen that in the papers!
What's the point of projecting that A will surpass B in X if the current level of X possessed by A is zero? There seems to be an underlying assumption that merely increasing the complexity of a computational device will somehow automatically produce intelligence. "If only we could wire together a billion Deep Blues," the argument seems to go "it would surpass human intelligence." By that logic, if computers are more complex than cars, does wiring together a billion cars produce a computer?
Repeat after me - The current state of the art in artificial intelligence research is: fuck all. We have not produced any artificial intelligence. We have not begun to approach the problems which would allow us to start on the road to producing artificial intelligence.
Before you can create something that surpasses human levels of intelligence, one would think you'd need to be able to precisely define and quantify human intelligence. Unless I missed something else fairly major, that has not been done by anyone yet.
Re:20 years is too long to predict (Score:1, Insightful)
Re:Oblig. (Score:2, Insightful)
Things like cold fusion, teleportation, quantum computing, virtual reality capable of universe-scale simulation, therapeutic gene engineering and nanosurgery, universal molecular constructors, interstellar flight and perhaps even Dyson spheres... all of these we will get before we can truly start building an AI that matches humans. At least we know how we can handle all the other problems: the advances they require and the research that is still needed in their fields.
But sadly, we still know jack shit about how the brain works, or how to make one in silico (and I am saying that in all seriousness, fully aware of the staggering amount of current knowledge about the anatomy, physiology, and psychology of the brain). Consider, for instance, that the brain alone requires over 50% of all of our genes; any other organ (your skin, your penis, heart, kidney, lungs) needs less than 5%. Even trying to put together the very few known protein-protein interactions quickly turns into a giant clusterfuck of data, with the degree of complexity growing as a factorial of the number of proteins. If we correctly assume that consciousness is the result of a kind of gestalt of the protein interactions, the anatomical wiring, the multiparallel computation by trillions of cells, and the acquired experiences, building AI is a near impossibility until we get both the revolutionary math tools and the quantum computers capable of universe simulation.
Part of the problem is (to paraphrase Rumsfeld) that we don't even know what we don't know about the brain. And no, simply copying the brain structure will not be the answer.
We have flying cars (Score:3, Insightful)
The thing is, we don't actually *want* flying cars. Ground transport is sufficient for most situations, and it's far more economical to cluster together long range transport.
What's the point? (Score:1, Insightful)
Re:Where is the proof of possibility (Score:3, Insightful)
Consciousness is also a strange beast. What is consciousness? Why does consciousness feel continuous, when we know it isn't? (Some people even regain consciousness after they have been pronounced dead.) Why do I still think I am the same being that I was 10 years ago, when my brain was made of completely different cells? Because of the uncertainty of these questions, I think that *what consciousness is* really doesn't matter.
Consciousness may just be part of the noise that results when a thinking being becomes self aware. But no matter what it is, I think it developed as a means to an end, rather than an end in itself. If this is so, when we create computers that can parse written information and communicate effectively, it won't really matter whether they are "conscious", and it won't matter what it would mean for such a machine to be conscious.
Re:Hrmmmm (Score:2, Insightful)
The other aspect of intelligence is harnessing a lifetime's worth of sensory experience which interfaces with our animal instincts and processing it all in real time. But the best computer for all of this is an analog computer, which has inherently greater granularity.
Re:Oblig. (Score:5, Insightful)
Most of us know jack about the algorithms that allow us to catch a baseball in flight, yet we can still do it. Furthermore, a person from 10000 BC with no math at all by today's standards could do it just as well as we can. Implementing solutions does not always require a complete understanding of what you've done. You can even be wrong and it'll still work for other reasons. So hard-pegging this to what we "know" could be a severe error.
That's a very bold statement, especially since (a) that's the way nature does it for all its intelligences, high and low, so we know the process works in the general case, and (b) as you say, we don't know many things yet, so claiming that we "know" what won't work seems to be disingenuous or at the very least not well thought out.
I think it is important not to conflate the fact that we don't understand something with the idea that it will be difficult once figured out or discovered as a consequence of some fortuitous sequence of events. That's been shown again and again not to be the case. It *may* be so, but it is by no means certain to be so, and for that matter, it isn't indicated by the complexity of the brain's hardware. The brain is considerably more formidable as a mass of immensely complex moderated connectivity than it is as a collection of cellular-level mystery machines, and a good deal of the complexity at the cellular level is almost certainly irrelevant to the task of thought -- keeping the cell alive is probably in no way related to non-pathological mental operation, yet there's a lot of hardware and systems involved in the task.
Re:Oblig. (Score:3, Insightful)
On the other hand, making a machine with human intelligence is (literally) as easy as making a baby, and humans are very adept at modifying existing tools. We already have working neural interfaces to simple prosthetics, so it's not a stretch to think that intelligence amplification or augmentation is obtainable in the next two centuries. As we improve our own intelligence, the creation of AI will become easier. It's likely that the hard problems you've mentioned will actually be solved after humans have improved their brains, since any increase in intelligence will make those problems easier to solve, and we seem to be closer to neural improvement than we are to sustainable fusion (we actually have neural interfaces that work; we still can't sustain a thermonuclear fusion reaction).
Re:To heck with Artificial Intelligence! (Score:3, Insightful)
Just as dead as the meat copy of you from 5 minutes ago? What magic makes your body 5 minutes in the future "you" instead of just a random copy? You do know that all your atoms get replaced every few years, and that when you sleep deeply or get put under general anesthesia almost all of your brain activity ceases, right? I have no problem with going to sleep at night and waking up in a slightly different biological body that thinks it's the same person as the one that went to sleep last night. Why should I care if the body I wake up in is made out of electronics instead of meat, so long as it feels the same way?
Re:Oblig. (Score:4, Insightful)
Whatever definition of intelligence you choose, it probably includes learning and reasoning components. We have some effective learning algorithms, provided your domain is very specific and you have boatloads of training data. We have next to no good reasoning algorithms. Complete search is a dead duck, and incomplete search is not very reliable. Worse, search algorithms get seriously confused when the database is inconsistent (humans are good at maintaining several incompatible world models simultaneously). And that's all before you consider that we have no psychological models of human reasoning that are anywhere near specific enough to guide an implementation project (please don't mention "Society of Mind"). Finally, there is precious little funding out there for this kind of research, which is a shame, but there you go.
He must know something I don't (Score:4, Insightful)
That said, a lot of stuff can happen in 50 years, and I bet that once some of the major problems get solved, there will be an insane stream of money pouring into this field to accelerate the research. Just imagine the benefits an "omniscient" AI trader would bring to a bank. The question is, do we want this to happen? This will be far more disruptive a technology than anything you've ever seen.
Re:Oblig. (Score:3, Insightful)
That's probably because we have discovered little about the brain's structure and function.
> all that has to happen here is for that trend to continue
Well, the field of AI in the last 40 years has made practically zero progress towards human-like intelligence, but I agree with you: the trend will likely continue.
Re:Oblig. (Score:4, Insightful)
No, you have missed my point. An algorithm or algorithms is certainly required, and I never meant to imply otherwise. Human understanding of said algorithm(s), however, is explicitly not required. And there are many paths that lead to such a situation. Whether one of those will take us to a form of AI remains to be seen, which is what I was saying.
It is one thing to understand the mechanism required for operation -- it is quite another to understand the state it is in. I think you are confusing the latter with the former; the former is relatively trivial, and the latter is not required any more than a complete understanding of the state of everything involved at NASA is required in order to create, launch and recover the space shuttle. Complex systems are holistic, mostly co-operative combinations of subsystems, and as long as someone, somewhere, understands (or understood at one time, or possessed an adequate analogy to, or approximation of) the subsystems, or even the subsystems that make up the subsystems, that's sufficient to develop a fully functional macro system. And -- most importantly -- it only has to be done once, because of the unusual copyable nature of the result.
Re:Oblig. (Score:4, Insightful)
Odds are extremely good for beyond-human AI, given no restrictions on initial and early form factor. I say this because, thus far, we've discovered nothing whatsoever that is non-reproducible about the brain's structure and function; all that has to happen here is for that trend to continue. And given that nowhere in nature, at any scale remotely similar to the range that includes particles, cells and animals, have we discovered anything that appears to follow an unknowable set of rules, the odds of finding anything like that in the brain (that is, something we can't simulate or emulate with 100% functional veracity) are just about zero.
At present, hardware will crash if a few bits end up in the wrong places or are stored incorrectly. One of the things about organic lifeforms is that our consciousness doesn't cease to exist if one of our neurons misfires; at worst we get a seizure or possibly a hallucination. Any machine that's going to surpass us would have to turn those wrong bits into something meaningful without human intervention, even if they are just unexpected rather than outright wrong.
It may very well be that computer technology will solve that problem, but quite a bit of what we are comes from these random misfirings and unpredictable unreliable results. Modeling what humans are presently like, or even modeling what humans are like at the point when this becomes realistic is far easier than creating something that will outdo us by intellect.
I'm somewhat skeptical of the claim that there is nothing a person's brain can do that cannot be modeled by software. When it comes to talking, moving, building, following instructions and things of that nature, I see no reason why a machine couldn't be taught to do those things as well as we do. But when it comes to more subtle things, things which require creativity, sometimes things which require a deliberate violation of typical common sense, I'm skeptical that a machine could be taught to do so.
I'm especially skeptical considering that we don't even know most of what the human brain does or how it does it. We know many things, and we know enough to greatly benefit ourselves, but there are still a fair number of things we don't understand about the brain. It is not a simple organ to understand; just in the last 10 years, the amount of information gained about it is sufficient for me to suggest that you shouldn't claim that every part of the brain can be simulated.
I really don't want to suggest that it is impossible for us to create something that surpasses ourselves, but doing so would require things we haven't even dreamed up yet. Teaching a computer AI to be capable of meaningful creativity isn't even on the most distant horizon; none of the programming languages or toolkits available at present offer that sort of capability in anything resembling a reasonable number of lines of code.
Re:Oblig. (Score:3, Insightful)
No, it is *probably not*. It *may* be, but since *nothing else* has presented us with that kind of problem, the odds of the brain doing so are pretty darned slim. You are postulating a heretofore never-achieved discovery in the course of determining how a mundane (by every indication) biological system, constrained as far as we know by the same physics and chemistry everything else is, operates. Considering the *fact* that there is no indication for such a discovery, I'd say you are way out on a creaky limb and you should be asking yourself how you got there.
Re:Projection length (Score:2, Insightful)
They are both wrong anyway. Long before then we will have turned off the sun to stop it wasting energy, and "Starlifted" most of its mass to do something useful with.
Re:Oblig. (Score:3, Insightful)
No it's not; it's a lack of theism. Many religious people seem to find it really difficult to get their heads around. Religion and gods have absolutely nothing to do with our lives. We don't sit down every morning and pray to the void. We simply accept reality for what it is and don't see anything in our everyday lives that needs a special explanation.
I don't know why it's so difficult to understand. It's not much different from being uninterested in football. You'll have groups of people that are obsessed by it and cannot understand how it's not a part of someone else's life.
Surely there's something that's completely irrelevant to your life? Tiddlywinks, or tabletop roleplaying games. As far as most atheists are concerned, religion is just another interest, and only relevant when it tries to force itself on our lives. Imagine how cross you'd get if tiddlywinks players got together and tried to force an hour of daily tiddlywinks into national school curricula.
Re:Oblig. (Score:3, Insightful)
Also, what you're saying about scientific discovery is a tautology: when it "discovers" something irreproducible, it's not yet considered a discovery at all. When something is discovered, it's also discovered reproducibly. That's a fucking necessity for scientific discovery right there, of course science won't find anything else! Outside the scope of science lies all subjective experience, which is where actual human intelligence resides (the idea of intersubjectivity is what makes us recognise understanding, intelligence, in the other person). Your argument seems radically irrelevant.
AND! (Score:2, Insightful)
I've read TFA; there is NO science at work here, just some bloke making HUGE unsubstantiated claims. They cite no research; they don't even make any concrete claims apart from "at a human level". I've seen more technical and in-depth discussions between pissheads on a park bench.
Finally, at the bottom, they name-drop a Google founder to try and make this sound more believable.
Shame on the BBC for covering nothing, and shame on Slashdot for posting filler.
Re:Oblig. (Score:3, Insightful)
You (and most proponents of AI) have failed to answer any of the philosophical/metaphysical questions one inevitably becomes confronted with, by using the analogy of the brain as "software" and stating that the hardware is irrelevant. I suspect there are cellular-level mysteries yet to be discovered, including possibly quantum action at a low level, that would have a strong influence on the facts of the matter here. It is a rather simple-minded and arrogant "faith" that leads you to believe we have anywhere near a good understanding of how the brain works.
Re:An alternative to AI ... us (Score:3, Insightful)
And this isn't to completely mark as irrelevant anything that Minsky said about AI in the 1970s, or what he has done since then, but to note that the study of intelligence, whether from a nanotechnology/biology perspective or from a software-engineering approach, is still trying to uncover the basic ground rules and understand even the sheer domain of the problem.
If you don't understand the domain... or if the size of the domain keeps expanding... you really don't even know where to begin to solve the problem. I challenge any of the researchers in this field to clearly define even what it means to have human intelligence, or what the intelligence of an earthworm really is. Let's just say that Charles Darwin was sufficiently impressed by the intelligence of the earthworm that he chose to use that species as the foundation block for his study of intelligence. (Yes, I know there are multiple species of earthworms.) And only recently is this aspect of intelligence even being reconsidered.
I do think that a proclamation that we might be able to reach the computational processing level of an earthworm in the next 20 years is reasonable, but even then you had better be extra sure that you understand even the scope and domain of that problem before you claim it is "solved". I for one am still not convinced, in spite of some pretty incredible research about the issue.
Re:Oblig. (Score:3, Insightful)
I'm an atheist, and I'm offended you lumped me with any of those groups. I could, maybe, possibly, relate to "scientists", except I don't believe science can solve *all* problems. I'm positively offended at being thrown in the same bag as "environmentalists" (will-anybody-think-of-the-children is the most dangerous argument), disgusted at being associated with "anti-theologists" (if anything, atheists should be closer to the live-and-let-live way of life) and reject "universalists" (by your description, I'd say they're religious in denial).
Most atheists I know are drawn to science because, if you observe religion from a scientific standpoint, it is unprovable. Pick any one religion on planet Earth. They all have a similar basis; each rests on a few unquestionable dogmas. Even if you believe there is *a* god, rationally picking the right one from the wide choice available in all religions is an impossible task.
This does not make them an organized group. There are no prophets. There are no rituals. There are no dogmas. There are no eternal punishments nor days of doom. Atheists are a non-religion, as much as black is a non-color. It may be hard to wrap your brain around it, but the absence of something is a fact in itself.
What aircraft corner as fast as barn swallows? (Score:5, Insightful)
Your question displays a lack of understanding. Not of biology, but of physics; the square-cube law, specifically. Aircraft don't corner as fast as small birds. The reason isn't any magic of biology, it's simple momentum.
The larger any object is, the more it weighs. Make it twice as big and it weighs eight times as much, and packs eight times as much momentum. A large bird doesn't turn as fast as a small bird. The same is true of planes. The same is true of ships. A bus won't corner as fast as a sports car either.
A typical aircraft is 100 times bigger than a swallow. It's a million times heavier. It packs a million times the momentum. It's not that the swallow's design is better, or that there is some biological magic. It's just a question of size. It's true the other way too. A mosquito can turn a lot quicker than a barn swallow. Barn swallows catch mosquitoes because they can fly faster. Guess what, the aircraft you were so dismissive of can fly a lot faster than that barn swallow too. Visit a large airport. Swallows get killed by aircraft every day. They can't get out of the way in time. A barn swallow as large as a chicken would be ripped apart by the stresses if it could corner as fast as a real barn swallow. That's the real reason chickens don't turn well in flight. (Yes, chickens can fly for short distances.) Momentum.
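The square-cube arithmetic above can be sketched in a few lines of Python (the 20 g swallow mass and the scale factors are round illustrative numbers, not measurements):

```python
# Square-cube law: scale every linear dimension of an object by k and,
# for the same shape and density, its mass scales by k**3. At a given
# speed, momentum (mass * velocity) scales by the same factor.

def scaled_mass(base_mass_g, scale_factor):
    """Mass after scaling all linear dimensions by scale_factor."""
    return base_mass_g * scale_factor ** 3

swallow_g = 20  # illustrative mass of a barn swallow, in grams

# Twice as big -> eight times the mass (and momentum):
print(scaled_mass(swallow_g, 2))    # 160

# 100x the linear size -> a million times the mass:
print(scaled_mass(swallow_g, 100))  # 20000000 g, i.e. about 20 tonnes
```

The same arithmetic run downward explains the mosquito: shrink the linear size by 10 and the momentum to overcome drops by a factor of a thousand.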
Your problem appears to be that you just don't understand scale. It is a wonderful thing when you do. You start seeing the reasons for all kinds of things around you.
So, yes, we should study biology. But we should also remember the physics. The tricks the mosquito uses just won't work for a passenger jet. Nor will the barn swallow's turns be good for the passengers on that jumbo jet. Still, some things will be useful. We just don't know what. Who would have thought that studying a shark's skin would help racing yachts? Personally, I hope that we get a lot of surprises. That's where the fun in science is.
I don't expect AI research to give us human-type intelligence in a machine. Ever. That doesn't mean we shouldn't try. We don't know what we will get, or what it will make possible. We can't know before the fact. Studying birds didn't give us aircraft that can corner in a second or two; it did give us jumbo jets that can take us halfway around the world in an easy chair. That took a lot of other things too.
The Wright brothers succeeded where Lilienthal failed. Not because they understood birds better, but because in the meantime the internal combustion engine was developed. AI will be the same. Right now, we don't even know what we need in order to make this work. There will be surprises.
Re:Oblig. (Score:3, Insightful)
Uh.. since we have no idea what the process is yet, this statement is meaningless. Therefore all you're making is a statement of optimism, and there's absolutely no basis for this. We have no idea what consciousness is, and can't define it outside of subjective internal experience. Therefore, there's no reason for the optimism shown both in the original article, and by all the people in here commenting from their armchairs as though AI is right around the g-d corner. It's not, and our best guess is it won't be for the foreseeable future. The problem is too fundamentally unsolved.
Re:Oblig. (Score:3, Insightful)
What I see happening is that the current kind of development will slow greatly in about 10-20 years, but during the same period multi-processors will become more common. I've seen this coming for quite some time (but now it's starting to actually show up), so it has really griped me when languages take a construction like "For each" and define it in a way that precludes parallel execution.
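The gripe about "For each" can be made concrete. A construct that promises in-order, one-at-a-time iteration cannot safely be parallelized by the runtime, while a map over independent elements can. A minimal sketch in Python, using the standard concurrent.futures module purely to illustrate the distinction:

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

def for_each_square(xs):
    # A classic "for each": the language promises each iteration runs
    # after the previous one, so iterations may depend on shared state
    # and the runtime cannot reorder or parallelize them.
    out = []
    for x in xs:
        out.append(square(x))
    return out

def parallel_square(xs):
    # map() over independent elements makes no promise about when each
    # call runs, so the work can be spread across a pool; the results
    # still come back in input order.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(square, xs))

print(for_each_square(range(6)))  # [0, 1, 4, 9, 16, 25]
print(parallel_square(range(6)))  # same result, parallel-friendly definition
```

A "For each" defined with map-like semantics (no ordering guarantee between iterations) would let compilers spread the loop across those coming multi-processors for free.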
clarification about atheism (Score:3, Insightful)
For one thing, it does not compete with religion, and many strongly religious people (in every major religious tradition) have the same humanistic convictions and take their religion to support their humanism (and vice versa). The same goes for a belief in the results and methods of science: this belief does not crowd out religious belief, and most educated religious people in the West believe in science just as much as atheists do. Ditto for environmentalism and all the other isms you mention.
You're right that various humanistic movements are organized, but so are chess clubs, national elections and universities. Belonging to an organized religion prevents membership in another organized religion (unless you're Japanese, who seem to have no problem with accepting several religions simultaneously), but it certainly does not prevent membership in another, non-religious organized movement.
I just want it to be clear that humanistic endeavors like the fight against poverty, for environmental conservation, for global justice, etc. are nothing like religions. Religion is a different sort of thing.
Atheists simply don't have a religion. What makes them atheists is that in them, any belief that gods of any sort exist, is absent. This does not force them to put their "faith" in any other movement in particular. I mean, to some extent, every human being with normal, human compassion has some sort of humanistic ideals. But again, that's just a result of being a moral and empathetic person, and it happens to moral people whether or not they have any faith in various gods.
Re:Oblig. (Score:4, Insightful)
That is incorrect. Language is the ability to communicate feelings, goals, results. It is not "speech." Some birds do indeed have the capability of speech; that is, they can make the same sounds we can, closely enough as to make no difference. Apes, however, have demonstrated actual communication using symbols, and even dogs have recently been found to have a consistent, though very small, vocabulary. Elephants and other animals have demonstrated the ability to think in the abstract (the "recognize one's self in the mirror and operate on the information thus provided" experiments). Lemurs use calls to communicate safety and status. Don't confuse the lack of vocal apparatus with an inability to communicate. They're not the same thing at all.
As for the rest, I think you've got it, essentially, but we disagree on scales. We'll see.
Er... no.
Language is much more than that: it is a system of symbols that can even be used to describe any other symbolic system, and which can be extended at need and at will; animal communication shows little or no indication of that.
Nobody in their right mind would deny that animals can communicate, and even that they can communicate very well.
However, that alone does not make them capable of using a language.
The cognitive leap a simple verbing of a noun requires is beyond any other animal.
Re:Oblig. (Score:2, Insightful)
No, these books are not "sacred books of atheism". Please try again.
Firstly, it doesn't follow that an atheist believes either of these things (it seems to me that atheists usually accept logic, mathematics and science, but this isn't part of the definition). Secondly, anything in those books is accepted based on whether it is logical and can be verified to be true - not simply because it is written in that book. I also feel this quote is relevant from http://www.talkorigins.org/indexcc/CG/CG001.html [talkorigins.org], discussing the myth that Darwin recanted: The theory of evolution rests upon reams of evidence from many different sources, not upon the authority of any person or persons.
atheism was an official state religion
Religions were outlawed; I would like to see a source showing that they introduced a new religion named "atheism".
Mao's "Little Red Book" certainly fits the role of religious scripture with a near deification of Mao's actions
So? Call it Mao-ism then. You seriously believe that all atheists are communists and accept the teachings of Mao?
I have observed "congregations" of atheists that have come together in terms of organizing a social network for the common good.
Yeah, so do geeks, role-players and swingers. Since when did having a social meet mean anything to do with religion? Just because religion can be social doesn't imply anything social is religious!
Atheism is much broader and deeper than you are implying here, and takes on many different forms.
Which is exactly why it isn't a religion.
Re:Oblig. (Score:3, Insightful)
Usually when someone makes such fantastic claims, like being very close to cracking AI, or trying to become AI's Don Knuth, the person is either clearly trying to be ironic, or leaves the distinct impression of being a bit unhinged.
As you seem to be both sincere and making a lot of sense, I have a message for you:
Stop. Stop right now. If you do not, Skynet will destroy us all.
Thank you.