Artificial Intelligence at Human Level by 2029?
Gerard Boyers writes "Some members of the US National Academy of Engineering have predicted that Artificial Intelligence will reach human level in around 20 years. Ray Kurzweil leads the charge: 'We will have both the hardware and the software to achieve human-level artificial intelligence, with the broad suppleness of human intelligence including our emotional intelligence, by 2029. We're already a human-machine civilization; we use our technology to expand our physical and mental horizons, and this will be a further extension of that. We'll have intelligent nanobots go into our brains through the capillaries and interact directly with our biological neurons.' Mr Kurzweil, a gentleman we've discussed previously, is one of 18 influential thinkers chosen by the US National Academy of Engineering to identify the great technological challenges facing humanity in the 21st century. The experts include Google founder Larry Page and genome pioneer Dr Craig Venter."
Re:Exponential AI? (Score:5, Informative)
That's the popular hypothesis [wikipedia.org].
Re:wrong (Score:3, Informative)
http://en.wikipedia.org/wiki/Ray_Kurzweil [wikipedia.org]
"Everybody promises that AI will hit super-human intelligence at 20XX and it hasn't happened yet! It never will!" ... well, guess what? It'll be the last invention anybody ever has to make. Great organizations like the Singularity Institute http://en.wikipedia.org/wiki/Singularity_Institute [wikipedia.org] really shouldn't be scraping along on such poor budgets. Seriously, if this ever worked, even at a 0.001% chance of a friendly technological singularity occurring, isn't it worth investigating?
Re:Blue Brain Project (Score:3, Informative)
While this project is verrry cool, they are not even remotely close to biological realism. Sorry...
Their simulation model is still incomplete, with a few more years of work needed to get the neurons working like they do in real life.
That is just it. We are finding that real biological systems, from complete neural reconstructions, are far more complex, with many more participating "classes" of neurons and much more nested and recurrent collateral connectivity than any existing model of neural connectivity predicts.
Re:Hrmmmm (Score:2, Informative)
2200 years ago, Eratosthenes not only knew the earth was round, he measured its circumference, accurate to within either 1% or 17% depending on who you ask. Still, "off by 17 percent" is a lot better than "off by infinity percent because everyone knows the earth is flat, numbskull".
Re:The End of Intelligent Design (Score:4, Informative)
Can anyone name an important algorithm or representation from this decade?
There's been substantial progress in trainable computer vision systems in the last decade. Computer vision is finally starting to work on real-world scenes. SLAM algorithms work now. Texture matchers work. There really has been progress in those areas.
HAH... not there... (Score:3, Informative)
About a year ago, I found a link (from a reputable source, IIRC) to a site from a company that claimed to be doing significant work with genetic algorithms. As an example, they had a description (and even a graphic demo) of their modified quicksort vs. a regular quicksort. According to their literature, it showed marginal improvements over quicksort by ensuring (in some non-obvious way) that each element in the dataset was compared only once. It was all very convincing, but of course I did not scrutinize their actual code.
Since you asked, I went out looking for that source, and I, too, have been unable to locate it. In the process, I found a number of references to claims (Sedgewick, et al.) that Quicksort is already optimal.
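For reference, here is what a textbook quicksort looks like (a generic Python sketch of my own, not the company's modified version). The optimality claim rests on the fact that any comparison-based sort needs on the order of n log n comparisons in the worst case, so a variant that compares each element "only once" can't be a plain comparison sort:

```python
def quicksort(a, lo=0, hi=None):
    """Textbook in-place quicksort using Lomuto partitioning."""
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        pivot = a[hi]
        i = lo
        # Partition: move everything <= pivot to the left of index i.
        for j in range(lo, hi):
            if a[j] <= pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]  # place pivot in its final position
        quicksort(a, lo, i - 1)    # recurse on the left part
        quicksort(a, i + 1, hi)    # recurse on the right part
    return a

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```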
So, right now anyway, it appears that someone pulled the wool over my eyes.
Human AI meets machine intelligence (Score:3, Informative)
Maybe that's why Google is hoarding all the remaining three digit IQ scores so that there is no shortage of IQ.
In other news, lots of flying chairs were heard swishing around Redmond Campus at Microsoft when the CEO heard google was cornering the market on Human IQs.
Abrams starts a new Serial: LOST IQ.
*sighs* What to say ... (Score:3, Informative)
That's enough. Err ... frankly your reply has given me pause. Seriously. It betrays a wealth of misunderstanding about AI and computing in general, and I have been wondering whether my reply should be a sarcastic one or just an explanatory one. Given the nature and the depth of the misunderstanding displayed here, I have settled on an explanatory one.
What you call "Automated scheduling" is part of a branch of applied mathematics known as "Operations Research". Basically it's the art and science of translating a practical, real-world problem (such as air-crew scheduling, devising FedEx routes, loading aircraft, routing goods flows through transport networks as efficiently as possible, finding optimal stock portfolios, finding optimal ways of running an oil refinery, etc.) into a mathematical problem (usually a so-called "optimisation problem"; see http://en.wikipedia.org/wiki/Category:Optimization_algorithms [wikipedia.org]), and then devising appropriate solution algorithms that can be executed by a computer (usually a digital one) to give exact or approximate optimal solutions to said problem. See also: http://en.wikipedia.org/wiki/Operations_research [wikipedia.org]
Such problems can be quite large ... e.g. with thousands of variables and tens of thousands of constraints. Now I'm confident that you would be quite unable to solve a 2x2 LP problem (i.e. a Linear Programming problem, one of the most basic Operations Research problems) in your head, or a 3x3 problem using pen and paper. Any PC can run a program that solves such problems in microseconds. This however has nothing to do with the question of whether solving an LP problem is to be classified as AI or not. As a matter of fact, solving LP problems is not, and has never been, considered part of AI. The same holds for all the other OR problems I mentioned.
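To make the 2x2 case concrete (a toy example of my own, not tied to any real scheduling system): for an LP the optimum always lies at a vertex of the feasible region, so a two-variable problem can be solved by brute-force vertex enumeration in pure Python, no cleverness required:

```python
from itertools import combinations

def solve_2var_lp(cx, cy, cons, eps=1e-9):
    """Maximize cx*x + cy*y subject to a*x + b*y <= c for each (a, b, c) in cons.

    Enumerates intersections of constraint boundary lines (the candidate
    vertices), keeps the feasible ones, and returns (value, x, y) of the best.
    """
    best = None
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < eps:
            continue                   # parallel boundaries: no vertex here
        x = (c1 * b2 - c2 * b1) / det  # Cramer's rule for the 2x2 system
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + eps for a, b, c in cons):
            val = cx * x + cy * y
            if best is None or val > best[0]:
                best = (val, x, y)
    return best

# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0
# (the nonnegativity bounds are rewritten as -x <= 0 and -y <= 0).
cons = [(1, 1, 4), (1, 3, 6), (-1, 0, 0), (0, -1, 0)]
print(solve_2var_lp(3, 2, cons))  # optimal value 12 at x=4, y=0
```

Real solvers use the simplex method or interior-point algorithms rather than enumeration, which is what lets them scale to the thousands of variables mentioned above.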
Now it turns out that many of the problems I mentioned don't have what are known as "efficient" solution algorithms. Meaning we don't know of any exact solution algorithm that has polynomial run-time on a digital computer; instead all known algorithms have *exponential* run time on a digital computer. In such cases one resorts to what are known as "heuristics" (see http://en.wikipedia.org/wiki/Heuristics#Computer_science [wikipedia.org] ), being algorithms that aren't guaranteed to find an optimal solution, but which sometimes *can* be guaranteed to come within say p% of the optimum, or at least to come up with a fairly decent solution. Some of the heuristics used, e.g. what are known as "branch-and-bound" algorithms (see http://en.wikipedia.org/wiki/Branch_and_bound [wikipedia.org]) are based on questions that were (also) encountered or raised in the study of AI.
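The branch-and-bound idea can be sketched in a few lines (again a toy example of my own, a 0/1 knapsack, not any system discussed above): branch on take/skip decisions and prune any subtree whose relaxation-based upper bound cannot beat the best solution found so far.

```python
def knapsack_bb(values, weights, capacity):
    """0/1 knapsack via branch and bound with a fractional-relaxation bound."""
    # Sort items by value density so the greedy fractional bound is valid.
    items = sorted(zip(values, weights), key=lambda vw: vw[0] / vw[1],
                   reverse=True)
    n = len(items)
    best = 0

    def bound(i, val, cap):
        # Upper bound: greedily take whole items, then a fraction of the next.
        b = val
        while i < n and items[i][1] <= cap:
            b += items[i][0]
            cap -= items[i][1]
            i += 1
        if i < n:
            b += items[i][0] * cap / items[i][1]
        return b

    def branch(i, val, cap):
        nonlocal best
        best = max(best, val)
        if i == n or bound(i, val, cap) <= best:
            return                     # prune: this subtree can't win
        if items[i][1] <= cap:
            branch(i + 1, val + items[i][0], cap - items[i][1])  # take item i
        branch(i + 1, val, cap)        # skip item i

    branch(0, 0, capacity)
    return best

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # 220
```

The worst case is still exponential, but on typical instances the bound prunes most of the search tree, which is exactly the trade-off described above.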
The important thing to note is that in general this has nothing whatsoever to do with Artificial Intelligence per se. Artificial Intelligence (AI) research, on the other hand, deals with problems like: "How can we induce computers to exhibit behaviour mimicking the human mind, or the human body?" (see: http://en.wikipedia.org/wiki/Artificial_intelligence [wikipedia.org])
Note the lack of overlap between Operations Research (OR) and Artificial Intelligence (AI) problems.
Re:HAH... not there... (Score:2, Informative)
http://critticall.com/ArtificialSort.html [critticall.com]