Wired Founding Editor Now Challenges 'The Myth of A Superhuman AI' (backchannel.com) 284
Wired's founding executive editor Kevin Kelly wrote a 5,000-word takedown of "the myth of a superhuman AI," challenging dire warnings from Bill Gates, Stephen Hawking, and Elon Musk about the potential extinction of humanity at the hands of a superintelligent construct. Slashdot reader mirandakatz calls it an "impeccably argued debunking of this pervasive myth." Kelly writes:
Buried in this scenario of a takeover of superhuman artificial intelligence are five assumptions which, when examined closely, are not based on any evidence...
1.) Artificial intelligence is already getting smarter than us, at an exponential rate.
2.) We'll make AIs into a general purpose intelligence, like our own.
3.) We can make human intelligence in silicon.
4.) Intelligence can be expanded without limit.
5.) Once we have exploding superintelligence it can solve most of our problems...
If the expectation of a superhuman AI takeover is built on five key assumptions that have no basis in evidence, then this idea is more akin to a religious belief -- a myth.
Kelly proposes "five heresies" which he says have more evidence to support them -- including the prediction that emulating human intelligence "will be constrained by cost" -- and he likens artificial intelligence to the physical powers of machines. "[W]hile all machines as a class can beat the physical achievements of an individual human...there is no one machine that can beat an average human in everything he or she does."
But how will I trick investors!?! (Score:5, Funny)
Anything is possible in 10-20 years, just give me all your money!
Re: But how will I trick investors!?! (Score:5, Insightful)
* Circa 1970
Today's "youth" have no perspective. If you know your technology history, you DO NOT doubt such claims. Do you have any idea how far we have progressed in less than half a human lifetime? Do you not get that the advancement of technology has so far been non-linear, to the point of being almost exponential?
Re: (Score:2)
For every component attribute that can be measured (amount of memory, CPU throughput, screen size, pixel depth, bus speed, network connection speed), the values have been doubling every two years or less. It is exponential.
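The arithmetic behind that claim is worth spelling out: "doubles every two years" is exactly what exponential growth means. A minimal sketch (the starting value and the 2-year doubling period here are just illustrative assumptions):

```python
# Doubling every two years is exponential growth: value(t) = start * 2**(t / 2).
# 'start' and the 2-year doubling period are illustrative assumptions.

def doubled_value(start, years, doubling_period=2.0):
    """Value after 'years' of growth that doubles every 'doubling_period' years."""
    return start * 2 ** (years / doubling_period)

# 20 years at a 2-year doubling period is 10 doublings: a 1024x increase.
print(doubled_value(1, 20))  # 1024.0
```

Ten doublings in twenty years is roughly the "half a human lifetime" jump the parent comment is pointing at.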
Re: But how will I trick investors!?! (Score:4, Insightful)
"If it cannot be emulated in silicon it either means biological entities have a soul ..."
Why would it mean that biology relies on a soul, rather than simply that it relies on processes which aren't comparable to systems that run software on silicon?
Re: But how will I trick investors!?! (Score:5, Informative)
Except that the claims of strong AI 'real soon now' have been coming since the '60s. Current AI research is producing things that are good at the sorts of things that an animal's autonomic system does. AI research 40 years ago was doing the same thing, only (much) slower. The difference between that and a sentient system is a qualitative difference, whereas the improvements that you list are all quantitative.
Neural networks are good at generating correlations, but that's about all that they're good for. A large part of learning to think as a human child is learning to emulate a model of computation that's better suited to sentient awareness on a complex neural network. Most animals have neural networks in their heads that are far more complex than anything that we can build now, yet I'm not seeing mice replacing humans in most jobs.
Re: But how will I trick investors!?! (Score:5, Interesting)
Neural networks are good at generating correlations, but that's about all that they're good for.
No... What a supervised neural net does, in full generality, is tune a massively parameterized function to minimize some measure of its output error during the training process. It's basically a black box with a million (or billion) or so knobs on its side that can be tweaked to define what it does.
During training, the net learns how to optimally tweak these knobs to make its output for a given input as close as possible to a target output defined by the training data it was presented with. The nature of neural nets is that they can then generalize to unseen inputs outside of the training set.
The main limitation of neural nets is that the function being optimized and the error measure being minimized both need to be differentiable, since the way they learn is by gradient descent (following the error gradient downhill to minimize the error).
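The "knobs tuned by following the error gradient" description above can be sketched with a single knob. This is only a toy illustration of gradient descent (one parameter, a linear model, squared error), not a real neural net:

```python
# Minimal sketch of "tweak knobs to minimize error": one parameter w,
# a differentiable model y = w * x, and a differentiable squared-error
# loss, minimized by repeatedly stepping down the error gradient.

def train(data, lr=0.1, steps=100):
    w = 0.0  # the single "knob"; real nets have millions or billions
    for _ in range(steps):
        for x, target in data:
            y = w * x                    # forward pass
            grad = 2 * (y - target) * x  # d/dw of (y - target)**2
            w -= lr * grad               # gradient descent step
    return w

# Learn y = 3x from two training examples; w should converge to 3.
w = train([(1.0, 3.0), (2.0, 6.0)])
print(round(w, 3))  # 3.0
```

Every step of that loop needs the derivative `grad` to exist, which is exactly why the differentiability requirement above is a hard constraint.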
The range of problems that neural nets can handle is very large, including things such as speech recognition, language translation, natural-language image description, etc. It's a very flexible architecture - there are even neural Turing machines.
No doubt there is too much AI hype at the moment, and too many people equating machine learning (ML) with AI, but the recent advances both in neural nets and reinforcement learning (the ML technology at the heart of AlphaGo) are quite profound.
It remains to be seen how far we get in the next 20 (or whatever) years, but already neural nets are giving computers super-human performance in many of the areas where they have been applied. The combination of NN + reinforcement learning is significantly more general and powerful, powering additional super-human capabilities such as AlphaGo. Unlike the old chestnut of AI always being 20 years away, AlphaGo stunned researchers by being capable of something *now* that had been estimated to be at least 10 years away!
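For readers unfamiliar with the reinforcement-learning half of that combination, its textbook form is tabular Q-learning. The toy below (a 5-state corridor with a reward at one end; all numbers are illustrative) shows the core update rule; AlphaGo pairs this idea with deep networks and tree search, which this sketch makes no attempt at:

```python
import random

# Textbook tabular Q-learning on a toy 5-state corridor: actions are
# left/right, reward 1.0 only on reaching the rightmost state. The agent
# learns action values Q[state][action] purely from trial and error.

N_STATES, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]; 0 = left, 1 = right
alpha, gamma = 0.5, 0.9                    # learning rate, discount factor

random.seed(0)
for _ in range(500):                       # 500 training episodes
    s = 0
    while s != GOAL:
        a = random.randrange(2)            # random exploration; Q-learning is off-policy
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == GOAL else 0.0
        # Core update: move Q toward reward + discounted best future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned greedy policy is "move right" in every non-goal state.
print(all(Q[s][1] > Q[s][0] for s in range(N_STATES - 1)))  # True
```

Nothing here was told which moves are good; the preference for "right" emerges from the update rule alone, which is the sense in which such systems "learn by playing".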
There's not going to be any one "aha" moment where computers achieve general human-level or beyond intelligence, but rather a whole series of whittling away of things that only humans can do, or do best, until eventually there's nothing left.
Perhaps one of the most profound benefits of neural nets over symbolic approaches is that they learn their own data representations for whatever they are tasked with, and these allow large chunks of functionality to be combined in simple lego-like fashion. For example, an image captioning neural net (capable of generating an English-language description of a photo) in its simplest form is just an image classification net feeding into a language model net... no need to come up with complex data structures to represent image content or sentence syntax and semantics, then figure out how to map from one to the other!
This ability to combine neural nets in lego-like fashion means that advances can be used in combinatorial fashion... when we have a bag of tricks similar to what evolution has equipped the human brain with, then the range of problems it can solve (i.e. intelligence level) should be similar. I'd guess that a half-dozen advances may be all it takes to get a general-purpose intelligence of some sort, considering that the brain itself only has a limited number of functional areas (cortex, cerebellum, hippocampus, thalamus, basal ganglia, etc.).
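The lego-composition point above is really just function composition: one module's learned output feeds straight into the next, with no hand-built symbolic structure in between. A toy sketch of the plumbing (the two "modules" here are crude stand-in functions, not trained models):

```python
# Only the plumbing idea from the comment above: one module's output
# representation feeds directly into the next. The "nets" are stand-in
# functions with made-up logic, not real trained models.

def image_classifier(image):
    """Stand-in vision module: image (list of pixel values) -> feature vector."""
    n = len(image)
    return [sum(image) / n, max(image), min(image)]

def language_model(features):
    """Stand-in language module: feature vector -> description string."""
    brightness = features[0]
    return "a bright scene" if brightness > 0.5 else "a dark scene"

def caption(image):
    # Lego-style composition: no intermediate symbolic data structure.
    return language_model(image_classifier(image))

print(caption([0.9, 0.8, 0.7]))  # a bright scene
print(caption([0.1, 0.2, 0.0]))  # a dark scene
```

In a real captioning system both modules are learned and the "feature vector" is a learned representation, but the interface between them is just as thin as this.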
Re: (Score:3)
"Its"
Fucking retard can't spell. I stopped reading right there.
That's a losing battle like chiding people for using "less" instead of "fewer" or "which" when "that" would be more appropriate. You're outnumbered and outgunned and might as well go along with the trend and pry the apostrophe key off your keyboard.
Re: (Score:3)
Don't listen to HIM. He's a charlatan. I can do everything he claims he can do in 10-20 years in two years. Give ME all your money!
Well it's easy to show superhuman AI is a myth. (Score:4, Insightful)
Because intelligence as a single-dimensioned parameter is a myth.
We already have software with super-human information-processing capabilities, and we're constantly adding more kinds of software that outperform humans in specific tasks. Ultimately we'll have AIs that are as versatile as humans too. But "just as versatile" doesn't mean "good at the same things".
So it's probably true that software is getting smarter at exponential rates (and humans aren't getting smarter as far as I can see), but only in certain ways.
Re:Well it's easy to show superhuman AI is a myth. (Score:4, Insightful)
Right. "Human intelligence" is a strawman. Computers can't have human intelligence because they lack human perceptions, and will not have the biochemical jibberjab underpinning it.
Human intelligence is actually not that good... we are fooled all the time... hence, Trump!
Re: Well it's easy to show superhuman AI is a myth (Score:4, Insightful)
The first three assumptions in this article have already been met well enough to debunk the Wired article. AlphaGo has displayed superhuman intelligence in the areas of the first three assumptions. 1) AlphaGo exploded on the scene by beating world-class Go players much faster and much earlier than expected. "Exponentially" is a loaded word: e^(0.0000001t) is an exponential growth rate too. So let's not quibble about how steep the exponential is.
2) AlphaGo is a general purpose learning tool. Just listen to the lectures and articles penned by the DeepMind team.
3) Alphago has displayed human-like intelligence, as claimed by the Go professionals it has played. They have said that AlphaGo plays like a human player.
4) If you take the fourth assumption literally, AI's intelligence is going to expand infinitely. Talking about infinity in human terms is unreasonable. Yes, AI's intelligence will expand.
5) The fifth assumption can be argued many ways. Some problems are not solvable due to their paradoxical nature. Other problems are subjective and are uniquely unsolvable by some individuals, but not by all individuals. It is a matter of time before a general purpose AI program will solve subjective emotional problems. Whether all human beings accept the solutions is subjective and open to speculation.
The human population is composed of experts, with divisions of labor. It is not unreasonable for AI programs to have areas of expertise.
Re: Well it's easy to show superhuman AI is a myth (Score:5, Insightful)
AlphaGo is not a "general purpose" learning mechanism. It won't ever write sonnets meaningful to humans, or be able to dance, or even employ symbolic differentiation.
It is a really nice toolset, and it is able to solve a task which is difficult for humans, but so does Google or your high-school calculator when you calculate sin(1.2).
It won't ever go beyond the computational underpinnings of playing Go-like games: evaluating game positions and calculating game trees. It won't ever say 'forget it, I'd rather be drinking beer with my buddies', which is an intelligent thing to do for most of us with respect to playing Go.
There's nothing human-like about AlphaGo, except that it solves a problem relevant to humans; the calculator example comes in mind.
I'd be thrilled to know what kind of specific major human problems you'd consider AI-approachable, because I currently only see a bunch of more or less advanced mechanisms that are fine-tuned to solve very specific computationally well-defined problems, and most human problems are not computationally well defined.
Re: (Score:2)
All true, and Google/DeepMind never said that AlphaGo, in and of itself, is anything other than a dedicated Go playing program. I wouldn't compare it to a calculator though (or DeepBlue for that matter) since it learned how to do what it does (at this point primarily by playing against itself)... nobody programmed in how to evaluate whether a given Go board position is good or bad.
However... DeepMind are in the business of trying to create general intelligence, and are trying to do so based on reinforcement lea
Re: Well it's easy to show superhuman AI is a myth (Score:5, Insightful)
1) AlphaGo exploded on the scene by beating world class Go players much faster and much earlier than expected. Exponentially is a loaded word. e^0.0000001 is an exponential growth rate. So let's not quibble about how exponential the growth rate is.
Particularly as we can't really measure intelligence. But "exponential" has a meaning: growth with a constant doubling time.
2) AlphaGo is a general purpose learning tool. Just listen to the lectures and articles penned by the DeepMind team.
No, it's a narrow AI. In the end, it's simply doing math. It's not "thinking" in any sense of the term. It's just able to hold many more probabilities in its memory than a human, and play them out much faster.
3) Alphago has displayed human-like intelligence, as claimed by the Go professionals it has played. They have said that AlphaGo plays like a human player.
That is what you call anthropomorphizing. The human players are simply projecting onto the machine.
4) If you take the fourth assumption literally, AI's intelligence is going to expand infinitely. Talking about infinity in human terms is unreasonable. Yes, AI's intelligence will expand.
"Infinite" simply means there's no limit. We don't know whether or not there's a limit to intelligence, but since the universe is finite, there would seem to be a limit to the things one could know. Infinite intelligence is our notion of God. What are the odds that infinite intelligence is also mythological?
Re: (Score:2)
Minor quibble - or maybe not minor - but how do you know that the universe is finite? It would be a strange coincidence if the entirety of the universe happened to be the part that we can observe...
Re: (Score:2)
Nonsense, the SAT told me all I'll ever need to know about my INT stat!
Re: (Score:2)
Because intelligence as a single-dimensioned parameter is a myth.
You seem to be confusing the concept of intelligence with *measuring* intelligence.
Our current measure, IQ, applies specifically to the cognitive strength of human intelligence, and our ability to solve problems creatively.
You cannot meaningfully talk about the IQ of a dolphin or dog, let alone an AI or alien intelligence.
And the only reason anybody uses a single parameter for measuring human intelligence, is that the other measures and components of IQ are highly correlated, not beca
Re: (Score:3)
When people talk of super-human intelligence, they mean one capable of self-awareness
These are two independent things. Intelligence has to do with finding patterns, and using these patterns to solve related problems. That doesn't require self-awareness at all. And you can be self-aware but stupid. You need a combination of self-awareness and intelligence to survive in nature, but you don't need self-awareness to solve a math problem.
Re: (Score:2)
We use a single number, but we construct that number from five or six different fields:
a) math, or better: pattern matching e.g. what is the next number in this sequence
b) language, which words are related/which word does not fit etc.
c) optical/geometrical problems: which of the following figures does not fit into the picture (usually one is mirrored)
Well, some tests work with more areas some with less. Here they even use 7: http://www.iqtestexperts.com/i... [iqtestexperts.com]
My reasoning about this, especially if you had olde
Re: (Score:2)
in the middle range, 90 - 110 points,
IQ tests are also unreliable at the tail ends, for epistemological reasons.
How do you construct an intelligence test? You start with a collection of reasonable-seeming tests and you have a sample population perform them. You then rank them on test performance and assess whether your ranking confirms your preconceptions. So here's the problem with the tail ends: it's really hard to get a large enough sample of subjects to test the predictive value of your test with people who score three or more standard
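The tail-end problem above can be put in numbers. Assuming the usual convention that IQ is normally distributed with mean 100 and SD 15 (an assumption of the scoring model, not a fact about minds), the fraction of people at or beyond k standard deviations shrinks fast, which is why norming samples large enough to validate the extremes are so hard to get:

```python
import math

# Normal-distribution tail fractions, under the conventional IQ model
# (mean 100, SD 15). P(Z >= k) = 0.5 * erfc(k / sqrt(2)).

def tail_fraction(k):
    """Fraction of a standard normal population at or above k SDs."""
    return 0.5 * math.erfc(k / math.sqrt(2))

for k in (1, 2, 3, 4):
    f = tail_fraction(k)
    print(f"{k} SD (IQ {100 + 15 * k}): ~1 in {round(1 / f):,}")
```

At 3 SDs (IQ 145) you are down to roughly 1 person in 740, so a norming sample of even 10,000 people contains only a dozen or so subjects with which to calibrate that end of the scale.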
Re: (Score:2)
Very true.
And tests are not really comparable between countries/cultures etc. (except perhaps for math and spatial perception)
Re: (Score:2)
Not sure what that's intended to mean.
If it means you're at a disadvantage taking the test in a language you don't speak then "well duh!".
Re: (Score:2)
You seem to be confusing the concept of intelligence with *measuring* intelligence.
You know you're right. But I think there's a good reason for this: magnitude is an intrinsic part of the concept. I've never heard anyone talk about intelligence except as a concept that allows you to rank things (e.g. Chimps are more intelligent than dogs, which are more intelligent than gerbils). So to apply it to an entity like a human or a program is to implicitly measure that thing.
What I'm saying is that the concept is useful but intrinsically limited in precision.
Re: (Score:2)
That is not intelligence, not even in the sense of current AI. When people talk of super-human intelligence, they mean one capable of self-awareness. Something even a toddler lacks.
That is nonsense. Do you not have memories of yourself as a toddler? I do. I was absolutely self-aware.
1,000 quatloos on the newcomer AI? (Score:2)
In some regards I'd argue that one deserved an insightful mod. The comment that actually had one (at this time) didn't deserve it, and no "funny" comments at all. Sad. (#PresidentTweety contamination is bigly sad.)
Consider the Fermi Paradox. Obvious resolution is that they're out there, but not talking to us because we're still amusing enough to bet quatloos on. What are they betting on?
Whether we create our AI successors before we exterminate ourselves. Right now the odds are falling fast. (#PresidentTweet
Re: (Score:3)
On the rare occasion that a human is right and the AI wrong, the AI will learn from its mistake.
there is no one machine (Score:2)
Globally linked, Purposeful AIs (Score:2)
Re: (Score:2)
So the new test for AI should not be, can we distinguish it from a human, but is it able to cold call an elderly widow and scam her out of most of her savings?
Not even like religious belief, just media hype (Score:2)
Re: (Score:2)
Interesting to see an AC believing Dr. Hawking over Mr. Kelly.
As far as I know, and I would love to get a better understanding of what he has done, Dr. Hawking has never programmed anything in his life.
Mr. Gates seemed to have done some work in the early days of Microsoft but hasn't programmed in 35+ years.
Mr. Musk would be the most credible source, but I guess his love of seeing his name in print outweighs his need to maintain the image of a practical visionary - this seems to be a problem as I would thin
Thank you Mr. Kelly (Score:2)
A nice dose of reality to counter the dire warnings from people that, in all honesty, should know the five points and why there's no reason to be worried about AI.
This ain't the Forbin Project.
Neither can most humans (Score:5, Insightful)
there is no one machine that can beat an average human in everything he or she does
Neither can most humans. There is no such thing as an average human. Every individual human specializes, and increasingly so as they get older (or they do not improve). It is a pervasive strawman to require AIs to "beat" an average human when the same quality isn't used to judge humans.
Re: (Score:2)
You know... every super genius is a master of Go, higher mathematics, english literature, Chess, Poker, and can use power tools easily to build cabinetry as well as solve physics and advanced computational theory problems. They are also really good at selling, writing song lyrics, mystery novels, television shows, and science fiction... and flirting.
And I left out the entire class of skills that relate to their physical body.
It's amazing how good it is to be a genius.
If the article is as stupid as the summary (Score:2, Insightful)
... I'm glad I did not RTFA.
> 1.) Artificial intelligence is already getting smarter than us, at an exponential rate
Nobody who knows anything says that. We don't have real AI at all yet, just expert systems and a few interesting decision algorithms.
> 2.) We'll make AIs into a general purpose intelligence, like our own.
Of course we will. (Why would anyone make a phone that is also a web browser, a camera, an appointment tracker, a video game machine, a music player, a movie player, a flashlight, a co
Re: (Score:2, Insightful)
I don't think you understood the summary. Mr. Kelly is *responding* to other people (like Gates, Hawking, and Musk) who have asserted these five theses. Kelly is arguing that they're bunk. In other words, he agrees with you.
Maybe you should RTFA. (Score:4, Informative)
Because then you wouldn't have been saying things like:
If you've figured out AI, you go general as soon as you can, because you get everything in one box.
...when Kelly dismisses the concept of general-purpose AI because we look at intelligence through an anthropocentric lens. "General purpose" actually isn't.
Re: (Score:2)
Then replace "general purpose" with "multi purpose"
Re: (Score:3)
Brains are not magic! The existence of human intelligence proves that intelligence is possible; everything else is just details.
Details can be damn hard to figure out, and it is not so unlikely that evolution already found something that's damn close to optimal if you consider factors such as the energy to build and operate. Its tradeoffs are likely already getting tuned to a changed environment, where high intelligence helps a lot and starvation isn't as big an issue as it has been for millions of years. Potentially genetic engineering can make these changes quicker.
No, there will not be basic income or anything decent like that. There will be mass incarceration as people turn to crime to survive.
Mass incarceration is expensive and inefficient. It is likely much che
Re: (Score:2)
Details can be damn hard to figure out, and it is not so unlikely that evolution already found something that's damn close to optimal if you consider factors such as the energy to build and operate.
Evolution has major restrictions, though. It can only use limited materials, for construction, signalling and energy. It needs to be able to grow from a single cell, with all the information encoded in our DNA (less than 1GB worth of information, of which only a part deals with the brain). It has to evolve in small steps, each one beneficial to survival, starting from a squid. Once stuck in a local optimum, it can't get out. Also, biological learning is limited because it is localized. There's no mast
Re: (Score:2)
I think you have too simplistic a view of evolution. Easily getting stuck in a local optimum would be a major obstacle for the long-term survival of any species, so you can expect that tools against that were among the first traits to evolve. Also, things are apparently encoded in a way where what might seem to be a large structural change is actually only a small change to the DNA, e.g. a mutation in a single gene can give people an extra finger. And the description of any revolutionary AI algorithm that people
Re: (Score:2)
You still have to pay for at least food and the buildings, basically the same things that you are paying for when you pay for a basic income or similar system. And while incarcerated, people won't buy stuff, and their productivity will be very low, if they work at all. Such a system isn't unlikely to quickly result in a revolution. And even mass incarceration isn't very effective in preventing crime: at some point people are going to be released and will potentially commit the next crime. The US has incarc
These are possible (Score:2)
The things he lists are not impossible. It's not inevitable of course, just not by any means beyond the realm of possibility.
Pfft (Score:2)
I think the problem here is that Mr Kelly has no foresight whatsoever.
Limited dismissal... (Score:2)
Which do you think is more like a religious belief? The proposal of a potential danger that might come with evolution of a particular branch of technology, or the complete dismissal of the possibility based on a very limited view regarding only 5 points of that very proposal?
I don't disagree that human-like AI is far from happening if it ever does, or that some of those particular points might be misguided in some way particularly if there is some sort of paranoia related to them... but I wouldn't outright
Nay-sayers, a long history of being so very wrong (Score:3, Insightful)
I'm not sure whether it is discomfort at the idea of having a computer call them silly, a deep belief that humanity is somehow special in a special way (carefully defined in undefinable terms), or just a deep and enduring lack of imagination. Between AlphaGo beating Lee Sedol, the cancer treatments being proposed by Watson, and the rise of driverless cars, we are seeing many supposedly impossible roles being taken over by software.
The five assumptions noted are basically denial... reinforced with more denial. The evidence in a number of areas is in. Computers and software routinely appear in places doing things predicted to be impossible. Computing capability keeps exceeding predictions.
Arguably the one valid assumption made is that intelligence is computable. If it is, the Church-Turing thesis gives the useful theoretical result that anything computable can be run on a UTM. It seems likely that what will end up happening is that the deniers keep arguing the point on what 'intelligence' is even after the AI they deny being possible has become bored with the discussion and moved on to more interesting pastimes.
The rift widens (Score:2)
The more likely scenario is that between robots and AI, the working class will become redundant.
Unrealistically limited view (Score:5, Insightful)
If people like Bill Gates and Elon Musk are unrealistic in one direction, this person seems unrealistic in the other direction. He's basically betting against technological progress. And that's usually a losing bet, at least over long enough time periods.
1.) Artificial intelligence is already getting smarter than us, at an exponential rate.
Computers are already better than us at many tasks. That's been true for ages. And they're continuing to improve while we aren't. The set of tasks that computers are better at is constantly growing. I don't know of any fundamental limits to prevent them from eventually becoming better than us at the remaining tasks too. So it seems pretty likely they eventually will.
2.) We'll make AIs into a general purpose intelligence, like our own.
It's hard to even define what a "general purpose intelligence" means. But anything a human brain can do, computers will probably eventually be able to do it too.
3.) We can make human intelligence in silicon.
We can certainly make intelligence in silicon. We've already done it. Whether you consider it to be "human intelligence" or "inhuman intelligence" is kind of beside the point. If a computer can do something, whether it does it in the same way a human does is just an implementation detail.
4.) Intelligence can be expanded without limit.
I don't know of anyone who's claiming that. Where does he get this from? Anyway, the claim isn't that computers will advance without limit, only that they'll surpass humans.
5.) Once we have exploding superintelligence it can solve most of our problems...
Um, no. That's not at all what they're claiming. We certainly hope that it will solve many problems, but Gates, Musk, et al. are warning it could also create huge problems.
emulating human intelligence "will be constrained by cost"
Computers are cheap, and getting cheaper all the time. Humans are expensive and staying expensive. That's why automation has become such a big deal. Here again he seems to be betting against technological progress.
Re: (Score:3)
4.) Intelligence can be expanded without limit.
I don't know of anyone who's claiming that. Where does he get this from? Anyway, the claim isn't that computers will advance without limit, only that they'll surpass humans.
Nick Bostrom's warnings have this as an implicit assumption. Bostrom claims that once a "super-intelligence" is created, it could improve on itself so rapidly that no other competing attempt will have a chance to catch up. For this to be true, there would need to be no limit.
Re: (Score:3)
That reminds me of a story one of my professors kept telling:
"Two scientists discuss about AI.
One says: AI will never be possible there are so many things a human always will be better than a computer.
The other says: for every example you give me, where a human is better than a computer NOW, I shall build you a computer that beats all humans in that example."
That is the stage we are at now: Watson and AlphaGo can basically be trained on many problems. But each will only be a single instance that solves a single p
The machine intelligence people are scared of (Score:2)
True AI has no "expert ." (Score:2)
Straw Men (Score:3)
I normally try to read the whole article before commenting, but it starts with a list of straw men claims, so I didn't bother.
1. Artificial intelligence is already getting smarter than us, at an exponential rate.
It would be more accurate to say that the claim is that artificial intelligence is improving faster than ours, which is hard to dispute. Saying "getting smarter" makes it sound like a claim that AI is already smarter, which I don't think anyone has made.
2. We’ll make AIs into a general purpose intelligence, like our own.
Why not? The ability to learn to play Go and Poker better than humans, without having detailed algorithms built in, shows that computational brute force goes a long way, even when humans don't understand how the program works. Until recently it was thought that there would have to be conceptual advances specific to those games in order to defeat human champions (and in any case it was already possible to defeat the average human).
3. We can make human intelligence in silicon.
It's unnecessary for AI to emulate human intelligence (and chauvinistic to suggest that it has to). Its capabilities can match or exceed humans, while working in a completely different way.
4. Intelligence can be expanded without limit.
Why? All that's necessary is for AI to equal or exceed human capabilities. Even if one makes the farfetched assumption that humans are at the peak of intelligence, simply being able to match the most intelligent humans would exceed the capabilities of 99.9+% of the population.
5. Once we have exploding superintelligence it can solve most of our problems.
It would probably allow solving most of our existing problems, and create new ones. Life goes on. In any case what it could accomplish is completely independent of whether it's possible.
Has been obvious for a long, long time (Score:2)
Anybody actually competent in the subject area has known this for a long time. It is just the morons that use "Technology!" as a surrogate for religion that do not get its limitations at all and ascribe magical powers to it. These idiots are unfortunately about as stupid, as obnoxious and as pervasive as the religious fuckups.
Hmm (Score:2)
1.) Artificial intelligence is already getting smarter than us, at an exponential rate. 2.) We'll make AIs into a general purpose intelligence, like our own. 3.) We can make human intelligence in silicon. 4.) Intelligence can be expanded without limit. 5.) Once we have exploding superintelligence it can solve most of our problems.
1. no 2. probably 3. probably 4. probably 5. probably not, it will most likely use us for axle grease as we will otherwise compete for resources and we won't be good for anything else.
He Brought A Knife to fight the Death Star (Score:2)
All of his counter-arguments are readily, and obviously, felled by variations on the same theme.
1.) Artificial intelligence is already getting smarter than us, at an exponential rate.
This is entirely irrelevant. Artificial Intelligence (a misnomer, if ever there was one) doesn't even have to be a factor. All that matters is that machines are purposed and sufficiently well programmed to do a specific task usually performed by a human. This is exactly what happened in the industrial revolution; the only difference is that our machines and their programming are more sophisticated.
Re: (Score:2)
I agree. Kelly explains why naive versions of the superintelligence hypothesis don't make sense, but ignores the main points. The brilliant minds (Gates, Musk, etc.) did not claim to prove that human-like superintelligence was about to take over. They identified the proliferation of tasks in which machines outperform humans as a threat to the stability of human society, and eventually a possible existential threat in some possible scenarios. Kelly's article in no way counters the strength of their argument.
The myth of human intelligence (Score:2)
> 'The Myth of A Superhuman AI' ... five assumptions which, when examined closely, are not based on any evidence
I'd debunk the myth of human intelligence first. I've heard about it many times, but haven't seen any evidence for it either.
Dijkstra (Score:3, Insightful)
Not a robot (Score:2)
First they ignore me . . .
Then they ridicule me . . .
Then they fight me . . .
. . . And then I win.
Religious belief? (Score:2)
It seems to me "my brain is magic" is more of a religious belief. His "heresies" are all effectively variations on that theme.
Rationally speaking, why would... (Score:2)
Certainly if we were competing for resources, I could understand it somewhat: being more intelligent than us, Darwinian
Re:"constrained by cost" (Score:5, Informative)
For example, scientists now know that one single neuron (of certain types) is an entire neural network all by itself: dendrites with multiple localized spikes communicate with each other and with other cells, ultimately performing non-linear computation before forwarding any signal to the cell body.
Re: (Score:2)
On the other hand, neurons are severely limited by the biological processes, so it's possible that we can make artificial neurons that are better than the ones in our brains. A small neuron is 4 microns. A small transistor is 0.02 microns, so we can pack a lot of computation in the size of a neuron, and make it run millions of times faster too.
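Taking the parent's figures at face value, the scale gap is easy to quantify (illustrative arithmetic only; feature size says nothing about what the units can actually compute):

```python
# Rough scale arithmetic on the parent's figures (illustrative only: feature
# size measures footprint, not computational capability).
neuron_nm = 4000      # "a small neuron is 4 microns" (parent's figure)
transistor_nm = 20    # "a small transistor is 0.02 microns" (parent's figure)

linear_ratio = neuron_nm // transistor_nm   # transistors are ~200x smaller linearly
area_ratio = linear_ratio ** 2              # ~40,000 transistors fit in one neuron's 2D footprint
```

Even taken at face value, this says nothing about whether 40,000 transistors can do what one neuron does.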
Re: (Score:3)
On the other hand, neurons are severely limited by the biological processes, so it's possible that we can make artificial neurons that are better than the ones in our brains. A small neuron is 4 microns. A small transistor is 0.02 microns, so we can pack a lot of computation in the size of a neuron, and make it run millions of times faster too.
It is true that we can expect artificial neurons, once we know how to make them, to run much faster than natural ones, given that we aren't limited to the materials natural evolution must work with.
But the scale comparison you make (though a common one) is wildly, unjustly favorable to current technology. The common "feature size" measure used to compare solid-state circuit elements is in no way comparable to the computational units in nervous systems, where computation actually takes place at the level
Re:"constrained by cost" (Score:5, Informative)
Given that our knowledge of the computational complexity of a single neuron is growing steadily, I think it's safe to say your FPGA cell estimate for a neuron was significantly too low. For example, scientists now know that one single neuron (of certain types) is an entire neural network all by itself: dendrites with multiple localized spikes communicate with each other and with other cells, ultimately performing non-linear computation before forwarding any signal to the cell body.
Right you are. The absolute give-away (in addition to the ridiculously low-ball answer he provided) was "... that was pretty straightforward..." which shows the Dunning-Kruger effect in full bloom. He had no idea how little he knows about the subject.
The example I like to use to illustrate how much smoke is being blown about this by tech types is the model organism Caenorhabditis elegans. This 1 mm long nematode has had every one of the 302 neurons in its nervous system mapped out, including all connections to every other neuron, as well as the process of development from the initial fertilized egg - we have mapped out exactly how the nervous system develops (indeed, every one of the 959 cells in its body has been similarly traced out).
Given this complete map of the C. elegans nervous system, we must have a spiffy computer model of the little worm's "brain" able to replicate its behavior, right?
Not even close. So far we cannot accurately model the behavior of even a single neuron in C. elegans. Even one single neuron represents computational complexity that we are still trying to understand.
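To make the gap concrete: even the crudest textbook neuron abstraction, a leaky integrate-and-fire unit with made-up parameters, already takes a simulation loop, and it captures none of the dendritic computation described above:

```python
# Leaky integrate-and-fire neuron: a textbook toy model, far simpler than
# any real C. elegans neuron (parameters are illustrative, not biological fits).
def simulate_lif(current, steps=1000, dt=0.1, tau=10.0,
                 v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
    """Return the number of spikes produced by a constant input current."""
    v = v_rest
    spikes = 0
    for _ in range(steps):
        # dv/dt = (-(v - v_rest) + current) / tau   (Euler integration)
        v += dt * (-(v - v_rest) + current) / tau
        if v >= v_thresh:
            spikes += 1
            v = v_reset   # fire and reset
    return spikes
```

A strong enough constant current drives repeated firing; a weak one never crosses threshold. Real neurons add dendritic nonlinearities, neuromodulation, and adaptation on top of this.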
Re: (Score:2)
Re:"constrained by cost" (Score:5, Informative)
Human vision (or just vision in general) is the best example. It accounts for 30% of the brain's capacity. At one end, you have the human eye, with a retina consisting of 100 million rods and cones. Then, just in that space of a 5mm disc, there are seven layers of processing used to do contrast detection between colors and intensity, along with edge detection. The optic nerve takes the compressed information from a thousand areas and passes it through to the brain along two paths: one to identify where objects are, the other to determine what the objects are and their orientation. Understanding what just a single region or layer of brain cells does leads to dozens of published papers and to advances in digital photography (image stabilization, motion correction).
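As a toy illustration of just the contrast/edge-detection step (a drastic simplification of what the retinal layers actually do):

```python
# A crude sketch of one retinal-style operation: edge (contrast) detection
# by differencing neighboring intensities. Real retinal circuits do far more,
# across multiple cell layers; this is illustrative only.
def horizontal_edges(image):
    """Return absolute intensity differences between horizontal neighbors."""
    return [
        [abs(row[x + 1] - row[x]) for x in range(len(row) - 1)]
        for row in image
    ]

# A step in brightness shows up as one strong response at the boundary.
img = [
    [10, 10, 10, 200, 200],
    [10, 10, 10, 200, 200],
]
edges = horizontal_edges(img)
```

This single differencing pass is roughly one of the seven layers' worth of processing; stacking contrast, color-opponency, and motion channels is where the complexity explodes.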
Re:"constrained by cost" (Score:5, Interesting)
Indeed!
And one of the miracles in this is: if an object is about to hit your eyes or comes close by, the reflex to close the eyes and raise your hands etc. is triggered _before_ that information has even reached the brain/visual cortex.
The signal processing in the eye can bypass the visual cortex to trigger protective actions.
Re: (Score:2)
I guess you made a few mistakes then.
And are off by several orders of magnitude.
Re: (Score:2)
Beware an AI that can do software development
It'll probably write nightmarish recursive logic in some language of its own design that no human or other AI can make any sense of.
Seriously -- given the diversity of human thinking and the fact that no two people can actually agree on much of anything, why do we expect that large-scale AI will be trustworthy, useful, or even sane?
Re: (Score:2)
Re: (Score:2)
Of course there was evidence of an airplane before one was invented: all the prototypes, models, and plans people had been working on and refining for decades. In fact, the first heavier-than-air fixed-wing aircraft and the first demonstration model proving that airplanes were possible (George Cayley in 1799) came more than a hundred years before the invention of the final product we call an airplane. Inventions do not poof into existence fully formed.
Re:He's wrong, and the smart people are right (Score:5, Interesting)
Unfortunately, Sam Harris is bad at math. He claims: "It's crucial to realize that the rate of progress doesn't matter, because any progress is enough to get us into the end zone. We don't need Moore's law to continue. We don't need exponential progress. We just need to keep going." It seems he has never seen a monotonically increasing yet asymptotically bounded function. However, that is exactly the kind of progress we are seeing in older technologies. For example, airplanes stay at almost exactly the same speed (because going past the sound barrier would use lots of energy) and get slightly more efficient each year, but they will never get to the point where they can operate almost without any fuel or other large energy source, simply because the laws of physics don't allow that kind of progress.
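The counterexample to "any progress gets us into the end zone" is a one-liner, f(t) = L(1 - e^(-kt)), with arbitrary illustrative constants:

```python
import math

# f(t) = L * (1 - exp(-k*t)): strictly increasing for every t >= 0,
# yet bounded above by L. Constant progress, but the ceiling is never reached.
def bounded_progress(t, L=100.0, k=0.1):
    return L * (1.0 - math.exp(-k * t))
```

There is progress at every step, but the gap to the ceiling L only shrinks; it never closes.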
But even if the possible progress is not bounded, it is still not guaranteed that we will get there. It could take so long that it never happens before human civilization is completely destroyed by some disaster. Or it could simply be stopped by economics, as further improvements can easily become so expensive or so tiny that the likely benefits from pushing the research further cannot offset the cost.
Harris also seems to think that general AI is inevitable because we want to make progress toward things such as curing cancer or Alzheimer's. But it is not clear that such an achievement actually requires general superhuman intelligence. It likely requires superhuman intelligence, e.g. the computers that simulate protein folding far better than any human ever could, but not necessarily general intelligence. Specialized artificial intelligence seems to be much easier to achieve and is likely almost as good as general intelligence for topics such as these. You don't need to develop an artificial general intelligence to cure cancer if you have already developed a specialized artificial intelligence that is able to find a cure.
Imagine what could happen when a huge neural net is applied.
The problem with huge neural nets is training them. The more possibilities a network has, the harder it becomes to train. Large parts of the progress in the last few years were made by finding clever constraints on the networks in order to make them easier to train.
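One concrete example of such a constraint (my illustration, not from the parent): weight sharing in a convolutional layer collapses the number of parameters to train compared to a fully connected layer over the same input:

```python
# Parameter counts for one layer over a 256x256 grayscale image.
# Weight sharing (convolution) is exactly the kind of constraint that
# makes large networks trainable.
h = w = 256
inputs = h * w                     # 65,536 input values
units = inputs                     # same-size output layer, for comparison

fully_connected = inputs * units   # every unit sees every input: ~4.3 billion weights
conv_3x3 = 3 * 3                   # one shared 3x3 kernel reused everywhere: 9 weights
```

The unconstrained layer has roughly half a billion times more free parameters, which is why naive "just make it huge" scaling runs into training, not hardware, limits.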
Re: (Score:3, Insightful)
Re: (Score:2)
You are missing the point here. An improvement of 6x in fuel efficiency is great, sure. However, it is nowhere near the improvement that would be expected from exponential growth. The plane that flies around the world on solar cells instead of fuel is very slow, very expensive to build, and has a very small payload. If you consider how much fossil energy was likely required to build this plane, you see that this isn't a breakthrough. You need to consider economics as well. The speed of the regular commercial
Re: (Score:2)
You are repeating a mistake that is often made when recent breakthroughs happen in one area of technology: things are currently moving fast, so people expect that things will continue to move fast. But if you look at history, you see that a while after a breakthrough, things hit a road block and move much slower. Some of these road blocks are already visible: conventional semiconductor technology is close to its physical limits, and good training of large networks requires bigger and bigger
Re: (Score:2)
good training of large network requires bigger and bigger datasets, which are harder and harder to get.
Humans can learn stuff from much smaller datasets. That means smarter algorithms should make it possible for computers to do more with limited data.
Also, we can use different approaches. Imagine you want to train a computer about cats. You can give it 1 million 2D cat pictures, or you make a robot that can interact with a live cat. The 2nd option allows much more useful information to be extracted. And if you do both, it works even better. There's a lot more information you can get from cat pictures after y
Re: (Score:2)
Sure, smarter algorithms that are able to work with tiny datasets are likely possible. But when will they be available? We just don't know. We have already tried for a long time and haven't found anything that works well, so it is clearly not a trivial problem. Your "let the robot play with the cat" proposal would likely end up with a robot that is able to recognize that one cat extremely well, but fails to recognize that a cat with a different fur color is also a cat. It is also just a different way of generating
Re: (Score:2)
Humans can learn stuff from much smaller datasets.
With all of the sensory inputs we're constantly taking in, I'm not sure you can say that we're learning stuff from a smaller dataset. Think about it. Every day we're basically receiving an approximately 16-hour data set that contains two videos (with audio), smell, touch senses and any other senses that I can't think of at the moment. Just isolating the video component of that, by the time you're just 2 years old, you've received 1.95 PB of video data (human eye sees at 576 megapixels).
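That 1.95 PB figure depends entirely on the assumed effective frame rate and bytes per pixel; a quick back-of-envelope (with hypothetical parameters) shows how widely it can swing:

```python
# Back-of-envelope for "visual data received by age 2". The result swings
# by orders of magnitude with the assumptions (effective resolution, frame
# rate, bytes per pixel), so any single figure is illustrative at best.
def visual_bytes(years=2, hours_per_day=16, fps=1.0,
                 pixels=576e6, bytes_per_pixel=1):
    seconds = years * 365 * hours_per_day * 3600
    return seconds * fps * pixels * bytes_per_pixel

low = visual_bytes(fps=0.1)   # sluggish effective sampling: roughly 2.4 PB
high = visual_bytes(fps=24)   # film-rate sampling: roughly 580 PB
```

Either way the raw sensory stream is enormous, which supports the parent's point even if the exact petabyte count is debatable.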
Re:How quickly some forget... (Score:4, Informative)
This seems to be an example of some kind of unbounded technological/scientific optimism that disregards the fact that during that history you're using as proof, we have also refined an understanding of physical limits that have not fundamentally changed. Think about laws of thermodynamics or the speed of light as a hard limit, among other things. We are not getting around those any time soon.
Of course if you're counting on a complete revolution of Physics, you're going to need "extraordinary evidence" to overturn a lot of what we already know. This is a tall order; even the theory of relativity and quantum mechanics do not do things like totally overturn Newton's ideas in our everyday life. You can't just expect these kinds of things to happen.
Then there is just some weirdness in the post...
What? The laws of physics have always had to be testable, otherwise you're just doing math. This is the reason the LHC was built, to be an experimental instrument. I do not understand the point about photons and gravitons; the former is a well-known quantum, the latter is theoretical. So far we haven't been able to quantize gravity.
Yeah, and time is a cube, eh?
No, limitations probably still are limitations, even when you develop a better understanding of what is going on. Stuff will fall down even tomorrow, even if you could demonstrate that you can quantize gravity. Getting around strongly established phenomena by better explanations would mean there is some until now completely non-observed part of the world we could exploit. This rarely happens so that what didn't work today, magically starts working tomorrow.
Re: (Score:2)
Really, you have nothing but grandstanding.
This is how it's always worked; you are just drawing an
Re: (Score:2)
That's obvious nonsense aptly demonstrated by organizations like Mensa
The problem is not with the IQ test, but with the MENSA criterion. They accept people with an IQ of 130, which is pretty smart, but not genius. On average, 1 out of 50 people has such an IQ, so one kid out of every two classrooms.
If MENSA had required an IQ of 180, it could be called a proper genius club. The only problem is that it would be too small to be profitable, and that the members would be too smart to join.
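Assuming the usual convention that IQ is normally distributed with mean 100 and standard deviation 15, both cutoffs can be checked directly:

```python
from statistics import NormalDist

# Assuming IQ ~ Normal(mean=100, sd=15), the usual convention.
iq = NormalDist(mu=100, sigma=15)

above_130 = 1 - iq.cdf(130)   # ~2.3%: roughly 1 in 44, close to "1 in 50"
above_180 = 1 - iq.cdf(180)   # ~5e-8: on the order of 1 in 20 million
```

So an IQ-180 cutoff would indeed leave a club of only a few hundred people worldwide, under this model.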
Re: (Score:2)
As I'm typing this I'm watching my very expensive, ultra-modern cleaning-robot try to figure out how to avoid the catpole in my livingroom, and it's immediately clear that AI has not even approached rodent-level. Or insect-level, for that matter.
Indeed. An ant does far better than "high"-tech at this time.
Re:The Singularity (Score:4, Insightful)
Re: (Score:2)
Did you ever read some of the sequels Frank Herbert's son wrote? It is claimed they are based on his father's notes, and one or two of the books deal with the Butlerian Jihad. Well, I did not; I'm just wondering if anyone has an opinion about the books.
Re: (Score:2)
Re: (Score:2)
Thank you for your opinion. I feared that ... but I guess I will try one at least :)
Perhaps it would be an idea to simply publish the notes.
Re: (Score:2)
I suppose you'd also like me to "disprove" the existence of flying saucers...a menial task.
The point, which you obviously missed, is that people far more qualified as pure thinkers are concerned...for reasons they've laid out on many occasions. Even the simplest google search should offer these arguments for your consideration.
When you're done with that, you can go troll somewhere else...the pickings probably won't be so lean.