
Wired Founding Editor Now Challenges 'The Myth of A Superhuman AI' (backchannel.com)

Wired's founding executive editor Kevin Kelly wrote a 5,000-word takedown on "the myth of a superhuman AI," challenging dire warnings from Bill Gates, Stephen Hawking, and Elon Musk about the potential extinction of humanity at the hands of superintelligent constructs. Slashdot reader mirandakatz calls it an "impeccably argued debunking of this pervasive myth." Kelly writes: Buried in this scenario of a takeover of superhuman artificial intelligence are five assumptions which, when examined closely, are not based on any evidence... 1.) Artificial intelligence is already getting smarter than us, at an exponential rate. 2.) We'll make AIs into a general purpose intelligence, like our own. 3.) We can make human intelligence in silicon. 4.) Intelligence can be expanded without limit. 5.) Once we have exploding superintelligence it can solve most of our problems... If the expectation of a superhuman AI takeover is built on five key assumptions that have no basis in evidence, then this idea is more akin to a religious belief -- a myth.
Kelly proposes "five heresies" which he says have more evidence to support them -- including the prediction that emulating human intelligence "will be constrained by cost" -- and he likens artificial intelligence to the physical powers of machines. "[W]hile all machines as a class can beat the physical achievements of an individual human...there is no one machine that can beat an average human in everything he or she does."
  • by Anonymous Coward on Saturday April 29, 2017 @08:28PM (#54327041)

    Anything is possible in 10-20 years, just give me all your money!

    • by Zero__Kelvin ( 151819 ) on Sunday April 30, 2017 @02:39AM (#54327777) Homepage
      I can design a system that is a Star Trek-style communicator that is also a computer more powerful than today's multi-million-dollar supercomputers, fits in your pocket, and will run on battery power for days*.

      * Circa 1970

      Today's "youth" have no perspective. If you know your technology history, you DO NOT doubt such claims. Do you have any idea how far we have progressed in less than half a human lifetime? Do you not get that the advancement of technology has so far been non-linear to the point of being almost exponential?
      • by mikael ( 484 )

        With every component attribute that can be measured (amount of memory, CPU throughput, screen size, pixel depth, bus speed, network connection speed), those values have been doubling every two years or less. It is exponential.
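
        For a sense of scale, here is a back-of-the-envelope sketch of what that claim implies (the two-year doubling period is the comment's figure, not a measured constant):

            # Rough arithmetic behind "doubling every two years or less":
            # over half a human lifetime (~40 years) that compounds to 2**20.
            years, doubling_period = 40, 2
            factor = 2 ** (years / doubling_period)
            print(f"{factor:,.0f}x")  # ~1,048,576x -- about a million-fold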

      • by TheRaven64 ( 641858 ) on Sunday April 30, 2017 @05:48AM (#54328087) Journal

        Except that the claims of strong AI 'real soon now' have been coming since the '60s. Current AI research is producing things that are good at the sorts of things that an animal's autonomic system does. AI research 40 years ago was doing the same thing, only (much) slower. The difference between that and a sentient system is a qualitative difference, whereas the improvements that you list are all quantitative.

        Neural networks are good at generating correlations, but that's about all that they're good for. A large part of learning to think as a human child is learning to emulate a model of computation that's better suited to sentient awareness on a complex neural network. Most animals have neural networks in their heads that are far more complex than anything that we can build now, yet I'm not seeing mice replacing humans in most jobs.

        • by SpinyNorman ( 33776 ) on Sunday April 30, 2017 @11:07AM (#54329005)

          Neural networks are good at generating correlations, but that's about all that they're good for.

          No... What a supervised neural net does, in full generality, is to tune a massively parameterized function to minimize some measure of its output error during the training process. It's basically a black box with a million (or billion) or so knobs on its side that can be tweaked to define what it does.

          During training the net learns how to optimally tweak these knobs to make its output for a given input as close as possible to a target output defined by the training data it was presented with. The nature of neural nets is that they can generalize to unseen inputs outside of the training set.

          The main limitation of neural nets is that the function being optimized and the error measure being minimized both need to be differentiable, since the way they learn is by gradient descent (following the error gradient downhill to minimize the error).
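
          To make the "black box with knobs" picture concrete, here is a minimal sketch of that training loop in Python/NumPy: a single linear layer fit by gradient descent on a made-up regression task. Everything here (data, learning rate, iteration count) is illustrative.

              # Tune a parameterized function by gradient descent on squared error.
              import numpy as np

              rng = np.random.default_rng(0)
              X = rng.normal(size=(100, 3))          # 100 training inputs, 3 features
              y = X @ np.array([2.0, -1.0, 0.5])     # target outputs to fit

              w = np.zeros(3)                        # the "knobs" (parameters)
              lr = 0.1                               # learning-rate step size

              for _ in range(200):
                  pred = X @ w                       # forward pass
                  err = pred - y                     # output error
                  grad = X.T @ err / len(y)          # gradient of mean squared error
                  w -= lr * grad                     # follow the gradient downhill

              print(w)                               # converges toward [2.0, -1.0, 0.5]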

          The range of problems that neural nets can handle is very large, including things such as speech recognition, language translation, natural-language image description, etc. It's a very flexible architecture - there are even neural Turing machines.

          No doubt there is too much AI hype at the moment, and too many people equating machine learning (ML) with AI, but the recent advances both in neural nets and reinforcement learning (the ML technology at the heart of AlphaGo) are quite profound.

          It remains to be seen how far we get in the next 20 (or whatever) years, but already neural nets are making computers capable of super-human performance in many of the areas where they have been applied. The combination of NN + reinforcement learning is significantly more general and powerful, powering additional super-human capabilities such as AlphaGo. Unlike the old chestnut of AI always being 20 years away, AlphaGo stunned researchers by being capable of something *now* that was estimated to be at least 10 years away!
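
          For readers unfamiliar with reinforcement learning, here is a minimal tabular Q-learning sketch of the core idea: learning action values from reward alone. (AlphaGo itself combines deep networks with tree search; this toy corridor task is purely illustrative.)

              # Toy RL: learn to walk right along a 5-cell corridor for a reward.
              import random

              n_states, n_actions = 5, 2            # actions: 0 = left, 1 = right
              Q = [[0.0, 0.0] for _ in range(n_states)]
              alpha, gamma, eps = 0.5, 0.9, 0.1     # learning rate, discount, exploration

              for _ in range(500):
                  s = 0
                  while s < n_states - 1:
                      a = random.randrange(n_actions) if random.random() < eps \
                          else max(range(n_actions), key=lambda i: Q[s][i])
                      s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
                      r = 1.0 if s2 == n_states - 1 else 0.0
                      # Move the estimate toward reward + discounted future value.
                      Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
                      s = s2

              print([max(q) for q in Q])  # values rise toward the rewarding right end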

          There's not going to be any one "aha" moment where computers achieve general human-level or beyond intelligence, but rather a whole series of whittling away of things that only humans can do, or do best, until eventually there's nothing left.

          Perhaps one of the most profound benefits of neural nets over symbolic approaches is that they learn their own data representations for whatever they are tasked with, and these allow large chunks of functionality to be combined in simplistic lego-like fashion. For example, an image captioning neural net (capable of generating an English-language description of a photo) in its simplest form is just an image classification net feeding into a language model net... no need to come up with complex data structures to represent image content or sentence syntax and semantics, then figure out how to map from one to the other!

          This ability to combine neural nets in lego-like fashion means that advances can be used in combinatorial fashion... when we have a bag of tricks similar to what evolution has equipped the human brain with, then the range of problems it can solve (i.e. intelligence level) should be similar. I'd guess that a half-dozen advances is maybe all it will take to get a general-purpose intelligence of some sort, considering that the brain itself only has a limited number of functional areas (cortex, cerebellum, hippocampus, thalamus, basal ganglia, etc).
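
          A sketch of that composition idea, with stand-in classes (the names and trivial bodies here are hypothetical, not any particular library's API): the learned feature vector is the only interface between the two nets.

              # "Lego-like" composition: captioner = image encoder -> language model.

              class ImageEncoder:
                  def features(self, image):
                      # Stand-in for a trained classifier's learned representation.
                      return [sum(image) / len(image)]

              class LanguageModel:
                  def generate(self, feature_vector):
                      # Stand-in for a trained decoder conditioned on those features.
                      return f"a photo with mean brightness {feature_vector[0]:.2f}"

              def caption(image, encoder, lm):
                  # No hand-built structures for image content or sentence syntax:
                  # one net's output representation feeds the next net's input.
                  return lm.generate(encoder.features(image))

              print(caption([0.1, 0.9, 0.5], ImageEncoder(), LanguageModel()))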

    • Don't listen to HIM. He's a charlatan. I can do everything he claims he can do in 10-20 years in two years. Give ME all your money!

  • by hey! ( 33014 ) on Saturday April 29, 2017 @08:33PM (#54327051) Homepage Journal

    Because intelligence as a single-dimensioned parameter is a myth.

    We already have software with super-human information processing capabilities; and we're constantly adding more kinds of software that outperforms humans in specific tasks. Ultimately we'll have AIs that are as versatile as humans too. But "just as versatile" doesn't mean "good at the same things".

    So it's probably true that software is getting smarter at exponential rates (and humans aren't getting smarter as far as I can see), but only in certain ways.

    • by jlowery ( 47102 ) on Saturday April 29, 2017 @08:39PM (#54327067)

      Right. "Human intelligence" is a strawman. Computers can't have human intelligence because they lack human perceptions, and will not have the biochemical jibberjab underpinning it.

      Human intelligence is actually not that good... we are fooled all the time... hence, Trump!

    • by Anonymous Coward on Saturday April 29, 2017 @09:50PM (#54327235)

      The first three assumptions in this article have already been met well enough to debunk the Wired article. AlphaGo has displayed superhuman intelligence in the areas of the first three assumptions. 1) AlphaGo exploded on the scene by beating world class Go players much faster and much earlier than expected. Exponentially is a loaded word. e^0.0000001 is an exponential growth rate. So let's not quibble about how exponential the growth rate is.

      2) AlphaGo is a general purpose learning tool. Just listen to the lectures and articles penned by the DeepMind team.

      3) AlphaGo has displayed human-like intelligence, as claimed by the Go professionals it has played. They have said that AlphaGo plays like a human player.

      4) If you take the fourth assumption literally, AI's intelligence is going to expand infinitely. Talking about infinity in human terms is unreasonable. Yes, AI's intelligence will expand.

      5) The fifth assumption can be argued many ways. Some problems are not solvable due to their paradoxical nature. Other problems are subjective and are uniquely unsolvable by some individuals, but not by all individuals. It is a matter of time before a general purpose AI program will solve subjective emotional problems. Whether all human beings accept the solutions is subjective and open to speculation.

      The human population is composed of experts, with divisions of labor. It is not unreasonable for AI programs to have areas of expertise.

      • by Anonymous Coward on Saturday April 29, 2017 @11:07PM (#54327379)

        AlphaGo is not a "general purpose" learning mechanism. It won't ever write sonnets meaningful to humans, or be able to dance, or even employ symbolic differentiation.

        It is a really nice toolset, and it is able to solve a task which is difficult for humans, but so can Google or your high-school calculator when you calculate sin(1.2).

        It won't ever go beyond the computational underpinnings of playing Go-like games: evaluating game positions and calculating game trees. It won't ever say 'forget it, I'd rather be drinking beer with my buddies', which is an intelligent thing to do for most of us with respect to playing Go.

        There's nothing human-like about AlphaGo, except that it solves a problem relevant to humans; the calculator example comes to mind.

        I'd be thrilled to know what kind of specific major human problems you'd consider AI-approachable, because I currently only see a bunch of more or less advanced mechanisms that are fine-tuned to solve very specific computationally well-defined problems, and most human problems are not computationally well defined.

        • Re: (Score:3, Insightful)

          by Visarga ( 1071662 )
          So, are you saying that reinforcement learning is intrinsically limited, or that AlphaGo is limited to the domain of Go? Remember, humans also use reinforcement learning in organizing actions. A reinforcement learning agent that has to optimize for a body that needs to drink, eat and socialize in order to function would totally go to grab a beer instead of playing a losing game. The needs of the agent are formative. Human needs are a source of much of our special skills. If we put artificial agents in similar...
          • Exactly. It's something that works at the level of a human subconscious: the leftover bits of evolved junk in our minds from before we developed sentience. The sorts of things that let us shout at the sky before a thunderstorm and then assume that we've made Thor angry, not the sorts of things that allow us to build a modern technical society.
        • If a person thought like AlphaGo, we would call him or her a savant, not 'intelligent'. Therefore it can't even be called artificial intelligence.
        • All true, and Google/DeepMind never said that AlphaGo, in and of itself, is anything other than a dedicated Go playing program. I wouldn't compare it to a calculator though (or DeepBlue for that matter) since it learned how to do what it does (at this point primarily by playing against itself)... nobody explicitly programmed how it evaluates whether a given Go board position is good or bad.

          However... DeepMind are in the business of trying to create general intelligence, and are trying to do so based on reinforcement learning...

      • by sudon't ( 580652 ) on Sunday April 30, 2017 @09:07AM (#54328499)

        1) AlphaGo exploded on the scene by beating world class Go players much faster and much earlier than expected. Exponentially is a loaded word. e^0.0000001 is an exponential growth rate. So let's not quibble about how exponential the growth rate is.

        Particularly as we can't really measure intelligence. But "exponential" has a meaning: growth with a constant doubling time.
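
        The quoted point about e^0.0000001 is easy to make concrete: any positive growth rate is technically exponential, but the doubling time ln(2)/r is what tells you whether it matters in practice (the rates below are illustrative):

            # Doubling time for a quantity growing as e**(r*t).
            from math import log

            for r in (0.0000001, 0.05, 0.35):      # growth rates per year
                print(f"r={r}: doubles every {log(2) / r:,.1f} years")
            # r=1e-07 -> ~6.9 million years; r=0.35 -> about every 2 years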

        2) AlphaGo is a general purpose learning tool. Just listen to the lectures and articles penned by the DeepMind team.

        No, it's a narrow AI. In the end, it's simply doing math. It's not "thinking" in any sense of the term. It's just able to hold many more probabilities in its memory than a human, and play them out much faster.

        3) AlphaGo has displayed human-like intelligence, as claimed by the Go professionals it has played. They have said that AlphaGo plays like a human player.

        That is what you call anthropomorphizing. The human players are simply projecting onto the machine.

        4) If you take the fourth assumption literally, AI's intelligence is going to expand infinitely. Talking about infinity in human terms is unreasonable. Yes, AI's intelligence will expand.

        "Infinite" simply means there's no limit. We don't know whether or not there's a limit to intelligence, but since the universe is finite, there would seem to be a limit to the things one could know. Infinite intelligence is our notion of God. What are the odds that infinite intelligence is also mythological?

        • Minor quibble - or maybe not minor - but how do you know that the universe is finite? It would be a strange coincidence if the entirety of the universe happened to be the part that we can observe...

    • Nonsense, the SAT told me all I'll ever need to know about my INT stat!

    • by quenda ( 644621 )

      Because intelligence as a single-dimensioned parameter is a myth.

      You seem to be confusing the concept of intelligence with *measuring* intelligence.
      Our current measure, IQ, applies specifically to the cognitive strength of human intelligence, and our ability to solve problems creatively.
      You cannot meaningfully talk about the IQ of a dolphin or dog, let alone an AI or alien intelligence.
      And the only reason anybody uses a single parameter for measuring human intelligence, is that the other measures and components of IQ are highly correlated, not because...

      • When people talk of super-human intelligence, they mean one capable of self-awareness

        These are two independent things. Intelligence has to do with finding patterns, and using these patterns to solve related problems. That doesn't require self-awareness at all. And you can be self-aware but stupid. You need a combination of self-awareness and intelligence to survive in nature, but you don't need self-awareness to solve a math problem.

      • We use a single number, but we construct that number from five or six different fields:
        a) math, or better: pattern matching, e.g. what is the next number in this sequence
        b) language, which words are related/which word does not fit etc.
        c) optical/geometrical problems: which of the following figures does not fit into the picture (usually one is mirrored)

        Well, some tests work with more areas some with less. Here they even use 7: http://www.iqtestexperts.com/i... [iqtestexperts.com]

        My reasoning about this, especially if you had older...

        • by hey! ( 33014 )

          in the middle range, 90 - 110 points,

          IQ tests are also unreliable at the tail ends, for epistemological reasons.

          How do you construct an intelligence test? You start with a collection of reasonable-seeming tests and you have a sample population perform them. You then rank them on test performance and assess whether your ranking confirms your preconceptions. So here's the problem with the tail ends: it's really hard to get a large enough sample of subjects to test the predictive value of your test with people who score three or more standard deviations...
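
          A quick bit of arithmetic shows why tail-end samples are so hard to come by, assuming scores follow a normal curve:

              # How rare are scores k standard deviations above the mean?
              from math import erfc, sqrt

              for k in (2, 3, 4):
                  p = erfc(k / sqrt(2)) / 2          # P(Z >= k), standard normal
                  print(f"{k} SD: about 1 in {1 / p:,.0f}")
              # 2 SD: 1 in ~44; 3 SD: 1 in ~741; 4 SD: 1 in ~31,574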

          • Very true.
            And tests are not really comparable between countries/cultures etc. (except for math and spatial perception perhaps)

            • And tests are not really comparable between countries/cultures etc.

              Not sure what that's intended to mean.

                If it means you're at a disadvantage taking the test in a language you don't speak then "well duh!".

      • by hey! ( 33014 )

        You seem to be confusing the concept of intelligence with *measuring* intelligence.

        You know, you're right. But I think there's a good reason for this: magnitude is an intrinsic part of the concept. I've never heard anyone talk about intelligence except as a concept that allows you to rank things (e.g. chimps are more intelligent than dogs, which are more intelligent than gerbils). So to apply it to an entity like a human or a program is to implicitly measure that thing.

        What I'm saying is that the concept is useful but intrinsically limited in precision.

      • That is not intelligence, not even in the sense of current AI. When people talk of super-human intelligence, they mean one capable of self-awareness. Something even a toddler lacks.

        That is nonsense. Do you not have memories of yourself as a toddler? I do. I was absolutely self-aware.

  • In some regards I'd argue that one deserved an insightful mod. The comment that actually had one (at this time) didn't deserve it, and there are no "funny" comments at all. Sad. (#PresidentTweety contamination is bigly sad.)

      Consider the Fermi Paradox. Obvious resolution is that they're out there, but not talking to us because we're still amusing enough to bet quatloos on. What are they betting on?

      Whether we create our AI successors before we exterminate ourselves. Right now the odds are falling fast. (#PresidentTweety...

  • there is no one human
  • The first AIs will be purpose built like today's supercomputers. They will make weather predictions, analyse financial trends, or study languages. Actually being intelligent isn't really necessary for interacting with humans, they only need to fake it well enough to fool us. The shift in society comes when those purpose-built AIs are efficiently linked along with the ability to interact with us. This is when it stops faking intelligence and actually becomes intelligent.
    • So the new test for AI should not be, can we distinguish it from a human, but is it able to cold call an elderly widow and scam her out of most of her savings?

  • That's all it is, really. The media latched on to the term 'AI' and ran with it, and with the fantasy misrepresentation of what 'AI' is in TV shows and movies, most people are uninformed/uneducated enough to actually believe that the media hype is real. Add to that more media hype about some corner cases like computers beating chess masters and winning at poker, plus gods-be-damned Google and their adding fuel to the media-hype fire (because, frankly, they want to make back a profit on the millions they've
    • Interesting to see an AC believing Dr. Hawking over Mr. Kelly.

      As far as I know, and I would love to get a better understanding of what he has done, Dr. Hawking has never programmed anything in his life.

      Mr. Gates seemed to have done some work in the early days of Microsoft but hasn't programmed in 35+ years.

      Mr. Musk would be the most credible source, but I guess his love of seeing his name in print outweighs his need to maintain the image of a practical visionary - this seems to be a problem as I would think...

  • A nice dose of reality to counter the dire warnings from people who, in all honesty, should know the five points and why there's no reason to be worried about AI.

    This ain't the Forbin Project.

  • by The Evil Atheist ( 2484676 ) on Saturday April 29, 2017 @09:09PM (#54327151)

    there is no one machine that can beat an average human in everything he or she does

    Neither can most humans. There is no such thing as an average human. Every individual human specializes, and increasingly so as they get older (or they do not improve). It is a pervasive strawman to require AIs to "beat" an average human when the same quality isn't used to judge humans.

    • You know... every super genius is a master of Go, higher mathematics, English literature, Chess, Poker, and can use power tools easily to build cabinetry as well as solve physics and advanced computational theory problems. They are also really good at selling, writing song lyrics, mystery novels, television shows, and science fiction... and flirting.

      And I left out the entire class of skills that relate to their physical body.

      It's amazing how good it is to be a genius.

  • ... I'm glad I did not RTFA.

    > 1.) Artificial intelligence is already getting smarter than us, at an exponential rate

    Nobody who knows anything says that. We don't have real AI at all yet, just expert systems and a few interesting decision algorithms.

    > 2.) We'll make AIs into a general purpose intelligence, like our own.

    Of course we will. (Why would anyone make a phone that is also a web browser, a camera, an appointment tracker, a video game machine, a music player, a movie player, a flashlight, a co

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      I don't think you understood the summary. Mr. Kelly is *responding* to other people (like Gates, Hawking, and Musk) who have asserted these five theses. Kelly is arguing that they're bunk. In other words, he agrees with you.

    • by SolemnLord ( 775377 ) on Saturday April 29, 2017 @10:30PM (#54327331)

      Because then you wouldn't have been saying things like:

      If you've figured out AI, you go general as soon as you can, because you get everything in one box.

      ...when Kelly dismisses the concept of general-purpose AI because we look at intelligence through an anthropocentric lens. "General purpose" actually isn't.

  • The things he lists are not impossible. It's not inevitable of course, just not by any means beyond the realm of possibility.

  • Back in the good old days we had this story (myth) that a sea-going vessel could travel 20,000 leagues under the sea. Like that could EVER happen!

    I think the problem here is Mr Kelly has no foresight whatsoever.
  • Which do you think is more like a religious belief? The proposal of a potential danger that might come with evolution of a particular branch of technology, or the complete dismissal of the possibility based on a very limited view regarding only 5 points of that very proposal?

    I don't disagree that human-like AI is far from happening if it ever does, or that some of those particular points might be misguided in some way particularly if there is some sort of paranoia related to them... but I wouldn't outright dismiss it...

  • by JimToo ( 1304315 ) on Saturday April 29, 2017 @11:26PM (#54327417)

    I'm not sure whether it is discomfort at the idea of having a computer call them silly, a deep belief that humanity is somehow special in a special way (carefully defined in undefinable terms) or just a deep and enduring lack of imagination. Between AlphaGo beating Lee Sedol, the cancer treatments being proposed by Watson and the rise of driverless cars we are seeing many supposedly impossible roles being taken over by software.

    The five assumptions noted are basically denial ... reinforced with more denial. The evidence in a number of areas is in. Computers and software routinely appear in locations doing things predicted to be impossible. Computing capability keeps exceeding predictions.

    Arguably the one valid assumption made is that intelligence is computable. If it is, the Church-Turing thesis gives the useful theoretical result that anything computable can be run on a UTM. It seems likely that what will end up happening is that the deniers keep arguing the point on what 'intelligence' is even after the AI they deny being possible has become bored with the discussion and moved on to more interesting pastimes.

  • The more likely scenario is that between robots and AI, the working class will become redundant.

  • by SoftwareArtist ( 1472499 ) on Sunday April 30, 2017 @12:01AM (#54327483)

    If people like Bill Gates and Elon Musk are unrealistic in one direction, this person seems unrealistic in the other direction. He's basically betting against technological progress. And that's usually a losing bet, at least over long enough time periods.

    1.) Artificial intelligence is already getting smarter than us, at an exponential rate.

    Computers are already better than us at many tasks. That's been true for ages. And they're continuing to improve while we aren't. The set of tasks that computers are better at is constantly growing. I don't know of any fundamental limits to prevent them from eventually becoming better than us at the remaining tasks too. So it seems pretty likely they eventually will.

    2.) We'll make AIs into a general purpose intelligence, like our own.

    It's hard to even define what a "general purpose intelligence" means. But anything a human brain can do, computers will probably eventually be able to do it too.

    3.) We can make human intelligence in silicon.

    We can certainly make intelligence in silicon. We've already done it. Whether you consider it to be "human intelligence" or "inhuman intelligence" is kind of beside the point. If a computer can do something, whether it does it in the same way a human does is just an implementation detail.

    4.) Intelligence can be expanded without limit.

    I don't know of anyone who's claiming that. Where does he get this from? Anyway, the claim isn't that computers will advance without limit, only that they'll surpass humans.

    5.) Once we have exploding superintelligence it can solve most of our problems...

    Um, no. That's not at all what they're claiming. We certainly hope that it will solve many problems, but Gates, Musk, et al. are warning it could also create huge problems.

    emulating human intelligence "will be constrained by cost"

    Computers are cheap, and getting cheaper all the time. Humans are expensive and staying expensive. That's why automation has become such a big deal. Here again he seems to be betting against technological progress.

    • 4.) Intelligence can be expanded without limit.

      I don't know of anyone who's claiming that. Where does he get this from? Anyway, the claim isn't that computers will advance without limit, only that they'll surpass humans.

      Nick Bostrom's warnings have this as an implicit assumption. Bostrom claims that once a "super-intelligence" is created, it could improve on itself so rapidly that no other competing attempt will have a chance to catch up. For this to be true, there would need to be no limit.
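
      A toy numerical model makes the disputed assumption visible: self-improvement with no ceiling (dI/dt = k*I) runs away exponentially, while the identical feedback under any hard limit L (logistic growth) levels off. The constants here are illustrative, not empirical:

          # Unbounded vs. capped self-improvement, integrated by Euler steps.
          k, L, dt, steps = 0.5, 100.0, 0.1, 200
          unbounded = bounded = 1.0

          for _ in range(steps):
              unbounded += k * unbounded * dt                  # runaway takeoff
              bounded += k * bounded * (1 - bounded / L) * dt  # saturates near L

          print(f"unbounded: {unbounded:,.0f}  bounded: {bounded:.0f}")
          # unbounded: ~17,000 and climbing; bounded: levels off near 100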

    • That reminds me of a story one of my professors kept telling:
      "Two scientists are discussing AI.
      One says: AI will never be possible; there are so many things at which a human will always be better than a computer.
      The other says: for every example you give me where a human is better than a computer NOW, I shall build you a computer that beats all humans at that example."

      At the stage we are at now, Watson and AlphaGo basically can be trained on many problems. But that will only be single instances that solve a single problem...

  • will be a virus. It will be something that has the intent to acquire the resources available in its environment and propagate its own kind. People are already scared of them. Whether we care about whether it experiences intent the way we do won't matter. It'll be out there, acquiring resources and propagating its own kind.
  • You can understand its hardware and write the AI version of Gray's Anatomy, but has that ever made anyone more of an expert on you than yourself? A machine that is self-aware would be no different. We may be able to read its mind through logging, but there's no debugger for the "why." Besides, AI's founding function was, and its continuing underlying mission still is, to destroy encryption and anonymity. This is why it gains so much funding. Don't believe me? Research Alan Turing. Meanwhile, we are told it is t...
  • by arobatino ( 46791 ) on Sunday April 30, 2017 @03:27AM (#54327883)

    I normally try to read the whole article before commenting, but it starts with a list of straw men claims, so I didn't bother.

    1. Artificial intelligence is already getting smarter than us, at an exponential rate.

    It would be more accurate to say that the claim is that artificial intelligence is improving faster than ours, which is hard to dispute. Saying "getting smarter" makes it sound like a claim that AI is already smarter, which I don't think anyone has made.

    2. We’ll make AIs into a general purpose intelligence, like our own.

    Why not? The ability to learn to play Go and Poker better than humans, without having detailed algorithms built in, shows that computational brute force goes a long way, even when humans don't understand how the program works. Until recently it was thought that there would have to be conceptual advances specific to those games in order to defeat human champions (and in any case it was already possible to defeat the average human).

    3. We can make human intelligence in silicon.

    It's unnecessary for AI to emulate human intelligence (and chauvinistic to suggest that it has to). Its capabilities can match or exceed humans, while working in a completely different way.

    4. Intelligence can be expanded without limit.

    Why? All that's necessary is for AI to equal or exceed human capabilities. Even if one makes the farfetched assumption that humans are at the peak of intelligence, simply being able to match the most intelligent humans would exceed the capabilities of 99.9+% of the population.

    5. Once we have exploding superintelligence it can solve most of our problems.

    It would probably allow solving most of our existing problems, and create new ones. Life goes on. In any case what it could accomplish is completely independent of whether it's possible.

  • Anybody actually competent in the subject area has known this for a long time. It is just the morons that use "Technology!" as a surrogate for religion that do not get its limitations at all and ascribe magical powers to it. These idiots are unfortunately about as stupid, as obnoxious and as pervasive as the religious fuckups.

  • 1.) Artificial intelligence is already getting smarter than us, at an exponential rate. 2.) We'll make AIs into a general purpose intelligence, like our own. 3.) We can make human intelligence in silicon. 4.) Intelligence can be expanded without limit. 5.) Once we have exploding superintelligence it can solve most of our problems.

    1. no 2. probably 3. probably 4. probably 5. probably not, it will most likely use us for axle grease as we will otherwise compete for resources and we won't be good for anything else.

  • All of his counter-arguments are readily, and obviously, felled by variations on the same theme.

    1.) Artificial intelligence is already getting smarter than us, at an exponential rate.

    This is entirely irrelevant. Artificial Intelligence (a misnomer, if ever there was one) doesn't even have to be a factor. All that matters is that machines are purposed and sufficiently well programmed to do a specific task usually performed by a human. This is the exact same thing that happened in the industrial revolution. The only difference is that our machines and their programming are more sophisticated...

    • by ganv ( 881057 )

      I agree. Kelly explains why naive versions of the superintelligence hypothesis don't make sense, but ignores the main points. The brilliant minds (Gates, Musk, etc.) did not claim to prove that human-like superintelligence was about to take over. They identified the proliferation of tasks in which machines outperform humans as a threat to the stability of human society, and eventually a possible existential threat in some possible scenarios. Kelly's article in no way counters the strength of their arguments...

  • > 'The Myth of A Superhuman AI' ... five assumptions which, when examined closely, are not based on any evidence

    I'd debunk the myth of human intelligence first. I've heard about it many times, but haven't seen any evidence for it either.

  • Dijkstra (Score:3, Insightful)

    by Geeky Don ( 968061 ) on Sunday April 30, 2017 @09:17AM (#54328523)
    As my old friend Edsger Dijkstra once said "The question of whether Machines Can Think... is about as relevant as the question of whether Submarines Can Swim."
  • First they ignore me . . .

    Then they ridicule me . . .

    Then they fight me . . .

    . . . And then I win.

  • It seems to me "my brain is magic" is more of a religious belief. His "heresies" are all effectively variations on that theme.

  • ... superintelligent AI ever be inclined to want to wipe us out? Even ignoring that you would have to presuppose an AI as prone to irrational behavior as humans sometimes are, I can't see any reason to conclude that is actually even a remotely likely scenario, striking me as being about on par for plausibility with the premise behind the movie "Lucy" from 2014.

    Certainly if we were competing for resources, I could understand it somewhat, being more intelligent than us, Darwinian...
