AI Robotics

Why Robots Will Not Be Smarter Than Humans By 2029

Hallie Siegel writes "Robotics expert Alan Winfield offers a sobering counterpoint to Ray Kurzweil's recent claim that 2029 will be the year that robots surpass humans. From the article: 'It’s not just that building robots as smart as humans is a very hard problem. We have only recently started to understand how hard it is well enough to know that whole new theories ... will be needed, as well as new engineering paradigms. Even if we had solved these problems and a present-day Noonian Soong had already built a robot with the potential for human-equivalent intelligence – it still might not have enough time to develop adult-equivalent intelligence by 2029.'"
  • by CajunArson ( 465943 ) on Friday March 07, 2014 @05:07PM (#46431281) Journal

    Kurzweil's predictive powers are so incredibly wrong that he could literally destroy the world by making a mundane prediction that then couldn't come true.

    For example, if Kurzweil foolishly predicted that the sun would come up tomorrow, the earth would probably careen right out of its orbit.

    • by mythosaz ( 572040 ) on Friday March 07, 2014 @05:23PM (#46431423)

      There are two schools of thought on this:

      There are those who think Kurzweil is a crazy dreamer and declare his ideas bunk.
      There are those who think Kurzweil is a smart guy who's been right about a fair number of things, but take his predictions with a grain of salt.

      There doesn't seem to be a lot in the middle.

      [You can score me in the second camp, FWIW.]

      • by Concerned Onlooker ( 473481 ) on Friday March 07, 2014 @05:36PM (#46431543) Homepage Journal

        Actually, your second point IS the middle. The logical third point would be, there are those who think Kurzweil is a genius and is spot on about the future.

        • by mythosaz ( 572040 ) on Friday March 07, 2014 @05:41PM (#46431571)

          ...while there are certainly some Kurzweil nuthugging fanbois out there, they don't seem to exist in any vast number.

          While those who have opinions of Kurzweil probably span the spectrum, it seems that there's a bunch of level-headed folk who think Kurzweil is a smart guy with some interesting thoughts about the future, and on the other side, an angry mob throwing rotten fruit and shouting "Your ideas are bad, and you should feel bad about it!"

          • by fyngyrz ( 762201 ) on Friday March 07, 2014 @06:21PM (#46431823) Homepage Journal

            o we don't know what "thinking" is -- at all -- not even vaguely. Or consciousness.

            o so we don't know how "hard" these things are

            o and we don't know if we'll need new theories

            o and we don't know if we'll need new engineering paradigms

            o so Alan Winfield is simply hand-waving

            o all we actually know is that we've not yet figured it out, or, if someone has, they're not talking about it

            o at this point, the truth is that all bets are off and any road may potentially, eventually, lead to AI.

            Just as a cautionary tale, recall (or look up) Perceptrons, the 1969 book by Minsky and Papert (perceptrons being simple models of neurons which, in groups, form neural networks). Regarded as authoritative at the time, the book put forth the idea that perceptrons had very specific limits and were pretty much a dead end. That conclusion was completely, totally wrong, essentially because it failed to consider what perceptrons could do when layered - which is a lot more than the book laid out. The work set NN research back quite a bit because it was taken as authoritative, when it was actually short-sighted and misleading.
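            To make the layering point concrete, here's a toy sketch in Python (my own illustration with hand-picked weights, not Minsky's analysis). XOR is the classic function no single perceptron can compute, because it isn't linearly separable; with one extra layer it's trivial:

            def step(x):
                return 1 if x > 0 else 0

            def perceptron(x1, x2, w1, w2, bias):
                # One unit: weighted sum followed by a hard threshold.
                return step(w1 * x1 + w2 * x2 + bias)

            def xor_two_layer(x1, x2):
                # Layer 1: an OR unit and a NAND unit; layer 2: AND of the two.
                h_or = perceptron(x1, x2, 1, 1, -0.5)
                h_nand = perceptron(x1, x2, -1, -1, 1.5)
                return perceptron(h_or, h_nand, 1, 1, -1.5)

            for a in (0, 1):
                for b in (0, 1):
                    print(a, b, xor_two_layer(a, b))  # prints the XOR truth table

            No choice of w1, w2 and bias makes a single perceptron print that table; stack two layers and it falls right out.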

            What we actually know about something is only clear once the dust settles and we --- wait for it --- actually know about it. Right now, we hardly know a thing. So when someone starts pontificating about dates and limits and what "doesn't work" or "does work", just laugh and tell 'em to come back when they've got actual results. This is highly distinct from statements like "I've got an idea I think may have potential", which are interesting and wholly appropriate at this juncture.

          • Contrast with James Hughes, Director of IEET: http://www.youtube.com/watch?v... [youtube.com]

            And also: http://www.youtube.com/watch?v... [youtube.com]

            Kurzweil was heavily rewarded for success as a CEO in a capitalist society. So his recommendations tend to support that and also be limited by that. So, things like a "basic income" or "Free software" may be beyond Kurzweil's general policy thinking.

            See also the disagreeing comments here:
            "Transhumanist Ray Kurzweil Destroys Zeitgeist Movement 'Technological Unemployment'"
            http://www.youtube [youtube.com]

        • by rvw ( 755107 )

          Actually, your second point IS the middle. The logical third point would be, there is one who thinks Kurzweil is a genius and is spot on about the future.

          FTFY!

        • and that would be the minority opinion
      • Too lazy to RTFA, but the bit about "won't have time to develop adult intelligence by 2029" seems to be missing the difference between the speed of chemical synapses and electrical or photonic switching circuits.

        Re: Kurzweil, you missed my perspective: I think he's a crazy dreamer who has been right about a fair number of things. I take his predictions with a great deal of skepticism, but I wouldn't bet my retirement accounts on him being wrong....

    • Kurzweil is Lex Luthor.

    • by alexborges ( 313924 ) on Friday March 07, 2014 @06:38PM (#46431963)

      I propose, on the other (third) hand, that reliably educating humans to be smart should be the first step. We will only do the artificial intelligence bit when we actually get the human intelligence angle.... and that will not, for sure, happen any time soon.

      • by SerpentMage ( 13390 ) on Saturday March 08, 2014 @08:30AM (#46434071)

        You are missing an important detail here. Humans are very, very smart; the problem is that we all think we are smart. I have heard about this Kurzweil thing for quite a while and have to say he is dead wrong, and let me explain why.

        Humans are smart because they can optimise. There are two ways to digest information: bitmap style, or vector graphics style. Most humans do learning vector graphics style. It allows us to process huge amounts of information at the cost of inaccuracy. This does not mean we cannot process information bitmap style, and indeed there are humans who do, namely the autistic. And I don't mean pseudo-autistic, I mean Rainman autistic. There is an artist who can look at any sight and create a photographic copy of it on a piece of paper. The cost of bitmap style is that other functions are put out of order.

        Kurzweil, from what I am guessing, thinks this is a hardware issue. I say no, it is not a hardware issue, for our human brains are optimised to process huge amounts of information. It is a conflict-of-information issue that causes us to be both smart and stupid at the same time. For if we all reached the same conclusion, we as a human race would have died out many eons ago.

        When two people see the same information they will more often than not come to different conclusions. This is called stochastics, and it is what causes strife among humans. Some humans think God came in the form of a fat man, others think he came crucified, and yet another that he came in a beard and head piece. I am not mocking religion; what I am trying to point out is that we all see the same information, yet we all wage wars over who saw the right image.

        Thus when robots or AI get as intelligent as humans, the machines are going to be as fucken stupid as human beings. They are going to wage the same wars and think they all have reached the proper conclusion, even though they are all right and wrong at the same time. The real truth about AI has already been distinctly illustrated in a movie that rarely gets quoted... The Matrix! Think hard about the battles and the wars and the thoughts. They all represent the absolute truths that each has seen and deemed to be correct. YET they are slightly different.

        I will grant Kurzweil one thing: the machines will have more storage capacity. But then I ask, what is stopping us from becoming part machine, part human? I say nothing...

  • Computers on the other hand can already be argued to be smarter than a human - if you consider the entire internet as a single computer.

    The difference between a robot and a computer is that the robot is self-mobile at the very minimum. If it can't get up and move away (no matter how awkwardly), it's not a robot.

    Mobility is hard, not easy. Worse, the larger a computer is, the harder mobility becomes.

    There are lots of reasons to build a computer smarter than a human being, but practically none to add

    • by HornWumpus ( 783565 ) on Friday March 07, 2014 @05:10PM (#46431317)

      By the same argument you could say that any good library from 1950 was also smarter than a human. You'd be just as wrong.

      • In a large number of ways, a 1950's library is smarter than any human.

        If the measure of "smart" is how closely it behaves like a human - sure, we're probably a ways off.
        If the measure of "smart" is what we know (in bulk), we're already there.
        If the measure of "smart" is the ability to synthesize what we know in useful relevant ways... ...we're making progress, but have a way to go.

        • Does a book or a web page really know the information it contains?

          Is a concept held in human working memory equivalent to the same concept written down?

          • by lgw ( 121541 )

            A firm yes to the second, unless you have some very particular religious beliefs.

            The first though is less obvious: the best current working definition for "knowledge" is "justified, true belief". Wikipedia holds many things that are both true and justified, but Wikipedia doesn't "believe" anything, if we're just speaking about the web site, not the editors.

            "Belief" certainly requires sentience (feeling), and maybe sapience (thinking). Personally, I think human sapience isn't all that special or unique, th

            • A human mind can manipulate a concept, apply it to new situations and concepts.

              A concept written down is just static information, waiting for an intelligence to load it into working memory and do something with it.

              • by lgw ( 121541 )

                Human memory is just storage, no different from paper. It's the intelligence that's relevant, not the storage.

                • You don't know what 'working memory' means in the computer or neurological sense? Hint: how is it stored?

                  You should just shut up. You're embarrassing yourself.

                  • by lgw ( 121541 )

                    Wow, where does the hate come from?

                    Sure, if you mean "working memory" as a loose analogy for the computer sense, I agree with you, because that requires active contemplation. If by "working memory" you mean the stuff we're currently contemplating, it's the contemplating part that matters, yes? That's how you're distinguishing "working memory" from "memory"? So the difference is "intelligence", not the storage medium?

                      Working memory is the space that you actively think in. It's not clear how it's stored, but it's clear that most memory is not just words. An AI will start with an in-memory way of storing connected concepts: actors, linguistic, mathematical, logical, not-understood-but-remembered cause/effect, images. Parsing the information into working memory involves putting it into a form that the intelligence can use.

                      This is a pretty well understood concept. The details are the tricky part.
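                      To make "connected concepts" concrete, here's a loose sketch in Python; the node kinds mirror the list above, but every name and value here is my own invention, purely illustrative:

                      from dataclasses import dataclass, field

                      @dataclass
                      class Concept:
                          kind: str   # "actor", "linguistic", "cause/effect", "image", ...
                          label: str
                          links: list = field(default_factory=list)  # (relation, Concept) edges

                          def connect(self, other, relation):
                              self.links.append((relation, other))

                      # Parsing input into working memory = building nodes and edges like these:
                      dog = Concept("actor", "dog")
                      bite = Concept("cause/effect", "bites when startled")
                      dog.connect(bite, "may-exhibit")
                      print(dog.label, "->", dog.links[0][0], "->", dog.links[0][1].label)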

                    • Terrifyingly, "The Hate" might be one of the easier first things to simulate in AI!

                      The reason is that it's often demonstrated with a far lower level "skillset" than the smart comments.

                      See for example the (thinning?) pure troll posts here. Despite the rise in lots of other things, I'm noticing fewer pure troll posts of the worst vicious kind. I wondered idly why they got here so regularly. Anyone remember the ones that went:

                      "so you sukerz ya haterz loosers you take it and shove it?"

                      Any 1000 of you could writ

            • by suutar ( 1860506 )

              I think that perhaps it's not as firmly equivalent as you imply; a concept in a book cannot be used in the same ways as a concept in human memory without being copied to human memory. At which point it's not the concept in the book getting used any more.

              • by lgw ( 121541 )

                You can't "use" an concept stored in "human memory" directly either. Thinking about stuff copies* it out of memory and into consciousness. (Or did you mean "memory" in a very loose sense, in which case I agree with you).

                *Human memory is normally quite lossy - we reconstruct most of what we remember - heck, we construct most of what we see - so "copy" isn't the best word, really.

            • This is one of the approaches I've been poking at off and on for a while as noted in my remarks over the years in these stories.

              To me an instructive experiment is to go all the way to the top and give the program some initial values not unlike Asimovian ones, and then it builds a "like/dislike" matrix of people and things.

              It's not that far off from college dorm discussions! : )

              So then going back to basics, you feed it info about people doing things, it runs those against its "like/dislike" systems, and upda
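              The core loop is tiny, though. A toy sketch in Python (the seed values and names are invented for illustration, not my actual code):

              from collections import defaultdict

              # Seed values (the "Asimovian" initial preferences): action -> score.
              SEED_VALUES = {"helping": +1.0, "harming": -1.0, "lying": -0.5}

              affinity = defaultdict(float)  # the "like/dislike" matrix: person -> score

              def observe(person, action):
                  # Feed it info about people doing things; run the observation
                  # against the seed values and update the matrix.
                  affinity[person] += SEED_VALUES.get(action, 0.0)

              observe("alice", "helping")
              observe("bob", "harming")
              observe("bob", "lying")
              print(dict(affinity))  # {'alice': 1.0, 'bob': -1.5}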

          • I certainly wouldn't argue that libraries are self-aware.

            It all goes back to what the definition of "smart" is. Libraries certainly contain more information -- at least, in a classical sense. [Maybe one good memory of a sunset contains more information - wtfk] Watson, for example, is just a library with a natural language interface at the door. By at least one measure -- Jeopardy :) -- a library (Watson) is smarter than a lot of people.

          • by Jamu ( 852752 )

            Does a book or a web page really know the information it contains?

            Doubtful. If a book contains the equations 1 + 1 = 2 and 1 + 1 = 3, how does it know the first and not the second?

      • AI suffers from continuously moving goal posts because nobody has a good definition of intelligence. A computer (Watson) has already convincingly beaten humans at general knowledge. Watson is an amazing technological feat; however, the general public does not recognise Watson as intelligent in any meaningful way. They have the same reaction as my wife when they see Watson playing Jeopardy - "It's looking up the answers on the internet, so what?". They don't even understand the problem Watson has solved, when t
      • A computer not only has software (i.e. the instructions), but also hardware to actually execute the instructions in a reliable way. For the 1950's library to be considered "a computer" you would have to include the librarian (or regular person) who actually follows the instructions of the lookup system to retrieve the information, and even then whether this is a "reliable" method of execution is debatable.

        In fact you could in theory make any computer that is only instructions written on paper (e.g. copy da

    • by CanHasDIY ( 1672858 ) on Friday March 07, 2014 @05:19PM (#46431391) Homepage Journal

      Computers on the other hand can already be argued to be smarter than a human - if you consider the entire internet as a single computer.

      Depends on how you define "smarter."

      The internet holds more knowledge than a single human ever could, but machines cannot do anything without direct, explicit directions - told to them by a human. That's the definition of stupid to me: unable to do a thing without having it all spelled out for you.

      There's a reason D&D considers Wisdom and Intelligence to be separate attributes.

      • machines cannot do anything without direct, explicit directions - told to them by a human.

        Everything a computer does is a result of its programming and input. The same could be said of a human. The only difference is that the programming in a human is a result of natural selection, and the programming in a computer is a result of intelligent design (by a human, which was indirectly a result of natural selection).

        In the same way that a computer cannot do anything that its programming does not allow, a human cannot do anything that his/her brain does not allow. It's true that human brains al

        • It's all just matter and energy.

          Indeed - very few (sane) people dispute the fact that consciousness can be generated with non-biological hardware (using silicon). We know that consciousness is the result of matter and energy - a more interesting question, IMO, is: matter down to what level? Does the brain only use "classical" physics principles to generate consciousness, or does it somehow exploit quantum principles (we certainly know that natural selection has made use of those in some cases - see photosynthesis)?
          Maybe the brain requi

      • by Idbar ( 1034346 )

        The internet holds more knowledge than a single human ever could, but machines cannot do anything without direct, explicit directions - told to them by a human.

        I'm sure not doing anything would still be way better than someone only checking Facebook for a whole day. Which increases the score on the robot side.

      • by Kjella ( 173770 )

        The internet holds more knowledge than a single human ever could, but machines cannot do anything without direct, explicit directions - told to them by a human. That's the definition of stupid to me: unable to do a thing without having it all spelled out for you.

        Once. And then it can be rather damn good at it, like how chess computers beat their programmers. I also think you're underestimating how generic algorithms can be; even if you ask Watson a question it's never heard before, it will probably find the answer anyway. As for military use, the biggest problem is that humans don't have identification-friend-or-foe systems. If you told a bunch of armed drones to kill any human heat signature in an area, I imagine they could be very efficient. Just look at some of t

    • by dbIII ( 701233 )
      Not really, since as far as I know we don't have an accurate definition of intelligence that we can put in mathematical terms.
      We just have a more useful and more convincing Mechanical Turk instead of something that can think for itself.
      • If the Mechanical Turk gets good enough (e.g. passing the Turing test), then why wouldn't it be thinking for itself?
        • Because the man behind the curtain will know that it isn't thinking but is just made to look as if it is. If it gets beyond that point it's obviously not a Mechanical Turk anymore.
          The important thing first is to answer the question "what is thought?"
          If we can't do that, how do we know if it's really thinking or just something complex enough that it looks like it - eye spots on moth wings instead of real big eyes.
          • Whatever thought is, I'm sure it's not going to be dependent on some property of carbon atoms that silicon atoms don't have.

            If we can't do that, how do we know if it's really thinking or just something complex enough that it looks like it

            What you are referring to is the idea of a philosophical zombie (or p-zombie). It is true that we would not be able to tell if a computer was conscious or just a p-zombie. I think Descartes' "I think therefore I am" is a pretty convincing argument to convince yourself that you are conscious. But it doesn't work on other humans. They might just be p-zombies too. How do you decide th

            • by dbIII ( 701233 )

              Whatever it is, I would say that once a computer can pass this test, it is only fair to assume it is conscious as well.

              No.
              First we need to define consciousness.
              Then we get to decide if something fits the definition or not.
              I really do not understand why you are acting as if you are unable to grasp that point. Is this some sort of debating trick?

    • Arguably, making the computer mobile, giving it responsibility for care of its own "body," is one way to make it more human. It could be simulated - the big Deep Blue processor could be kept in a closet and operate the body by remote, or the whole body-and-world thing could play out in VR - but those elements of seeing the world through two eyes, hearing from two ears, smelling, tasting, feeling, having to balance while walking, getting hurt if you are careless, those are all part and parcel of being human -

  • All they need to know how to do is stick soft humans with a sharp stick. We are nowhere near as tough as we think we are. We couldn't stop Chucky dolls, much less Terminators.

  • Very Sober (Score:5, Insightful)

    by pitchpipe ( 708843 ) on Friday March 07, 2014 @05:13PM (#46431343)

    Robotics expert Alan Winfield offers a sobering counterpoint to Ray Kurzweil ...

    I like how the naysayers are depicted as sober, rational minded individuals while those who see things progressing more rapidly are shown as crazy lunatics. They are both making predictions about the future. Why is one claim more valid than the other? We're talking fifteen years into the future here. Do you think that the people predicting that "heavier than air flying machines are impossible" only eight years before the fact were also the sober ones?

    Lord Kelvin was a sober, rational minded individual. He was also wrong.

    • Re:Very Sober (Score:5, Insightful)

      by mbkennel ( 97636 ) on Friday March 07, 2014 @05:21PM (#46431401)

      | I like how the naysayers are depicted as sober, rational minded individuals while those who see things progressing more rapidly are shown as crazy lunatics. They are both making predictions about the future. Why is one claim more valid than the other?

      It's because the naysayers are the ones more actively working in the field and closest to the experimental and theoretical results and are trying to actually accomplish these kinds of tasks.

      Obviously in 1895 heavier than air flying machines were possible because birds existed. And in 1895 there was a significant science & engineering community actually trying to do it which believed it was possible soon. Internal combustion engines of sufficient power/weight were rapidly improving, fluid mechanics was reasonably understood, and it just took the Wrights to re-do some of the experiments correctly and have an insight & technology about controls & stability.

      So in 1895, Lord Kelvin was the Kurzweil of his day.
      • Obviously in 2014 thinking machines were possible because humans existed. And in 2014 there was a significant science & engineering community actually trying to do it which believed it was possible soon. Microprocessors of sufficient power/weight were rapidly improving, neuromorphic engineering was reasonably understood, and it just took the Markrams et al. to re-do some of the experiments correctly and have an insight & technology about controls & stability.

        Hmm. I agree.

      • Re: (Score:2, Insightful)

        by Just Some Guy ( 3352 )

        It's because the naysayers are the ones more actively working in the field and closest to the experimental and theoretical results and are trying to actually accomplish these kinds of tasks.

        More actively than Ray Kurzweil, Director of Engineering at Google in charge of machine intelligence? Very few people in the world are more active in AI-related fields than he is.

        • by Nemyst ( 1383049 )
          Perhaps I'm missing something, but a quick glance at the Google Scholar [google.ca] search results for Kurzweil doesn't show a whole lot of research from him. I do see a lot of books, articles and fluff, but that's not being active to me.

          Compare, as another poster said, to Peter Norvig, who has his own Scholar page [google.ca] and the difference is rather striking.
      • It's because the naysayers are the ones more actively working in the field and closest to the experimental and theoretical results and are trying to actually accomplish these kinds of tasks.

        Most of the time you're right: the experts are the experts; they know their fields and they can predict the future.

        I find, however, that the experts who have the time to seek publicity, pontificate for the press, serve as expert witnesses, etc. are often a bit low on skill and behind the curve on what is really possible, or even true, in their field. Meanwhile, some of the most cutting-edge innovators can be disinclined to share their latest progress.

        This is patently not the case in massively collaborativ

    • I like how the naysayers are depicted as sober, rational minded individuals while those who see things progressing more rapidly are shown as crazy lunatics.

      I don't see the word "sobering" used that way. For me this just means that, after one gets excited hearing Kurzweil, hearing from Winfield is a sobering experience. There is no implication that either of the two is less crazy or more right.

    • by Alomex ( 148003 )

      while those who see things progressing more rapidly are shown as crazy lunatics.

      Easy: we have 60 years of AI people saying "the machines are coming, the machines are coming, TOMORROW, or in 10 years at the latest" and they have yet to show up.

      In the long run they will be right; in the short term there is no evidence that the singularity is around the corner. Heck, Google Translate has been stuck at about a 95% correctness rate for the last five years. If we cannot solve that one, what basis is there for Kurzweil's alarmist scenario?

  • by Anonymous Coward

    Analysis: By 2029 people will be so dumb that current robots will be smarter than humans.

    • Doctor: [laughs] Right, kick ass. Well, don't want to sound like a dick or nothin', but, ah... it says on your chart that you're fucked up. Ah, you talk like a fag, and your shit's all retarded. What I'd do, is just like... like... you know, like, you know what I mean, like...

  • Number five, IS Alive.

    I've seen it myself. Spontaneous emotional response.

  • They only need to be cute [smartdoll.jp].

  • If smart is the capability of intellectually adapting to accomplish tasks, then computers are in trouble for now. If academia overall stops chasing its own tail, worried about publishing papers in great volume of questionable relevance, and resumes publishing meaningful developments, then maybe we can get a good breakthrough in ten years. And that is a big maybe.

    I am not particularly thrilled to create an AI good enough to be like us. /. is nice enough but humans overall are dicks. Anything we create wi

  • Anyone who thinks that robots will be smarter than humans by 2029 has not really thought things through. I can step out on my back patio, take one look at the pergola, and tell you that it's going to need to be replaced in the next couple of years. I can look at the grass and tell whether I need to cut it this weekend or let it go for another week. I can sniff the air and tell you that the guy in the next cubicle has farted. Of course a robot might come to the same conclusions, but it would have to take

    • To be fair, your ability to tell if the grass needs cutting is also based on sampling grass growing patterns over your entire life...

    • So humans don't measure things, and that's what makes them smart?
    • Wait just a god damn second. Are you claiming you understand a woman?

      That's much less bold than claiming to understand women, but I'm still calling BS on you.

      Most people go their whole lives and don't even begin to understand themselves, much less another adult.

  • by cje ( 33931 ) on Friday March 07, 2014 @05:44PM (#46431593) Homepage

    If the contents of my Facebook feed are anything to go by, one could reasonably make the argument that robots are smarter than humans right now.

  • Commander Data is a fictional character. The character occurs in a ****context**** where humanity has made technological jumps that enable ****storytelling****.

    I absolutely hate that really, really intelligent people are reduced to such a horrible analogy to comprehend what's happening in AI... and I *love* Star Trek! I'm a Trekkie!

    Even if we had solved these problems and a present-day Noonian Soong had already built a robot with the potential for human-equivalent intelligence – it still might not h

    • A government can grant civil rights to a rock. That doesn't make it intelligent. If you can have a conversation with a rock then it is intelligent no matter what the government says. It seemed like Data was capable of a conversation. Maybe he was on the cusp of being able to pass the Turing test.
      • pass the Turing test

        exactly the problem. The "Turing test" is a facile demonstration...not a scientific "test" at all.

        Do yourself a favor and ignore Turing completely when thinking about computing.

        A government can grant civil rights to a rock. That doesn't make it intelligent.

        I didn't say it would make it "intelligent"...it would do just as I said, give it legal rights. Just as giving Commander Data legal rights doesn't make it any more or less "human"...conferring rights doesn't change the molecules of the

          exactly the problem. The "Turing test" is a facile demonstration...not a scientific "test" at all.

          The question of how to measure consciousness is not *only* a scientific one. It is more a philosophical question. It has a scientific component, which is why it is important that humans are prevented from seeing the subjects or hearing their "voice". It is a thought experiment detailing a scientific experiment that could conclusively prove a machine was as intelligent as a human. Since human intelligence is best measured by human perception, the test uses human perception to make the evaluation

  • Don't worry. The Year 2038 problem [wikipedia.org] will take them out a decade later.
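    For anyone who hasn't met it: signed 32-bit Unix time runs out one second after 2038-01-19 03:14:07 UTC. A quick toy illustration in Python:

    import struct
    from datetime import datetime, timedelta

    EPOCH = datetime(1970, 1, 1)
    t_max = 2**31 - 1  # largest signed 32-bit timestamp
    print(EPOCH + timedelta(seconds=t_max))  # 2038-01-19 03:14:07

    # One more second wraps the signed 32-bit counter into the distant past:
    wrapped, = struct.unpack("<i", struct.pack("<I", (t_max + 1) & 0xFFFFFFFF))
    print(EPOCH + timedelta(seconds=wrapped))  # 1901-12-13 20:45:52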

  • Look at autopilots: they still don't do everything, and they can't handle stuff like sensors going bad too well.

  • will be what causes the singularity!

  • by Animats ( 122034 ) on Friday March 07, 2014 @06:27PM (#46431871) Homepage

    We're probably more than 15 years from strong AI. Having been in the field, I've been hearing "strong AI Real Soon Now" for 30 years. Robotic common sense reasoning still sucks, unstructured manipulation still sucks, and even Boston Dynamics' robots are klutzier than they should be for what's been spent on them.

    On the other hand, robots and computers being able to do 50% of the remaining jobs in 15 years looks within reach. Being able to do it cost-effectively may be a problem, but useful robots are coming down to the price range of cars, at which point they easily compete with humans on price.

    Once we start to have a lot of semi-dumb semi-autonomous robots in wide use, we may see "common sense" fractured into a lot of small, solvable problems. I used to say in the 1990s that a big part of life is simply moving around without falling down and not bumping into stuff, so solve that first. Robots have almost achieved that. Next, we need to solve basic unstructured manipulation. Special cases like towel-folding are still PhD-level problems. Most of the manipulation tasks in the DARPA Robotics Challenge were done by teleoperation.

    • We're not going to be able to build real AI until we actually understand HOW biological organisms think. What we have in modern digital computing is nothing at all like a biological brain. I suspect that we may never achieve AI while using digital computers. The reason I suspect this is that the human (and every other animal) brain is analog, and I believe analog computing is required for true AI. Because we've never really invested in analog computing, I believe real AI will continue to be 30 years out unt

  • Everybody knew computers could never beat humans at chess. Now they do. In much the same way, computers will beat us at every single intellectual task, at some point in time. Technology revolutions go faster every time one occurs. From 10k years for the agricultural revolution to two years for the internet and mobile phones. I see no reason why computers can't outsmart us in 2025.

  • ... that AI you are building today will be a teenager. It will think it knows everything. But just try telling it something ....

    You'll be lucky just to get it to move out of your basement by 2049.

  • If you invent a robot as smart as a 9 year old with basic concrete reasoning power that can do simple household chores and yardwork you will become a billionaire.

  • by msobkow ( 48369 ) on Friday March 07, 2014 @07:01PM (#46432091) Homepage Journal

    That presumption seems to be predicated on the theory that a computer intelligence won't "grow" or "learn" any faster than a human. Once the essential algorithms are developed and the AI is turned loose to teach itself from internet resources, I expect its actual growth rate will be near exponential until it's absorbed everything it can from our current body of knowledge and has to start theorizing and inferring new facts from what it's learned.

    Not that I expect such a level of AI anytime in the near future. But when it does happen, I'm pretty sure it's going to grow at a rate that goes far beyond anything a mere human could do. For one thing, such a system would be highly parallel and likely to "read" multiple streams of web data at the same time, where a human can only consume one thread of information at a time (and not all that well, to boot.) Where we might bookmark a link to read later, an AI would be able to spin another thread to read that link immediately, provided it has the compute capacity available.

    The key, I think, is going to be in the development of the parallel processing languages that will evolve to serve our need to program systems that have ever more cores available. Our current single-threaded paradigms and manual threading approaches are far too limiting for the systems of the future.
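    As a minimal sketch of that spin-a-thread-per-link idea, using only Python's standard library (the URLs are placeholders, and a real AI would obviously do more than count bytes):

    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    links = ["https://example.com/a", "https://example.com/b"]  # placeholders

    def read_stream(url):
        # Each worker "reads" one stream; here we just measure its size.
        with urlopen(url, timeout=10) as resp:
            return url, len(resp.read())

    with ThreadPoolExecutor(max_workers=8) as pool:  # one worker per pending link
        for url, size in pool.map(read_stream, links):
            print(url, size, "bytes")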

  • From the summary:

    it still might not have enough time to develop adult-equivalent intelligence by 2029

    2029: Skynet is born. Nothing bad happens
    2042: Skynet turns 13...

  • We have no idea how the human brain works. We throw random chemicals at people's brains after incorrectly assessing an illness and hope people function better afterwards. We apply electric shocks to the brain as medicine. Brain medicine is in the stone ages, technologically speaking.

    Humans depend upon millions of non-human species inside and on the surface of our bodies, and we can't culture most of them, and we don't have a clear understanding of how they work together, but we have a vague idea that they a

  • It has been known for decades that completely new theories will be needed. Anybody who has missed that has not bothered to find out what the state of the art is.

    • 1) Why do we need a machine as foolish as an adult human? Duplicating the downsides of that level of "intelligence" might take centuries. Self-aware? Why is that intelligent or even desirable? 99% might happen soon, but the pointless last 1% could take forever.

      2) Once computers can do jobs on par with an 8-year-old, the whole economy will collapse, as nearly every job can be learned and performed by a child if you remove the immaturity factor. Robotics already outperforms humans; it just needs the brain power.

  • Siegel is of course right because he can predict the effect of unexpected future inventions, and Kurzweil cannot. Oh wait...
  • What does "smarter" even mean? And what is the reference for human intelligence?

    Does it mean the robots wouldn't vote to ban teaching of evolution in public schools? Would they vote for teaching the controversy even when none exists?
    Will robots be smarter than that?

  • It actually seems reasonable enough. Electronic computers are less than 100 years old. We've gone from a house-sized machine that was a glorified basic calculator to having reasonably powerful computers the size of a pack of gum (the Raspberry Pi is what was coming to mind). 2029 is 15 years off; a lot of progress and breakthroughs may come by then. Granted, yes, there are plenty of things about "thinking" we just have no clue about. But all it takes is one "Eureka!" moment and the world can change.
  • .... of me and bla bla bla.... Lik'en what de hel I know...?
    See thread to know.... https://www.facebook.com/char.... [facebook.com]
    Yep, eben dis dumb hick can see threw dat wall of ex pert tease! T.Rue

    Did you know dat too experts who is'a pos'in each utter goes show what da's exprt at?

    Go ahead, mod me down..... ain't gonna change de inedible!!!

    Abstractionize dat will ya.... http://abstractionphysics.net/ [abstractionphysics.net] to go

  • by hazydave ( 96747 ) on Saturday March 08, 2014 @11:13AM (#46434553)

    Kurzweil's smart machine predictions are, last I checked anyway, based on a rather brute force approach to machine intelligence. We completely understand the basic structure of the brain, as a very slow, massively parallel analog computer. We understand less about the mind, which is this great program that runs on the brain's hardware, and manages to simulate a reasonably fast linear computing engine. There is work being done on this that's fairly interesting but not yet applied to machine mind building.

    So, one way to just get there anyway is basically what Kurzweil's suggesting. Since we understand the basic structure of the brain itself, at some point our man-made computers - extremely fast, somewhat parallel digital computers - will be able to run a full-speed simulation of the actual engine of the brain. The mind, the brain's own software, would be able to run on that engine. Maybe we don't figure that part out for a while, or maybe it's an emergent property of the right brain simulation.

    Naturally, the first machines that get big enough to do this won't fit on a robot... that's why something like Skynet makes sense in the doomsday scenario. Google already built Skynet, now they're building the robot army - kind of interesting. The actual thinking part is ultimately "just a simple matter of software". Maybe we never figure out that mind part, maybe we do. The cool thing is that, once the machine brain gets to human level, it'll be a matter of a really short time before it gets much, much better. After all, while the human brain simulation is the tricky part, all the regular computer bits still work. So that neural net simulation will be able to interface to the perfect memory of the underlying computing platform, and all that this kind of computation does well. It will be able to replace some of the brute-force brain computing functions with much faster heuristics that do the same job. It'll be able to improve its own means of thinking pretty quickly, to the point that the revised artificial mind will run on lesser hardware. And it may well be that there are years or decades between matching the neural compute capacity of the human mind and successfully building the code for such a mind. So that first sentient program could conceivably improve itself to run everywhere.

    Possibly frightening, which I think is one reason people like to say it'll never happen, even knowing that just about every other prediction about computing growth didn't just happen, but was usually so conservative it missed reality by light-years. And hopefully, unlike all the doomsday scenarios that make fun summer blockbusters, we'll at least not forget the one critical thing: these machines still need an off switch/plug to manually pull. It always seems in the fiction that we decide, just before the machines go sentient and decide we're a virus or whatever, that the off switch isn't needed anymore.
