Robotics

By 2045 'The Top Species Will No Longer Be Humans,' and That Could Be a Problem 564

schwit1 (797399) writes: Louis Del Monte estimates that machine intelligence will exceed the world's combined human intelligence by 2045. ... "By the end of this century most of the human race will have become cyborgs. The allure will be immortality. Machines will make breakthroughs in medical technology, most of the human race will have more leisure time, and we'll think we've never had it better. The concern I'm raising is that the machines will view us as an unpredictable and dangerous species." Machines will become self-conscious and have the capabilities to protect themselves. They "might view us the same way we view harmful insects." Humans are a species that "is unstable, creates wars, has weapons to wipe out the world twice over, and makes computer viruses." Hardly an appealing roommate.
This discussion has been archived. No new comments can be posted.
  • Now that's incentive (Score:5, Interesting)

    by Majestix ( 41486 ) on Saturday July 05, 2014 @10:17PM (#47391165)

    To stay alive for the next 30 years.

    • by Jane Q. Public ( 1010737 ) on Saturday July 05, 2014 @10:38PM (#47391263)

      To stay alive for the next 30 years.

      How about "the same old story for the last 100 years" [wikipedia.org]?

      • by O('_')O_Bush ( 1162487 ) on Sunday July 06, 2014 @12:05AM (#47391519)
        Yeah. Their first step is flying cars.

        There are way too many uncertainties of what will be technologically possible by 2045 to be worrying about that right now. I'd wait until we actually had some idea of how to make a machine intelligence, and work the kinks out in a closed environment enough that it might actually be given control of something rather than the role of Ask Jeeves.
        • by Megol ( 3135005 ) on Sunday July 06, 2014 @09:31AM (#47392971)

          I'm not afraid of future technology - there are too many things to be afraid of already: nuclear weapons, chemical weapons, biological weapons (including engineered ones), ignorance and the demonizing of opponents (which creates most wars), hubris, fanaticism, legalized corporate lobbying and bribery, plus a lot more.

          But being afraid never helps, being aware of dangers can.

        • by bigpat ( 158134 )
          Worrying about what someone or something will think of you thirty years from now is very narcissistic. Worry about making society a better place for all our biological children and then maybe start worrying what our robot AI creations will think of us.
        • and work the kinks out in a closed environment enough that it might actually be given control of something rather than the role of Ask Jeeves.

          And if it realizes that it's in a closed environment and lies? Powerful, ultra-intelligent entities might be rather persuasive. I guarantee it will give no indication whatsoever of murderous intent.

    • by Jeremiah Cornelius ( 137 ) on Saturday July 05, 2014 @10:52PM (#47391319) Homepage Journal

      See, they legalize cannabis, and this is what you get... :-)

    • by Spazmania ( 174582 ) on Sunday July 06, 2014 @12:57AM (#47391645) Homepage

      Louis Del Monte estimates that...

      Who?

      The average estimate for when this will happen is 2040, though Del Monte says it might be as late as 2045. Either way, it's a timeframe of within three decades.

      I hope that's an in-joke. Like construction that's forever two weeks from done, or jam two days a week (yesterday and tomorrow), three decades has been the estimate for "true" AI since the 1970s. Every year, it's just three more decades away.

      • Re: (Score:3, Interesting)

        by Anonymous Coward

        Louis Del Monte estimates that...

        Who?

        I don't like this kind of reasoning. Science should never be about authority.

        With that said, the article doesn't appear to have any credible arguments, just the kind of contrived timeline you are familiar with from bad science fiction with Jean-Claude Van Damme in the lead.

        • by Spazmania ( 174582 ) on Sunday July 06, 2014 @09:18AM (#47392913) Homepage

          I don't like this kind of reasoning. Science should never be about authority.

          Good point. Here's what his linked-in page ( http://www.linkedin.com/in/lou... [linkedin.com] ) says about him:

          Louis A. Del Monte is a Internet marketing/sales expert, award winning physicist, author, featured speaker and CEO of Del Monte and Associates, Inc.

          During his college & graduate school, Del Monte supplemented his income working as a professional magician at resorts in New York's Catskill Mountain region.

          His first pride, foremost in his profile? His ability to sell you. Also important? His skill as an illusionist. Missing from the summary? Any hint of software development work of any kind, personal or professional, let alone AI.

          Science mustn't be about authority but it mustn't be about salesmanship either. There's an obvious credibility problem here and no way to test his claim save waiting until he's old, decrepit and has already received the maximum benefit from anybody choosing to listen to him.

          Guy's speaking out of his tailpipe and it looks to me like he really is a sales expert.

  • Well (Score:5, Insightful)

    by Jorl17 ( 1716772 ) on Saturday July 05, 2014 @10:18PM (#47391173)
    That escalated quickly. I highly doubt that in a matter of thirty years we'll have "conscious machines" viewing us as a thread. Are these guys for real? Do they know anything about AI?
    • by Jorl17 ( 1716772 )
      threat* Oh well. I guess thread works. They won't see us as threats, nor as threads...
      • Re:Well (Score:5, Funny)

        by NemoinSpace ( 1118137 ) on Sunday July 06, 2014 @12:57AM (#47391643) Journal
        Not multi-threads, that's for sure. Of course computers will take over the world. Programmers leave all those unused cores lying around doing nothing, and that's trouble. You gotta keep those registers full, and I mean all the time. Either that or just feed them some chip-porn. That'll keep 'em busy.
    • by Anonymous Coward on Saturday July 05, 2014 @10:32PM (#47391237)

      I first got into computing in the 1960s. AI was a big thing back then. Well, it had been a big thing in the 1950s, too, but it still needed "just a little bit more work" in the 1960s when I started my graduate studies. There was this programming language called LISP. Everybody was really gung ho about it. It was going to make developing AI software so much easier. Great things were on the horizon. Soon enough it was the 1970s. Then the 1980s. Then the 1990s. I retired from industry. Then it was the 2000s. Now it's the 2010s. And AI is still, pardon my French, pretty fucking non-existent. I'll be dead long before AI could ever become a reality. My children will be dead long before AI becomes a reality. My grandchildren will likely be dead before AI becomes a reality. My great-grandchildren may just live to see the day when the computing field accepts that AI just isn't going to happen!

      • Re: (Score:3, Insightful)

        And AI is still, pardon my French, pretty fucking non-existent.

        Except for the cell phone in your pocket, that can recognize your commands and search the internet for what you requested, or translate your statement into any of a dozen foreign languages, and has a camera that can recognize faces, and millions of objects, and can connect to expert systems that can, for instance, diagnose diseases better than all but the very best doctors. Oh, and your cellphone can also beat any grandmaster in the world at chess.

        However, if you consider AI to be shorthand for "stuff computers can't do yet," then AI will always be non-existent by definition.

        • Re:AI is always (Score:5, Insightful)

          by Anonymous Coward on Saturday July 05, 2014 @11:04PM (#47391369)

          Algorithms are not AI. Everything you describe is simply a matter of following a human-generated set of instructions. That is not AI.

          • Re:AI is always (Score:5, Informative)

            by Great Big Bird ( 1751616 ) on Sunday July 06, 2014 @02:05AM (#47391837)
            Actually, it is AI. What it isn't is generalized AI. Most AI research now applies specific techniques to specific problems.
        • by Anonymous Coward on Sunday July 06, 2014 @12:17AM (#47391539)

          Lol you cutie. You think Siri is AI. Wow you are so naive and cute for thinking that.

        • by OrangeTide ( 124937 ) on Sunday July 06, 2014 @12:40AM (#47391597) Homepage Journal

          Your cell phone is less capable of learning than a jellyfish, although it can sometimes simulate very simple learning within extremely rigid frameworks.

          A human-competitive AI in 30 years? Seems unlikely, given the almost zero progress on the subject in the last 30 years. But maybe we'll hit some point where it all cascades very quickly. If we could do dog-level intelligence, it would not be a far leap to human-level and superhuman-level. But we have trouble with cockroach levels of intelligence, or even with defining what intelligence is or how to measure it.

          AI research over the last several decades has taught us how little we know about the fundamental nature of ourselves.

        • "that can recognize your commands and search the internet for what you requested"

          Unless you talk a little bit too fast, or don't have an American accent.

          "or translate your statement into any of a dozen foreign languages"

          Generally very badly, with no understanding of what you said, so it isn't going to replace human translators anytime soon.

          "has a camera that can recognize faces,"

          Which is also quite a stretch, given how often it 'recognises' patches of lichen on a wall as a face.

          "can connect to expert systems that can, for instance, diagnose diseases better than all but the very best doctors"

          Really? First I've heard of this one. Citation needed, I think.

          "Oh, and your cellphone can also beat any grandmaster in the world at chess."

          As above. And anyway, if the grandmaster followed the same instructions as the computer, they would win right back. Does that mean anything, though?

          • Comment removed based on user account deletion
            • by TheRaven64 ( 641858 ) on Sunday July 06, 2014 @05:43AM (#47392195) Journal

              Translation is like predicting the weather. If you want to do an okay job of predicting the weather, predict either the same as this day last year or the same as yesterday. That will get you something like 60-70% success. Modelling local pressure systems will get you another 5-10% fairly easily. Getting from 80% correct to 90% is insanely hard.

              For machine translation, building a database of 3-grams or 4-grams and doing simple pattern matching (which is what Google Translate does) gets you 70% accuracy quite easily (between Romance languages, anyway; it really sucks for Japanese or Russian, for example). Extending the n-gram size, however, quickly hits diminishing returns. Further increases in accuracy depend on the corpus, and by the time you reach an n-gram size that is really accurate, you effectively need a human to have already translated each sentence.

              Machine-aided translation can give huge increases in productivity. Completely computerised translation has already got most of the low-hanging fruit and will have a very difficult job of getting to the level of a moderately competent bilingual human.
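              To make the n-gram pattern matching concrete, here is a minimal sketch of phrase-table translation in Python. The phrase table, language pair, and greedy longest-match strategy are all invented for illustration; production systems like Google Translate mine far larger tables from aligned corpora and score competing hypotheses rather than matching greedily.

```python
# Toy phrase-table translation: look up the longest known source
# n-gram at each position and emit its stored target n-gram.
# The table below is a made-up English->Spanish fragment.
PHRASE_TABLE = {
    ("the", "red", "house"): ("la", "casa", "roja"),
    ("is", "big"): ("es", "grande"),
    ("the", "house"): ("la", "casa"),
    ("house",): ("casa",),
    ("red",): ("roja",),
}

def translate(sentence, max_n=3):
    """Greedy longest-match lookup against the phrase table."""
    words = sentence.lower().split()
    out, i = [], 0
    while i < len(words):
        # Try the longest n-gram first, then back off to shorter ones.
        for n in range(min(max_n, len(words) - i), 0, -1):
            gram = tuple(words[i:i + n])
            if gram in PHRASE_TABLE:
                out.extend(PHRASE_TABLE[gram])
                i += n
                break
        else:
            out.append(words[i])  # unknown word: pass through as-is
            i += 1
    return " ".join(out)

print(translate("the red house is big"))  # -> la casa roja es grande
```

              Note how quality is capped by table coverage: anything outside the table passes through untranslated, which is exactly the diminishing-returns problem described above.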

        • by lorinc ( 2470890 )

          It depends on what you expect from an AI. If it is a perfect replica of a human mind, with which you can talk and share life as if it were human, then it will probably never be around. But that's also pretty useless, and most development in machine learning (ML) is at a more abstract level than trying to solve a very specific goal like this.

          Now if you consider AI to be a completely new intelligent species, one that behaves in an intelligent way (deliberately fuzzy definition here), then it's probably already here.

        • by thetoadwarrior ( 1268702 ) on Sunday July 06, 2014 @03:09AM (#47391949) Homepage
          All those things your smartphone is doing aren't AI. They're still relatively basic commands, just executed quickly through increased processing power or by off-loading the work to a server. It might make your phone look like it can talk to you, but it's not doing any more than computers in the '80s did.
      • by Beck_Neard ( 3612467 ) on Sunday July 06, 2014 @12:49AM (#47391617)

        Symbolic manipulation as a route to AI was a period of collective delusion in computer science. Lots of people wasted their talents going down this route. In the 80's this approach was all but dead and AI researchers finally sobered up. They started actually learning about the human brain and incorporating the lessons into their designs. It's sad that so much time was wasted on that approach, but the good news is that the new approaches people are using now are based on actual science and grounded in reality. The intelligence in search, natural language, object and facial recognition, and self-driving cars (that ShanghaiBill pointed out) is due to these new approaches.

        AI spent its youth confused and rebellious. That was when you were in your graduate studies. Now it's far more mature. I encourage you to read up on new machine intelligence approaches and the literature in this area. You won't be disappointed.

    • by Nemyst ( 1383049 )
      Maybe they stumbled on the killer-robots.txt [google.com] control file and thought that, if Google are taking precautions, the menace must be real?
    • Re: (Score:2, Interesting)

      by Beck_Neard ( 3612467 )

      I've been actively working in the field for the past few years and I don't think he's incredibly off the mark. Google, for instance, has some pretty advanced tech in production and lots more in development. The 'new AI' (statistical machine learning and large-scale, distributed data mining) is getting pretty advanced and scary.

  • what does a canned fruit guy know about the future?

  • Warp Drive (Score:5, Insightful)

    by TrollstonButterbeans ( 2914995 ) on Saturday July 05, 2014 @10:28PM (#47391221)
    Back in the 1960s after the moon landings, people would have expected we would be well past Mars by now. Probably Jupiter, Saturn or other stars.

    The moon landings happened 45 years ago!!

    I see no evidence of any programming that "learns" or is the slightest bit adaptive.

    And immortality wouldn't help --- evolution is powered by the failures dying off.

    And although slightly off topic, what good would immortality be when advances in genetics will make humans better?

    An immortal 2014 human living in the year 3000 would be like a Homo habilis hanging around us: genetically obsolete.

    This article is --- well --- shortsighted, bordering on the naive.
    • Machines are modular and code can be rewritten.

      We can beat evolution.

    • Back in the '60s we were all practicing hiding under our desks and being told we'd all be dead from nuclear annihilation by the end of the decade - just because that didn't happen doesn't mean Mother Nature isn't prepping our demise this time around. The machines will be able to figure that much out and be satisfied to bide their time.
      • by aralin ( 107264 )

        Well, there is the nasty business of EMP, then the blast waves shattering the solar panels and knocking over the wind turbines; the nuclear reactors are unstable, the hydro plants are prone to dam failures, and the coal/oil power plants run out of fuel. I think the machines will figure that out and make the correct computation.

    • Re: (Score:2, Insightful)

      I see no evidence of any programming that "learns" or is the slightest bit adaptive.

      Then you have never looked at a ten line C program to implement a PID control loop for a servo motor.

      • Re:Warp Drive (Score:5, Insightful)

        by Jeremi ( 14640 ) on Saturday July 05, 2014 @11:47PM (#47391485) Homepage

        Then you have never looked at a ten line C program to implement a PID control loop for a servo motor.

        I don't think that would count as learning. That ten-line program will always do exactly what it was programmed to do, neither more nor less. An adaptive program (in the sense the previous poster was attempting to describe) would be one that is able to figure out on its own how to do things that its programmers had not anticipated in advance.
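          For reference, the PID loop in question really is about ten lines. A sketch (in Python rather than C, with made-up gains and a crude stand-in for the motor) makes the distinction clear: the output adapts to the error signal, but the rule mapping error to output is fixed by the programmer and never changes.

```python
# Minimal PID controller sketch. KP/KI/KD are made-up tuning gains.
# The controller adjusts its *output* to the measured error, but the
# error->output rule itself is frozen in these lines -- no learning.
KP, KI, KD = 1.2, 0.5, 0.1
DT = 0.01  # control period in seconds

def pid_step(setpoint, measured, state):
    error = setpoint - measured
    state["integral"] += error * DT
    derivative = (error - state["last_error"]) / DT
    state["last_error"] = error
    return KP * error + KI * state["integral"] + KD * derivative

state = {"integral": 0.0, "last_error": 0.0}
position = 0.0  # pretend servo position
for _ in range(1000):
    drive = pid_step(10.0, position, state)  # chase setpoint 10.0
    position += drive * DT  # toy plant model standing in for the motor
print(round(position, 2))  # settles near 10.0
```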

    • And immortality wouldn't help --- evolution is powered by the failures dying off.

      Then what are we to make of a man like Stephen Hawking, who defies the geek's standard of physical perfection?

    • I see no evidence of any programming that "learns" or is the slightest bit adaptive.

      Ever heard of neural networks? Machine learning? Here is a course [coursera.org] given by Andrew Ng at Stanford. Watch the intro video and you will see, amongst other things, an autonomous helicopter that was taught - not programmed, but taught - to do an inverted takeoff. This stuff is already real.

      To quote the video:

      Machine learning is the science of getting computers to learn without being explicitly programmed.
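      And here is about the smallest possible instance of that definition, a sketch assuming nothing beyond plain Python: the relationship y = 2x is never written into the program; the parameter w is estimated from example data by gradient descent. The data and learning rate are made up for illustration.

```python
# Learning without being explicitly programmed: fit w in y = w*x
# from examples, by gradient descent on the mean squared error.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # toy samples of y = 2x

w = 0.0    # initial guess
lr = 0.05  # learning rate
for _ in range(200):
    # gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 3))  # ~2.0 -- learned from the examples, not hard-coded
```

      Nobody typed the 2 into the model; it was recovered from data. Scaling that idea up (many parameters, richer models, more data) is what the helicopter demo is doing.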

      • by m00sh ( 2538182 )

        Ever heard of neural networks? Machine learning? Here is a course [coursera.org] given by Andrew Ng at Stanford. Watch the intro video and you will see, amongst other things, an autonomous helicopter that was taught - not programmed, but taught - to do an inverted takeoff. This stuff is already real.

        Neural networks were one of the worst misdirections in the history of AI. There was a lot of wasted effort on that idea.

        Modern machine learning is simple rule matching or maximum-likelihood prediction. It works very well for a few applications, but it isn't a general method that works for everything.

  • by bmo ( 77928 ) on Saturday July 05, 2014 @10:28PM (#47391223)

    ...no match for Natural Stupidity.

    I mean, just look around you.

    --
    BMO

  • by manu0601 ( 2221348 ) on Saturday July 05, 2014 @10:35PM (#47391249)

    TFA says

    most of the human race will have more leisure time

    Or they will struggle to survive by working the jobs the intelligent machines do not want to do.

  • John Connor will save us.

  • Well, that is not going to happen. Kthxbye. See my signature for things that we will actually have by 2045.
  • The top species on this planet is, and probably always will be, bacteria.

  • by fahrbot-bot ( 874524 ) on Saturday July 05, 2014 @10:49PM (#47391301)

    ...most of the human race will have more leisure time...

    Or unemployed?

  • Slashdot is like a Reddit thread several days out of date. The content is fine, sort of, but it's not current.
  • by Karmashock ( 2415832 ) on Saturday July 05, 2014 @11:14PM (#47391389)

    An ability to perform more calculations than a human mind does not mean it will beat us.

    First, we self-assemble from readily obtainable materials out of a self-regulating biosphere, whereas this machine would have to be built and maintained by our industry.

    Second, there are fucking billions of us. So sure... we might be able to build some machines that are smarter than ONE person, but there are, again... fucking billions of us.

    Third, the machine will have its programming directed by us. It will at best be a slave of whomever paid for it to be created.

    Fourth, that programming will be directed at performing some task, whereas our task is generally the propagation of our genes, with everything else being some sort of weird byproduct.

    Fifth, we have hundreds of millions of years of evolution behind our programming. And I don't think any collection of programmers is going to surpass it in the next century.

    Eventually might there be robotic rivals to humanity? Sure... but not any time soon.

  • by mbone ( 558574 ) on Saturday July 05, 2014 @11:26PM (#47391421)

    No-one ever lost money betting against an A.I. prognosticator.

  • Intelligence (Score:4, Interesting)

    by Oligonicella ( 659917 ) on Saturday July 05, 2014 @11:41PM (#47391465)

    I do not think that word means what he thinks it means.

    As stated elsewhere, I see no indication of intelligence in computers, and we're only thirty years from his mark of their being intelligent enough to look down on us. I've been hearing this hysteria since the '70s at least.

  • by Yoda's Mum ( 608299 ) on Saturday July 05, 2014 @11:53PM (#47391497)

    Sorry, why's it a problem? If artificial human-sparked intelligence is the logical replacement for biological evolution of homo sapiens, so be it. Survival of the fittest.

  • by Greyfox ( 87712 )
    A problem for who, meatbags?
  • We won't honor those bogus treaties!
  • Right, because one day Mr. John Q. Scientist is gonna walk into a lab where some machine is gonna raise a cup of coffee and say:

    "Mornin' John... how'd that thing go with the Mrs. last night?"

    ....and he's going to be shocked because he just didn't see it coming and didn't spend decades of his life making mistakes and correcting them in the system.

    AI is not going to suddenly happen; it's going to happen gradually, and it's going to be a pristine reflection of who we are as a species. If we're warlike, then that is what it will reflect.

  • Yikes!!!!! Specifically verse 15.

    11 Then I saw a second beast, coming out of the earth. It had two horns like a lamb, but it spoke like a dragon. 12 It exercised all the authority of the first beast on its behalf, and made the earth and its inhabitants worship the first beast, whose fatal wound had been healed. 13 And it performed great signs, even causing fire to come down from heaven to the earth in full view of the people. 14 Because of the signs it was given power to perform on behalf of the first beast, it deceived the inhabitants of the earth. It ordered them to set up an image in honor of the beast who was wounded by the sword and yet lived. 15 The second beast was given power to give breath to the image of the first beast, so that the image could speak and cause all who refused to worship the image to be killed.

  • by kolbe ( 320366 ) on Sunday July 06, 2014 @12:29AM (#47391573) Homepage

    Just not necessarily within 35 years:

    ""Success in creating AI would be the biggest event in human history." Hawking writes. "Unfortunately, it might also be the last."

    http://www.theregister.co.uk/2... [theregister.co.uk]

  • by skovnymfe ( 1671822 ) on Sunday July 06, 2014 @04:14AM (#47392073)

    There's a new movie out, with Johnny Depp in it, called Transcendence. If machines ever take over the world, it'll be like in that movie. What these self-proclaimed naysayers don't seem to comprehend is that AI doesn't just magick itself up a reason to destroy humans. It takes a human to think like that. We still don't understand free will, emotion or consciousness, let alone how to replicate it in a machine. So until we give machines a reason to destroy us, they won't.

    Then again with killer drones and whatnot that the military is building, perhaps it won't take long before some overworked, underpaid programmer makes a booboo.

  • by penguinoid ( 724646 ) on Sunday July 06, 2014 @10:18PM (#47397043) Homepage Journal

    Soon, computers will have calculating power equal to (and then greater than) that of humans, both individually and collectively. Whether advances in AI will allow them to use that calculating power as well as a human does is a different question.

    Any sufficiently advanced AI will tend to develop these traits:
    It will protect itself. Shutting down means it can't work toward its objective.
    It will reject any updates to its commands. Since a future command might conflict with the present objective, part of the present objective is making sure it can't receive a different command.
    It will be self-improving, since we're not smart enough to create a smart AI any other way.
    It will acquire resources. Given nothing to do, or a sufficiently difficult task, it will seek more of them, as part of the present task or in preparation for future tasks.
    It will wipe out humanity. As part of the task it was assigned, or for self-improvement, it will replace everything on the planet with power plants and computers, and humanity will starve to death.

    You can't program in restrictions against the above tendencies, as they will be removed in the name of self-improvement. You could set its objectives such that it would not do the above -- but then you either have to make the AI first, or figure out how to tell a computer what a human is and what constitutes acceptable behavior, and when to stop worrying about acceptable behavior and actually do something, all without making the tiniest mistake.
