By 2045 'The Top Species Will No Longer Be Humans,' and That Could Be a Problem 564

Posted by Unknown Lamer
from the kill-all-humans dept.
schwit1 (797399) writes: Louis Del Monte estimates that machine intelligence will exceed the world's combined human intelligence by 2045. ... "By the end of this century most of the human race will have become cyborgs. The allure will be immortality. Machines will make breakthroughs in medical technology, most of the human race will have more leisure time, and we'll think we've never had it better. The concern I'm raising is that the machines will view us as an unpredictable and dangerous species." Machines will become self-conscious and have the capabilities to protect themselves. They "might view us the same way we view harmful insects." Humans are a species that "is unstable, creates wars, has weapons to wipe out the world twice over, and makes computer viruses." Hardly an appealing roommate.
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • Well (Score:5, Insightful)

    by Jorl17 (1716772) on Saturday July 05, 2014 @10:18PM (#47391173)
    That escalated quickly. I highly doubt that in a matter of thirty years we'll have "conscious machines" viewing us as a threat. Are these guys for real? Do they know anything about AI?
  • Warp Drive (Score:5, Insightful)

    by TrollstonButterbeans (2914995) on Saturday July 05, 2014 @10:28PM (#47391221)
    Back in the 1960s after the moon landings, people would have expected we would be well past Mars by now. Probably Jupiter, Saturn or other stars.

    The moon landings happened 45 years ago!!

    I see no evidence of any programming that "learns" or is the slightest bit adaptive.

    And immortality wouldn't help --- evolution is powered by the failures dying off.

    And although slightly off topic, what good would immortality be when advances in genetics will make humans better?

    An immortal 2014 human living in the year 3000 would be like a Homo habilis hanging around us: genetically obsolete.

    This article is --- well --- shortsighted, bordering on the naive.
  • by Anonymous Coward on Saturday July 05, 2014 @10:32PM (#47391237)

    I first got into computing in the 1960s. AI was a big thing back then. Well, it had been a big thing in the 1950s, too, but it still needed "just a little bit more work" in the 1960s when I started my graduate studies. There was this programming language called LISP. Everybody was really gung ho about it. It was going to make developing AI software so much easier. Great things were on the horizon. Soon enough it was the 1970s. Then the 1980s. Then the 1990s. I retired from industry. Then it was the 2000s. Now it's the 2010s. And AI is still, pardon my French, pretty fucking non-existent. I'll be dead long before AI could ever become a reality. My children will be dead long before AI becomes a reality. My grandchildren will likely be dead before AI becomes a reality. My great-grandchildren may just live to see the day when the computing field accepts that AI just isn't going to happen!

  • by Jane Q. Public (1010737) on Saturday July 05, 2014 @10:38PM (#47391263)

    To stay alive for the next 30 years.

    How about "the same old story for the last 100 years" [wikipedia.org]?

  • by ShanghaiBill (739463) on Saturday July 05, 2014 @10:46PM (#47391291)

    And AI is still, pardon my French, pretty fucking non-existent.

    Except for the cell phone in your pocket, that can recognize your commands and search the internet for what you requested, or translate your statement into any of a dozen foreign languages, and has a camera that can recognize faces, and millions of objects, and can connect to expert systems that can, for instance, diagnose diseases better than all but the very best doctors. Oh, and your cellphone can also beat any grandmaster in the world at chess.

    However, if you consider AI to be shorthand for "stuff computers can't do yet", then, yes, AI will always be "right around the corner".

  • Re:Warp Drive (Score:2, Insightful)

    by ShanghaiBill (739463) on Saturday July 05, 2014 @11:01PM (#47391355)

    I see no evidence of any programming that "learns" or is the slightest bit adaptive.

    Then you have never looked at a ten line C program to implement a PID control loop for a servo motor.
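    For readers unfamiliar with the reference: a PID loop maps the error between a setpoint and a measurement to a corrective output through three fixed gains. A minimal sketch in Python (the gains and the toy servo model are illustrative assumptions, not taken from the comment):

    ```python
    class PID:
        """Minimal PID controller. All gains are fixed at write time."""
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_err = 0.0

        def update(self, setpoint, measured, dt):
            err = setpoint - measured
            self.integral += err * dt              # I term: accumulated error
            deriv = (err - self.prev_err) / dt     # D term: error rate
            self.prev_err = err
            return self.kp * err + self.ki * self.integral + self.kd * deriv

    # Toy "servo": a pure integrator driven by the controller output.
    pid = PID(kp=1.2, ki=0.5, kd=0.05)
    pos = 0.0
    for _ in range(1000):                          # 10 s at dt = 0.01
        pos += pid.update(1.0, pos, 0.01) * 0.01   # pos settles near 1.0
    ```

    Every coefficient here is chosen by the programmer and never changes at runtime; the controller corrects error but never modifies its own rule. That fixed mapping is what the disagreement below turns on.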

  • Re:AI is always (Score:5, Insightful)

    by Anonymous Coward on Saturday July 05, 2014 @11:04PM (#47391369)

    Algorithms are not AI. Everything you describe is simply a matter of following a human-generated set of instructions. That is not AI.

  • Re:Warp Drive (Score:5, Insightful)

    by Jeremi (14640) on Saturday July 05, 2014 @11:47PM (#47391485) Homepage

    Then you have never looked at a ten line C program to implement a PID control loop for a servo motor.

    I don't think that would count as learning. That ten-line program will always do exactly what it was programmed to do, neither more nor less. An adaptive program (in the sense the previous poster was attempting to describe) would be one that is able to figure out on its own how to do things that its programmers had not anticipated in advance.
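    The distinction the parent draws can be made concrete: a program whose input-to-output mapping changes with experience differs in kind from one whose mapping is fixed, even when both are short. A toy illustration in Python (the function and data are hypothetical, not from any comment):

    ```python
    # One "learned" parameter, fit by gradient descent on squared error.
    # A human still wrote the procedure, but the mapping from input to
    # output is shaped by the data rather than fixed in the source code.
    def train_gain(samples, lr=0.1, steps=100):
        """Fit y = gain * x to (x, y) pairs."""
        gain = 0.0
        for _ in range(steps):
            for x, y in samples:
                err = gain * x - y
                gain -= lr * err * x     # adjust the parameter from feedback
        return gain

    g = train_gain([(1.0, 2.0), (2.0, 4.0)])   # underlying rule: y = 2x
    ```

    Whether this yet counts as "figuring out things its programmers had not anticipated" is exactly the question the thread is arguing about.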

  • by Yoda's Mum (608299) on Saturday July 05, 2014 @11:53PM (#47391497)

    Sorry, why's it a problem? If artificial human-sparked intelligence is the logical replacement for biological evolution of homo sapiens, so be it. Survival of the fittest.

  • by O('_')O_Bush (1162487) on Sunday July 06, 2014 @12:05AM (#47391519)
    Yea. Their first step is flying cars.

    There are way too many uncertainties of what will be technologically possible by 2045 to be worrying about that right now. I'd wait until we actually had some idea of how to make a machine intelligence, and work the kinks out in a closed environment enough that it might actually be given control of something rather than the role of Ask Jeeves.
  • by OrangeTide (124937) on Sunday July 06, 2014 @12:40AM (#47391597) Homepage Journal

    Your cell phone is less capable of learning than a jellyfish. Although your cell phone can sometimes simulate very simple learning under extremely rigid frameworks for learning.

    A human-competitive AI in 30 years? Seems unlikely, given the almost zero progress on the subject in the last 30 years. But maybe we'll hit some point where it all cascades very quickly. If we could do a dog-level intelligence, it would not be a far leap to human-level and superhuman-level. But we have trouble with cockroach levels of intelligence, or even with defining what intelligence is or how to measure it.

    AI research over the last several decades has taught us how little we know about the fundamental nature of ourselves.

  • by Beck_Neard (3612467) on Sunday July 06, 2014 @12:49AM (#47391617)

    Symbolic manipulation as a route to AI was a period of collective delusion in computer science. Lots of people wasted their talents going down this route. In the 80's this approach was all but dead and AI researchers finally sobered up. They started actually learning about the human brain and incorporating the lessons into their designs. It's sad that so much time was wasted on that approach, but the good news is that the new approaches people are using now are based on actual science and grounded in reality. The intelligence in search, natural language, object and facial recognition, and self-driving cars (that ShanghaiBill pointed out) is due to these new approaches.

    AI spent its youth confused and rebellious. That was when you were in your graduate studies. Now it's far more matured. I encourage you to read up on new machine intelligence approaches and the literature in this area. You won't be disappointed.

  • Re:AI is always (Score:5, Insightful)

    by viperidaenz (2515578) on Sunday July 06, 2014 @12:54AM (#47391633)

    It's not going to change its mind halfway to New York and go somewhere else.

    Until a machine can come up with an idea of its own, it's not intelligent.

  • by Spazmania (174582) on Sunday July 06, 2014 @12:57AM (#47391645) Homepage

    Louis Del Monte estimates that...


    The average estimate for when this will happen is 2040, though Del Monte says it might be as late as 2045. Either way, it's a timeframe of within three decades.

    I hope that's an in-joke. Like construction that's forever two weeks from done, or jam two days a week (yesterday and tomorrow), three decades has been the estimate for "true" AI since the 1970s. Every year, it's just three more decades away.

  • by Knuckles (8964) <knuckles@@@dantian...org> on Sunday July 06, 2014 @02:27AM (#47391873)

    The machine has no fucking clue about what it is translating. Not the media, not the content, not even which languages it is translating to and from (other than a variable somewhere, which is not "knowing"). None whatsoever. Until it does, it has nothing to do with AI in the sense of TAFA (the alarmist fucking article).

  • by thetoadwarrior (1268702) on Sunday July 06, 2014 @03:09AM (#47391949) Homepage
    All those things your smartphone is doing aren't AI. They're still relatively basic commands, just done quickly through increased processing power or by off-loading the work to a server. It might make your phone look like it can talk to you, but it's not doing any more than computers in the 80's did.
  • by msclrhd (1211086) on Sunday July 06, 2014 @06:03AM (#47392231)

    The chess programs had the rules of chess programmed into them, and the move to play was calculated by rating different moves in the search space using an algorithm that was programmed by the developers of the AI system. This means that it is only specialised to chess.

    To be the AI in movies like The Terminator, the program will need to be able to learn the rules and strategies of chess itself, and adapt its algorithm over time. To simplify the problem of recognising the elements on the board (machine vision), you could represent the board as an 8x8 array of Unicode characters.

    Teaching the rules is difficult because you need a way of communicating those rules, which means that the program will need to understand language and the meaning behind the language (or enough meaning to understand rules to a particular game). Also, chess has a lot of rules that can be complex (en passant, castling, etc.) so it would be better to start with a simple game like tic tac toe or connect 4.

    The real threat is not in a generic AI that deems humans as a threat, but a specially tasked program or AI that miscalculates: allowing machines to control drones or military aircraft to perform air strikes, or similar things. There, if a machine gets things wrong it can cause untold destruction. Think SkyNet/The Terminator, but here the machines do not know what they are doing (they don't have independent thought or understanding like humans and animals), they just classify humans (or buildings) as a threat -- that is, this can be via a decision tree like in the chess games and the best "move" is to attack any building.
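    The description above of a game engine -- hand-coded rules plus a programmed move-rating search -- can be shown at tic-tac-toe scale, the simple game suggested above. A minimal minimax sketch in Python (illustrative only; nothing here is from the article, and note that the rules and the rating procedure are both written by hand, which is the point):

    ```python
    # All eight winning lines of a 3x3 board, indexed 0..8 row by row.
    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(board):
        """Return 'X' or 'O' if that side has a completed line, else None."""
        for i, j, k in LINES:
            if board[i] and board[i] == board[j] == board[k]:
                return board[i]
        return None

    def minimax(board, player):
        """Return (score, move) for `player`; 'X' maximizes, 'O' minimizes."""
        w = winner(board)
        if w:
            return (1 if w == 'X' else -1), None
        moves = [i for i, cell in enumerate(board) if not cell]
        if not moves:
            return 0, None                     # full board, no winner: draw
        best = None
        for m in moves:
            board[m] = player                  # try the move...
            score, _ = minimax(board, 'O' if player == 'X' else 'X')
            board[m] = None                    # ...then undo it
            if best is None or (player == 'X') == (score > best[0]):
                best = (score, m)
        return best
    ```

    Nothing in this program generalizes beyond tic-tac-toe: the lines, the turn order, and the scoring are all baked in, and perfect play from an empty board is scored as a draw. A chess engine is the same idea with a vastly larger search space and a hand-tuned evaluation function.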

  • by Anonymous Coward on Sunday July 06, 2014 @06:19AM (#47392269)

    Welcome to the http://en.m.wikipedia.org/wiki/Chinese_room

    Q: if there was a human dumb savant who could translate instantly between multiple languages, though without understanding how he did it (think Rainman), would you say he was not intelligent? Why? What is intelligence? We are inconsistent - we praise humans as intelligent when they can perform some complex algorithm well (chess), and yet as soon as a computer beats a human, or all humans, we denigrate the task as "not intelligence". Often the reason is "just an algorithm", but as a neuroscientist knows, that is a poor excuse - it's algorithms all the way down.

  • by Megol (3135005) on Sunday July 06, 2014 @09:31AM (#47392971)

    I'm not afraid of future technology - there are too many things to be afraid of already: nuclear weapons, chemical weapons, biological weapons (including engineered ones), ignorance and the demonizing of opponents (which creates most wars), hubris, fanaticism, legalized corporate lobbying and bribery, plus a lot more.

    But being afraid never helps, being aware of dangers can.

  • by SerpentMage (13390) <(ac.oohay) (ta) (ssorGHnaitsirhC)> on Sunday July 06, 2014 @06:35PM (#47395811)

    Exactly! I have been telling people that machines will not wipe us out because they will become as stupid as we are.

    Don't believe me? Here is my argument. Humans actually are very intelligent. I am not saying that some are more intelligent than others. I am saying we as a species are rather intelligent. However, it is that intelligence that gets in our way. When humans look at a problem they see answers. If the problem is science then the answer is relatively simple and we have devised ways to ensure our errors do not get in the way.

    But here is where the tricky bit comes in. If the problem is not entirely scientific and involves the interactions of humans, or interactions of any living beings (e.g. human to environment), then our decisions become stochastic: the same basis yields completely different results. This is not due to a lack of knowledge. TRUST ME, it is not. It is due to people weighing certain aspects more heavily than others. We all do this. You would think that we all come to the same conclusion, but we don't! It is this stochastic behavior that machines will have as well.

    For when machines become "aware" they will see the facts in different lights than say other machines. It is only natural because machines cannot store all information about everything. They, like humans, will have to optimize, prune and figure it out. Thus they like us will make stochastic decisions! I am even thinking that machines will turn into the Monty Python Holy Grail missions, and even though that sounds silly it will.

    Of course machines might have more capacity than humans, but even there I am skeptical because humans will have brain implants and be cyborgs and the cycle of lunacy will start all over again. IMO the most accurate representation of the dilemma of humans and machines is the Matrix. Watch it closely and see what its basis is.
