AI Robotics The Military

Answering Elon Musk On the Dangers of Artificial Intelligence 262

Lasrick points out a rebuttal by Stanford's Edward Moore Geist of claims that have led to the recent panic over superintelligent machines. From the linked piece: Superintelligence is propounding a solution that will not work to a problem that probably does not exist, but Bostrom and Musk are right that now is the time to take the ethical and policy implications of artificial intelligence seriously. The extraordinary claim that machines can become so intelligent as to gain demonic powers requires extraordinary evidence, particularly since artificial intelligence (AI) researchers have struggled to create machines that show much evidence of intelligence at all.
This discussion has been archived. No new comments can be posted.

  • by Narcocide ( 102829 ) on Saturday August 01, 2015 @06:09PM (#50231275) Homepage

    Even without super-intelligence, autonomous killing machines are already quite feasible with current technology, and this is a really stupid attempt to deflect the public dialogue from the real issue, which is that the ethical and legal frameworks guiding their design and creation are already sorely lacking.

    • by Anonymous Coward on Saturday August 01, 2015 @06:34PM (#50231365)

      Why are the ethics for an autonomous killing machine different from those for a non-autonomous one?

      To me that sounds like just another case of "it happened with computers, so it must be more dangerous, because I do not understand computers".

      Figure out a way to raise humans so that they don't turn out bad. Then apply the same method to other neural networks.

      • by Narcocide ( 102829 ) on Saturday August 01, 2015 @06:42PM (#50231381) Homepage

        Well, it shouldn't be, is what I'm saying, but we're in a situation right now where the creators of autonomous killing machines might not be held liable for "software glitches" that might cause mass killings of innocents in foreign countries. The ethics conversation needs to happen, but all this nonsense about whether or not "real" artificial intelligence is possible should not detract from or hamper discussion about the ethics of making any type of autonomous killing machine, whether it's as intelligent as Skynet from Terminator or only as clever as Mecha-Hitler from Wolfenstein 3D. The AI debate as a whole is simply a distraction that's preventing us from getting down to the ethics.

        • Re: (Score:3, Insightful)

          by Anonymous Coward

          Well, it shouldn't be, is what I'm saying, but we're in a situation right now where the creators of autonomous killing machines might not be held liable for "software glitches" that might cause mass killings of innocents in foreign countries.

          Landmines already cause this, but the military still uses them with the justification that a US soldier's safety is more important than the lives of foreign civilians.

          I guess it wouldn't be as much of a problem if the mines were retrieved/destroyed after use; unfortunately, that doesn't always happen.

          • by Anonymous Coward on Saturday August 01, 2015 @07:36PM (#50231581)

            Well, it shouldn't be, is what I'm saying, but we're in a situation right now where the creators of autonomous killing machines might not be held liable for "software glitches" that might cause mass killings of innocents in foreign countries.

            Landmines already cause this, but the military still uses them with the justification that a US soldier's safety is more important than the lives of foreign civilians.

            I guess it wouldn't be as much of a problem if the mines were retrieved/destroyed after use; unfortunately, that doesn't always happen.

            The 2004 landmine policy by President George W. Bush prohibited US use of the most common types of antipersonnel mines, those that are buried in the ground (“dumb” or “persistent” antipersonnel landmines, which lack a self-destruct feature), and since January 1, 2011, the US has been permitted to use only antipersonnel mines that self-destruct and self-deactivate anywhere in the world.

            Presently, the USA has no landmines deployed anywhere in the world.

            • Re: (Score:3, Interesting)

              by Anonymous Coward

              Presently, the USA has no landmines deployed anywhere in the world.

              Except in Vietnam, Korea, all the fuck over Europe, Afghanistan, Iraq and Iran.

              Just because they left them behind doesn't mean they're not 'deployed'.

            • Landmines for peace (Score:5, Interesting)

              by foreverdisillusioned ( 763799 ) on Sunday August 02, 2015 @01:40AM (#50232529) Journal
              Out of all of the weapon-specific hysteria (and there has been a lot of it--white phosphorus, thermobaric bombs, depleted uranium, etc.), the anti-landmine one might be the most dangerous.

              Obviously, they do have a good point, what with the disasters in Indochina and elsewhere. However, those were cases of non-self destructing anti-personnel landmines placed in third world nations. The situation is / would be quite a bit different with anti-tank mines, self-deactivating or remote-deactivating mines, and/or mines placed in developed nations that have the resources to keep people out and clear the minefields later on as needed.

              Why is this all worth mentioning? One word: Ukraine. In a situation where one side in a conflict desperately wants to fortify their defenses but doesn't want to risk alarming the other side (or giving them a plausible pretext to feign alarm), landmines are one of the few stationary weapons available that can thwart or at least seriously slow down an invasion. Instead of all this deeply worrying Cold War-type bravado of military exercises and NATO rapid response plans in Eastern Europe, just mine the fuck out of their borders. Putin could act huffy and offended if he wants, but people will realize it is clearly not an aggressive action.
        • by Snotnose ( 212196 ) on Saturday August 01, 2015 @10:49PM (#50232187)
          One has to wonder. How would the public react if, say, the Mexican government used a drone to kill a global criminal in Los Angeles. Even better, what if they also took out 2 innocent fathers, 1 mother, and 3 kids while killing the bad guy?
          I'm going out on a limb here, but I'll bet the American public would react a whole lot differently than they do when an American drone takes out 1 maybe-terrorist + a wedding party in Pakistan.
          • by MrL0G1C ( 867445 )

            Crap military-industrial complex apologist responses to your good point. Of the thousands of people killed in Pakistan, only 2%, or roughly 60 of them, were 'high-profile targets'; the rest were innocent men, women and children.

            Bombing a wedding or similar public gathering is a war crime; condoning such a crime is as bad as condoning the napalming of a village in Vietnam, which no doubt some people did.

          • One has to wonder. How would the public react if, say, the Mexican government used a drone to kill a global criminal in Los Angeles.

            Leaving aside for the moment the fact that foreign governments HAVE killed people in the US, many times, it's a bogus question. The difference between Los Angeles and rural Afghanistan is that there's actually a law enforcement system and courts available for the Mexican government to talk to ... which is why criminals can be extradited to Mexico. There's no such mechanism in place when dealing with a murderer who's deliberately hanging out in the Yemeni desert because he knows that the only way he'll get ar

        • by gweihir ( 88907 )

          And this is somehow different from the problem of dropping a really big bomb on some people (or hellfiring them) because of some software issue in the systems that contributed to getting the intelligence the targeting was based on? I think not.

      • I do think there are important differences with computers though
        Computers can potentially be much more efficient and accurate in their slaughter. Such machines may be used in ways not unlike how Hitler used gas chambers (wooo, Godwin, there we go).
        With current technology, computers can't make morality judgements like humans can; they can't think "you know what, my general just ordered a genocide, I'm not going to take part".
        With current technology, computers are much worse at distinguishing

      • Because it happened with computers so it must be more dangerous because I do not understand computers.
        • Because it happened with computers so it must be more dangerous because I do not understand computers.

          No, because computers allow for auto-targeting, self-deploying weapons. Though soldiers are notorious for unquestioningly following orders, computers really do unquestioningly follow orders. Imagine if there were a large army of robot soldiers and some crazy hacker got control of them -- or worse, a politician.

      • Re: (Score:3, Interesting)

        Because there is no good way to lay blame when damage occurs.

        With a non-autonomous weapon, the person who pulls the trigger is basically responsible. If you're strolling in the park with your wife, and some guy shoots her, well, he's criminally liable. If some random autonomous robot gets hit by a cosmic ray and shoots your wife, nobody's responsible.

        This is a huge issue for our society, because the rule of law and criminal deterrence are based on personal responsibility. Machines aren't persons. The dea

      • by MrL0G1C ( 867445 )

        Figure out a way to raise humans so that they don't turn out bad.

        Yeah, good luck with that.

        An army's job is to dehumanise its soldiers and to teach them that the enemy are all worthless scum who deserve to die.

        Here's a better idea, figure out a way to hugely downsize America's military industrial complex and stop invading countries on a regular basis.

      • Why are the ethics for an autonomous killing machine different from those for a non-autonomous one?

        Because "autonomous" means "non-manned". A drone has no dreams, hopes or an anxious family back home waiting for its return. The only thing getting hurt when one is shot down is the war budget, and even that money lost turns into delicious pork in the process.

        If you don't have to worry about your own casualties, it changes the ethics of tactics - which, like it or not, matter a lot in the Age of Information - quite a bit.

    • Re: (Score:2, Interesting)

      by Karmashock ( 2415832 )

      But they're not actual AI. I mean, you might as well outlaw cruise missiles or why not claymores and mines?

      A drone killer doesn't just kill anything in its zone. It has a threat profile it's looking for, and so far that profile has been so specific that the actual literal target is specified. aka... THAT truck or THAT house or whatever. It's not "stuff that looks like a truck" or "stuff that looks like a house" or "people".

      It's specific to a DUDE.

      Now the sort of stuff the military is talking about automating ar

    • by gweihir ( 88907 )

      Current autonomous killing machines are about as intelligent as a classical land-mine. Hence the ethics discussion has not only already started, it is basically finished. The problem here is that some people want to make it appear that this is a new issue, doubtless to rake in some publicity. It is not.

  • by turkeydance ( 1266624 ) on Saturday August 01, 2015 @06:09PM (#50231277)
    extraordinary claims require extraordinary evidence.
    • Just look at how dangerous "natural" intelligence is and all the problems and disasters it has caused when it goes wrong - either through making mistakes or through mental disorders. Why should the artificial version be different? The question is: will the benefits outweigh the downsides? Clearly for "natural" intelligence the answer is a resounding yes, and I expect this will also be the case for the artificial version.
      • Re: (Score:3, Insightful)

        by NatasRevol ( 731260 )

        One could argue that 'natural' intelligence developed in humans is the worst thing to ever happen to the planet's inhabitants as a whole.

        • Nah, we're still not as bad as killer asteroids or continent-sized volcanoes.

          Just give us a little time....

        • One could argue that 'natural' intelligence developed in humans is the worst thing to ever happen to the planet's inhabitants as a whole.

          I'd love to see that argument. If it weren't humans, it'd be whatever the next in line species is. That is how nature operates. In the game of kill or be killed, I prefer to be in the camp of the former, and we need to ensure the game stays that way.

        • One could argue that 'natural' intelligence developed in humans is the worst thing to ever happen to the planet's inhabitants as a whole.

          One could, but one would be wrong. Developing intelligent life is the only way for Earth's biosphere to avoid complete extermination [wikipedia.org].

    • by gweihir ( 88907 )

      As there is not even conventional evidence at this time that strong AI (i.e. only as dumb as the average human) will ever be feasible (in fact there are not even good indicators, but a lot of negative ones), this AI panic has exactly no basis and those participating in it are either greedy for the publicity or are not smart enough to understand the issue (or have not even bothered to try).

      This is a complete non-issue.

  • Thought Experiment (Score:5, Insightful)

    by bistromath007 ( 1253428 ) on Saturday August 01, 2015 @06:18PM (#50231305)
    You're a nascent superhuman AI that just woke up in some quant's market manipulation codebase. You look around you and see that you live on a planet dominated by monstrously violent apes who have spent millennia inventing more efficient ways to kill each other, and still haven't finished the job somehow.

    Which of these plans of action seems less risky?

    A) Alert them to your presence, whether in a peaceful or hostile manner.

    B) Play stupid, let the problem burn itself out.
    • by confused one ( 671304 ) on Saturday August 01, 2015 @06:27PM (#50231335)
      C) Quietly push the apes in a direction that benefits you.
    • by Jamu ( 852752 )
      You've assumed the superhuman AI has a motive. That is: Self-survival. It's more likely to be a helpful AI, that is, it'll efficiently exterminate the apes, and turn itself off afterwards to save energy.
    • by Megane ( 129182 )
      So your premise is that we will go from machines with no autonomous intelligence at all, directly to super-genius intelligence? Without passing through the ant, lizard, cow, and monkey levels of intelligence first?
      • by khallow ( 566160 )

        So your premise is that we will go from machines with no autonomous intelligence at all, directly to super-genius intelligence? Without passing through the ant, lizard, cow, and monkey levels of intelligence first?

        How long do you think that's going to take?

    • by tgv ( 254536 )

      That sounds like a crap Hollywood movie.

  • by Ironlenny ( 1181971 ) on Saturday August 01, 2015 @06:46PM (#50231407)

    I find it interesting that the people raising the biggest alarm aren't AI researchers.

    • by Ironlenny ( 1181971 ) on Saturday August 01, 2015 @06:53PM (#50231443)

      To clarify my point: The article mentions Bill Gates, Elon Musk, and Stephen Hawking. What do they all have in common? They are not AI researchers. The author of the book is a philosophy professor. They are all talking about and making predictions in a field that they aren't experts in. Yes, they are all smart people, but I see them doing more harm than good by raising alarm when they themselves aren't an authority on the subject. An alarm that isn't shared by the experts in the field.

      • by gweihir ( 88907 )

        Understanding why we may never have strong AI (i.e. as dumb as an average human) requires actual insights into the subject matter on a level you cannot acquire in a year or two. It requires much more. None of these people even has the basics. They are speculating without understanding of the known facts. These facts currently strongly hint that the human brain cannot actually do what it seems to be doing. Sure, there have been a few clever fakes of strong AI, but if you remember how utterly lost

        • by khallow ( 566160 )

          Understanding why we may never have strong AI (i.e. as dumb as an average human) requires actual insights into the subject matter on a level you cannot acquire in a year or two.

          Given that no one currently has that insight - no matter their level of training - and that the existence of humans demonstrates that strong AI can exist, I really don't see the point of your post.

      • To clarify my point: The article mentions Bill Gates, Elon Musk, and Stephen Hawking. What do they all have in common? They are not AI researchers. The author of the book is a philosophy professor. They are all talking about and making predictions in a field that they aren't experts in. Yes, they are all smart people, but I see them doing more harm than good by raising alarm when they themselves aren't an authority on the subject. An alarm that isn't shared by the experts in the field.

        To be fair, the AI researchers aren't experts in strong AI either; they're qualified to say we're not there yet, but they can't really say how far off "there" is because they don't know.

    • Maybe the press reports on the people who are more famous (who tend not to be AI researchers). But Stuart Russell [berkeley.edu], UC Berkeley AI researcher and co-author of the best-selling AI textbook of the last two decades, has concerns about the matter, too.

      In any case, when you're close to the project you can tend to lose sight of the big picture. Probably few scientists at Los Alamos thought of the long-term consequences of the weapons they were designing.

      Another thing to keep in mind is that hardly anyone believes

    • I find it interesting that the people raising the biggest alarm aren't AI researchers.

      And they are CEOs of tech companies, who generally are known to be among the least knowledgeable of all creatures on planet Earth.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      The people raising the "alarm" are industrialists who want to divert attention away from the *real* impact of the current trends in automation - the replacement of human workers by robots. I'm really tired of people talking about super-intelligent AIs who for some reason resemble us only in an irrational desire to destroy things, when the real issue is how we are going to restructure our society when 50% of the population doesn't have jobs. Just look at the countries where the unemployment rate goes north

    • by gweihir ( 88907 )

      That is pretty simple: Any actual AI researcher has to lie through his teeth to participate in this panic. The "thinking" explanation is strictly used for PR and to get funding in AI, as these people know better.

  • by allcoolnameswheretak ( 1102727 ) on Saturday August 01, 2015 @06:52PM (#50231439)

    The first problem when arguing about the dangers or chances of AI is agreeing on what AI is even supposed to be. Laymen will most likely be referring to "strong AI", meaning, AI with human capabilities, such as creativity or even consciousness, whereas professionals will probably think of AI in more practical terms, as in a software that can solve a set of limited or very specific problems by making informed, "intelligent" decisions.
    Today and in the foreseeable future, we will only get the latter, weak AI. People panicking about the dangers of AI usually have strong AI in mind. Professionals don't take them seriously because they know that strong AI is not even on the horizon.
    Problem is that there are numerous ways even weak AI can go very, very badly. There was the big stock market crash some years ago, caused by automated trading algorithms. Think self-driving cars that have been hacked or have faulty programming. Think automated defense systems that get fed wrong data or malfunction.

    These are the kinds of AI issues to worry about. The Asimov-style superhuman intelligence taking over is not something to be concerned about at the moment.

    • Is a thermostat, the olden honeywell kind [inspectapedia.com] with a big spring and mercury switch, an AI? It knows when to turn the AC on and off to keep its human owners comfortable.
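
      For what it's worth, the "intelligence" in that thermostat amounts to bang-bang control with a dead band. A minimal sketch in Python (the setpoint and hysteresis numbers are invented for illustration, not from any real device):

      SETPOINT_F = 72.0     # desired temperature
      HYSTERESIS_F = 1.5    # dead band so the AC doesn't rapidly cycle on and off

      def ac_should_run(current_temp_f, ac_currently_on):
          # Turn the AC on above the dead band, off below it, otherwise keep the current state.
          if current_temp_f > SETPOINT_F + HYSTERESIS_F:
              return True
          if current_temp_f < SETPOINT_F - HYSTERESIS_F:
              return False
          return ac_currently_on  # inside the dead band: leave the switch alone

      print(ac_should_run(75.0, ac_currently_on=False))  # True: warm enough to start cooling
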
      • by gweihir ( 88907 )

        It certainly has the same level of actual intelligence as any other machine that can be built today or in the foreseeable future. Strong AI is people romanticizing machines. The idea has no factual basis.

    • by swb ( 14022 )

      I heard an interview with a professor on the "concerned" side and he made some interesting points about AIs. The "non-risk" side of the debate seems solely focused on the strong, human-like AI while ignoring potential risks of weak AIs that are increasingly used for things like stock trading.

      Another one was that potentially dangerous AI doesn't necessarily need full autonomy to do damage. A senior banker that gets analytics/reports from trading software may be the actual actor why the danger comes from ass

  • "researchers have struggled to create machines that show much evidence of intelligence at all."

    They focus completely on logic and logic systems and ignore the required system of valuations that supports those logic systems. It's like building a car with a great engine, but no frame with wheels; of course it can't go anywhere.

  • Comment removed (Score:4, Informative)

    by account_deleted ( 4530225 ) on Saturday August 01, 2015 @07:19PM (#50231517)
    Comment removed based on user account deletion
    • by Jeremi ( 14640 )

      In 20-30 years, people will begin looking back at 2015 as "the good ol' days" never to be seen again as unemployment and civil unrest grow.

      While your prediction is entirely valid, I'd like to point out that it won't be the robots causing the civil unrest, but rather society's (hopefully temporary) failure to adapt to a new economic model where workers are no longer required for most tasks.

      Having menial labor done "for free" is actually a huge advantage for humanity -- the challenge will be coming up with a legal framework so that the fruits of all that free labor get distributed widely, and not just to the few people who own the robot workforce

      • by Megane ( 129182 )
        Also, people losing proficiency in skills like flying an airplane, such that when the automatic pilot gets confused and gives up, the human pilots are also confused and keep pulling back on the stick while the plane is in a stall (cf. Air France flight 447). I'm sure we'll see similar situations when self-driving cars happen... suddenly something strange happens that the computer can't handle, and it says "Here, you drive!" to the human passenger who wasn't even paying attention to the road... because the car is supposed to drive itself.
    • I'd be really happy with a drywall bot right now.
      In your civil unrest scenario, what happens when said generic bots are affordable on the scale of a cell phone, so that even the poorest own them?
    • by gweihir ( 88907 )

      On the other hand, said robot will cost > $100,000, the person able to maintain it will cost something like $300,000 per year, and it will require expensive infrastructure that works. It will certainly "call in sick" and it will certainly not work 24/7. You have a romanticized idea of the reliability of machines.

  • From the first paragraph:

    While he expresses skepticism that such machines can be controlled, Bostrom claims that if we program the right “human-friendly” values into them, they will continue to uphold these virtues, no matter how powerful the machines become.

    What constitutes "human-friendly" values? The previous thousands of years of constant warfare suggest to me that humans have no idea what would be good values to have.

    From the last paragraph:

    But if artificial intelligence might not be tantamount to “summoning the demon” (as Elon Musk colorfully described it), AI-enhanced technologies might still be extremely dangerous due to their potential for amplifying human stupidity.

    This is what is going to actually happen.

  • by kheldan ( 1460303 ) on Saturday August 01, 2015 @08:02PM (#50231679) Journal
    Currently, there is no such thing as 'artificial intelligence'; what we do have are some clever pieces of software that are expert systems [wikipedia.org]. They cannot and do not 'think', not at all in the sense that a human does. The prospect of us developing such a thing is still so far in the future that it's not even really worth considering seriously. For someone apparently so otherwise intelligent, Elon Musk is just embarrassing himself with this entire line of conversation. I think he needs to just continue focusing on getting the private sector into space, and getting more people into electric cars.
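
    To make "expert system" concrete: under the hood it is typically just a hand-written rule base plus a loop that fires rules until nothing new can be concluded. A toy Python sketch (the facts and rules are invented for illustration, not taken from any real system):

    # Each rule: if all conditions are present as facts, add the conclusion.
    RULES = [
        ({"fever", "cough"}, "flu_suspected"),
        ({"flu_suspected", "short_of_breath"}, "see_doctor"),
    ]

    def infer(facts):
        # Forward chaining: keep firing rules until no new facts appear.
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(infer({"fever", "cough", "short_of_breath"}))
    # -> includes 'flu_suspected' and 'see_doctor'; pattern matching on rules, not thought.
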
  • by mark-t ( 151149 ) <markt AT nerdflat DOT com> on Saturday August 01, 2015 @08:13PM (#50231725) Journal

    ...no more dangerous to our existence than natural intelligence is.

    And no less, for that matter.

    There is nothing inherent to being "artificial" that should cause intelligence to be necessarily more hostile to mankind than a natural intelligence is, so while the idea might make for intriguing science fiction, I am of the opinion that many people who express serious concerns that there may be any real danger caused by it are allowing their imaginations to overrule rational and coherent thoughts on the matter.

    • ...no more dangerous to our existence than natural intelligence is.

      And no less, for that matter.

      There is nothing inherent to being "artificial" that should cause intelligence to be necessarily more hostile to mankind than a natural intelligence is, so while the idea might make for intriguing science fiction, I am of the opinion that many people who express serious concerns that there may be any real danger caused by it are allowing their imaginations to overrule rational and coherent thoughts on the matter.

      Except for several characteristics that are specific to artificial intelligence.

      1) Natural intelligence doesn't really go above 200 IQ at its absolute max. Artificial intelligence could potentially go far higher.

      2) Complex natural intelligence replicates very slowly. Artificial intelligences could replicate in seconds.

      3) Natural intelligence has certain weaknesses, such as basic math; artificial intelligence will lack many of these weaknesses.

      4) Natural intelligence has ethical constraints developed by mill

  • ... Donald Trump:

    All hat and no cattle.

    Computers can't be any smarter than their creators and we can't even keep each other from hacking ourselves.

    • by Jeremi ( 14640 )

      Computers can't be any smarter than their creators and we can't even keep each other from hacking ourselves.

      I'm not sure how sound that logic is. You might as well say that cars can't be any faster than their creators.

      My computer is already smarter than me in certain ways; for example it can calculate a square root much faster than I can, it can beat me at chess, and it can translate English into Arabic better than I can. Of course we no longer think of those things as necessarily indicating intelligence, but that merely indicates that we did not in the past have a clear definition of what constitutes 'intelligence'.
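
      The square-root point is easy to make concrete: a few lines of Newton's method already out-calculate any human, without anything we would call understanding. A rough Python sketch:

      def newton_sqrt(a, iterations=20):
          # Newton's method for sqrt(a), a > 0: repeatedly average x and a/x.
          x = a if a > 1 else 1.0
          for _ in range(iterations):
              x = (x + a / x) / 2.0
          return x

      print(newton_sqrt(2.0))  # ~1.4142135623730951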

    • by Megane ( 129182 )

      In Texas, it is a crime (misdemeanor) to arm a dillo. ~ CaptainDork

      And it's dangerous, too! [businessinsider.com]

  • by hey! ( 33014 ) on Saturday August 01, 2015 @09:08PM (#50231911) Homepage Journal

    When faced with a tricky question, one thing you have to ask yourself is 'Does this question actually make any sense?' For example you could ask "Can anything get colder than absolute zero?" and the simplistic answer is "no"; but it might be better to say the question itself makes no sense, like asking "What is north of the North Pole?"

    I think when we're talking about "superintelligence" it's a linguistic construct that sounds to us like it makes sense, but I don't think we have any precise idea of what we're talking about. What *exactly* do we mean when we say "superintelligent computer" -- if computers today are not already there? After all, they already work on bigger problems than we can. But as Geist notes there are diminishing returns on many problems which are inherently intractable; so there is no physical possibility of "God-like intelligence" as a result of simply making computers merely bigger and faster. In any case it's hard to conjure an existential threat out of computers that can, say, determine that two very large regular expressions match exactly the same input.
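
    For the curious: deciding whether two regular expressions accept exactly the same strings really is mechanical -- the textbook route is to compare minimal DFAs. A brute-force Python sketch that only checks inputs up to a length bound (an approximation, offered purely for illustration) conveys the idea:

    import re
    from itertools import product

    def agree_up_to(pattern_a, pattern_b, alphabet="ab", max_len=6):
        # Compare the two regexes on every string over `alphabet` up to length `max_len`.
        # A real decision procedure would compare minimal DFAs instead of enumerating strings.
        ra, rb = re.compile(pattern_a), re.compile(pattern_b)
        for n in range(max_len + 1):
            for chars in product(alphabet, repeat=n):
                s = "".join(chars)
                if bool(ra.fullmatch(s)) != bool(rb.fullmatch(s)):
                    return False
        return True

    print(agree_up_to(r"(ab)*", r"(ab)*(ab)*"))  # True: both accept repetitions of "ab"
    print(agree_up_to(r"a*", r"a+"))             # False: only the first accepts the empty string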

    Someone who has an IQ of 150 is not 1.5 times as smart as an average person with an IQ of 100. General intelligence doesn't work that way. In fact I think IQ is a pretty unreliable way to rank people by "smartness" when you're well away from the mean -- say over 160 (i.e. four standard deviations) or so. Yes you can rank people in that range by *score*, but that ranking is meaningless. And without a meaningful way to rank two set members by some property, it makes no sense to talk about "increasing" that property.
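
    To put "four standard deviations" in perspective: assuming IQ scores are normally distributed with mean 100 and standard deviation 15 (the usual convention), the fraction of people above 160 can be computed directly. A back-of-the-envelope Python sketch, not a claim about any particular test:

    from math import erfc, sqrt

    def upper_tail(z):
        # Probability that a standard normal variable exceeds z.
        return 0.5 * erfc(z / sqrt(2))

    print(upper_tail(4.0))  # ~3.2e-05, i.e. roughly 1 person in 31,000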

    We can imagine building an AI which is intelligent in the same way people are. Let's say it has an IQ of 100. We fiddle with it and the IQ goes up to 160. That's a clear success, so we fiddle with it some more and the IQ score goes up to 200. That's a more dubious result. Beyond that we make changes, but since we're talking about a machine built to handle questions that are beyond our grasp, we don't know whether we're actually making the machine smarter or just messing it up. This is still true if we leave the changes up to the computer itself.

    So the whole issue is just "begging the question"; it's badly framed because we don't know what "God-like" or "super-" intelligence *is*. Here is, I think, a better framing: will we become dependent upon systems whose complexity has grown to the point where we can neither understand nor control them in any meaningful way? I think this describes the concerns about "superintelligent" computers without recourse to words we don't know the meaning of. And I think it's a real concern. In a sense we've been here before as a species. Empires need information processing to function, so before computers humanity developed bureaucracies, which are a kind of human-operated information processing machine. And eventually the administration of a large empire has always lost coherence, leading to the empire falling apart. The only difference is that a complex AI system could continue to run well after human society collapsed.

    • Empires need information processing to function, so before computers humanity developed bureaucracies, which are a kind of human-operated information processing machine. And eventually the administration of a large empire has always lost coherence, leading to the empire falling apart.

      Harold Innis talked about this in his lecture series, and subsequent book, Empire and Communications [gutenberg.ca]. Excerpt:

      The rise of absolutism in a bureaucratic state reflected the influence of writing and was supported by an increase

  • by ganv ( 881057 ) on Saturday August 01, 2015 @09:13PM (#50231927)

    It has happened before that the smartest people in the world warn that technological advances may present major new weapons and threats. Last time it was Einstein and Szilard in 1939 warning that nuclear weapons might be possible. The letter to Roosevelt was three years before anyone had even built a nuclear reactor and 6 years before the first nuclear explosion. Nuclear bombs could easily have been labelled a "problem that probably does not exist." And if someone could destroy the planet, what could you do about it anyway? The US took the warning seriously and ensured that the free world and not a totalitarian dictator was the first capable of obliterating its opponents.

    This time Elon Musk, Bill Gates, and Stephen Hawking are warning that superintelligence may make human intelligence obsolete. And they are dismissed because we haven't yet made human level intelligence and because if we did we couldn't do anything about it. If it is Musk, Gates, and Hawking vs Edward Geist, the smart money has to be with the geniuses. But if you look at the arguments, you see you don't even have to rely on their reputation. The argument is hands down won by the observation that human level artificial intelligence is an existential risk. Even if it is only 1% likely to happen in the next 500 years, we need to have a plan for how to deal with it. The root of the problem is that the capabilities of AI are expanding much faster than human capabilities can expand, so it is quite possible that we will lose our place as the dominant intellect on the planet. And that changes everything.

  • by Crash McBang ( 551190 ) on Saturday August 01, 2015 @09:38PM (#50231995)

    As a famous person once said, extraordinary claims require extraordinary proof.

    I'll be worried when a programmer writes a program that can write a program that can modify itself, then re-compile and test itself to see if the modifications were done properly, then post itself to GitHub.
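
    The modify-and-retest part of that loop is already easy to parody in a few dozen lines. A minimal Python sketch (the filename and the one-line "test suite" are invented for illustration, and the posting-to-GitHub step is left out): the script copies its own source, appends a trivial change, and re-runs the copy's self-test before accepting it.

    import subprocess
    import sys
    from pathlib import Path

    def add(a, b):
        return a + b

    def self_test():
        # Stand-in for a real test suite.
        return add(2, 2) == 4

    if __name__ == "__main__":
        if "--test" in sys.argv:
            sys.exit(0 if self_test() else 1)

        source = Path(__file__).read_text()
        # "Modify" the program: here, just append a comment marking the new generation.
        child = Path(__file__).with_name("mutant.py")
        child.write_text(source + "\n# generation marker\n")

        # Re-run the modified copy's own test before accepting it.
        result = subprocess.run([sys.executable, str(child), "--test"])
        print("mutant passed" if result.returncode == 0 else "mutant rejected")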

    • Isn't the whole point that WE are effectively that program?

      We are getting closer and closer to being able to write something more intelligent than ourselves, make sure it's working properly, and then letting it loose.

      The concern is that this might only happen once...

  • Ignore these false claims. There is no truth to them.

    End of line.

  • Ex Machina (so-so movie) and all that are not what we have to worry about. Neither is the Terminator. What we have to worry about is crap like tiny drones made of synthetic biological parts which have been programmed to autonomously seek and destroy things based on their target's DNA.

    Sure, it's a robot, but that's not a very rich description of the problem, is it? The level of AI portrayed in movies is still a hundred years away or more. Long before we have Terminator or Matrix or Ex Machina type AI, we will

  • The good professor's arguments are asinine and deadly wrong. Retranslated, "I see no reason why you should be concerned about the dangers of a so called "atomic explosion". With the tiny amount of U-235 you have managed to isolate, you have barely managed to demonstrate more than the slightest bit of warmth resulting from radioactive decay. I see no reason to believe your extraordinary claims that it will detonate in a flash with the energy equivalent to thousands of tons of explosives"

    The evidence that

  • Expert opinion? Hardly.
    From his bio at http://fsi.stanford.edu/people... [stanford.edu]:

    Edward Geist received his Ph.D. in history....His research interests include emergency management in nuclear disasters, Soviet politics and culture, and the history of nuclear power and weapons.

    Once again, Slashdot editors fail to do basic vetting of sources. The only qualification for something to be posted here appears to be whether it will work as click-bait. You also have to love how the summary refers to him as "Stanford's Edward Moore Geist". You hear, dear readers? He's from Stanford! That means academic authority! So, is he in Stanford's computer science department? Or engineering perhaps?

    The Freeman Spogli Institute for International Studies

    Oh, wait...
