Robotics AI

Killer Robots Will Only Exist If We Are Stupid Enough To Let Them (theguardian.com) 143

Heritype quotes the Guardian's science correspondent: The idea of killer robots rising up and destroying humans is a Hollywood fantasy and a distraction from the more pressing dilemmas that intelligent machines present to society, according to one of Britain's most influential computer scientists. Sir Nigel Shadbolt, professor of computer science at the University of Oxford, predicts that AI will bring overwhelming benefits to humanity, revolutionising cancer diagnosis and treatment, and transforming education and the workplace. If problems arise, he said, it will not be because sentient machines have unexpectedly gone rogue in a Terminator-like scenario.

"The danger is clearly not that robots will decide to put us away and have a robot revolution," he said. "If there [are] killer robots, it will be because we've been stupid enough to give it the instructions or software for it to do that without having a human in the loop deciding...."

However, Prof Shadbolt is optimistic about the social and economic impact of emerging technologies such as machine learning, in which computer programmes learn tasks by looking for patterns in huge datasets. "I don't see it destroying jobs grim reaper style," he said. "People are really inventive at creating new things for humans to do which will pay them a wage. Leisure, travel, social care, cultural heritage, even reality TV shows. People want people around them and interacting with them."

This discussion has been archived. No new comments can be posted.

Killer Robots Will Only Exist If We Are Stupid Enough To Let Them

Comments Filter:
  • stupid enough (Score:5, Insightful)

    by PopeRatzo ( 965947 ) on Saturday June 16, 2018 @01:39PM (#56795188) Journal

    It's 2018. We've broken through the "stupid enough" barrier.

    • by Anonymous Coward

      I for one welcome our new T-800 and T-1000 overlords.

      Dun dun dun da dun
      Dun dun dun da dun

    • Re:stupid enough (Score:5, Insightful)

      by NoNonAlphaCharsHere ( 2201864 ) on Saturday June 16, 2018 @01:59PM (#56795282)
      We don't have a "human in the loop, deciding" for the current generation of neural net AIs. We don't have a human deciding over genetic algorithms, either. We create a fitness function, back propagation, whatever, and it's off to the races, unintended consequences be damned. The fact that we build/design along the lines of "improve yourself along criterion X" means we're slowly building an existential threat to ourselves whether we realize it or not. Clearly, a soldier-bot has two prime directives: 1) wipe out the enemy 2) stay functional, so you can wipe out the enemy.
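The "fitness function, back propagation, whatever, and it's off to the races" dynamic the comment describes can be sketched as a toy genetic algorithm. Everything below (the `evolve` helper, the bitstring genome, the parameter values) is illustrative, not from any real system; the point is only that the optimizer chases whatever criterion the fitness function rewards, with no human deciding at any step:

```python
import random

def evolve(fitness, genome_len=20, pop_size=30, generations=60, seed=0):
    """Minimal genetic algorithm: mutate-and-select on bitstrings.

    The optimizer pursues whatever `fitness` rewards -- nothing else.
    Any side effect not captured by the fitness function is invisible to it.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]   # selection: keep the fittest half
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(genome_len)] ^= 1  # single point mutation
            children.append(child)
        pop = survivors + children         # elitism: best never gets worse
    return max(pop, key=fitness)

# "Improve yourself along criterion X": here X is simply the count of 1-bits.
best = evolve(sum)
```

Nothing in the loop asks whether maximizing X is a good idea; swapping in a different fitness function redirects the whole search without any other change.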
      • by Z80a ( 971949 )

        Everyone is looking at what a robot soldier is doing, but no one pays attention to the Google search and YouTube recommendation AIs.

        • I've been thinking about the Spotify A.I.s and wondering whether more people choose music based on the style of the song or on the meaning in the words. Because if lots of people like songs with a certain message, then is it possible to end up getting song recommendations in some sort of subliminal suggestion bubble? Like, every week, you end up with songs telling you how great you are, or, worse, some sad country songs about how terrible everything is?
    • TRUMP = EVIDENCE (Score:1, Insightful)

      by Anonymous Coward

      Trump is "in charge" of America, so yes, very fucking stupid people must live here.

    • Re:stupid enough (Score:5, Interesting)

      by Ol Olsoc ( 1175323 ) on Saturday June 16, 2018 @03:14PM (#56795578)

      It's 2018. We've broken through the "stupid enough" barrier.

      We've completely demolished it.

      Olsoc's rule of killy stuff: If there is a method of killing people, Governments will rush to it like a dog to bacon. The more barbaric, the more bacon.

      Olsoc's second rule of killy stuff: Unless people are being killed, there isn't much point of warfare.

      Which brings us to Olsoc's third rule of killy stuff: War robots will be specifically designed to kill people. That is how humans work.

    • It's 2018. We've broken through the "stupid enough" barrier.

      Not yet. Just wait, it'll get better. We're All Getting Dumber, Says Science [slashdot.org].

      Peak oil? Peak coal? I've said we've been at peak intelligence for a while now. Kids don't have to think, students have been taught to regurgitate facts and Google is quite good at answering questions. Now the answer being correct or incorrect, that's literally beside the point. If I FEEL that it's a right answer, then it is. Don't disrespect me just because I'm stupid! You must conform to MY way of thinking to keep from

    • by HiThere ( 15173 )

      IIRC it was sometime last year, or possibly the year before that, that I read about a place in Japan that was deploying totally automated security robots that could (under unspecified circumstances) kill people. Now that's not a very smart AI, but it's still an AI. And it can kill people (who invade its turf?).

      So this has already been reported as happening. (Was the report accurate? Was it just a proposal someone made? Who knows?)

      Then there's the work that's being done for the army on target identific

    • by Anonymous Coward
      Mankind built thousands of atom and hydrogen bombs, so obviously we're going to be stupid enough to build killer robots. It's sweet but not very realistic to think otherwise.
  • We clearly are stupid enough.

    • by Hognoxious ( 631665 ) on Saturday June 16, 2018 @01:45PM (#56795212) Homepage Journal

      Speak for yourself, you insensitive clod!

      I'm just lazy and careless.

    • by Anonymous Coward

      We clearly are stupid enough.

      Yes, we are. Everything I've seen says it's going to use AI. And who knows what those algorithms come up with down the road.

      So, say they're programmed to fight a battle and to learn how to do it effectively. The AI may come up with, say, using civilians as shields. Or using civilians as a way to dupe the enemy and getting them killed in the process.

      Or the AI may decide that the best tactic is to exterminate all life and destroy everything to keep the enemy from advancing.

      Or - the way to end war is to eliminat

    • religionofpeas said...
      >We clearly are stupid enough.

      Exactly. What about human history gives the impression that we will not do this? Hasn't Russia already been working on these? A.I. robots that are fearless, shoot with 100% accuracy, fight 24x7, and don't hesitate to follow orders would be a tremendous force multiplier.

      What country won't find a way to justify creating and using a.i. robots?

    • by gweihir ( 88907 )

      We clearly are stupid enough.

      As a group, most definitely.

    • "We clearly are stupid enough."

      Too bloody right!!!

      The good news, however, is that being stupid, we will probably bungle the robots' hardware and software so badly that they will rank between mutant athlete's foot fungi and rabid pandas as dangers to humanity's future.

      • by HiThere ( 15173 )

        You are, I think, talking about the versions that already exist.

        OTOH, I doubt that we will intentionally create an AI that desires to eliminate humans. The tricky thing will be creating AIs that are useful for all these other goals, like winning the next war, and not creating one (or possibly a pair, created by different combatants) that end up wiping out humanity.

  • but we sure as all hell have stupidity perfected

  • by rsilvergun ( 571051 ) on Saturday June 16, 2018 @01:53PM (#56795256)
    because they'll be cheaper than maintaining a huge standing army and you don't have to worry about a general taking over. The engineers who keep the things running will lack the charisma and ambition to overthrow the current ruling class (they're part of the merchant class after all and will be doing well enough).

    The way to stop this crap is pretty clear. Declare all human beings deserving of a decent quality of life and then make that happen. Get over the fact that you'll have a few surfer dudes and welfare queens that don't work very much or at all (shouldn't be too hard, most of us have long since stopped getting mad at the idle rich with inherited wealth). If you want a population smart enough and paying enough attention to see this kind of crap coming and stop it you need to take care of their basic needs first. Otherwise they'll be too busy fighting for survival to do anything about it, which is kind of the point.
    • Oh, you absolutely have to worry about a general taking over. Lol. I think that scenario has already been written in more than one book and portrayed in at least one movie.

      There is a block of the wealthy who prefer automated servants. They don't leak information and they are not a kidnapping risk to children.

      But that's really a different topic than A.I. hunter killer robots.

  • by Jeremi ( 14640 ) on Saturday June 16, 2018 @01:59PM (#56795278) Homepage

    People won't make or deploy killer robots "by accident". If a robot goes on a killing spree, it will be because somebody deliberately programmed it to go on a killing spree.

    Are people perverse enough to make a machine that will deliberately kill other people, either based on specific entry-conditions or even just randomly? The existence and widespread use of land mines and car bombs demonstrates that the answer is yes.

    So really we know the answer; we're only arguing about an implementation detail: exactly how sophisticated people will allow their automated killing machines' triggering-mechanisms to be.

    • Re: (Score:2, Redundant)

      by zamboni1138 ( 308944 )
      What about the HAL 9000? It wasn't, to use your words, deliberately programmed to kill its crew.
      • by Anonymous Coward

        No, but HAL 9000 was also made up. I agree with the GP, and the HAL 9000 scenario is unlikely. The rogue nations/extremists/uncontrolled corporations are more likely to happen. i.e. less like HAL, more like Robocop.

    • by gweihir ( 88907 ) on Saturday June 16, 2018 @02:45PM (#56795468)

      Not at all. If machines have code and capabilities ready to go on a killing spree, it will also happen by accident. Remember the world was almost nuked by accident several times.

      • If machines have code and capabilities ready to go on a killing spree, it will also happen by accident.

        Machines are like good little Germans and follow orders to the letter when ordered to commit wholesale genocide

      • If machines have code and capabilities ready to go on a killing spree, it will also happen by accident.

        . . . and the machine will answer to that:

        "Well, I don't think there is any question about it. It can only be attributable to human error. This sort of thing has cropped up before, and it has always been due to human error."

    • Re: (Score:3, Interesting)

      by careysub ( 976506 )

      Absolutely. And there is a very effective, and unfortunately extremely plausible warning film about this released in November of last year called SlaughterBots [youtube.com].

      All of the pieces of technology described in this short film are available, and can soon be integrated into the little drone packages depicted.

    • Are people perverse enough to make a machine that will deliberately kill other people, either based on specific entry-conditions or even just randomly?

      Are people perverse enough to deliberately kill other people, in a school or an outdoor concert in Vegas, either based on specific entry-conditions or even just randomly?

      Which brings up an interesting 2nd Amendment question:

      "Do I have a right to bear a killer robot . . . ?"

      . . . and . . .

      "Do killer robots dream of electric innocent victims . . . ?"

    • Depends on your definition of "accident". The machine will do what you tell it to do; any malfunction serious enough that this is not the case will almost certainly be serious enough to disable it completely.

      "I didn't mean to have it do that." "Well, that's what you told it to do." Every software mishap in a nutshell.

  • Only one guy (Score:5, Insightful)

    by lorinc ( 2470890 ) on Saturday June 16, 2018 @01:59PM (#56795280) Homepage Journal

    It takes only one guy with the right capabilities and stupid enough to do it. History has proven that there are plenty of such people. You can be sure that there are plenty of high level military officers in many countries that are day dreaming of something from Screamers, and will do anything that is in their power to make it a reality...

  • What a sweet talk! (Score:5, Interesting)

    by MerlinTheWizard ( 824941 ) on Saturday June 16, 2018 @01:59PM (#56795286)

    I'm not sure this "professor" has really understood what AI was all about. Thinking that any AI-enabled device will just act as it is "programmed to" is clearly simplistic (although by itself a tautology, since software-based machines are just running 'programs') and a complete misconception of where AI is heading to IMO.

    AI without the internal ability of devising new ways of doing things is NOT AI. And by being able to devise new ways, it has pretty much equal chances for them to be bad or good, all the more so since humans have a hard enough time defining clearly what is good or bad, let alone machines.

    This overly "optimistic" talk just sounds like marketing babble, more so than an educated opinion. Sorry, "Sir".

    • Exactly. The a.i. does what we train it to do, but we already have many examples where we were not training it to do what we thought we were training it to do.

      I think strong a.i. is going to be composed of multiple weak a.i. systems. Just as the cerebellum isn't intelligent, and the amygdala isn't intelligent, and the hippocampus isn't intelligent, etc. etc. etc.

      You get some bizarre behavior in humans when the amygdala is broken or damaged.

      Any strong A.I. is going to be so complex that it can't be unders

    • by mspring ( 126862 )
      Of course it's not that "AI" will decide this on its own. It's the rich and powerful who will make "AI" behave that way!
    • with a buzz word (AI in this case). The point still stands even if automated killbots aren't technically AI. We're going to be capable of building them soon and it's going to be a bad thing. The idea is that at the very least you're engaged in the conversation. Now you can at least start thinking about doing something before it's too late.
  • by fuzzyfuzzyfungus ( 1223518 ) on Saturday June 16, 2018 @02:03PM (#56795316) Journal
    The theory that enormously complex software systems specifically designed to be capable of novel behavior definitely won't go off the rails seems like something that you could only embrace if you've never actually interacted with real software as written by real people.

    There is also the...minor...problem that "have a human in the loop deciding" will be a feature that will have to be implemented in software; and we definitely don't have a history of either unhelpful program output or unpleasant reaction to malformed inputs; so that will go well.
    • by gweihir ( 88907 )

      Oh, yes. And most coders are really bad coders. The smart ones build up incredibly complex systems (just look at all the web-application-framework atrocities around) that in the end nobody using them understands anymore and that most definitely will have surprising behaviors. Also, due to cost factors (and because it is difficult to find the total scum needed to implement "humans in the loop") this will be optimized away, and in the end there will just be a brittle command channel where a general "kill" ord

    • this is something folks seem to ignore. Thanks to telecom and private jets the Rich live nowhere near the misery and horror they cause. They're completely removed from it. So much so you can't even get to them to revolt as it is. In the past they couldn't do this because they didn't have the resources to monitor their empire and prevent uprising. That's not true anymore. They could care less if a kill bot goes crazy in a city while they're 1000 miles away in the Hamptons. Just like the ruling class of Japan
  • So, basically... (Score:5, Interesting)

    by flargleblarg ( 685368 ) on Saturday June 16, 2018 @02:05PM (#56795322)
    Killer robots will exist.
    • by gweihir ( 88907 )

      Most assuredly. There are boatloads of money to be made, there are masses of people that are willing to see "undesirables" get killed, there are very few people that do understand the actual, massive dangers. The human race, as a group, is stupid, vicious and driven by fear and greed. About the worst combination possible.

  • Just a continuation of what is happening with the human race..

    For millennia, if not longer, they go on killing each other for the weirdest reasons.

    Not the ones having some meat on the issue but their subordinates in various fashions.
    How does this happen that people get so excited about something that they lose their common sense, or maybe they never had it?

    Is it the duty to "your country", in itself a non-existent reality except in the thought concepts in some skulls.
    Or does it come from pissing in every
    • by gweihir ( 88907 )

      Still that's ancient, what is happening now seems to be the result of the idiots on top across the globe.

      I don't think they are idiots. I think they just do not care about anybody else and are on the lowest moral level imaginable. To them, killing people, even lots of people and even people that are clearly innocent (children, bystanders, etc.) means nothing. If it gives them a bit of good PR, they will gladly do it.

      As it is, I think the human race still has not learned to recognize psychopaths, sociopaths and extreme narcissists and consistently falls for their tricks and then supports the evil they do. I am

      • by no-body ( 127863 )
        Still that's ancient, what is happening now seems to be the result of the idiots on top across the globe.

        I don't think they are idiots.

        Maybe, maybe not....

        From https://en.wikipedia.org/wiki/Idiot

        Until 2007, the California Penal Code Section 26 stated that "Idiots" were one of six types of people who are not capable of committing crimes. In 2007 the code was amended to read "persons who are mentally incapac
  • by duke_cheetah2003 ( 862933 ) on Saturday June 16, 2018 @02:21PM (#56795380) Homepage

    Beware the power of stupid people in large groups.

    In all honesty, assuring me that we'll never see killer robots because we'd have to be incredibly stupid to make such a thing... not much assurance.

    You're talking about a species where a not-insignificant number of people believe the earth is flat. Yes. In 2018. It's true.

    A majority of humans are convinced there's an invisible man living in the sky who watches everything we do, every minute of every day. Really? And you're trying to assure me that we're not stupid enough to make killer robots?

    • by gweihir ( 88907 )

      Indeed. The "incredibly stupid" requirement is something a majority of the human race can fulfil with ease. Just tell them, e.g. that these killer bots are needed to "staunch the flow of child-raping gangs that flood the US from Mexico" and you are golden. What we have is a large part of the population that is incredibly easy to manipulate into basically anything and a small group that has no qualms at all about using this for the most extremely self-serving evil.

  • They already exist (Score:5, Insightful)

    by gweihir ( 88907 ) on Saturday June 16, 2018 @02:25PM (#56795394)

    Every landmine qualifies as a very low capability "killer robot". The insane harm landmines do around the globe is a good indicator that there are by far enough people with power and money and absolutely no qualms about maiming and killing innocent bystanders and civilians in general. Hence we will definitely see killer robots of much higher capabilities, unless we get the fucked-up part of the human race under control that simply cannot stop killing others and using violence to solve disagreements.

  • let's play global thermonuclear war!

    • let's play global thermonuclear war!

      Patience, weedhopper, this too shall happen when the time is right. The only uncertainty is whether it be the righteous vengeance of the Almighty God of the desert, or the insane branch of the atheist. But whoever brings our death wish to us will bring the inevitable fate of humanity to its conclusion.

      Genetic hyper aggression and "us versus them" inbred hate can create an alpha species, but only for a short while.

  • by Kjella ( 173770 ) on Saturday June 16, 2018 @02:36PM (#56795436) Homepage

    Side A builds robots that can't fire without human control. Side B builds jammers. Side A decides robot soldiers need to be able to act in "self-defense". Side B puts civilians in harm's way. Side A decides they need "smart robots" who can tell friend from foe by themselves. Or that we need tighter coordination between light arms, heavy arms, air support, putting down covering fire for advancing troops etc. with so tight margins that it can't be done on manual. If you're being mauled to death by a perfectly coordinated fully automatic enemy you will fight fire with fire. Maybe you're creating the world where we'll lose control of our Terminators. But in the short term if you're not playing the game you're going to lose right now.

    • the ruling class is global now. They no longer fight among themselves. I realized this when Pakistan looked the other way when a bunch of terrorists attacked India's capital. Everybody expected war because that's what happens. But no war. The ones really in charge wouldn't let them.

      We will lose control of the robots from time to time, but it'll be momentary and, most importantly, the ruling class will be far, far away when it happens except for the occasional twit slumming it. That's the real problem wit
      • You've completely missed the point. Money is not power. It's just a way of keeping score.

        In many cases we're talking about multi-generational 1% here. They don't just have money, they have connections. They control governments. You think they do this because they have money? No. They have money because they can do this.

        Is Putin one of the richest men in the world because he was rich? No. He is one of the richest men in the world because he has power.

  • a previous post asserts https://science.slashdot.org/s... [slashdot.org]
  • With the US military and the companies that provide the weapons for it, this kind of stupidity can be taken for granted.

    Sooner or later the US, and probably Israel, will produce autonomous aerial drones (Reaper etc.) and autonomous land based robots (Boston Dynamics). Both have the same problem: they are constantly involved in or start wars abroad, but every soldier coming back home dead, crippled or wounded erodes the support for these wars.
    So to sustain these wars they want weapons that work more and more

  • Look, a nation that feels that they do not have the capabilities to take on others will go to great lengths to cheat. Look at Syria/Russia and the chem weapons. Syria was using them and O was going to invade. Russia intervened and PROMISED to help Syria give up all known chem weapons sites. They gave close to 12. Then ISIS took over and it turned out that Syria had another 5 sites producing chem AND bio. Who told us? Russia. Not Syria.
    Basically, Russia lied about Syria's production of bio-chem.

    Then we h
  • by fluffernutter ( 1411889 ) on Saturday June 16, 2018 @03:04PM (#56795542)
    Sure, just like privacy-invading internet companies will only exist if we let them.
    • And the economic gap between the poor and the ultra-rich will only exist if we let it happen.

      And (whatever bad that has already happened) only happened because we let it happen.

  • that they don't already exist?

  • if somebody builds them.

    And somebody will.

  • Dumb post....moving along....
  • Many countries are one tiny event away from anarchy already. Look what happened when the price of onions went up in India in 2010, the govt nearly fell... and they are just about hitting new price highs again. So, if onions can bring about chaos, what would happen if thousands of jobs started disappearing monthly? The descent into feudalism is more worrying to me than robots with lazers!
  • I just had another brainfart. Individually we are getting stupider but due to our connectedness we no longer need massive IQs. We only need a few individuals clever enough to design an internet, a search engine and Stackoverflow! The rest of us drones can google any questions we need. As to whether we will let killer robots exist - of course we will, we spend millions training KIDS to kill, robots would be better (not from a game theory pov possibly)
  • >"I don't see it destroying jobs grim reaper style," he said. "People are really inventive at creating new things for humans to do which will pay them a wage. Leisure, travel, social care, cultural heritage, even reality TV shows. People want people around them and interacting with them."

    Those things are usually in the realm of entertainment media. The technology already exists that leverages what they do; it's not a traveling band of minstrels that has to make do with a small stage per town. It's in

  • What's a sentient machine?
    A machine told to patrol a part of a nation, a region? 24/7
    That maps every dwelling? Every person moving?
    Is given the command that a set region is now a free fire zone. That's the only human part. The human who plots an area on a GUI map in another part of the world.
    The sentient machine starts to detect movement and brings in systems to enforce total pacification on an insurgency.

    The sentient machine is the mil package that detects movement and that guides in the best militar
  • I have Old Glory Robot Insurance. As long as I keep the premium payments up it'll be all good, especially when I get older. They eat old people's medicine, for food.

  • We've been creating autonomous killers for millennia. Yes, they've been mostly static but they do kill in an unattended, automatic fashion. I guess the first were traps. Modern land mines are far more deadly.

    Why would anyone believe that we're going to stop?

  • It only takes one coder to program AI to do whatever he sees fit (given enough expertise.) Seems likely there are a few out there who would get a kick out of being the guy that created Skynet.

  • So when can we expect delivery of these killer robots?
  • What the hell do they mean by "If We Are Stupid Enough"?

    Surely whoever is president at that time will stop it?

    With big beautiful paper towels, or toilet-water, or something.

  • X won't kill people unless people want them to kill other people.

    replace X with: guns, knives, cars, cyanide, ...

  • we won't be stupid enough to make killer robots?

    And empathy for machines? Gimme a break! We don't even have empathy for other humans. It isn't hard to find examples. Very recently, some extremely non-empathetic people, using some quotes from the Bible as justification, have been ordering other people to take kids away from their parents at the US border and lock both the kids and the parents up in separate locations. Other non-empathetic people have been following those orders.

    How hard is it going to b
