Killer Robots Will Only Exist If We Are Stupid Enough To Let Them (theguardian.com) 143
Heritype quotes the Guardian's science correspondent:
The idea of killer robots rising up and destroying humans is a Hollywood fantasy and a distraction from the more pressing dilemmas that intelligent machines present to society, according to one of Britain's most influential computer scientists. Sir Nigel Shadbolt, professor of computer science at the University of Oxford, predicts that AI will bring overwhelming benefits to humanity, revolutionising cancer diagnosis and treatment, and transforming education and the workplace. If problems arise, he said, it will not be because sentient machines have unexpectedly gone rogue in a Terminator-like scenario.
"The danger is clearly not that robots will decide to put us away and have a robot revolution," he said. "If there [are] killer robots, it will be because we've been stupid enough to give it the instructions or software for it to do that without having a human in the loop deciding...."
However, Prof Shadbolt is optimistic about the social and economic impact of emerging technologies such as machine learning, in which computer programmes learn tasks by looking for patterns in huge datasets. "I don't see it destroying jobs grim reaper style," he said. "People are really inventive at creating new things for humans to do for which will pay them a wage. Leisure, travel, social care, cultural heritage, even reality TV shows. People want people around them and interacting with them."
stupid enough (Score:5, Insightful)
It's 2018. We've broken through the "stupid enough" barrier.
Re: stupid enough (Score:1)
I for one welcome our new T-800 and T-1000 overlords.
Dun dun dun da dun
Dun dun dun da dun
James P. Hogan: Two Faces of Tomorrow (1979) (Score:2)
An AI with a survival instinct experiencing intermittent power failures can start doing unexpected things: https://www.goodreads.com/book... [goodreads.com]
"Survival test. Civilization had grown so complex that only a world-wide computer network could control everything. But the computer was only logical - it lacked common sense. And its all-too-logical decisions were beginning to cause too many near-fatal accidents. The solution was on the drawing-boards - a universal, self-aware and self-programming computer, equipped wi
Re: (Score:1)
Re:stupid enough (Score:5, Insightful)
Re: (Score:2)
Everyone is looking at what a robot soldier is doing, but no one pays attention to the google search and youtube recommend AIs.
Re: (Score:2)
TRUMP = EVIDENCE (Score:1, Insightful)
Trump is "in charge" of America, so yes, very fucking stupid people must live here.
Re:stupid enough (Score:5, Interesting)
It's 2018. We've broken through the "stupid enough" barrier.
We've completely demolished it.
Olsoc's first rule of killy stuff: if there is a method of killing people, governments will rush to it like a dog to bacon. The more barbaric, the more bacon.
Olsoc's second rule of killy stuff: unless people are being killed, there isn't much point to warfare.
Which brings us to Olsoc's third rule of killy stuff: war robots will be specifically designed to kill people. That is how humans work.
Re: (Score:2)
Maybe someone should raise the bar ?
It really isn't in our nature. The violent and aggressive will quickly kill anyone who isn't
Re: (Score:3)
It's 2018. We've broken through the "stupid enough" barrier.
Not yet. Just wait, it'll get better. We're All Getting Dumber, Says Science [slashdot.org].
Peak oil? Peak coal? I've said we've been at peak intelligence for a while now. Kids don't have to think; students have been taught to regurgitate facts, and Google is quite good at answering questions. As for the answer being correct or incorrect, that's literally beside the point. If I FEEL that it's the right answer, then it is. Don't disrespect me just because I'm stupid! You must conform to MY way of thinking to keep from
Re: (Score:2)
IIRC it was sometime last year, or possibly the year before, that I read about a place in Japan that was deploying totally automated security robots that could (under unspecified circumstances) kill people. Now that's not a very smart AI, but it's still an AI. And it can kill people (who invade its turf?).
So this has already been reported as happening. (Was the report accurate? Was it just a proposal someone made? Who knows?)
Then there's the work that's being done for the army on target identific
Re: (Score:1)
Not much of a comfort. (Score:2, Insightful)
We clearly are stupid enough.
Re:Not much of a comfort. (Score:5, Funny)
Speak for yourself, you insensitive clod!
I'm just lazy and careless.
Agreed (Score:1)
We clearly are stupid enough.
Yes, we are. Everything I've seen says it's going to use AI. And who knows what those algorithms come up with down the road.
So, they're programmed to fight a battle say and to learn how to do it effectively. The AI may come up say, using civilians as shields. Or using civilians as a way to dupe the enemy and getting them killed in the process.
Or the AI may decide that the best tactic is to exterminate all life and destroy everything to keep the enemy from advancing.
Or - the way to end way is to eliminat
Re: (Score:2)
Sounds like Artificial Islam!
Re: (Score:1)
religionofpeas said...
>We clearly are stupid enough.
Exactly. What about human history gives the impression that we will not do this? Hasn't Russia already been working on these? A.I. robots that are fearless, shoot with 100% accuracy, fight 24x7, and don't hesitate to follow orders would be a tremendous force multiplier.
What country won't find a way to justify creating and using a.i. robots?
Re: (Score:3)
We clearly are stupid enough.
As a group, most definitely.
Re: (Score:2)
"We clearly are stupid enough."
Too bloody right!!!
The good news, however, is that being stupid, we will probably bungle the robots' hardware and software so badly that they will rank somewhere between mutant athlete's foot fungi and rabid pandas as dangers to humanity's future.
Re: (Score:2)
You are, I think, talking about the versions that already exist.
OTOH, I doubt that we will intentionally create an AI that desires to eliminate humans. The tricky thing will be creating AIs that are useful for all these other goals, like winning the next war, and not creating one (or possibly a pair, created by different combatants) that end up wiping out humanity.
Excuse me! (Score:1)
We're talking about killer robots; not Orangutans who want to destroy the world for their egos.
Look We may not have AI down pat (Score:1)
but we sure as all hell have stupidity perfected
The rich are going to want automated kill bots (Score:5, Interesting)
The way to stop this crap is pretty clear. Declare all human beings deserving of a decent quality of life and then make that happen. Get over the fact that you'll have a few surfer dudes and welfare queens who don't work very much or at all (shouldn't be too hard; most of us have long since stopped getting mad at the idle rich with inherited wealth). If you want a population smart enough and paying enough attention to see this kind of crap coming and stop it, you need to take care of their basic needs first. Otherwise they'll be too busy fighting for survival to do anything about it, which is kind of the point.
Re: (Score:1)
Oh, you absolutely have to worry about a general taking over. Lol. I think that scenario has already been written in more than one book and portrayed in at least one movie.
There is a block of the wealthy who prefer automated servants. They don't leak information and they are not a kidnapping risk to children.
But that's really a different topic than A.I. hunter killer robots.
Re: (Score:2)
It's not a matter of stupid (Score:5, Insightful)
People won't make or deploy killer robots "by accident". If a robot goes on a killing spree, it will be because somebody deliberately programmed it to go on a killing spree.
Are people perverse enough to make a machine that will deliberately kill other people, either based on specific entry-conditions or even just randomly? The existence and widespread use of land mines and car bombs demonstrates that the answer is yes.
So really we know the answer; we're only arguing about an implementation detail: exactly how sophisticated people will allow their automated killing machines' triggering-mechanisms to be.
Re: (Score:2, Redundant)
Re: (Score:1)
No, but HAL 9000 was also made up. I agree with the GP, and the HAL 9000 scenario is unlikely. The rogue nations/extremists/uncontrolled corporations are more likely to happen. i.e. less like HAL, more like Robocop.
Re:It's not a matter of stupid (Score:4, Insightful)
Not at all. If machines have code and capabilities ready to go on a killing spree, it will also happen by accident. Remember the world was almost nuked by accident several times.
Re: (Score:2)
If machines have code and capabilities ready to go on a killing spree, it will also happen by accident.
Machines are like good little Germans and follow orders to the letter when ordered to commit wholesale genocide
Re: (Score:2)
If machines have code and capabilities ready to go on a killing spree, it will also happen by accident.
. . . and the machine will answer to that:
"Well, I don't think there is any question about it. It can only be attributable to human error. This sort of thing has cropped up before, and it has always been due to human error."
Re: (Score:3, Interesting)
Absolutely. And there is a very effective, and unfortunately extremely plausible warning film about this released in November of last year called SlaughterBots [youtube.com].
All of the pieces of technology described in this short film are available, and can soon be integrated into the little drone packages depicted.
Re: (Score:2)
Are people perverse enough to make a machine that will deliberately kill other people, either based on specific entry-conditions or even just randomly?
Are people perverse enough to deliberately kill other people, in a school or an outdoor concert in Vegas, either based on specific entry-conditions or even just randomly?
Which brings up an interesting 2nd Amendment question:
"Do I have a right to bear a killer robot . . . ?"
. . . and . . .
"Do killer robots dream of electric innocent victims . . . ?"
Re: (Score:2)
Depends on your definition of "accident". The machine will do what you tell it to do; any malfunction serious enough that this is not the case will almost certainly be serious enough to disable it completely.
"I didn't mean to have it do that." "Well, that's what you told it to do." Every software mishap in a nutshell.
Only one guy (Score:5, Insightful)
It takes only one guy with the right capabilities and stupid enough to do it. History has proven that there are plenty of such people. You can be sure that there are plenty of high level military officers in many countries that are day dreaming of something from Screamers, and will do anything that is in their power to make it a reality...
What a sweet talk! (Score:5, Interesting)
I'm not sure this "professor" has really understood what AI is all about. Thinking that any AI-enabled device will just act as it is "programmed to" is clearly simplistic (although by itself a tautology, since software-based machines are just running 'programs') and a complete misconception of where AI is heading, IMO.
AI without the internal ability to devise new ways of doing things is NOT AI. And by being able to devise new ways, it has pretty much equal chances for them to be bad or good, all the more so since humans have a hard enough time defining clearly what is good or bad, let alone machines.
This overly "optimistic" talk just sounds like marketing babble, more so than an educated opinion. Sorry, "Sir".
Re: (Score:2)
Exactly. The a.i. does what we train it to do, but we already have many examples where we were not training it to do what we thought we were training it to do.
I think strong a.i. is going to be composed of multiple weak a.i. systems. Just as the cerebellum isn't intelligent, and the amygdala isn't intelligent, and the hippocampus isn't intelligent, etc. etc. etc.
You get some bizarre behavior in humans when the amygdala is broken or damaged.
Any strong A.I. is going to be so complex that it can't be unders
Re: (Score:2)
The Profs just trying to get your attention (Score:2)
Computer scientist rather than software engineer.. (Score:5, Insightful)
There is also the...minor...problem that "have a human in the loop deciding" will be a feature that will have to be implemented in software; and we definitely don't have a history of either unhelpful program output or unpleasant reactions to malformed inputs; so that will go well.
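To make the point concrete, here's a minimal sketch (all names hypothetical, not from any real weapons system) of why "a human in the loop" is just another software feature: the entire safeguard can come down to one boolean check that a flipped default, a config mistake, or an "optimization" silently removes.

```python
# Hypothetical sketch: the "human in the loop" safeguard is just code.
# If require_human_approval is misconfigured or optimized away,
# the gate silently disappears -- no rogue sentience required.

def engage_target(target, require_human_approval=True, approved_by=None):
    """Return True if this (hypothetical) system would engage the target."""
    if require_human_approval:
        # The entire safeguard is this one branch.
        return approved_by is not None
    return True  # gate removed: engages unconditionally

# With the gate on, no human approval means no engagement:
assert engage_target("t1") is False
assert engage_target("t1", approved_by="operator-7") is True

# One flipped flag and the "loop" is gone:
assert engage_target("t1", require_human_approval=False) is True
```

The fragile part is exactly what the comment describes: nothing in the software distinguishes "human deliberately approved" from "the approval check was skipped", so a bug in the gate is indistinguishable from a lawful order.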
Re: (Score:2)
Oh, yes. And most coders are really bad coders. The smart ones build up incredibly complex systems (just look at all the web-application-framework atrocities around) that in the end nobody using them understands anymore and that most definitely will have surprising behaviors. Also, due to cost factors (and because it is difficult to find the total scum needed to implement "humans in the loop") this will be optimized away, and in the end there will just be a brittle command channel where a general "kill" ord
The rich won't care if they go off the rails (Score:2)
So, basically... (Score:5, Interesting)
Re: (Score:3)
Most assuredly. There are boatloads of money to be made, there are masses of people that are willing to see "undesirables" get killed, there are very few people that do understand the actual, massive dangers. The human race, as a group, is stupid, vicious and driven by fear and greed. About the worst combination possible.
Let machines doing the killing (already happening) (Score:2)
For millennia, if not longer, they have gone on killing each other for the weirdest reasons.
Not the ones with some meat in the issue, but their subordinates, in various fashions.
How does it happen that people get so excited about something that they lose their common sense, or maybe they never had it?
Is it the duty to "your country", in itself a non-existent reality except as a thought concept in some skulls?
Or does it come from pissing in every
Re: (Score:2)
Still that's ancient, what is happening now seems to be the result of the idiots on top across the globe.
I don't think they are idiots. I think they just do not care about anybody else and are on the lowest moral level imaginable. To them, killing people, even lots of people and even people that are clearly innocent (children, bystanders, etc.) means nothing. If it gives them a bit of good PR, they will gladly do it.
As it is, I think the human race still has not learned to recognize psychopaths, sociopaths and extreme narcissists and consistently falls for their tricks and then supports the evil they do. I am
Re: (Score:2)
> Still that's ancient, what is happening now seems to be the result of the idiots on top across the globe.
> I don't think they are idiots.
Maybe, maybe not....
From https://en.wikipedia.org/wiki/Idiot ...
Until 2007, the California Penal Code Section 26 stated that "Idiots" were one of six types of people who are not capable of committing crimes. In 2007 the code was amended to read "persons who are mentally incapac
Rise of Stupidity (Score:3)
Beware the power of stupid people in large groups.
In all honesty, assuring me that we'll never see killer robots because we'd have to be incredibly stupid to make such a thing... not much assurance.
You're talking about a species where a not-insignificant number of people believe the earth is flat. Yes. In 2018. It's true.
A majority of humans are convinced there's an invisible man living in the sky who watches everything we do, every minute of every day. Really? And you're trying to assure me that we're not stupid enough to make killer robots?
Re: (Score:2)
Indeed. The "incredibly stupid" requirement is something a majority of the human race can fill with ease. Just tell them, e.g., that these killer bots are needed to "staunch the flow of child-raping gangs that flood the US from Mexico" and you are golden. What we have is a large part of the population that is incredibly easy to manipulate into basically anything, and a small group that has no qualms at all about using this for the most extremely self-serving evil.
They already exist (Score:5, Insightful)
Every landmine qualifies as a very low capability "killer robot". The insane harm landmines do around the globe is a good indicator that there are by far enough people with power and money and absolutely no qualms about maiming and killing innocent bystanders and civilians in general. Hence we will definitely see killer robots of much higher capabilities, unless we get the fucked-up part of the human race under control that simply cannot stop killing others and using violence to solve disagreements.
Re: (Score:2)
Every landmine qualifies as a very low capability "killer robot".
But at least for the time being, you cannot order a box of landmines to clear a city of rioting, starving unemployed serfs
Re: (Score:1)
You're young. It's called a butterfly mine, developed by the Germans, perfected by the Americans - https://en.wikipedia.org/wiki/... [wikipedia.org] . That was then, this is now - https://en.wikipedia.org/wiki/... [wikipedia.org] .
Seems us humans really like to kill each other.
let's play global thermonuclear war! (Score:2)
let's play global thermonuclear war!
Re: (Score:1)
let's play global thermonuclear war!
Patience, weedhopper, this too shall happen when the time is right. The only uncertainty is whether it will be the righteous vengeance of the Almighty God of the desert, or the insane branch of the atheists. But whoever brings our death wish to us will bring the inevitable fate of humanity to its conclusion.
Genetic hyper-aggression and inbred "us versus them" hate can create an alpha species, but only for a short while.
Not easy to avoid in an arms race (Score:4, Insightful)
Side A builds robots that can't fire without human control. Side B builds jammers. Side A decides robot soldiers need to be able to act in "self-defense". Side B puts civilians in harm's way. Side A decides they need "smart robots" who can tell friend from foe by themselves. Or that we need tighter coordination between light arms, heavy arms, air support, putting down covering fire for advancing troops etc. with so tight margins that it can't be done on manual. If you're being mauled to death by a perfectly coordinated fully automatic enemy you will fight fire with fire. Maybe you're creating the world where we'll lose control of our Terminators. But in the short term if you're not playing the game you're going to lose right now.
Sure it is (Score:2)
We will lose control of the robots from time to time, but it'll be momentary and, most importantly, the ruling class will be far, far away when it happens except for the occasional twit slumming it. That's the real problem wit
Re: (Score:1)
You've completely missed the point. Money is not power. It's just a way of keeping score.
In many cases we're talking about multi-generational 1% here. They don't just have money, they have connections. They control governments. You think they do this because they have money? No. They have money because they can do this.
Is Putin one of the richest men in the world because he was rich. No. He is one of the richest men in the world because he has power.
That's not reassuring when (Score:1)
This stupidity is certain (Score:2)
With the US military and the companies that provide the weapons for it, this kind of stupidity can be taken for granted.
Sooner or later the US, and probably Israel, will produce autonomous aerial drones (Reaper etc.) and autonomous land based robots (Boston Dynamics). Both have the same problem: they are constantly involved in or start wars abroad, but every soldier coming back home dead, crippled or wounded erodes the support for these wars.
So to sustain these wars they want weapons that work more and more
Like nukes, they are here already (Score:2)
Basically, Russia lied about Syria's production of bio-chem.
Then we h
Re: (Score:2)
Syria? Russia? China?
Back in the real world, the US is the first mover in weapons development. Who developed and deployed drone warfare tech on a massive scale before anyone else even got started.
I expect that the US will develop and deploy killbot tech before anyone else's project gets past the powerpoint presentation stage. My impression is that it is already underway. Why else do you think google is moving into weapons: They believe they have to act now to get a piece of the action.
Re: Like nukes, they are here already (Score:2)
Re: Like nukes, they are here already (Score:2)
Re: (Score:2)
Imagine all the fun we'll have when we start combining the two. Autonomous weapons are an application of AI.
Sure (Score:3)
Re: (Score:2)
And the economic gap between the poor and the ultra-rich will only exist if we let it happen.
And (whatever bad that has already happened) only happened because we let it happen.
Who says ... (Score:2)
that they don't already exist?
Killer robots will only exist (Score:2)
if somebody builds them.
And somebody will.
Stupid or Evil (Score:1)
Definition of killer? (Score:2)
Collective consciousness (Score:2)
Turker Jerbs (Score:2)
>"I don't see it destroying jobs grim reaper style," he said. "People are really inventive at creating new things for humans to do for which will pay them a wage. Leisure, travel, social care, cultural heritage, even reality TV shows. People want people around them and interacting with them."
Those things are usually in the realm of entertainment media. The technology already exists that leverages what they do; it's not a traveling band of minstrels that has to make do with a small stage per town. It's in
Sentient machines (Score:2)
A machine told to patrol a part of a nation, a region? 24/7
That maps every dwelling? Every person moving?
Is given the command that a set region is now a free fire zone. Thats the only human part. The human who plots an area on a GUI map in another part of the world.
The sentient machine starts to detect movement and brings in systems to enforce total pacification on an insurgency.
The sentient machine is the mil package that detects movement and that guides in the best militar
I'm not worried (Score:2)
I have Old Glory Robot Insurance. As long as I keep the premium payments up it'll be all good, especially when I get older. They eat old people's medicine, for food.
been done long ago (Score:2)
We've been creating autonomous killers for millennia. Yes, they've been mostly static, but they do kill in an unattended, automatic fashion. I guess the first were traps. Modern land mines are far more deadly.
Why would anyone believe that we're going to stop?
There's always someone who will let them... (Score:1)
It only takes one coder to program AI to do whatever he sees fit (given enough expertise.) Seems likely there are a few out there who would get a kick out of being the guy that created Skynet.
Only if we're stupid enough to let them, eh? (Score:1)
IF?! (Score:1)
What the hell do they mean by "If We Are Stupid Enough"?
Surely who ever is president at that time, will stop it?
With big beautiful paper towels, or toilet-water, or something.
replace with x (Score:2)
X won't kill people unless people want them to kill other people.
replace X with: guns, knives, cars, cyanide, ...
What in human history makes anyone think that (Score:2)
we won't be stupid enough to make killer robots?
And empathy for machines? Gimme a break! We don't even have empathy for other humans. It isn't hard to find examples. Very recently, some extremely non empathetic people, using some quotes from the Bible as justification, have been ordering other people to take kids away from their parents at the US border and lock both the kids and the parents up in separate locations. Other non empathetic people have been following those orders.
How hard is it going to b