Google Promises Its AI Will Not Be Used For Weapons (nytimes.com) 102
An anonymous reader quotes a report from The New York Times: Google, reeling from an employee protest over the use of artificial intelligence for military purposes, said Thursday that it would not use A.I. for weapons or for surveillance that violates human rights (Warning: source may be paywalled; alternative source). But it will continue to work with governments and the military. The new rules were part of a set of principles Google unveiled relating to the use of artificial intelligence. In a company blog post, Sundar Pichai, the chief executive, laid out seven objectives for its A.I. technology, including "avoid creating or reinforcing unfair bias" and "be socially beneficial."
Google also detailed applications of the technology that the company will not pursue, including A.I. for "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people" and "technologies that gather or use information for surveillance violating internationally accepted norms of human rights." But Google said it would continue to work with governments and military using A.I. in areas including cybersecurity, training and military recruitment. "We recognize that such powerful technology raises equally powerful questions about its use. How A.I. is developed and used will have a significant impact on society for many years to come," Mr. Pichai wrote.
in before somebody says... (Score:1)
Re:in before somebody says... (Score:5, Funny)
Well, to be fair, they do constantly tell us they respect our privacy and do everything to protect our personal data. So I think we should totally give them a pass on this one.
Re: (Score:1)
Re: (Score:2)
The devil is in the details: Google won't be using AI to hunt and kill women and children, oh no, Alphabet will, https://en.wikipedia.org/wiki/... [wikipedia.org]. Google evil is as evil does, no matter how much you bullshit brand-shift.
Re:in before somebody says... (Score:5, Funny)
As long as they double pinky superduper promise!
Like they promised not to be evil...
Re: (Score:2, Interesting)
No, it's complete and utter bullshit... Just like their policy to do no evil.
That they quietly removed this policy from their public code of conduct demonstrates that such claims can only be regarded as complete and utter bullshit.
There's no way Google can make this statement, and the fact that they did shows that the company is being spun by PR villains who most likely intend this as a set of smoke and mirrors to hide their true intent.
As a military weapon designer myself, this kind of thing sca
Re: in before somebody says... (Score:2)
Re: (Score:3)
Re: (Score:3)
Um, no. That's not how defense contracting works, unless the military paid them to develop it with Government Purpose Rights, or an "unlimited" rights license...you can look up the terms.
Re: (Score:2)
Oh, it will be totally true. For Google. A wholly-owned subsidiary that "licenses" Google's AI technology on the other hand.....
Re: (Score:2)
And how about not partaking in a panopticon to continuously live-track every person, their locations, and what they are doing and buying and viewing, and their metadata AKA networks of contacts, which the Tyrant King George during the revolutionary war would have used to quickly round up the founding fathers?
Re: (Score:1)
[Anonymous because familiar w/ Maven]
It is true. Civilian departments have been more involved with it prior to DoD. To my knowledge it isn't used in any part of any firing chain for USA/USN/USMC. Unsure about USAF or SOCOM.
In terms of end state functionality, think more "sound the alarm" than "here, shoot this"
It will not (Score:3)
It will just be used to target weapons, someone will still have to push the button. For now.
Re: (Score:2)
— John Brunner, Stand on Zanzibar
Sure (Score:5, Insightful)
Re: (Score:3)
I don't remember exactly what it was, but I recall software for DOS back in the 80's or 90's that was free for personal, educational, and even commercial use. But strictly prohibited use by military or in weapons systems (I think it even called out nuclear weapons systems. the 80's had a lot of anti-nuclear activism). Theoretically you can put these sorts of restrictions in your software license. OSI might not consider it Free software, and technically such a license would be incompatible with GPL. But mayb
Re: (Score:1)
The US government has the right to seize intellectual property as it sees fit.
Re: (Score:2)
Yes and no, it can't be done arbitrarily. The government is able to do so if it is justified and legal, and by the Fifth Amendment is required to provide "just compensation". The IP holder can take federal entities to court in some cases, either to reverse the seizure or to receive compensation.
Google has absolutely no way to guarantee that. (Score:1)
They will do what the federal government says whether they or anyone else likes it or not and they will keep quiet about it at best, lie blatantly about it at worst. We need to stop this program and we aren't gonna get Google's help doing it. They are powerless in this situation.
Yeah, sure they won't. (Score:5, Informative)
Guess what? You're dealing with the military. They write a contract for you to develop a specific product. Part of that contract is complete documentation on how to create the product they contracted you for. Once you deliver on the contract, it's not up to you anymore how that product is used, no matter what you might have to say about it. If they want to integrate it into a weapons system, that's tough shit for you and your ethics.
Re: (Score:2)
Everything can be used for evilness. :(
Too much ado about nothing (Score:2)
Almost everything in this world can be weaponised, so stop BS'ing us, Google. You create technologies which will be used by the military in one way or another.
Luckily we're not yet even remotely close to "intelligence" (which scientists have yet to define), so I'm glad this announcement is a sort of relief for some extremely gullible people who cannot sleep at night after reading news headlines about impending doom caused by Terminator-like machines.
Re: (Score:2)
Luckily we're not yet even remotely close to "intelligence" (which scientists have yet to define)
If you don't have a definition, how do you know we're not even remotely close?
Re: (Score:2)
The military can't just use their technologies (other than maybe free things like search) w/o permission.
Let's ignore the EVIL. No EVIL here at all! (Score:2)
There was a time some years ago when I would have thought this was a good thing. Now I regard it as a cheap publicity stunt, possibly over a base of real fear and cowardice.
Easiest place to start is the basis for fear. If we create a general AI dedicated to the religion of corporate cancerism, and if that AI escapes into a world of robots and self-driving cars, then at some point it is inevitable that the AI would realize that human beings are interfering with its overriding program to maximize profits. Of
Re: (Score:2)
then at some point it is inevitable that the AI would realize that human beings are interfering with its overriding program to maximize profits
Whoa there, Sparky. What, specifically, is the thought process here?
The profits come from selling products or services. AIs aren't legal entities. They can't have bank accounts, and can't own property, and thus don't have the ability to pay for anything. An AI can only pay when linked to a human that owns the money.
So your escaped AI needs humans to fulfill its goal.
And we are pretty easy to manipulate, so "forcefully extracting" purchases would be significantly more expensive than convincing us we nee
Re: (Score:2)
I'll write an actual reply if you say something relevant to what I actually wrote. Alternatively, if you can't understand it, then your options include asking for clarification or saying nothing.
Re: (Score:2)
They can't have bank accounts, and can't own property, and thus don't have the ability to pay for anything.
We've already got people battling for legal right for animals, akin to rights currently given to humans. At what point beyond the Turing test passage do you think it will begin for AI?
Well, as they don't have any promising AI... (Score:2)
The fact that they may develop future AI which might be used for weapons wouldn't invalidate the promise at the time that it is given.
They can even further get around it by not calling any future version of AI that may be weaponizable "their" AI... but AI that they developed for someone else.
Re: (Score:2)
Doesn't Matter (Score:1)
Lockheed Martin, Boeing, etc. have been using AI / Machine Learning / Neural Networks on weapon systems for a long time. Sure, a human still "pushes the button," but that is irrelevant. Targets are detected, identified, and prioritized in order of threat by the system, and in the case of non-ABTs, automatically engaged.
This type of "AI" has been around since before Google existed. They really need to get over themselves.
Source- aerospace engineer working in defense industr
Re: (Score:2)
Oh please. Show some evidence or STFU.
It does not matter at all (Score:2)
Just my 2 cents
Unless... (Score:2)
IBM only provided accounting to the Nazis (Score:2, Informative)
Look into your future, Google [wikipedia.org]
Google can't make that promise (Score:3)
Google will have no control over what its users (the military) actually do with the technology. A simple AI-based robot that can identify and open doors will become a weapon as soon as the military fits a gun, mustard gas or some other biohazard to it.
To use a car analogy... it's like Ford promising cars are perfectly safe, meanwhile millions of people around the world are injured or killed in car accidents (in cars provided by all manufacturers).
Re: (Score:3)
Probably more apt to say a truck is harmless until you mount a PK machine gun to it...
They sure can (Score:2)
Just like "Don't be evil", they will keep promising until and after the day it becomes clear to everyone that they have been breaking that promise for years. Then they will just hide it somewhere obscure, but that still won't stop them from repeating the promise.
Empty words that have no cost and carry no penalty if broken, so why not?
Re: (Score:2)
Not true. The government is required to get a license to use the products, and can't share it w/o a Government Purpose Rights or an Unlimited Rights license.
Please read up on it before spouting off.
Re: (Score:2)
Re: (Score:2)
And what is it that put America in the forefront of the nuclear nations? And what is it that will make it possible to spend twenty billion dollars of your money to put some clown on the moon? Well, it was good old American know how, that's what, as provided by good old Americans like Dr. Wernher von Braun!
Dood! U having a reaction to some medicine or sumpin?
Re: (Score:1)
Quoting the inestimable Tom Lehrer, I think you'll find... could have done with some formatting to show that it's a song though...
will it be instead used to play fun games like... (Score:1)
chess, checkers, backgammon, poker, Theaterwide Biotoxic and Chemical Warfare, and Global Thermonuclear War ?
Oblig. Booger (Score:2)
and who decides the standards?
Oh wait, they do. never mind.
Why not peace with Hitler? (Score:1)
Re: (Score:2)
Re: (Score:2)
Corporations have a fiduciary responsibility to their shareholders to do whatever makes them the most money. As Marx said, the capitalists will sell you the rope with which to hang them!
Wasn't that Khrushchev? Or maybe both of them said it.
Depends on how they license their code (Score:2)
Re: (Score:2)
It's not just code. Presumably Google would publish journal papers as they make new discoveries in AI. And defense contractors can read.
Yeah, sure (Score:2)
Goobris? (Score:2)
Re:whatever...Google Promises Its AI Will Not Be U (Score:1)
Who is less trustworthy? (Score:2)
Mission Accomplished! (Score:2)
Google also detailed applications of the technology that the company will not pursue, including A.I. for "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people"
So AI could be used to pinpoint the exact location of the families of enemy soldiers, but it would be an actual human who executed the command to kill or imprison them.
That's comforting!
Lazy I am (Score:1)
Re: (Score:1)
“We will reserve the right to prevent or stop uses of our technology if we become aware of uses that are inconsistent with these principles,” the company said.
Like most of the top corporate A.I. labs, which are laden with former and current academics, Google openly publishes much of its A.I. research. That means others can recreate and reuse many of its methods and ideas. But Google is joining other labs in saying it may hold back certain research if it believes others will misuse it.
The article didn't contain the 7 specific principles from what I saw.
We pinky swear (Score:2)
We will not militarize man-made islands in the South China Sea - China
We will not consider a regime change for your country - USA
I didn't inhale - Bill Clinton
No new taxes - George Bush Sr
Iraq has WMD - George Bush Jr
We don't spy on American Citizens - NSA
If you like the health care plan you have, you can keep it - Obama
Cigarette smoking is no more ' addictive ' than coffee, tea or Twinkies - Big Tobacco
Man, this list can go on forever, but you get the point. Trust isn't one of Google's strong points in the
Just the (Score:2)
Unless the AI decides to do that by itself (Score:2)
Thank goodness this is resolved! (Score:1)
Well folks, they said it, you heard it! Google promises to be good boys and never do anything evil with their AI, so it's 100% guaranteed safe for everyone. Thank you Sundar! PACK IT UP BOYS, CONVERSATION OVER.
Yeah, well but what about someone else's? (Score:2)
Even if Google has all the best AI scientists, eventually someone else will be good enough.
Good thing Google sold off Boston Dynamics (Score:2)
Because some of those robots are just waiting to pull a trigger or detach an arm