US launches Artificial Intelligence programme to detect hidden nuclear missiles

Gladys Abbott
June 10, 2018

Google, already a member of the Partnership on Artificial Intelligence, a group of dozens of tech firms committed to AI principles, had faced criticism over its Pentagon contract for Project Maven, which uses machine learning and engineering talent to distinguish people and objects in drone videos.

Google's pledge to stop doing military work involving its AI technology does not cover its current work helping the Pentagon with drone surveillance.

Google has responded to negative press and employee outrage over the use of its artificial intelligence technology by publishing a charter that sets out a code of conduct and holds the company accountable for how its technology is used. Google won't, for example, work on surveillance that falls outside "internationally accepted norms", or on anything unsafe unless "the benefits substantially outweigh the risks".

"As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides", said Google Chief Executive Sundar Pichai.

Only weapons whose "principal purpose" is to cause injury will be avoided, though it is unclear exactly which weapons that covers.

But it says it will continue to work to "limit potentially harmful or abusive applications" of AI.

To be more specific, Pichai said Google will not use its AI technology in weapons or surveillance. Representative Pete King, a New York Republican, tweeted on Thursday that Google not seeking to extend the drone deal "is a defeat for United States national security".

Google pledged Thursday that it will not use artificial intelligence in weapons, in surveillance that violates global norms, or in applications that work in ways that contravene human rights.

"Taking a clear and consistent stand against the weaponization of its technologies" would help Google demonstrate 'its commitment to safeguarding the trust of its global base of customers and users, ' Lucy Suchman, a sociology professor at Lancaster University in England, told Reuters ahead of Thursday's announcement.

"How artificial intelligence develops and uses will have a significant impact on society for many years".

Google said it will avoid surveillance and information gathering technology that violates "internationally accepted norms".

The Indian-origin CEO also said the company would not design or deploy AI in weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people. Google will, however, continue its work with governments and the military in fields such as cybersecurity, healthcare, and training.

The Campaign to Stop Killer Robots, which said it had been in dialogue with Google about the issue, called it a "welcome commitment".
