Interesting People mailing list archives

Google Should Not Help the U.S. Military Build Unaccountable AI Systems


From: "Dave Farber" <farber () gmail com>
Date: Sat, 7 Apr 2018 14:46:03 -0400




Begin forwarded message:

From: Dewayne Hendricks <dewayne () warpspeed com>
Date: April 7, 2018 at 2:05:11 PM EDT
To: Multiple recipients of Dewayne-Net <dewayne-net () warpspeed com>
Subject: [Dewayne-Net] Google Should Not Help the U.S. Military Build Unaccountable AI Systems
Reply-To: dewayne-net () warpspeed com

Google Should Not Help the U.S. Military Build Unaccountable AI Systems
By PETER ECKERSLEY AND CINDY COHN
Apr 5 2018
<https://www.eff.org/deeplinks/2018/04/should-google-really-be-helping-us-military-build-ai-systems>

Thousands of Google staff have been speaking out against the company’s work for “Project Maven,” according to a New 
York Times report this week. The program is a U.S. Department of Defense (DoD) initiative to deploy machine learning 
for military purposes. There was a small amount of public reporting last month that Google had become a contractor 
for that project, but those stories had not captured how extensive Google’s involvement was, nor how controversial it 
has become within the company.

Outcry from Google’s own staff is reportedly ongoing, and the letter signed by employees asks Google to commit 
publicly to not assisting with warfare technology. We are sure this is a difficult decision for Google’s leadership; 
we hope they weigh it carefully.

This post outlines some of the questions that people inside and outside the company should be asking as they weigh whether it’s a good idea for companies with deep machine learning expertise to assist with military deployments of artificial intelligence (AI).

What we don’t know about Google’s work on Project Maven

According to Google’s statement last month, the company provided “open source TensorFlow APIs” to the DoD. But it 
appears that this controversy was not just about the company giving the DoD a regular Google cloud account on which 
to train TensorFlow models. A letter signed by Google employees implies that the company also provided access to its 
state-of-the-art machine learning expertise, as well as engineering staff to assist or work directly on the DoD’s 
efforts. The company has said that it is doing object recognition “for non-offensive uses only,” though reading some 
of the published documents and discussions about the project suggests that the situation is murkier. The New York 
Times says that “the Pentagon’s video analysis is routinely used in counterinsurgency and counterterrorism 
operations, and Defense Department publications make clear that the project supports those operations.”
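For readers unfamiliar with what “providing open source TensorFlow APIs” and training TensorFlow models for object 
recognition ordinarily involve, the sketch below is a generic, minimal illustration using the standard open-source 
TensorFlow/Keras interface. The dataset, model size, and labels are placeholders chosen only to show the shape of such 
a workflow; this is not a description of the actual Project Maven pipeline, whose details remain unpublished.

    # Generic illustration of training an image-recognition model with the
    # open-source TensorFlow APIs. All specifics (dataset, architecture,
    # labels) are placeholders, not anything from Project Maven.
    import tensorflow as tf

    # Placeholder dataset: CIFAR-10 stands in for any labeled imagery.
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    # A small convolutional network; real detection systems are far larger,
    # but the API surface used to build and train them is the same.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(10),  # 10 object classes in this placeholder set
    ])

    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

    # Training and evaluation. The ethical questions the article raises turn on
    # what imagery and labels are fed in here, and how predictions are acted on.
    model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))

The point of the illustration is that the training interface itself is ordinary and widely available; the controversy 
concerns the expertise, data, and downstream use involved, not the APIs.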

If our reading of the public record is correct, systems that Google is supporting or building would flag people or 
objects seen by drones for human review, and in some cases this would lead to subsequent missile strikes on those 
people or objects. Those are hefty ethical stakes, even with humans in the loop further along the “kill chain”.

We’re glad that Google is now debating the project internally. While there aren’t enough published details for us to 
comment definitively, we share many of the concerns we’ve heard from colleagues within Google, and we have a few 
suggestions for any AI company that’s considering becoming a defense contractor.

What should AI companies ask themselves before accepting military contracts?

We’ll start with the obvious: it’s incredibly risky to be using AI systems in military situations where even 
seemingly small problems can result in fatalities, in the escalation of conflicts, or in wider instability. AI 
systems can often be difficult to control and may fail in surprising ways. In military situations, failure of AI 
could be grave, subtle, and hard to address. The boundaries of what is and isn’t dangerous can be difficult to see. 
More importantly, society has not yet agreed upon necessary rules and standards for transparency, risk, and 
accountability for non-military uses of AI, much less for military uses. 

Companies, and the individuals who work inside them, should be extremely cautious about working with any military 
agency where the application involves potential harm to humans or could contribute to arms races or geopolitical 
instability. Those risks are substantial and difficult to predict, let alone mitigate.

If a company is nevertheless determined to use its AI expertise to aid some nation’s military, it must start by 
recognizing that there are not yet any settled public standards for safety and ethics in this sector. It cannot 
simply assume that the contracting military agency has fully assessed the risks, or that the company has no 
responsibility to assess them independently.

At a minimum, any company, or any worker, considering whether to work with the military on a project with potentially 
dangerous or risky AI applications should be asking:

[snip]

Dewayne-Net RSS Feed: http://dewaynenet.wordpress.com/feed/
Twitter: https://twitter.com/wa8dzp





