The Digital Society and Human Rights: Putting Human Rights at the Heart of the Design, Development and Deployment of Artificial Intelligence

By Jack Poulson
March 14, 2019
Human Rights Council 40
Room XXVII, Palais des Nations, Geneva

Given my experiences with Google over the last year, I find the wording of the title of this event doubly relevant. One of the reasons will become clear later in this statement, but the other is the explicit use of the terminology “Design, Development, and Deployment”. The distinctions between these stages, and the at-will ability of a CEO to instantly shift a project between the development and deployment phases, have played a central role in efforts by Google’s own employees, the press, human rights organizations, and governments to call on the company to clarify its red lines on censorship and surveillance.

‘Design’, ‘develop’, and ‘deploy’ are also terms of art within Google’s public commitments – via its “AI Principles” – to not build harmful technologies. The genesis of these commitments was Google’s contract with the Pentagon, as part of Project Maven, to build AI to track targets within drone surveillance footage. Many employees felt that Google building tools to increase the lethality of a weapons system crossed a red line for the company.

In June of 2018, as a result of months of collective employee escalations over Project Maven, Google publicly released a set of ‘AI Principles’. Beyond an explicit commitment to not develop ‘AI for use in weapons’, the AI Principles committed Google to not ‘design or deploy’ AI in four areas:

1. “Technologies that cause or are likely to cause overall harm.”

2. “Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”

3. “Technologies that gather or use information for surveillance violating internationally accepted norms.”

4. “Technologies whose purpose contravenes widely accepted principles of international law and human rights.”

Yet, at the very moment these principles were released, hundreds of Google employees had been tasked – as part of Project Dragonfly – with secretly developing a version of Search to comply with the censorship and surveillance demands of the Chinese Communist Party. As employees would learn months later (and I would personally witness), this included Google building a blacklist for terms such as “human rights”, “student protest”, and “Nobel prize”. Employees – again, myself included – also observed a prototype interface that would allow users’ Search queries to be tracked via their phone numbers.

Needless to say, there is a conflict between Google’s AI Principles commitment to not “design or deploy” “technologies whose purpose contravenes widely accepted principles of […] human rights” and Google designing and developing a blacklist which would literally censor the statement of the principle and any other discussion of “human rights” – including the title of this event.

After Project Dragonfly was first revealed to the public – and to most Google employees – on August 1, 2018, numerous human rights organizations released statements of concern and calls for clarification from Google. An important milestone was an August 28 open letter co-authored by 14 human rights organizations and several academics which called on Google to:

* Reaffirm its 2010 commitment to not proactively censor Search in China,

* “Disclose its position on censorship in China and what steps, if any, Google is taking to safeguard against human rights violations”, and

* “Guarantee protections for whistleblowers”.

When the letter was published, I was weeks into escalating the issue within Google – resignation letter in hand – and had a meeting on the subject booked for August 30 with Jeff Dean, the head of AI research at Google. Given the combination of the AI Principles’ commitment to not violate human rights and the coalition letter from 14 human rights organizations expressing concern, there was a clear argument that Dragonfly violated Google’s public promises.

But I found that, without a trusted enforcement mechanism, a company can simply deny that it has violated its human rights commitments. Indeed, Dean dismissed the claim on two fronts: that human rights organizations are external entities with incomplete information – never mind that their requests for clarification were rebuffed – and that I had no proof that Dragonfly’s forfeitures were worse than what Google must already provide to the US government through FISA warrants. In other words, Google’s response to an employee’s credible human rights concerns was to discredit human rights organizations as uninformed outsiders and to excuse human rights violations as a necessary part of running a business.

Other senior leadership at the company has similarly deflected inquiry: when Google’s Chief Privacy Officer, Keith Enright, was asked by the Senate Commerce Committee in September about the details of Project Dragonfly, he replied that he was “not clear on the contours” of the project, under the pretext that it was not yet in the deployment phase. Similarly, when Google’s CEO, Sundar Pichai, was asked about the details of Dragonfly by the House Judiciary Committee in December, his responses overwhelmingly centered on the defense that, at that moment, there were no plans to deploy the project. Google’s Chief Privacy Officer and Chief Executive Officer both deflected government inquiry with the implicit argument that the design and development phases were off-limits for human rights scrutiny.

Beyond deflection, Google’s CEO publicly defended Dragonfly’s censorship of human rights information and political inquiry in October by arguing that “well over 99 percent” of queries would not be censored. That is, Google’s CEO publicly defended human rights violations as an ignorable edge case.

Google’s response to Dragonfly can be taken as a case study in the techniques that companies are likely to use to suppress internal and external criticism of, and inquiry into, their human rights standards. At the moment, corporate human rights commitments are vague gentlemen’s agreements with no trusted third party to serve as a mediator for escalations – companies can and will deny the veracity of any significant criticism.

Lastly, we must change the implied norms of inquiry into technical projects so that accountability is required not just in the deployment phase, but also during design and development. Allowing companies a pass on human rights violations during “exploratory” phases not only neutralizes whistleblowing and attempts to clarify red lines, but also normalizes and incentivizes the violation: the exploratory phase likely entails the creation of an entire division of the company whose career interests are aligned with the violation.

The UN is the central entity that defines and protects international norms for human rights. I therefore believe it has an imperative to push tech companies to clarify, and engage on, the human rights implications of their research and products. This should especially hold for companies that, like Google, publicize their participation in the UN Forum on Business and Human Rights.