Google’s new Guiding Principles demonstrate the company’s attempt to set a framework for the responsible development of artificial intelligence (AI). While this is a welcome starting point for protecting human rights in the sector, ARTICLE 19’s analysis of the Principles shows that they fall short of protecting human rights. Among the issues raised are the vagueness of the language used, insufficient transparency and accountability in the development process, a weak commitment to privacy and data protection rights, and an ill-defined approach to the AI applications that the company will not pursue. The Principles require significant improvement to ensure that human rights, and in particular freedom of expression, can be fully protected.
On 7 June 2018, Google published its new policy on the “Responsible Development of AI” (Google’s Principles). The document contains an overview of the company’s approach to these issues, with the declared aim of fulfilling its “obligation to develop and apply AI thoughtfully and responsibly, and to support others to do the same”.
ARTICLE 19 finds that Google’s Principles represent a genuine effort to address a number of important challenges related to the development of AI. We welcome the company’s move and hope that others will follow its example.
Nevertheless, ARTICLE 19 is concerned that Google’s Principles fail to fully comply with the international human rights framework that should guide the development of AI technology, in particular the standards protecting the rights to freedom of expression and privacy.
In particular, we wish to highlight the following concerns:
The language in the Principles is vague.
- To start with, a definition of AI is missing. This is probably because the term is currently used to refer to a diverse range of applications and techniques; however, the lack of a definition creates uncertainty about the scope of application of Google’s Principles.
- In addition, in a number of places the policy uses relatively generic terms rather than legal ones, leaving the door open to divergent interpretations. For example, Google declares that “The internal launch process which every new project, new product or significant feature undergoes will include a check for fit against topline principles.” What these “topline principles” are remains unclear and needs further clarification.
- In the same vein, Principle 2 commits Google to “Avoid creating or reinforcing unfair bias”. “Bias” is a generic term that can be interpreted in various ways, some of which would not trigger the application of the international human rights framework. We believe that the term “discrimination” would be more appropriate for this purpose.
Moreover, Google addresses the problem of AI applications that risk causing harm. The company’s commitment is to “proceed only where we believe that the benefits substantially outweigh the risks”. Nevertheless, the meaning of “harm” remains vague, and no details are provided about how the company will perform the balancing exercise when an application harms one group of users while benefiting another. Nor, more generally, does it explain how Google intends to address the problem of dual-use technologies.
In the White Paper accompanying Google’s Principles, Google commits to using a combination of central teams to review issues such as privacy, discrimination and legal compliance. It is unclear, however, what falls within the concept of “legal compliance”. Furthermore, no mention is made of how the review is to be performed, or whether it will meet international standards of transparency and accountability.
We are doubtful about Google’s commitment to guaranteeing the right to privacy and ensuring data protection.
For instance, Google questions whether data minimisation, a fundamental data protection principle explicitly enshrined in the General Data Protection Regulation (Article 5), should be applied across borders, and thus also in territories where data minimisation is not a legal requirement. This approach conflicts with the effort to establish an international policy for the responsible development of AI and appears contrary to the declared objective of setting internationally shared best practices.
We also note that although Principle 6 establishes a commitment to uphold high standards of scientific excellence, referencing the need for open inquiry, intellectual rigour, integrity and collaboration, it makes no mention of ethical standards in the pursuit of AI development.
Commitments not to pursue AI applications that contravene rights lack definition.
Google’s dedicated section on AI applications that it will not pursue raises various concerns about the scope of these commitments. For example, Google pledges that it will not develop “technologies that gather or use information for surveillance violating internationally accepted norms”. Nevertheless, this commitment offers limited protection: data is often not collected for surveillance purposes in the first place, yet that does not prevent law enforcement agencies from requiring companies like Google to provide access to it. In addition, surveillance laws often contain provisions compelling companies to cooperate with law enforcement and intelligence agencies.
Google also states that it will not pursue “technologies whose purpose contravenes widely accepted principles of international law and human rights”. Although this is in principle a welcome statement, we have doubts about its concrete meaning. A technology can be used for law enforcement purposes in ways that may be more or less compatible with human rights, and a company may be more or less inclined to take possible alternative uses into account.
As a general remark, we also note that while the White Paper contains various references to cooperation among companies, governments and universities, it appears to leave out civil society actors, who could play a fundamental role in these discussions.
Moreover, some of the suggested governance frameworks warrant careful scrutiny. For example, the White Paper calls for consensus-driven best practices, yet no reference is made to the process that will lead to their establishment, nor are any guarantees provided of multi-stakeholder involvement.
Finally, the White Paper mentions self-regulatory bodies, but again no details are provided about the characteristics these bodies will have, nor is there any guarantee of their independence, transparent processes or accountability mechanisms, to mention just a few concerns.
To conclude this initial analysis, while we welcome Google’s commitment to the responsible development of AI, ARTICLE 19 calls on Google to align its Principles more closely with international law and standards for the protection of human rights, and in particular:
- Google’s Principles should make clear and precise reference to the body of international standards relevant to the use of AI, in particular those on the rights to freedom of expression and privacy. As to the latter, protection should be measured against international principles concerning the right to privacy, rather than against users’ expectations, which may vary depending on the context and on users’ level of digital skills.
- Google’s commitment to respecting human rights, and privacy and data protection in particular, should be independent of the geographic area in which the company operates and should aim to comply with international, rather than national or regional, standards.
- Google should engage in a multi-stakeholder process, including civil society organisations, when discussing ethical codes and industry standards. The process must be transparent and accountable in order to produce legitimate outcomes.
- Google’s Principles should be based on a holistic understanding of the impact of AI. To this end, Google should collect and highlight case studies from across the globe of “human rights critical” AI as it pertains to its operations.
Google’s Principles, “Artificial Intelligence at Google: Our Principles”, are available here.