ARTICLE 19 submitted evidence to the United Kingdom’s House of Lords Select Committee on Artificial Intelligence on 6 September 2017. The submission stresses the need to critically evaluate the impact of Artificial Intelligence (AI) and automated decision-making systems (AS) on human rights. It also calls for a deeper understanding of the various ways in which these technologies embed values and bias, thereby strengthening or sometimes hindering the exercise of these rights, particularly freedom of expression. The overarching recommendation is for the development and use of AI to be subject to the minimum requirement of respecting, promoting, and protecting international human rights standards.
Since 2014, ARTICLE 19 has pioneered efforts in technical communities to bridge existing knowledge gaps on human rights and their relevance to internet infrastructure. Our efforts have been geared towards integrating human rights into foundational documents at the Internet Corporation for Assigned Names and Numbers (ICANN), the Internet Engineering Task Force (IETF), and the Institute of Electrical and Electronics Engineers (IEEE). At the IEEE specifically, ARTICLE 19 has taken an active part in the Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. In December 2016, we also published a policy brief on algorithms and automated decision-making in the context of crime prevention.
AI and the Right to Freedom of Expression
Machine learning – the most successful subset of AI techniques – enables an algorithm to learn from a dataset using statistical methods. Algorithms increasingly influence the ways in which we interact with our environments. For example, AI determines the information we consume online by ranking and filtering content, most notably on social media platforms like Facebook, YouTube, and Twitter. Algorithms are also increasingly used for network management of critical infrastructure, from the electrical grid to Internet routing. As such, AI has a direct impact on the ability of individuals to exercise their right to freedom of expression in the digital age. The development of AI is not new, but advances in the digital environment – greater volumes of data, computational power, and statistical methods – will make it more capable in the future. Looking ahead, there will be strong impetus to implement AI across the board, making its impact even more pronounced. This means we need to carefully consider where, how, and whether AI should be implemented. ARTICLE 19 calls for a shared legal-ethical framework within which these technologies can function. We believe the following three issues are particularly pertinent at this time:
Respect for international human rights standards: A one-size-fits-all approach cannot work in the context of the regulation of AI because of the sheer variety of AI systems and capabilities, the varying degrees and instances of application, the stakeholders involved, and the nature of the decisions being made. However, the minimum requirement for all AI, and for applications arising from AI, should be compliance with international human rights standards.
Accountability of self-regulatory mechanisms: As ARTICLE 19 has previously stated, AI applications are largely based on self-regulatory mechanisms for blocking, filtering, takedown and removal, operated mainly by online intermediaries. These decisions are often placed beyond judicial oversight due to a lack of scrutability and accountability. The role of government here is to ensure that individuals have a remedy to challenge decisions based on AI that interfere with their human rights.
Multistakeholder approach: As AI is developed by and impacts a wide range of actors, it should be developed and regulated through a multistakeholder approach.
ARTICLE 19 will continue to work towards the development of AI that is inclusive, sustainable and respectful of human rights.