Artificial intelligence systems play an increasingly important role in our daily lives, from autocorrecting our text messages to determining our employment opportunities. While these systems can bring about positive changes, their use by states and corporations risks negatively impacting human rights. In particular, AI practices of collecting and sharing data to profile and predict behaviour threaten the rights to privacy and free expression. ARTICLE 19 and Privacy International’s new report examines how one particular AI technique, machine learning, impacts these rights, what implications we can expect, and how those implications must be addressed.
Issues of AI and privacy came under the spotlight again this month when Facebook CEO Mark Zuckerberg testified before the US Congress on issues surrounding the platform’s controversial data sharing and approach to privacy. In both of his testimonies, Zuckerberg presented AI tools as ‘magic bullets’ for resolving complex content and privacy issues. But given that AI tools are at the heart of the company’s controversial data practices, from facial recognition to profiling, this argument is unconvincing, and further demonstrates a worrying trend of corporations and governments advancing AI capabilities with little understanding of the impact such systems may have on people’s lives.
“Reliance by companies and governments on intelligent systems to moderate, filter, and remove what we post online increases the risk of overbroad censorship and excessive restrictions on free expression. This is particularly true for those in vulnerable situations and for minority voices. Mark Zuckerberg’s testimony last week is indicative of this increasing reliance on AI for content moderation. There is a clear need to consider how such broad application of artificial intelligence impacts fundamental human rights,” said Vidushi Marda, ARTICLE 19’s Digital Programme Officer on Algorithmic Decision Making.
“‘AI tools will fix it’ was a major theme in Mark Zuckerberg’s testimony before US representatives. What he failed to mention was that AI lies at the core of some of Facebook’s most controversial practices: from facial recognition to profiling and ad-targeting features like Lookalike Audiences. Applications of AI raise some of the most pressing privacy challenges of our time,” said Frederike Kaltheuner, Data Exploitation Programme Lead at Privacy International.
ARTICLE 19 and Privacy International’s report provides an overview of the impact of AI technologies on freedom of expression and privacy. It calls for further study and monitoring of how AI tools impact human rights.
Specifically, we call on states and companies to:
- Ensure protection of international human rights standards: All AI tools must be subject to laws, regulations, and ethical codes that meet the threshold set by international standards.
- Ensure accountability and transparency: Corporate, technical, and state actors must allow for meaningful multi-stakeholder participation, including civil society, in setting technical standards and regulations for AI systems.
We urge civil society to:
- Engage further to mitigate the potential negative impacts of AI tools on fundamental rights
- Collect and highlight case studies of ‘human rights critical’ AI
- Build civil society coalitions and expertise networks to develop knowledge exchange and advance debates on the human rights impacts of AI
Press contacts
Vidushi Marda, Digital Programme Officer: [email protected]
Frederike Kaltheuner, Data Exploitation Programme Lead: [email protected]