The debate about artificial intelligence (AI) and online expression exploded during 2018, but the technology cannot by itself provide the solution to regulating content on online media outlets and social media platforms.
In April 2018, Facebook CEO Mark Zuckerberg’s testimony before the US Congress revealed the company’s increasing reliance on AI tools to solve some of its most complex problems – from hate speech to terrorist propaganda; from election manipulation to misinformation. An overarching assumption, expressed throughout the hearings, was that ‘AI tools’ that can proactively flag problematic content are more desirable and effective than reactive takedowns carried out by human moderators.
AI systems are not technically equipped to understand the context of speech or its social intricacies – let alone evolving and subjective social constructs such as hate speech. Given the repercussions of overbroad restrictions and vague standards for content removal, companies should model their community standards on the requirements of international human rights law – as a baseline – and improve transparency.[1]
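To make the contextual problem concrete, the toy sketch below is entirely hypothetical – the blocklist, function name and examples are invented for illustration and do not reflect any platform’s actual system. It shows how a naive keyword filter flags counter-speech that quotes abuse while missing lightly obfuscated abuse:

```python
# Hypothetical, deliberately naive keyword filter. The blocklist term is
# a placeholder; real moderation systems are far more sophisticated, but
# they differ from this sketch in degree of context-blindness, not kind.

BLOCKLIST = {"badword"}  # placeholder for a list of prohibited terms

def is_flagged(post: str) -> bool:
    """Flag a post if any word in it matches the blocklist."""
    words = {w.strip(".,!?\"'").lower() for w in post.split()}
    return not BLOCKLIST.isdisjoint(words)

# Counter-speech quoting abuse in order to condemn it gets flagged:
print(is_flagged('Someone called me "badword" today. Report this abuse.'))  # True

# Lightly obfuscated abuse sails through unflagged:
print(is_flagged("You are such a b4dword."))  # False
```

The filter matches surface forms, not meaning: it cannot tell condemnation from endorsement, and trivial misspellings defeat it entirely – which is why proactive automated flagging is no substitute for clear, rights-based standards and human review.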
[1] ARTICLE 19, Facebook Congressional Testimony: ‘AI Tools’ Are Not the Panacea, 12 April 2018, available at https://www.article19.org/resources/facebook-congressional-testimony-ai-tools-not-panacea/