‘Fake news’ seems destined to remain a trending topic in 2018. In early January, several countries signalled their plans to introduce new legislation to target the issue. Tech companies are reportedly also adopting measures to address “fake news” on their platforms. However, the furore the buzzword generates in public debate should not distract from the threat posed to freedom of expression by the prospect of State-controlled information, or a selective sorting of media content by dominant, and largely unaccountable, tech giants.
In the first days of January 2018, French President Emmanuel Macron announced a bill to ban “fake news” during elections. The Spanish government may be considering a similar initiative. ARTICLE 19 warns that enacting a legal duty of “truth” would create a powerful instrument to control journalistic activity: allowing public officials to decide what counts as truth is tantamount to accepting that those in power have a right to silence views they disagree with, or beliefs they do not hold. Such laws can prevent the discussion of ideas that challenge the norm, limiting public debate and restricting criticism of societal attitudes or of those in power. Under such laws, journalists or human rights defenders could be imprisoned on accusations of disseminating untrue statements about alleged government wrongdoing. What re-emerged as a trending news story after the 2016 US elections has fast become a more serious threat to free expression, and as the popularity of legislative responses like these grows, we may be sleepwalking into censorship in response to a phenomenon we still do not fully understand.
Attempts to regulate “fake news” are not limited to states. In January 2018, the EU Commission appointed the High Level Group on fake news and online disinformation. The Commission is also collecting input from stakeholders on the scope of the “fake news” problem and “the effectiveness of voluntary measures already put in place by industry to prevent the spread of disinformation online.”
At the same time, Mark Zuckerberg has declared that Facebook plans to tweak the algorithms that build individual newsfeeds in order to reduce professional content and help users benefit from ‘more meaningful interactions.’ The modification would allow Facebook to sidestep its “fake news” problem, deflecting criticism of its role in the spread of false stories by relegating media content to a distant corner of users’ screens. However, it would have severe repercussions for the visibility and findability of news on the platform, and therefore for the reach of numerous media outlets.
This revision of Facebook’s algorithm follows previous attempts to address outcry over “fake news” by collaborating with external fact-checking organisations, including media companies such as Le Monde in France, to flag misinformation in the hope that audiences would then avoid it. The results of these experiments so far are unclear.
Facebook is not the only tech company that has tried to grapple with the “fake news” phenomenon. Eric Schmidt, Executive Chairman of Alphabet (Google), previously admitted that Google had considered delisting certain Russian media outlets (RT and Sputnik) and websites identified as producers of ‘fake news’.
Seemingly, neither states nor businesses are getting it right on “fake news” and free expression. The notion of ‘fake news’ is too vague to prevent subjective and arbitrary interpretation, whether in legislation or in the rules of online platforms. “Fake news” laws can be (and, under some regimes, frequently are) used to suppress media freedom and jail journalists, but it would be little reassurance to have private entities like the tech giants make these assessments instead. Such efforts can lead to undue censorship through flawed algorithms and ill-conceived assessments of what can be considered “true” – not to mention that these businesses may be subject to the influence of non-democratic governments in some countries where they operate.
Meanwhile, new research suggests that online misinformation, while broad in reach, may in fact have little impact on the public. This does not mean that all the agitation is in vain, but before we can develop a genuine response to the spread of misinformation, we need to understand what impact it is actually having.
Any responses to disinformation and propaganda must be based on international freedom of expression standards. Useful guidance in this area is already provided in the 2017 Joint Declaration on Freedom of Expression and “Fake News”, Disinformation and Propaganda, developed by four free expression mandates. The Declaration has set out ways for governments and tech companies alike to ensure the protection of free expression in any attempts to address the “fake news” question.
Ongoing research and experimentation also seek responses to misinformation: helping journalists ‘investigate misleading and viral content, memes and trolling practices online’, designing appropriate regulation of automated decision-making processes, and devising economic models to fund quality journalism.
These efforts, through the cooperation of all stakeholders, should lead to the development of inclusive and transparent initiatives that create a better understanding of the impact of disinformation and propaganda on democracy, freedom of expression, journalism and civic space, as well as appropriate responses to these phenomena. Ultimately, an effective mechanism of self-regulation that is transparent, participatory and accountable would provide the most appropriate forum for the collective learning that democratic societies need to engage in to protect freedom of expression online.