EU: New Code of Practice on Disinformation fails to provide clear commitments or protect fundamental rights

Online platforms and the advertising industry have agreed on a ‘Code of Practice on Online Disinformation’ that the EU Commission views as an important step in protecting trust in EU democratic processes. Beyond the vagueness of the commitments inscribed in the Code, the process that led to its adoption has failed to amount to an effective, open, transparent and accountable self-regulatory mechanism, something that is urgently needed to facilitate the essential public debate on the moderation of online content, including responses to disinformation.

On 26 September 2018, the EU Commission announced that the adoption of the Code of Practice on Online Disinformation was the first concrete outcome of the EU Communication “Tackling online disinformation: a European approach” adopted earlier in the year. The Communication endorsed the 2017 Joint Declaration on Freedom of Expression and ‘Fake News’, Disinformation and Propaganda as providing “a focused treatment of the application of international human rights standards” to the issue at stake. The Communication then announced the creation of a European multi-stakeholder forum on disinformation tasked with drafting a Code of Practice that should include measurable objectives.

By that measure alone, the Code of Practice adopted by the online platforms and the advertising industry is a failure, although it includes some signs of good will. While signatories commit, for instance, to adopt policies on the use of automated systems, the Code says nothing about the principles that should inform such policies. Under the category of ‘empowering users’, the Code mentions supporting efforts to develop ‘effective indicators of trustworthiness in collaboration with the news ecosystem’, the development of technologies to prioritise ‘relevant, authentic and authoritative information’, and of tools that will ‘make it easier for people to find diverse perspectives about topics of public interest.’ The signatories further commit to ‘not discourage good faith research into disinformation and political advertising’ on their platforms. All such promises, while steps in the right direction, appear to be the most minimal commitments the signatories could make. At a minimum, these commitments should indicate that the development of such tools and initiatives will include a human rights impact assessment and will duly take into consideration the need to support media pluralism and diversity.

The media, fact-checkers and academics that take part in the EU multi-stakeholder forum on disinformation have described the Code as containing ‘no common approach, no clear and meaningful commitments, no measurable objectives or KPIs, hence no possibility to monitor process, and no compliance or enforcement tool’. The approach chosen by the EU Commission appears to have been to pressure private actors into action by monitoring their work against undefined criteria: this process, which encourages online actors to comply with the implicit wishes of public authorities in order to avoid a harsher form of regulation, leaves no space for the protection of fundamental freedoms.

It is symptomatic of this approach that the multi-stakeholder forum was in practice divided into two distinct groups rather than facilitating collective efforts from all participants. On one side, the industry (advertising, social media platforms) convened in a ‘Working Group’ that drafted the Code of Practice; on the other, the media, journalists, fact-checkers and academics were gathered in a ‘Sounding Board’ that concluded that the Code does not amount to much.

At the present stage in the evolution of the media landscape, online platforms hold considerable de facto influence over the circulation of information and ideas online. How online content is distributed and moderated is a debate of the highest importance for contemporary democratic societies.

The failings around the Code once again demonstrate the urgent need for a new approach to social media content regulation. ARTICLE 19 continues to call for the creation of a Social Media Council: a self-regulatory mechanism, inspired by the experience of press councils, that would provide an open, transparent and accountable forum to address content moderation issues – such as disinformation – on social media platforms. The UN Special Rapporteur on freedom of opinion and expression supported this recommendation in his April 2018 report, urging that ‘all segments of the ICT sector that moderate content or act as gatekeepers should make the development of industry-wide accountability mechanisms (such as a social media council) a top priority.’

ARTICLE 19 urges the EU Commission to take steps to address the issues raised by this latest agreement, and transform the current multi-stakeholder forum into an effective, transparent, independent and accountable self-regulatory mechanism that would better protect fundamental rights, including the right to freedom of expression.