ARTICLE 19 has responded to the EU survey on tackling illegal content online and commented on the European Commission’s Recommendation of March 2018. In our view, the Commission has failed to adequately protect free expression online.
ARTICLE 19 is critical of the framing of the survey questions, which rest on a number of unexamined assumptions. Questions about how hosting services decide to remove content assume that intermediaries, rather than an independent court or tribunal, are in a legitimate position to make such decisions. The survey gives stakeholders no opportunity to challenge the assumption that community standards are an effective way of tackling illegal content online. The questions are slanted towards the Commission’s aim of developing automated removal tools and compelling internet companies to deploy them. The survey also implies that the Commission wants the trusted flagger scheme to become an automatic removal mechanism.
We are also very critical of the Commission’s Recommendation. In particular, we are concerned that it promotes “proactive measures” for removing content, such as filters. While the Recommendation states that such measures should only be used where “appropriate and proportionate” and be subject to “effective and appropriate safeguards”, we believe this is disingenuous. In our view, the Commission is effectively encouraging general monitoring, which is otherwise prohibited under the E-Commerce Directive. The Commission seems to imply that, as long as filters are limited to particular categories of content, the monitoring is not ‘general’. The Commission has pushed for filters first to identify child abuse images, then copyright-infringing material, and now terrorist and hate speech content. Although human intervention is alluded to, the end game appears to be preventing such content from being uploaded in the first place, a form of prior censorship. The likely consequence of using filters is that perfectly lawful expression will be censored on internet platforms.
ARTICLE 19 is also sceptical of the Commission’s clear intention to encourage further cooperation between hosting providers and Member States. While cooperation may be useful and sometimes necessary, it should not come at the expense of freedom of expression, transparency and the rule of law. We are concerned that the Commission simply seeks to push hosting providers to modify their terms of service so as to guarantee quick removal of content outside the law. In our view, this is deeply problematic, particularly where the content at issue is poorly defined and requires context-specific analysis, as in the case of ‘hate speech’. Companies’ terms of service are often so broadly phrased that their definition of ‘hate speech’ falls below international standards on freedom of expression. Law enforcement then reports content only under companies’ terms of service, making it easier for content to be removed even though it may well be legitimate under domestic law. Companies, for their part, may be content to comply with takedown notices under their terms of service as long as doing so shields them from regulation and potential fines or other sanctions.
We are also concerned about the Commission’s enthusiasm for trusted flagger schemes of the kind already employed by numerous internet companies. The legitimacy, independence and objectivity of such schemes are dubious: an anti-discrimination group operating within a scheme, for example, is unlikely to take an unbiased view when determining whether ‘hate speech’ should be censored.
Lastly, ARTICLE 19 is extremely critical of the Recommendation that terrorist-related content be removed within one hour of being published online. This time-frame is far too short and leaves no room for careful decision-making about removal. This is particularly problematic given that terrorist-related content is often context-specific and borderline in terms of amounting to ‘incitement’ under international law. ARTICLE 19 is not convinced that simply removing content is an effective way to fight terrorism, and we are concerned that it may stigmatise certain groups. If so, the trust between law enforcement and communities, both of which seek to prevent radicalisation, could break down.
We hope that the Commission will consider the responses to the survey carefully and draw its conclusions accordingly.
Read our written response