EU fails to protect free speech online, again

On 28 September 2017, the European Commission published its long-awaited Communication “Tackling Illegal Content Online”. This follows a leaked copy, which was analysed by our colleagues at EDRi, and builds on several earlier Commission initiatives, such as the EU Code of Conduct on Tackling Illegal Hate Speech and the setting up of the Europol Internet Referral Unit.

Although the Communication merely lays down a set of ‘guidelines and principles’ to ‘step up’ the fight against illegal content online, the Commission makes clear that it will look to adopt legislative measures if internet companies fail to meet its expectations.

Essentially, the Commission proposes the following:

  • It strongly encourages Internet companies to be more proactive in detecting and removing illegal content. In doing so, companies are invited to use automatic detection and filtering technologies to be more effective, i.e. to remove ‘illegal’ content more swiftly or to prevent its reappearance online. In other words, the Commission wants companies to leave it to algorithms, as far as possible, to work out what material is illegal online.
  • The Commission puts strong emphasis on “trusted flaggers” as providing a “privileged channel for those notice providers which offer particular expertise in notifying the presence of potentially illegal content on their website” (our emphasis). The Commission’s stated hope is that by using trusted flaggers, removal notices will be of higher quality and lead to faster takedowns.
  • The Commission makes some allowance for the ‘over-removal’ of legal content by encouraging Internet companies to provide some measure of due process in the form of counter-notices. It also suggests policies to deal with bad faith notices, such as the revocation of trusted flagger status.

ARTICLE 19 is, once again, deeply disappointed by the European Commission’s approach to ‘tackling illegal content online’.

The Commission’s overall approach is driven by the assumptions that (1) companies should take responsibility for illegal content online; (2) all content that is merely flagged by police or certain ‘trusted users’ is illegal, despite the lack of any independent assessment of that content; and (3) all flagged content should therefore be removed as quickly as possible.

ARTICLE 19 is especially concerned that the Commission is pushing companies to proactively filter and delete content using algorithms or other technologies. In particular, the Commission relies on a shamefully distorted reading of Article 14 of the E-Commerce Directive (‘ECD’), promising that companies will not be found liable for content they proactively identify, on the assumption that they will delete that content swiftly by default. This, as EDRi has pointed out, also plainly contradicts the Commission’s own proposals in the Copyright Directive.

The Commission’s analysis also gets around the prohibition under Article 15 ECD on Member States imposing general monitoring obligations, since the Communication is not law and private companies are, in principle, not prohibited from undertaking general monitoring for their own purposes. In reality, however, the Communication’s position fundamentally undermines the Article 15 principle, rendering it devoid of any substance. It also seeks to put the determination of the legality of content as much as possible in the hands of algorithms, despite the fact that algorithms are notoriously bad at taking context into account. That this can have serious free speech implications barely gets mentioned.

Quite apart from the fact that companies have used trusted flaggers for some time, and that a quality mark is unlikely to make much difference to the quality of notices, the Commission’s faith in the trusted flagger system entirely ignores the fact that trusted flaggers are not independent. They have a mandate to combat a particular type of content, whether copyright infringement, terrorism or hate speech, which makes them particularly ill-suited to taking free speech arguments into account. They are inherently incapable of making an independent assessment of the legality of the content at issue. This is significant, as the Commission seems to equate flagged content with content that must be immediately taken down.

If a notice provided by a trusted flagger is unsatisfactory, the Commission then relies on companies – which are equally incapable of making an independent assessment – to decide on the legality of the content at issue.

The Commission therefore continues to delegate all responsibility for tackling illegal content online to the private sector, contrary to international standards on freedom of expression. The Commission’s language on counter-notices – whilst welcome – does little to redress the fundamental imbalance in its approach to illegal content online.

Overall, ARTICLE 19 considers that the Commission’s Communication seriously undermines freedom of expression and amounts to an attempt to weaken the EU’s own legal framework on this issue.