In this analysis, ARTICLE 19 reviews the compatibility of Twitter’s Rules, policies and guidelines (as of August 2018) with international standards on freedom of expression.
The Twitter Rules are complemented by a range of policies on issues such as “hateful conduct” and “parody, newsfeed, commentary and fan account,” as well as “General guidelines and policies” covering, for instance, the company’s policy development and enforcement philosophy (together, ‘the Twitter Rules, policies and guidelines’). While the Twitter Rules, policies and guidelines attempt to deal with a wider range of content issues than was previously the case, our analysis shows that they are hard to follow, both in their presentation and in their application. They also generally fall below international standards on freedom of expression, particularly in relation to ‘hate speech’ and ‘terrorism.’ Although Twitter’s appeals process for the closing of accounts contains a number of positive features, it is unclear whether it is consistently applied in practice.
ARTICLE 19 encourages Twitter to bring its Rules, policies and guidelines in line with international human rights law and to continue to provide more information about the way in which those standards are applied in practice.
Summary of recommendations
- The Twitter Rules, policies and guidelines should be re-organised and consolidated so that the company’s rules in relation to particular types of content can be easily found in one place. Consideration should be given to making the Twitter Rules, policies and guidelines available in one consolidated document.
- Twitter should make clear when the Twitter Rules, policies and guidelines were last updated and identify which parts were amended.
- Twitter’s policies on “hateful conduct” should be clearly presented and more closely aligned with international standards on freedom of expression, including by differentiating between types of prohibited expression on the basis of their severity. Importantly, Twitter should provide case studies or more detailed examples of the way in which it applies its “hateful conduct” policies.
- Twitter should align its definitions of terrorism and incitement to terrorism with those recommended by the UN Special Rapporteur on counter-terrorism. In particular, it should avoid vague terms such as “violent extremism” and the “condoning,” “celebrating,” “glorification” or “promotion” of terrorism.
- Twitter should give examples of organisations falling within its definition of “violent extremist groups.” In particular, it should explain how it treats various governments’ lists of designated terrorist organisations, particularly in circumstances where groups designated as ‘terrorist’ by one government may be considered legitimate (e.g. as freedom fighters) by others. It should also provide case studies explaining how it applies these standards in practice.
- Twitter should explain in more detail the relationship between “threats,” “harassment,” and “online abuse,” and distinguish these from “offensive content” (which, as such, should not be restricted). Further, Twitter should provide detailed examples or case studies of the way in which it applies these standards in practice, including with a view to ensuring protection for minority and vulnerable groups.
- Twitter should state more clearly that, as a matter of principle, offensive content will not be taken down unless it violates other rules.
- Twitter should make more explicit reference to the need to balance the protection of the right to privacy with the right to freedom of expression. In so doing, it should refer to the criteria developed, inter alia, in the Global Principles on the Protection of Freedom of Expression and Privacy.
- Twitter should be more transparent about how its algorithms detect ‘fake’ accounts, including by listing the criteria on which those algorithms operate.
- Twitter should explain how the measures it is adopting to combat fake accounts, bots and the like differ from the removal of false information.
- Twitter should also define, or define more precisely, what it considers to be “suspicious activity,” “bad actors” or “platform manipulation.”
- Twitter should ensure that its appeals process complies with the Manila Principles on Intermediary Liability, particularly as regards notice, the giving of reasons, and appeals processes.
- Twitter should be more transparent about its use of algorithms to detect various types of content, such as ‘terrorist’ videos, ‘fake’ accounts or ‘hate speech.’
- Twitter should clarify whether it relies on a trusted-flagger system and, if so, provide information about that system, including the identity of its members and the criteria for joining it.
- Twitter should provide case studies of the way in which it applies its sanctions policy.
- In its Transparency Report, Twitter should provide disaggregated data on the types of sanctions it applies.