Over the course of a decade, many of us have come to rely on social media—for news and information, contact with friends old and new, social and political organizing, sharing our work, and much more. At this particular moment in time—when much of the world is living in voluntary or mandatory isolation because of efforts to curb the spread of coronavirus—the ability to access information and express ourselves feels more important than ever.
As many readers know, speech on social media is subject to a complex system of governance. Whereas traditionally, what we can say in public has been regulated either by communities or states, today a great deal of public conversation is instead privately regulated by corporations. These companies—like Facebook, Twitter, and Google—enforce their own rules, most often using commercial content moderation. They also enforce laws prohibiting certain speech, by complying with requests from governments and law enforcement.
Most major social media companies are based in the U.S. and are subject to that country’s legal framework. While the First Amendment to the U.S. Constitution protects speech from government interference, private companies are able to make their own decisions about the speech that they host. They are also protected by another law, commonly referred to as “Section 230” or “CDA 230”, which states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This law protects intermediary services that host or republish speech against a range of laws that otherwise could be used to hold them legally liable for what others say or do on their platforms.
As a result of this unique situation, companies like Facebook have near-total discretion over what lawful material they host or take down.
How commercial content moderation works
Before social media existed at its current scale, content moderation was typically conducted by communities, by volunteers, or in-house by companies. But as platforms like YouTube and Facebook grew and developed community standards for acceptable expression, they recognized that moderating content at scale required a much larger workforce. Many soon transitioned to using third-party firms, which cost less and allowed the management of the work to be outsourced. Dr. Sarah T. Roberts has written extensively about the development and practice of commercial content moderation; her book Behind the Screen: Content Moderation in the Shadows of Social Media provides great insight into the working conditions of content moderators.
Content moderation today relies on a combination of user reporting, or “flagging”, and automation. Typically, the process works something like this: A user of a given platform sees something that they believe violates that platform’s rules and reports it to the company. That report then goes into a queue and is reviewed, at some point over the following hours or days, by a human worker, who makes a quick decision to allow the content, delete it, or escalate the report.
If the content is deleted, the removal may be accompanied by a temporary or permanent ban on the user. If the report is escalated, it is usually reviewed by a more senior, full-time employee. In some cases, escalations involving complicated or controversial content decisions lead to policy changes or clarifications.
In other instances, automation is used to “flag” content, which is typically then reviewed by a human. Increasingly, however, companies are looking to automation not just to flag content, but to decide whether to allow or delete it.
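To make that workflow concrete, here is a minimal, purely illustrative sketch of a moderation pipeline in Python. Everything in it is an assumption for the sake of the example: the names (Report, review_queue, escalation_queue) are hypothetical and not drawn from any platform’s actual system, and real pipelines are vastly larger, combining many signals, classifiers, and tiers of reviewers.

```python
from collections import deque
from dataclasses import dataclass

# Hypothetical actions a reviewer can take on a reported post.
ALLOW, DELETE, ESCALATE = "allow", "delete", "escalate"

@dataclass
class Report:
    post_id: str
    reason: str   # e.g. "nudity", "hate speech"
    source: str   # "user_flag" or "automated_flag"

review_queue = deque()      # reports awaiting a first-line moderator
escalation_queue = deque()  # reports awaiting a more senior, full-time employee

def flag(report: Report) -> None:
    """A user report or an automated classifier hit enters the same queue."""
    review_queue.append(report)

def review(decide) -> None:
    """First-line review: allow, delete, or pass the hard cases upward."""
    while review_queue:
        report = review_queue.popleft()
        decision = decide(report)  # a quick human (or automated) judgment
        if decision == ESCALATE:
            escalation_queue.append(report)
        elif decision == DELETE:
            print(f"removing post {report.post_id} ({report.reason})")
        # ALLOW: nothing happens; the content stays up

# Example: an automated flag and a user flag land in the same pipeline.
flag(Report("post-123", "nudity", "automated_flag"))
flag(Report("post-456", "hate speech", "user_flag"))
review(lambda r: ESCALATE if r.reason == "hate speech" else DELETE)
```

The sketch is only meant to show the shape of the system described above: reports from humans and machines converge on one queue, a fast first-line decision is made, and only the hardest cases reach a senior reviewer.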
The global impact of content moderation
Neither human nor automated content moderation processes are infallible—both make a considerable number of mistakes, many of which have been documented by various groups, including the Electronic Frontier Foundation, where I work. These errors occur for a number of reasons, and they disproportionately affect certain groups, many of which are already vulnerable. Among those most heavily impacted by wrongful or erroneous takedowns are women, artists, LGBTQ+ individuals and groups, journalists, racial and ethnic minorities, and sex workers.
One contributing factor to the volume of erroneous takedowns in certain areas is policies that are already restrictive of lawful expression, such as Facebook’s rule requiring that people use their “authentic names”. Once a requirement like that one is in place, significant effort is typically put into enforcing it—in this instance, that effort includes allowing users to report one another for using a “fake name” and requiring reported users to submit a document from a Facebook-approved list of identification documents. Those who cannot or will not comply will most often find their accounts permanently suspended, regardless of how “legitimate” the name listed on their user profile may be.
Another contributing factor is that some rules are nearly impossible to enforce consistently. The prohibition on hate speech (whether defined as such by law or by corporate policy) enforced by most companies requires content moderators to consider a number of things, from local context and law to whether a given usage constitutes criticism, counterspeech, or satire. This is difficult enough for someone with a legal background, so when decisions are left up to less-trained workers, mistakes are inevitable.
Furthermore, as companies rely on automated technologies to police speech—as they are doing amidst the coronavirus pandemic—we are bound to see more mistakes occur. Automated technology is good at some things, such as identifying certain objects in images, but less good when it comes to things like detecting sentiment.
Over the years, the content moderation errors made by major social media platforms have been piling up. We’ve seen photographs of plus-sized models, famous paintings, and images of Copenhagen’s Little Mermaid statue removed because they were mischaracterized as “adult content.” We’ve witnessed the mass suspension of political activists in Egypt for directing a profanity at government officials, not unlike language used daily, without incident, to criticize President Trump. We’ve seen the erasure of drag queens and LGBTQ+ books, and the deletion of sex workers’ accounts.
Fighting back
Many communities affected by platform censorship have chosen to fight back—but fighting back against a public company can be an uphill battle. Major social media companies are beholden to their shareholders, not their users, and as such tend to change course on policies only when subject to significant pressure or public shaming.
One early example of effective advocacy came from a network of mothers and their supporters who were tired of Facebook removing images related to breastfeeding. They protested Facebook’s nudity policy and, although they were unable to get the policy overturned, the company relaxed its rules in 2015 to ensure that breastfeeding photos weren’t removed. Similarly, drag queens and other LGBTQ+ users who lost their accounts as a result of Facebook’s “authentic name” policy successfully pushed the company to let them use the names by which they are known in everyday life, rather than those on their IDs.
Others involved in creating “adult content” continue to fight back against what they see as overly restrictive policies. Most recently, anti-censorship activists stripped nude in front of Facebook’s New York offices (a legal act in that city) to demonstrate against the company’s prohibition on nudity. And sex workers continue to organize as companies increase crackdowns on even their most innocuous content.
As the coronavirus pandemic continues, there is cause for concern. Several companies have announced that their content moderation capacity has been heavily reduced; automation is increasingly taking the place of human moderation; and mechanisms for appealing wrong decisions are absent or broken. It is, for better or worse, on users and observers to be vigilant and hold companies accountable.