Emotion recognition technology is pseudoscientific and carries enormous potential for harm.
ARTICLE 19’s latest report, Emotional Entanglement, provides evidence and analysis of the growing market for emotion recognition technology in China and its detrimental impact on human rights.
The report demonstrates the need for strategic and well-informed advocacy against the design, development, sale, and use of emotion recognition technologies.
Above all, we emphasise that the timing of such advocacy – before these technologies become widespread – is crucial for the effective promotion and protection of people’s rights, including their rights to freedom of expression and opinion.
The report provides a factual foundation that researchers, civil society organisations, journalists, and policymakers can build upon to investigate people’s experience of this technology and illustrate its harms.
What is emotion recognition technology?
Unlike facial recognition or biometric applications that focus on identifying people, emotion recognition technology purports to infer a person’s inner emotional state.
The technology is becoming integrated into critical aspects of everyday life: law enforcement authorities use it to label people as ‘suspicious’, schools use it to monitor students’ attentiveness in class, and private companies use it to determine people’s access to credit.
Why should we be concerned about it?
Firstly, by design, emotion recognition technology seeks to impose control over people. Built on discredited scientific foundations, this intrusive and inaccurate technology encourages mass surveillance as an end in and of itself.
Secondly, the technology is being developed without consulting people and with no regard for the immense potential it carries for harm. As a result, it is likely to disproportionately affect minorities and those who are already deprived of their rights.
Most importantly, by analysing and classifying human beings into arbitrary categories that touch upon the most personal aspects of their being, emotion recognition technology could restrict access to services and opportunities, deprive people of their right to privacy, and threaten people’s freedom to express and form opinions.
Emotion recognition technology is deeply problematic because
It is often pseudoscientific and does not do what it claims to do
Emotion recognition technologies are built on two fundamental assumptions.
The first is that it is possible to gauge a person’s inner emotions from their external expressions.
The second is that such inner emotions are both discrete and uniformly expressed across the world.
On both counts, the opposite is true.
Facial expressions do not always reflect our inner emotions. Keeping a straight face or smiling when sad are common practices. In other words, people often mask or suppress their emotions, so facial expressions can tell us very little about how happy, surprised, disgusted, sad, angry or afraid people are.
Secondly, emotions are as diverse, nuanced and complex as people. Culture and societal attitudes fundamentally shape our emotions. For example, where nudity may shame or disgust people of one culture, it may be elevated to art in another. So, facial expressions are filtered through culture to gain meaning.
By claiming to infer people’s “true” inner states and making decisions based on these inferences, emotion recognition technologies would cement arbitrary assumptions about people as ground truth.
This has two significant implications:
First, it gives rise to significant chilling effects on the right to freedom of expression. The prospect of being not only seen and identified but also judged and classified functions as an intimidation mechanism, pressuring individuals to conform to “good” forms of self-expression lest they be classified as “suspicious”, “risky”, “sleepy” or “inattentive”.
Second, given the wide range of current applications, it normalises mass surveillance as a part of an individual’s daily life, particularly in civic spaces. Importantly, freedom of expression includes the right not to speak or express oneself.[1]
[1] Joseph Blocher, ‘Rights To and Not To’, California Law Review, Vol. 100, No. 4 (2012), pp. 761–815, at 770.
It is intrinsically predicated on mass surveillance
These applications also entail the mass collection of sensitive personal data in invisible and unaccountable ways, enabling tracking, monitoring, and profiling of individuals, often in real time.
Therefore, it violates the Right to Privacy
Freedom of expression and privacy are mutually reinforcing rights. Privacy is a prerequisite to the meaningful exercise of freedom of expression, given its role in preventing State and corporate surveillance that stifles free expression.
Freedom of expression is fundamental for human creativity and innovation and the development of individual personalities. It is vital for rich, diverse cultural expression.
Equally, the Right to Privacy is essential to ensuring individuals’ autonomy. It helps us build a sense of self and enables people to forge relationships with others.
Any interference with the Right to Privacy must be provided for by law, pursue a legitimate aim, and be necessary and proportionate.
By design it violates the right against self-incrimination
In public security and national security use cases, emotion recognition often paves the way for people to be labeled as “suspicious” or meriting closer inspection. It is also used during interrogation.
This runs counter to the right against self-incrimination under international human rights law. Article 14(3)(g) of the ICCPR lays down that the minimum guarantee in the determination of any criminal charge is that every person is entitled “not to be compelled to testify against himself or to confess guilt”. This includes the right to silence.
The attribution of emotions like guilt, anger or frustration is conducted and determined by the entity deploying this technology, which collects, processes and categorises information to make inferences that can directly impact people’s freedom.
By claiming to detect and signal guilt, emotion recognition flips the script on this right.
It violates the Right to Equality and Non-Discrimination
As an example, by flagging those people who find immigration and security lines more “stressful” than others, emotion recognition technology would result in both racial and religious profiling, and compound discriminatory behaviour against those who are already marginalised.
It has massive potential for function creep and is ripe for abuse
Function creep can be attributed to the absence of a valid legal basis for the use of emotion recognition technologies. It also stems from a more general tech-solutionist tendency to use new technologies to solve administrative and social problems.
A basic tenet of international human rights law is that giving unfettered discretion to entities in power can violate rights. In these technologies, such unfettered discretion is a defining feature, not a bug.
“Emotion recognition’s application to identify, surveil, track, and classify individuals across a variety of sectors is doubly problematic – not just because of its discriminatory applications but because it fundamentally does not work.”
Toward an approach for responsible technology
Shouldn’t genuine innovation actually solve problems as opposed to creating them?
Opening Pandora’s box
Rather than placing technology at the service of human beings or designing solutions that solve existing problems, the push to develop tools and products for their own sake is fundamentally flawed.
If we’re not people-centred, we fail to reflect sufficiently on the risks these technologies pose to people or the harm they can cause. We have no rules or laws in place to govern their use, let alone any minimum safety standards (common to all other industries). This means we’re ill-equipped to deal with any fallout.
We need proper consultation and risk assessment for the development of tech. In other words, we need to look beyond what the tech is (or what it claims to do) and consider how it will be used (or abused), who will have a vested interest in the data, where the data will come from, and who it’s likely to hurt most.
In short, we need an approach for responsible tech – towards technology that does no harm.
Recommendations
This report has covered vast terrain: from the legacy and efficacy of emotion recognition systems to an analysis of the Chinese market for these technologies. We direct our recommendations as follows.
To the Chinese Government
1. Ban the development, sale, transfer, and use of emotion recognition technologies. These technologies are based on discriminatory methods that researchers within the fields of affective computing and psychology contest.
2. Ensure that individuals already impacted by emotion recognition technologies have access to effective remedies for violation of their rights through judicial, administrative, legislative or other appropriate means. This should include measures to reduce legal, practical and other relevant barriers that could lead to a denial of access to remedies.
To the International Community
Ban the conception, design, development, deployment, sale, import and export of emotion recognition technologies, in recognition of their fundamental inconsistency with international human rights standards.
To the Private Companies Investigated in this Report
1. Halt the design, development, and deployment of emotion recognition technologies, as they hold massive potential to negatively affect people’s lives and livelihoods, and are fundamentally and intrinsically incompatible with international human rights standards.
2. Provide disclosure to individuals impacted by these technologies, and ensure that effective, accessible and equitable grievance mechanisms are available to them for violations of their rights as a result of being targeted by emotion recognition.
To Civil Society and Academia
1. Advocate for the ban on the design, development, testing, sale, use, import, and export of emotion recognition technology.
2. Support further research in this field, and urgently work to build resistance by emphasising human rights violations linked to uses of emotion recognition.
Ban the design, development, sale, and use of emotion recognition technologies with immediate effect.
Download our report