By Vidushi Marda (ARTICLE 19) and Ella Jakubowska (EDRi)
Many of us looked on in horror as stories emerged of so-called ‘emotion recognition’ technologies used to persecute Uyghur people in Xinjiang, China. Yet, as the same technologies are sold to governments by the European surveillance tech industry, we barely pay attention.
The European Union is on the cusp of adopting landmark legislation: the Artificial Intelligence Act. The law aims to create a European AI market that guarantees safety and puts people at its heart. It categorises uses of algorithmic systems into four risk levels, including ‘prohibited’ for those that are simply too dangerous.
Civil society has made important progress in persuading the European Parliament to support amendments banning public facial recognition and other mass surveillance uses. But one incredibly dangerous issue remains largely unaddressed: putting a stop to Europe’s burgeoning emotion recognition market.
AI-powered dystopia
In the last decade, the EU has invested millions of euros in deploying surveillance technology at its borders. From motion sensors that ‘predict and flag threats’ to security cameras with layers of surveillance capabilities, the EU’s approach is marked by hostility, and spurred by an aggressive turn towards ‘advanced’ technologies even when they do not work – as is the case with emotion recognition.
The premise of this new frontier is that AI systems can ‘infer’ a person’s inner emotional state from physical, physiological or behavioural markers such as facial expressions or vocal tone, and sort people into discrete categories like angry, happy or afraid. If it sounds too far-fetched to be true, that’s because it is. Built on discriminatory and pseudo-scientific foundations, emotion recognition is scientifically dubious and fundamentally inconsistent with human rights.
These issues are being highlighted with increasing urgency. The UK’s Information Commissioner recently warned against the use of emotional analysis technologies and reiterated that these tools may never work. Last year, one of Europe’s top courts ruled on the lack of public transparency regarding ‘iBorderCtrl’. This dystopian-sounding project saw the EU waste public money experimenting with automated systems to analyse people’s emotions at EU borders, attempting to tell from people’s facial expressions whether they were being ‘deceptive’ about their immigration claim.
Emotion recognition systems are not confined to the migration context. They are used around the world to (purportedly) detect whether people will be good employees, suitable consumers or good students, or whether they are likely to be violent.
In the proposed EU AI Act, technologies that could usher in those Orwellian measures are largely classified as ‘low or minimal’ risk. This means that developers’ only requirement is to tell people when they are interacting with an emotion recognition system. In reality, the risks are anything but ‘low’ or ‘minimal’.
A resurgence of junk science
An emotion recognition system claims to infer inner emotional states. This differentiates it from technologies that simply infer a person’s physiological state, such as a heart rate monitor which predicts the likelihood of a heart attack.
Particularly in the context of analysing faces to infer emotion, commercial applications are largely based on Basic Emotion Theory (BET), developed by psychologist Paul Ekman. BET proposes that there is a reliable link between a person’s external expressions and inner emotional state, that emotions briefly ‘leak’ onto people’s faces through micro-expressions, and that these emotions are displayed uniformly across cultures.
But evidence suggests otherwise. In 2007, the US Department of Homeland Security introduced the Screening of Passengers by Observation Techniques programme, teaching airport security officers to observe the behaviours, movements and appearances of air travellers. The idea was that officers could perceive deception, stress and fear from an individual’s micro-expressions. The programme raised serious risks of racial profiling and failed to win support even among trained Behavior Detection Officers, who found it to lack scientific validity. The US Government Accountability Office itself questioned the efficacy of these techniques and called for funding for the programme to be curtailed.
Multiple studies show that emotions gain meaning through culture, and are not uniform across societies. Micro-expressions have been found to be too brief to be reliable as an indicator of emotions. In 2019, a group of experts reviewed over a thousand scientific papers studying the link between facial expressions and inner emotional states and found no reliable relationship between the two.
Even in the face of significant evidence of emotion recognition’s fundamental flaws, the EU is embracing its AI-enabled resurgence in pursuit of easy answers to complex social problems.
At odds with human rights
Arguments in favour of continuing to train emotion recognition systems until they become more ‘accurate’ fail to recognise that these technologies, by definition, will never do what they claim to do.
Even if we were to set aside this foundational issue, emotion recognition strikes at the most fundamental of individual rights: human dignity. It classifies people into arbitrary categories bearing on the most intimate aspects of their inner lives. It necessitates constant surveillance to make intrusive and arbitrary judgments about individuals.
The technology’s assumptions about human beings and their character endanger our rights to privacy and freedom of expression, as well as the right against self-incrimination. Particularly for neurodiverse people – or anyone who doesn’t fit a developer’s idea of an archetypal emotion – the potential for discrimination is immense.
Emotion recognition is particularly pernicious because individuals have no way to disprove what it ‘says’ about them: the ‘truth’ is unilaterally ‘declared’ by authorities. This makes the existing power asymmetries – between employers and employees, law enforcement officers and individuals, migrants and border control authorities – even more stark.
Using emotion recognition technologies that embed and perpetuate these faulty assumptions is both irresponsible and untenable in democratic societies. Even Ekman himself has questioned such uses of his work.
This unwillingness to reckon with the foundations of emotion recognition plagues the AI Act. In legitimising the use of such technologies by classifying them as mostly ‘low’ risk, the EU’s AI proposal grossly mischaracterises them. It may even encourage the adoption of a technology that violates the proposal’s purported core principles of trustworthiness and human-centricity.
It is crucial to acknowledge the inherently flawed and dangerous assumptions underlying these technologies. The EU must not legitimise them by giving them a place on the European market. Our legislators have a unique opportunity to stand up for the rights of people all around the world. It’s time to ban emotion recognition.