Today, ARTICLE 19 published its report on emotion recognition technologies in China and their impact on human rights, including freedom of expression.
Emotion recognition is a biometric technology that purports to analyse a person’s inner emotional state. These applications are used in a number of ways: law enforcement authorities use them to identify suspicious individuals; schools use them to monitor how closely students are paying attention in class; and private companies use them to determine people’s access to credit.
Our report shows how these technologies are currently marketed and used in China and why it is crucial for the international community to take note. We demonstrate the need for careful advocacy strategies against their wider rollout, targeting the design, development, sale, and use of emotion recognition technologies. Crucially, the report emphasises that acting now, before these technologies become more commonplace, is vital for the effective promotion and protection of people’s rights, including free access to information and free speech.
Vidushi Marda, lawyer and digital researcher for ARTICLE 19, said: “High school students should not fear the collection of data on their concentration levels and emotions in classrooms, just as suspects undergoing police interrogation must not have assessments of their emotional states used against them in an investigation. It is imperative that we unpack how these technologies are being used and assess what impact they are likely to have internationally before they become more widespread.”
“While some stakeholders claim that these technologies will improve with time, we believe that their design, development, deployment, sale and transfer should be banned due to their racist foundations and fundamental incompatibility with human rights,” Marda added.
Some of the main findings from the research on deployment of emotion recognition technologies in China include the following:
- The design, development, sale, and use of emotion recognition technologies are inconsistent with international human rights standards, particularly in how they are used to surveil and monitor people, control access to opportunities, and impose power.
- The invisible, opaque, and unfettered manner in which emotion recognition is being developed risks depriving people of their rights to freedom of expression, privacy, and dissent through protest, amongst others.
- Emotion recognition’s pseudoscientific foundations render this technology untenable.
- Chinese law enforcement and public security bureaus are attracted to using emotion recognition software as an interrogation and investigation tool.
- While some emotion recognition technology companies allege they can detect sensitive attributes, such as mental health conditions and race, none have addressed the potentially discriminatory consequences of collecting this information in conjunction with emotion data.
For more information, please email us at [email protected].