April 2021
Biometric technologies are highly intrusive, violate people’s privacy, fail to adequately protect personal data, and prevent people from enjoying their right to freedom of expression.
In its latest policy brief, ARTICLE 19 raises concerns about the rapid and increased use of biometric technologies by public authorities and the private sector.
States and private actors are increasingly using biometric technologies to analyse the way people act, look, and express themselves in public and private spheres. Their applications range from border control to unlocking a smartphone, but one thing is clear: their use is being normalised.
Worryingly, individuals’ human rights have almost been forgotten in the rollout of these technologies, when they should be at the heart of how biometric technologies are developed and used.
The misuse and abuse of biometric technologies results in profiling and categorising people based on age, gender, and skin colour. In other words, biometric technologies can be used in fundamentally discriminatory ways that continue to disadvantage those who have been historically excluded.
Similarly, they can be used to surveil what people are doing, with whom, how they feel, and even how they are likely to behave in the future. These are deeply invasive practices.
As a result, the use of these technologies violates the public’s rights to privacy and data protection, human dignity, non-discrimination, self-determination and access to an effective remedy.
ARTICLE 19 is therefore calling for a moratorium on the development and deployment of all biometric technologies until vital human rights safeguards are in place.
We also call for a complete ban on biometric mass surveillance in public spaces, a total ban on emotion recognition technologies, and greater human rights protections in the design, development and use of biometric technologies.
Ban the deployment of all biometric technologies until vital human rights safeguards are in place.
Biometric technology: Questions and Answers
What are biometric technologies?
Biometric technologies are used to gather and analyse personal data that can enable the unique identification of a person. This includes DNA, fingerprints, voice patterns or cardiac signatures.
One of the most commonly known forms of biometric technology is facial recognition.
Why is there an increase in the use of biometric technologies?
Due to the increasing availability of large datasets, as well as lower costs and improved machine learning, the development and deployment of biometric technologies has rapidly increased. Their deployment has been further accelerated by tech-solutionist responses to the COVID-19 pandemic.
Governments and private actors typically justify the use of biometric technologies with two narratives:
- The first is the protection of national security, for counter-terrorism measures, crime prevention or control and public safety.
- The second is that state or private authorities can use these technologies to deliver public services, such as the development of “smart cities” or public transport systems.
Claims that biometric technologies will increase security, reduce crime, lower costs or provide greater convenience are currently unproven.
Moreover, these claims fail to weigh the cost to individuals’ human rights. People should not be forced to forfeit their privacy and security in exchange for claims of improved security or convenience.
Equally, States must not abandon their responsibility to abide by human rights standards and the principles of legality, legitimacy, necessity and proportionality when they deploy a biometric system.
Why should we be worried about biometric technologies?
States and private actors regularly use biometric technologies to collect and generate large amounts of sensitive personal data such as fingerprints, eye scans, racial or ethnic origin, sex and so on.
Because this data can reveal intimate information about a person, it requires additional safeguards and enhanced protection to keep the data – and the public – safe.
However, States and private actors typically store this data in massive databases, often for much longer than they need, which creates serious concerns for individuals’ privacy.
This is because state or private actors can easily reuse the data for a purpose different from the one originally intended or approved – a problem known as ‘mission creep’. These databases are also vulnerable to massive security breaches and ripe for abuse.
Legally, even if people consent to their biometric data being used for a specific purpose, that consent does not cover any re-purposing, which is therefore unlawful.
Moreover, there are no adequate, up-to-date legal frameworks in place to regulate the use of biometric technologies.
States and private actors do not conduct risk or impact assessments and typically disregard the principles of necessity and proportionality, which are central to the protection of the rights to freedom of expression and privacy. To satisfy these principles, any technology must be the only way to achieve the purported legitimate aim, and the least invasive way to do so – often this is not the case.
As a result, biometric technologies can be misused and abused to entrench biases and discrimination.
How do biometric technologies impact people’s ability to exercise their freedom of expression?
ARTICLE 19 has three key concerns:
- Biometric mass surveillance has a chilling effect on freedom of expression.
Mass surveillance is the indiscriminate surveillance of the public – for example the use of facial recognition technology through cameras in subways, shopping centres, and streets.
If facial recognition technologies are used to identify individuals in public spaces, this affects their ability to remain anonymous and communicate anonymously in those spaces.
This has been shown to impact people’s behaviour, for example by deterring them from participating in public assemblies, or expressing their ideas or religious beliefs in public.
- This chilling effect is more severe for journalists, activists, political opponents and minority groups.
The use of these technologies can be particularly damaging for journalists, human rights defenders, and those belonging to minority groups at risk of discrimination.
The technologies can be used to target and monitor specific categories of people, track their behaviour and profile them based on their characteristics or behaviour.
This can have a chilling effect, preventing journalists from carrying out their work freely and safely, or discouraging activists or political opponents from organising or joining protests.
- Secrecy around deployment of these technologies limits the public’s ability to scrutinise how they are used.
People have a right to know what information is being gathered about them and how it is being used. However, currently, there is no sufficiently accessible way of finding out who is developing these technologies and how and why they are being deployed.
Public-private partnerships and contracts with governments are often not disclosed. Likewise, requests to access information on government use of these technologies are often denied. Information may only be released after pursuing judicial appeals which are typically long and costly.
These tactics deny journalists, activists, scientists and the public their right to information, and reduce their ability to scrutinise and report on the use of biometric technologies in society.
Case study: Facial recognition
Facial recognition technology involves processing digital images of people’s faces to identify them, verify their identity against existing data, or assess characteristics such as age, race, and gender. Its ability to make some of these assessments accurately is unproven, but it still raises significant human rights concerns.
Why should we worry about facial recognition?
Facial recognition is often used without people’s knowledge or consent. There is little transparency about its deployment and accuracy, leaving individuals’ personal data open to abuse and misuse. It can have significant impacts on free expression and other rights, such as:
- Right to anonymity: People have a right to anonymity in public spaces and online. The indiscriminate use of facial recognition in public spaces, for example through CCTV or police body cameras, violates this right and can prevent people from feeling safe to communicate and express themselves in public.
- Right to protest: If people are subject to facial recognition technologies during protests, it can have a chilling effect on their willingness to take part, as they may fear repercussions.
- Religious freedom: Alongside the risks of profiling, technologies such as facial recognition could infringe on the religious freedom of those who may be required to remove religious face or head coverings.
- Non-discrimination: Facial recognition can mark out characteristics protected under international law, such as race, religion and sex, enabling potential discrimination.
How can we protect the public from the risks of biometric technologies?
ARTICLE 19 believes that governments and companies should adopt a human rights-based approach to the design, development and use of biometric technologies.
Recommendations
Until clear safeguards are in place there should be a moratorium on the development and deployment of biometric technologies by both States and private actors.
We are calling for:
A total ban on biometric mass surveillance
States should ban the indiscriminate and untargeted use of biometric technologies to process biometric data in public and publicly-accessible spaces, both offline and online.
States should also cease all funding for biometric processing programmes and systems that could contribute to mass surveillance in public spaces.
A total ban on the design, development and use of emotion recognition technologies
By design, emotion recognition technologies are fundamentally flawed and are based on discriminatory methods that researchers within the fields of affective computing and psychology contest. They can never meet the narrowly defined tests of legality, legitimacy, necessity and proportionality.
States should establish international norms that ban the conception, design, development, deployment, sale, export, and import of these technologies in recognition of their fundamental inconsistency with human rights.
State and private actors to respect the principles of legitimacy, proportionality, and necessity in the design, development and use of biometric technologies
Both States and private actors should perform an adequate case-by-case assessment of the legitimacy, proportionality and necessity of the use of biometric technologies.
States should ensure that neither they nor private actors ever use biometric technologies to target those individuals or groups that play significant roles in promoting democratic values, for instance journalists and activists.
States to adopt adequate legislative frameworks for the design, development and use of biometric technologies
For legitimate uses that meet the necessity and proportionality tests, States should adopt an adequate legislative framework for the development and deployment of biometric technologies.
At minimum, this must include rules protecting individuals’ data; requirements regarding the quality of data including tests for accuracy and racial bias; obligations for human rights and data protection impact assessments; obligations for developers and users to minimise risk; a binding code of practice for law enforcement; and specific provisions to avoid dual use or mission creep with such technologies.
Transparent, open and public debate on the design, development and use of biometric technologies
As the use of biometric technologies increasingly affects critical societal processes and democratic values, their design, development and deployment should only be allowed following a public and open debate that includes the voices of experts and civil society.
Adoption and thorough implementation of transparency requirements for the sector
States should publicly disclose all existing and planned activities and deployments of biometric technologies. They must also ensure transparency in public procurement processes, and ensure the right of access to information, including proactive publication, on activities related to biometric technologies.
States and private actors should regularly publish their data protection impact assessments, human rights impact assessments and risk assessment reports, together with a description of the measures taken to mitigate risks and protect individuals’ human rights.
Guarantees of accountability and access to remedies
Legislative frameworks for the development and deployment of biometric technologies should provide for clear accountability structures and independent oversight measures.
States should condition private sector participation in the biometric technologies used for surveillance purposes – from research and development to marketing, sale, transfer and maintenance – on human rights due diligence and a track record of compliance with human rights norms.
The legislative framework should also ensure access to effective remedies for individuals whose rights are violated by the use of biometric technologies.
Private sector to design, develop, and deploy biometric systems in accordance with human rights standards
Companies engaged in the design, development, sale, deployment, and implementation of biometric technologies should:
- Ensure the protection and respect of human rights standards, by adopting a human-centric approach and performing human rights impact assessments.
- Set adequate and ongoing risk assessment procedures to identify risks to individuals’ rights and freedoms, in particular their rights to privacy and freedom of expression, arising from the use of biometric technologies.
- Provide effective remedies in case of violation of individuals’ human rights.
The push to develop tools and products for their own sake – rather than placing technology at the service of human beings and designing solutions to existing problems – is fundamentally flawed. If we are not people-centred, we fail to reflect sufficiently on the risks these technologies pose to people or the harm they can cause.
Ban the deployment of all biometric technologies until vital human rights safeguards are in place.
In the EU, we’re part of the Reclaim your Face campaign to ban biometric mass surveillance in public spaces