On 14 January 2022, ARTICLE 19 submitted input to the Office of Science and Technology Policy’s consultation on public and private sector uses of biometric technologies. In our submission, we highlight the implications that the use and deployment of biometric technologies have for human rights, and warn about the lack of an effective international framework to regulate their development and deployment. We believe that States should ban biometric mass surveillance and, together with private actors, ensure that the development and deployment of these technologies comply with international human rights standards.
The Office of Science and Technology Policy, an agency within the Executive Office of the US President, organised the consultation with a view to understanding the impact and usage of biometric technologies and the principles, practices and policies currently governing them.
In our submission, ARTICLE 19 emphasised that the use and deployment of biometric technologies have been steadily increasing in recent years and have become part of everyday life. Unfortunately, the international legal framework has not kept pace with this rapid deployment and is often absent or outdated. Biometric systems that rely on Artificial Intelligence (AI), facial recognition, emotion recognition and other data exploitation methods have not been sufficiently regulated and have therefore severely impacted human rights.
We also highlight that biometric mass surveillance has a significant impact on a range of human rights. It jeopardises the right to remain anonymous in public spaces, which in turn has a chilling effect on the right to protest. This practice collects and stores massive amounts of data without sufficient safeguards as to its use and protection from potential security breaches. A human rights-based approach is now more necessary than ever to rectify these negative impacts and bring biometric technologies under human rights law.
In order to address these problems, ARTICLE 19 recommends the following:
- States should ban biometric mass surveillance;
- States should ban the design, development and use of emotion recognition technologies;
- Public and private actors who design, develop and use biometric technologies should respect the principles of legitimacy, proportionality and necessity;
- States should set an adequate legislative framework for the design, development and use of biometric technologies;
- Government authorities must ensure that the design, development and use of biometric technologies are subject to transparency and open and public debate;
- Transparency requirements for the sector should be imposed and thoroughly implemented by both public and private sectors;
- States should guarantee accountability and access to remedies for human rights violations arising from biometric technologies;
- The private sector should design, develop and deploy biometric systems in accordance with human rights standards.
These recommendations are based on ARTICLE 19’s work and experience to date. We previously analysed the impact of biometric technologies on human rights, particularly on freedom of expression, in our 2021 policy When bodies become data: biometric technologies and freedom of expression. We also released a report in January 2021, “Emotional Entanglement”, that set out the human rights implications of emotion recognition, with a particular focus on the Chinese market. We have noted that current forms of deployment, particularly those involving AI, are mostly experimental, and that existing frameworks allow tech companies to exploit this legal grey area to avoid responsibility. For more information, see our April 2019 report “Governance with Teeth”.