AI technologies: Where do we draw the line?

A growing number of states and private companies are deploying advanced AI tools to monitor, track, and surveil people. According to the AI Global Surveillance Index, at least 75 of 176 countries are actively using AI technologies for surveillance, affecting over 6.9 billion people – more than 87% of the world’s population, or roughly 8 in every 9 people.

From monitoring parking lots to watching what we buy, and from the mass gathering of sensitive data like eye scans and fingerprints to deploying facial recognition technologies at scale, some use cases may be legal, but others are unnecessary, and many leave us exposed and unsafe. Crucially, a wide range of use cases falls into an unregulated grey area – precisely because the risks they pose to people have not been considered.

On 13 March 2024, the European Parliament officially adopted the world’s first AI Act, a comprehensive rulebook for AI. While this is a step in the right direction, the final Act stopped short of completely banning some of the most rights-infringing uses of biometric technologies.

In the wake of this development, ARTICLE 19’s latest report helps civil society organisations, governments, and anyone concerned about the uncritical adoption of AI tech to assess the legal and political conditions at play – and, in turn, to determine which approaches may be most effective in stopping these technologies from being deployed against the public. It draws on the first-hand experience of multiple organisations that have advocated for complete bans or moratoriums on AI biometric recognition technologies.

Drawing clear lines around the uncritical adoption of AI tech gives societies time to evaluate the risks and benefits of these technologies and to ensure that they serve, rather than harm, human beings.

Read the report

Who or what is responsible for this problem?

Globally, laws to regulate AI technologies are weak or absent: regulation and legislation have not kept pace with rapid technological development, leaving people unsafe and exposed.

Moreover, the promotion of AI technologies as a ‘smart’ solution encourages an uncritical demand for them. People consent to their use without realising that their rights to privacy and data protection still apply in public spaces.

In the hands of authoritarian governments or greedy private companies, AI technologies facilitate mass surveillance of individuals and entire societies, restricting our movement, preventing journalists from fulfilling their vital role in society, and suppressing public dissent.

What must be done to address this?

To protect people’s right to freedom of expression, ARTICLE 19 believes that governments and civil society must approach the use of AI biometric recognition technologies in a responsible and balanced way.

Where risks to human rights cannot be mitigated or power imbalances may become severely entrenched, clear red lines should be established around the use of these technologies. 

ARTICLE 19 also challenges narratives that falsely pit privacy against safety, as well as those that reject regulation altogether.

We believe that safety depends on privacy, that due diligence strengthens true innovation, and that embedding human rights throughout the technology life-cycle – from design to development to deployment – keeps people safe.

Learn more

Biometrics and privacy

Emotion recognition tech

Smart Cities surveillance

Reclaim Your Face

Ban the Scan