The Artificial Intelligence Action Summit, which took place in Paris on 10 and 11 February, has sounded a clear warning about the direction of global AI governance. While paying lip service to human rights, the event was dominated by an uncritical embrace of AI’s potential, a flurry of investment announcements, and a concerning shift towards deregulation. This approach prioritises rapid AI development at the expense of crucial protections for individuals and society. It is imperative that decision-makers change course. AI governance must be firmly grounded in human rights principles and commit to a genuinely inclusive, multi-stakeholder approach. Only by fully accounting for the human, social, and environmental impacts of these powerful technologies can we ensure AI serves to enhance, rather than undermine, our democratic values and societal well-being.
In recent weeks, the focus of Western governments has been on outspending rivals in AI investment rather than addressing AI's long-term societal and human rights impacts. In January, US President Trump unveiled the Stargate Project, a $500 billion investment in AI infrastructure. The summit also showcased massive private AI investment announcements in France and Europe. French President Macron announced €109 billion in private AI investment, while Commission President Ursula von der Leyen launched InvestAI, the largest European public-private AI initiative to date, set to mobilise €200 billion for investment in AI, including a new €20 billion European fund for AI gigafactories.
The overarching narrative at the summit echoed this trend. It rested on several questionable premises, including the assumption that AI is inherently a force for good and that regulation hinders innovation.
AI – a force for good? It depends
The declaration from the 2023 AI Safety Summit at Bletchley Park in the UK warned that ‘AI also poses significant risks’ and affirmed the ‘urgency of addressing them’. This year, the tone was markedly different. Leaders such as Ursula von der Leyen painted an almost entirely positive picture, proclaiming that AI would enhance healthcare and drive competitiveness. Discussion of potential risks was largely absent, with the prevailing narrative suggesting that more AI is inherently better.
This approach aligns closely with the preferences of the industry players present, who have invested heavily in AI and stand to profit as its expansion reinforces their dominance.
Far less space at the summit went to the growing evidence of AI's impact on human rights: its potential for misuse and discrimination, its exploitative supply chains, and its environmental harms, all of which ARTICLE 19 and other human rights organisations continue to highlight as requiring urgent attention and action.
The false dilemma of choosing innovation over regulation
Industry players also stand to benefit from another questionable premise that was increasingly noticeable at the AI summit: the familiar claim that regulation stifles innovation. Indeed, the summit seemed to signal a mood shift in the EU’s approach, with a growing emphasis on AI innovation over regulation.
This shift in mindset and narrative raises serious questions about weakening enforcement of the recently adopted EU AI Act. Now entering implementation, the Act establishes a risk-based framework to ensure transparency for high-risk AI applications. The US already exhibits strong scepticism toward AI regulation – and has explicitly expressed strong opposition to the EU's regulatory approach.
We warn of a potential race to the bottom, where safeguards and the protection of human rights are sidelined in the name of innovation.
The summit’s declaration contains some references to the need to respect human rights and humanitarian law. Yet the language is weak and generic, and falls short of a strong commitment to centring AI governance on human rights.
It is also worth noting that the summit disproportionately focused on private AI deployment rather than on its use by public bodies, whether in welfare, migration, or law enforcement. This matters because some of the most serious risks to democracy and the rule of law arise precisely in these public-sector contexts, and they often fall hardest on the most marginalised communities.
It is unsurprising that the realities of those most affected by AI’s human rights impacts did not take centre stage at the summit. As is often the case in digital governance fora, the summit amplified the voices of those already in power – primarily industry – while sidelining critical civil society perspectives. Many have long emphasised that AI governance must be genuinely multi-stakeholder, yet civil society continues to face major barriers to participation, from financial constraints to opaque invitation processes.
Human rights are not a hindrance to AI – they’re ensuring its future
Some might call raising human rights concerns a roadblock in AI development. Some might say this call is futile or merely obstructionist. We call it building a foundation.
Integrating human rights principles into AI governance will create a framework that protects individuals, benefits society, and fosters innovation. It will lead to more robust, ethical, and widely accepted technologies. It will prevent costly legal and reputational damage for companies and governments in the long run. It will make the AI ecosystem more stable and trustworthy.
The global AI race has accelerated sharply, and the enormous investments now pledged carry significant human rights risks with them. As AI investment surges, it becomes even more urgent to fully assess and address the human rights risks of AI. Taking those risks seriously is a prerequisite for acceleration, not a hindrance to it. If the goal is to advance this technology as quickly as possible, prioritising AI safety, grounded in human rights, is a must. This also means that meaningful civil society participation in AI governance can no longer be an afterthought.