ARTICLE 19 participated in the stakeholder consultation held on 18 February 2025, organized by the co-facilitators of the intergovernmental process and consultations to identify the terms of reference and modalities for the establishment and functioning of the Independent Scientific Panel on Artificial Intelligence (AI) and the Global Dialogue on AI Governance, both emanating from the Global Digital Compact.
Both mechanisms are based on recommendations made by the United Nations High-Level Advisory Body on AI (HLAB-AI) in its Final Report, ‘Governing AI for Humanity’ (HLAB Report). For a comprehensive assessment of these proposed mechanisms, one must consider the full HLAB Report and its recommendations.
ARTICLE 19 believes that the HLAB Report identified an appropriate overarching approach to AI governance but missed several opportunities to operationalize this effectively, thus jeopardizing the protection of human rights and the freedom of speech and expression. While the core principles underpinning both the HLAB-AI’s Interim Report and the Final Report underscored the importance of human rights, international human rights law was not effectively incorporated into its key recommendations.
In this statement, ARTICLE 19 identifies four broad thematic gaps in the HLAB Report, namely the absence of a specific discussion on the freedom of speech and expression; the omission of human rights due diligence requirements; the lack of engagement with representation gaps that the HLAB Report itself identified; and risks related to the HLAB Report’s proposal to ensure a coherent approach to international AI governance. To fill these thematic gaps, ARTICLE 19 proposes several recommendations in this statement.
Introduction and background
The Global Digital Compact (GDC) was adopted by United Nations (UN) Member States as an annex to the Pact for the Future at the Summit of the Future in New York in September 2024. Under its objective 5, ‘Enhance the international governance of Artificial Intelligence (AI) for the benefit of humanity’, the GDC sets out to ‘advance international governance of AI in ways that complement international, regional, and multistakeholder efforts’.
It decided to, first, ‘establish, within the UN, a multidisciplinary Independent Scientific Panel on AI with balanced geographic representation to promote scientific understanding’; and second, to ‘initiate, within the UN, a Global Dialogue on AI Governance involving Governments and all relevant stakeholders’.
These decisions were based on recommendations made by the UN High-Level Advisory Body on AI (HLAB-AI), as contained in its Final Report titled “Governing AI for Humanity” (HLAB Report), published on 19 September 2024, following publication of its Interim Report (Interim Report) in December 2023.
In order to undertake a comprehensive assessment of the decisions in the GDC, one needs to consider the full HLAB Report, including all of its recommendations, as the impetus and framing for these decisions.
The HLAB Report reaffirmed the Interim Report’s articulation of international human rights law, along with the UN Charter and international commitments on the Sustainable Development Goals (SDGs), as a core guiding principle. The HLAB Report also noted that the inclusion of this principle in the Interim Report received the strongest endorsement from all stakeholders consulted, including from ARTICLE 19. Further, the HLAB-AI explicitly stated that “the foundational commitment to human rights is cross-cutting and applies to all the recommendations made in this final report.”
ARTICLE 19 believes that this is an important affirmation of international human rights law in the formulation of any domestic or global governance of AI. However, the HLAB Report missed a crucial opportunity to specifically incorporate international human rights law into its operational recommendations. As a result, human rights law was relegated to an abstract stipulation with limited utility in practice.
With this in mind and drawing from our previous submission to the HLAB’s Interim Report, we highlight four thematic gaps in the HLAB Report. We propose several recommendations as the HLAB Report continues to be operationalized, including throughout the set-up of the Independent Scientific Panel on AI and the Global Dialogue on AI Governance.
No specific discussion on the impact of AI on freedom of expression and media freedom
ARTICLE 19 regrets that AI’s impact on the freedom of expression, and specifically on human rights defenders and journalists, was not documented or addressed in the report.
As ARTICLE 19 noted in its submission on the HLAB-AI’s Interim Report, AI directly impacts multiple groups who are often subjected to intimidation, harassment and threats of violence through tactics such as harassment by bot networks, the use of generative AI to create blackmail material, and AI-driven surveillance techniques. We also noted that AI affects media organisations in many ways, including broader dissemination strategies, curating access based on reader patterns, and translating content for new audiences. ARTICLE 19 emphasized that the use of AI should not be treated as a justification for excessive media regulation in a way that undermines freedom of expression.
Despite these clear risks, freedom of expression received limited attention in the HLAB Report. The impact of AI on freedom of expression receives specific mention only once, in a section categorizing AI risks, and the examples of risks to freedom of expression and information are limited to nudging, personalized information and information bubbles. Illegal and repressive uses of AI that threaten media freedom are omitted entirely, an omission that could give states a free hand in using AI technology for repression and for constraining media freedom.
Omission of requirements on human rights due diligence
In its statement on the Interim Report, ARTICLE 19 specifically welcomed the inclusion of a simplified schema that coordinated various aspects of global AI governance across the AI lifecycle, covering the categories of data, models, benchmarks and applications. Human rights due diligence (HRDD) was one of the three primary components of the application prong. The Final Report, however, omitted any requirement for HRDD from its recommendations. HRDD must be a critical component of AI governance across the full AI lifecycle, as the process compels any entity developing, deploying or applying AI-based solutions to place human rights at the center of its decision-making, rather than treating them as an afterthought.
This omission was particularly glaring because, in line with Principle 17 of the UN Guiding Principles on Business and Human Rights, the UN Secretary-General’s Office has published Guidance on Human Rights Due Diligence for UN entities to prevent adverse human rights impacts associated with the use of digital technology. It lays out a detailed methodology for conducting human rights due diligence wherever technology is deployed. The HLAB Report therefore missed an opportunity to endorse the value of this process and to emphasize its relevance and applicability to the governance of AI.
No engagement with the identified representation gaps
The HLAB Report highlighted the widespread exclusion of many nations from AI governance and the pressing challenge of ensuring equitable access to advanced AI resources. For example, an examination of inter-regional AI governance initiatives illustrates that while seven countries are signatories to all of them (Canada, France, Germany, Italy, Japan, UK, USA), 118 countries are not party to any of them. Of those 118 excluded countries, 48 are African nations, 44 are from the Asia-Pacific region, and 25 are from Latin America and the Caribbean.
However, the HLAB Report’s recommendations did not articulate any specific mechanisms to enable diplomats and policy-makers from excluded countries to take an equal seat at the table. The HLAB Report did mention the Global Digital Compact and the World Summit on the Information Society (WSIS) Forum in 2025 as two additional policy windows where a globally representative set of AI governance processes could be institutionalized to address representation gaps. However, it failed to specify financial or capacity-based enablers that could help overstretched policy-makers from the majority world participate meaningfully in global debates.
Risks related to the proposal to ensure a coherent approach to international AI governance
The HLAB Report highlighted that gaps in representation, coordination and implementation in the emerging international AI governance regime can only be addressed through partnerships and collaboration with existing institutions and mechanisms. It therefore proposed the creation of ‘a small, agile capacity in the form of an AI Office within the UN Secretariat’. The GDC adopted this recommendation, and the UN Office for Digital and Emerging Technologies (ODET) was set up on 1 January 2025, with AI governance as a key focus. However, centralizing AI governance, and issues around digital and emerging technologies more broadly, within a single office based in New York moves away from the existing decentralized, multi-stakeholder ecosystem of digital technology policymaking within the UN. This raises questions about how diverse and independent stakeholders with an interest in AI governance will be able to participate meaningfully in AI governance decision-making processes at the UN.
Recommendations
Bearing the aforementioned gaps in mind, ARTICLE 19 recommends that the following steps and amendments be undertaken during the operationalization of the HLAB Report’s Recommendations:
First, in terms of the two recommendations that are subject to the ongoing intergovernmental process and consultations:
ARTICLE 19 recommends that four foundational principles should apply across both the Independent Scientific Panel on AI (recommendation 1) and the Global Dialogue on AI Governance (recommendation 2).
- Both mechanisms should ensure a multistakeholder model of AI governance. All stakeholders, including human rights actors and communities most impacted by AI applications, must be able to meaningfully participate in decision-making.
- All human rights must be protected throughout the full lifecycle of all AI technologies. Freedom of expression is a key enabling right in the digital sphere, and the rights to equality and non-discrimination are of utmost importance, given the identified risks of exacerbating discrimination and inequalities.
- Duplication with existing AI governance initiatives within and beyond the UN system should be avoided. AI governance requires a holistic and global approach, turning a patchwork of initiatives into a coherent approach in compliance with international law, international human rights law and the Sustainable Development Goals.
- Both the Panel and the Dialogue should be integrated into existing UN structures and leverage existing UN expertise, in particular from OHCHR, ITU and other relevant UN actors.
Second, in terms of the other recommendations made by the HLAB-AI in its HLAB report:
(a) The Capacity Development Network outlined in Recommendation 4 must have a dedicated module, training program and trainers on human rights law and its application to AI governance. As of now, human rights capacity is mentioned only in passing in relation to building the AI governance capacity of public officials, and is omitted from the enumeration of the specific kinds of training envisaged for researchers and social entrepreneurs. This is, unfortunately, an instance of a principled commitment to human rights not translating into a clear and implementable recommendation.
(b) The Global Fund for AI articulated in Recommendation 5 should specifically and explicitly identify support related to the protection of human rights. The Fund should specifically include dedicated funding towards training public officials, supporting human rights defenders and journalists impacted by AI and supporting academic research regarding the impact of AI on human rights.
It is also unclear how the Fund’s disbursal mechanisms will engage with specific contexts and challenges related to the AI digital divide. The report does not address the challenges of disbursal and operationalisation in specific contexts, the trade-offs involved, or the guardrails needed to ensure the Fund’s fair and accountable management. A centralised fund will only work if it is designed to actively incorporate local contexts, actors, perspectives and interests into its functioning; otherwise it will fail to address foundational questions around governance and capacity.
(c) The articulation of the Global AI Data Framework (Recommendation 6) must explicitly incorporate international human rights law, including the right to privacy. While the HLAB Report clarified that the full details of the framework’s underpinning principles ‘are beyond its scope’, it outlined that key principles would include interoperability, stewardship, privacy preservation, empowerment, rights enhancement and AI ecosystem enablement. ARTICLE 19 accepts that the specific details of a global AI data framework are beyond the HLAB Report’s scope, but the HLAB Report should have identified human rights as a core pillar of this framework and articulated specific measures, such as a human rights due diligence or impact assessment process, to ensure the protection of human rights.
(d) The UN should support the development of regional AI safety institutes to address representation gaps. These institutes could consolidate resources, develop local expertise, and advocate for the needs of global-majority nations in future AI governance discussions. This could enable AI systems and governance frameworks to be inclusive and responsive to the needs of all nations, not just the most technically advanced.
(e) The UN must ensure that the newly established ODET engages actively and meaningfully with a diverse set of stakeholders, including human rights experts and the communities most impacted by AI development and applications.
(f) There should have been a separate recommendation addressing the concentration of resources and power across several components of the AI supply chain, including computing power, data, technical expertise and financial resources. Transnational governance mechanisms and rules that account for and mitigate such power concentration would help ensure that asymmetries in the global ecosystem are not misused by the powerful to undermine the rights of vulnerable communities.
Next Steps
ARTICLE 19 urges the UN, inter-governmental organizations, states and other stakeholders to consider the recommendations put forth by the HLAB-AI with the specific incorporation of international human rights law as described above. Responsible, secure, and privacy-preserving AI by design should be our collective priority. In particular, the implementation process of the GDC should be used to incorporate ARTICLE 19’s recommendations centered on human rights law, including throughout the set-up of the Independent Scientific Panel on AI and the Global Dialogue on AI Governance.
Without this specific incorporation, AI governance efforts run the risk of merely paying lip service to international human rights law at the abstract level. The costs of the development, deployment and procurement of AI by states and non-state actors alike are too high without adequate safeguards preserving international human rights law and the freedom of speech and expression.