EU: Safeguarding human rights in the Code of Practice on General-Purpose AI

Over the summer, the European Union's AI Office launched a multi-stakeholder consultation on the Code of Practice on General-Purpose AI – a voluntary mechanism for providers of general-purpose AI models, mandated under the EU Artificial Intelligence Act. As a participant in the process, ARTICLE 19 is calling for the Code of Practice to be grounded in international human rights law and standards. The Code must embed robust, future-proof measures for comprehensive risk management of both general-purpose AI models and their downstream applications, throughout the lifecycle of these technologies.

The European Artificial Intelligence Act entered into force on 1 August 2024, establishing a legal framework to govern the development, deployment and use of artificial intelligence within the EU. The AI Act focuses on AI systems: the physical products or software applications built around or on top of an AI model. It classifies AI systems by their level of risk and imposes varying degrees of regulation depending on that risk, as well as on a system's purpose and deployment context.

With the rise of general-purpose AI, driven by applications such as OpenAI's ChatGPT and Google's Gemini, the EU recognised the need to regulate general-purpose AI (GPAI) at the level of the underlying model, not just the system built on top of it.

General-purpose AI and the need for specific regulation 

GPAI models have no single intended purpose: they are designed to perform a wide variety of tasks across different domains, rather than being limited to one narrowly defined function.

However, their rapidly increasing capabilities and broad applicability also mean they can cause harm at scale. Because GPAI is designed to perform many generally applicable functions, it can heighten risks through:

  • Unintended consequences: the opaque nature of GPAI models makes them susceptible to uses that developers may not, or cannot, anticipate. This can result in unintended and potentially harmful outcomes, especially in areas such as employment, policing, or access to social services.
  • Malicious misuse: GPAI models can facilitate the creation and propagation of disinformation and hate speech, automate surveillance, and enable other unethical practices.
  • Embedded biases: GPAI models often reflect, reinforce and even amplify biases embedded in their training data. This can lead to erroneous decisions or skewed outcomes that unfairly impact certain groups, especially marginalised populations, and perpetuate stereotypes and existing inequalities, particularly in employment, border management, policing, or credit scoring.
  • Privacy violations: GPAI models involve extensive data processing that can expose or reproduce sensitive information, including personal data. Without proper safeguards, this breaches user privacy and opens the door to further misuse.

ARTICLE 19 is participating in the multi-stakeholder consultation to develop a voluntary Code of Practice for GPAI model providers and downstream application providers under the EU AI Act. Throughout the process, we will advocate for the EU General-Purpose AI Code of Practice to be grounded in international human rights law. The Code should focus on responsible and thorough risk identification, assessment, and mitigation, alongside strong internal governance measures, throughout the entire model lifecycle, from inception to deployment.

Specifically, we will be calling for: 

  • A clearly defined, structured, and comprehensive taxonomy of systemic risks, rooted in international human rights frameworks and ethical principles. Such a taxonomy is essential for addressing the broad, multi-dimensional challenges posed by GPAI technologies.
  • Risk assessment and mitigation measures that are proactively implemented at every stage of the AI model lifecycle (pre- and post-deployment) and adaptable to the rapidly evolving landscape. They should be developed through an interdisciplinary approach that combines legal expertise, technical know-how, and a deep understanding of the societal context, ensuring that GPAI systems are not only innovative but also aligned with societal values and human rights, compliant with legal requirements, and respectful of human dignity.
  • Public-facing disclosure and transparency requirements, alongside reporting to the EU AI Office, for providers of high-risk GPAI models. To genuinely enhance accountability, the disclosed information should lead to concrete action.
  • A clear, tiered system of accountability for model developers and downstream application providers, which is essential to ensuring the responsible development, deployment, and use of GPAI. A tiered approach enables differentiated levels of responsibility based on the risks associated with the AI model and its applications, ensuring that the most significant risks are met with stricter oversight and stronger accountability measures.
  • A coordinated monitoring and enforcement mechanism for the Code of Practice. Its effectiveness will depend on a cohesive, agile, and adequately funded governance framework, guided by strong leadership from both supranational and national institutions. Key bodies, including the AI Office, the European AI Board, the Advisory Forum, the Scientific Panel, and national authorities in each member state, will need to collaborate closely to ensure consistent human rights protections and to avoid enforcement gaps that could lead to uneven or weak implementation, ultimately undermining the effectiveness of the EU AI Act.

The path forward: a human rights-based approach to GPAI regulation

Whether the AI Act and the Code of Practice on General-Purpose AI become a regulatory best-practice model or a cautionary tale remains uncertain. Their impact will depend on legal clarity, actionable standards and guidelines, careful implementation and enforcement, proactive regulatory foresight, and the combined efforts of all stakeholders to ensure that AI governance respects rights and embeds safety standards.

One thing is clear – there is no time for complacency. As the technology accelerates, civil society must work proactively to keep human rights and awareness of the potential harms of general-purpose AI at the forefront of AI governance discussions.

The General-Purpose AI Code of Practice will be drafted over six months (October 2024 – March 2025) and will come into effect on 2 August 2025.