ARTICLE 19, as part of a coalition of civil society organisations, submitted a joint contribution to the European Commission’s consultation on competition and generative AI. In the submission, we highlight the challenges of regulating a sector where power is already concentrated, barriers to entry are high, and a range of existing and potential anti-competitive practices threatens to undermine innovation, openness and fairness. We also draw attention to a number of specific issues that we believe the European Commission should focus its efforts on, and recommend, among other measures, creating a new market investigation tool, upgrading the Digital Markets Act, investigating anti-competitive partnerships, and introducing a legal presumption against acquisitions by dominant firms.
ARTICLE 19 and partners welcome the opportunity to contribute to the European Commission’s consultation on competition and generative AI. We ground our submission in the understanding that any sensible regulatory action in this sector must begin with the hard truth that power in digital markets is already highly concentrated after decades of under-enforcement of competition rules. Market power, resources, and capabilities lie in the hands of a few dominant players, who can easily leverage their existing positions of dominance to capture new markets and technologies and to neutralise competitive threats.
We believe the European Commission should focus its attention on a number of key issues:
1. Bottlenecks and barriers to entry in the AI stack: development, distribution, partnerships
There are bottlenecks and barriers to entry in every part of the AI stack, but they can be conceptualised in terms of three key inputs: data, computing power, and talent. At the data level, firms with access to proprietary and curated datasets enjoy a competitive edge, and firms with the most popular models and the widest distribution channels have access to the most data on product use, creating a self-reinforcing dynamic similar to that seen in the digital platform economy.
The computing power needed to train and host large-scale models is held by the few largest private cloud computing providers. Concentration in computational infrastructure thus functions as a gravitational field accelerating the centralisation of the generative AI ecosystem around these key providers, and indeed appears to be a central part of their commercial strategies. Recently, several dominant cloud providers have entered the chip market, and chipmakers have moved into cloud services, leading to further integration across the AI stack.
The technical expertise and talent needed to train, develop and build generative AI models is in short supply. This scarcity leads to high hiring costs, solidifying the position of dominant companies with the most financial firepower.
As a result of this highly concentrated tech stack, promising startups are increasingly turning to incumbents for capital, data, computing power, and talent. Leading AI startups – including OpenAI, Mistral AI and Anthropic – have agreed to lopsided partnerships in which large tech firms provide financial investment and computing power in exchange for privileged or exclusive access to the startup’s technology and explicit or implicit influence over its technical and corporate decision-making. While it remains to be seen whether these partnerships meet the legal thresholds to be considered mergers, they appear likely to produce many of the same anti-competitive effects that such deals have produced in the past.
2. Actual and potential harms from market concentration
If the evolution of digital markets over the past two decades is any guide, then absent timely and comprehensive competition enforcement, concentration in the AI market is likely to result in a wide range of monopolistic or oligopolistic harms and anti-competitive practices. Through leveraging, bundling or tying, and self-preferencing behaviours, firms can exploit their dominance in one market to expand or entrench their hold over another. Vertical integration across the AI stack – including semiconductors, cloud computing and foundation models – and integration into adjacent services – including search engines, browsers and app stores – will give dominant firms many opportunities to unfairly promote their own AI products over those offered by rivals. A case in point is Microsoft, which is reportedly bundling certain security features for GitHub Copilot with an Azure subscription.
Firms that hold dominant positions at one or multiple levels of the AI stack will also be able to abuse downstream actors that depend on their services and infrastructure, whether by extracting excessive and inconsistent rents or by imposing unfair terms and conditions on customers or end users.
Closely related to self-preferencing and exploitative practices, the infrastructural, gatekeeper role that a few dominant corporations look set to play in the emerging AI ecosystem will also give them the power and incentive to limit interoperability and access to critical inputs and functionalities, including computing power, data, and technical gateways such as application programming interfaces (APIs).
3. Open versus closed systems
For a grounded and nuanced discussion of “open” and “closed” AI systems and the competition implications of making critical AI components widely available, it is key to recognise that there is no binary distinction between open and closed models and that not all actors claiming openness are fully open. The limited evidence currently available suggests that the most advanced open models lag behind their proprietary counterparts. How this gap evolves over time will depend on factors including the regulatory landscape and the viability of open or closed business models. But it should be noted that anti-competitive behaviour by large companies (including those that claim to be open source) poses significant threats to open source development through a number of mechanisms, including resource constraints, lobbying, capture of intellectual property and ecosystem capture.
4. Proposed changes to competition policy and rules
The European Commission should use its existing competition powers as aggressively as necessary to tackle these problems, including the EU Merger Regulation, Articles 101 and 102 of the Treaty on the Functioning of the European Union (TFEU), and the Digital Markets Act (DMA). But we believe creativity is also necessary, both to adapt these existing tools and to adopt new ones.
A new market investigation tool would be key to allowing the Commission to take a more in-depth and holistic view of key markets, including those relevant to AI, and to propose carefully crafted remedies in response to competition concerns. Furthermore, including dominant cloud services such as Amazon Web Services and Microsoft Azure among the “core platform services” covered by the DMA would allow the Commission to prevent dominant cloud providers from exploiting their centrality in the tech stack to strengthen their market power in AI. Adding AI foundation models to the DMA’s list of core platform services would also allow the Commission to tackle abuses inflicted on business users and end users of such models.
We also believe that reform of the EU’s merger control regime, including the EU Merger Regulation (EUMR) and relevant guidelines, is necessary to facilitate investigations of partnerships and minority investments that fall outside the scope of the EUMR but which still have a significant impact on competition. Austria, Germany, the US, and the UK already have the competence to review such investments, and the interpretation of the “decisive influence” test should be adapted to enable such review in the EU.
We also recommend the introduction of a legal presumption against acquisitions by dominant firms. This could be restricted to dominant firms in the tech sector, for example gatekeepers designated under the DMA, or it could apply to dominant firms in any industry. Such a presumption could help address structural issues in the current enforcement system, which requires the Commission to predict the future impact of a merger in fast-moving digital markets, a time-consuming and uncertain exercise marked by information and resource asymmetries that dominant firms exploit. Many digital mergers now recognised as harmful were approved by the EU and other competition authorities, and there is a risk that this history will repeat itself in relation to AI.