The UK’s Use of Competition and Consumer Laws to Regulate AI

25 June 2024
Key Takeaways:
  • As the European Union edges ever closer to formally enacting the EU AI Act, attention is turning to how other jurisdictions will approach AI regulation. In the UK, individual regulators will oversee the use of AI within their respective areas of competence.
  • The UK Competition and Markets Authority has published a report outlining how it plans to approach AI regulation in accordance with UK Government policy. It includes a detailed description of potential AI-related risks to otherwise competitive markets, as well as six key AI principles with which businesses, especially those in the foundation model value chain, should consider aligning themselves.
  • This increasing trend of regulatory scrutiny over the use of AI reinforces the incentives for businesses to adopt (proportionate) AI governance programmes to mitigate any associated regulatory and reputational risks. In particular, businesses should ensure that all collaborations they undertake with peer businesses comply with UK competition law and merger control regulation.

As the European Union edges ever closer to formally enacting the EU AI Act, attention is turning to how other jurisdictions will approach AI regulation. In the UK, individual regulators will oversee the use of AI within their respective areas of competence. This blog post analyses the UK Competition and Markets Authority’s (“CMA”) proposed approach to AI regulation.

The UK Approach So Far: A Recap

Earlier this year, the UK Government confirmed that it will adopt a sector-specific approach to AI regulation: rather than introducing a new, overarching AI-specific legal regime like the EU’s AI Act, individual UK regulators will use their existing powers to supervise the use of AI within their respective spheres. Regulators will be encouraged to introduce new requirements only if there is a clear lacuna in their current toolkit that can be filled with an effective and proportionate measure. The UK’s approach is therefore similar to that of the U.S. While the upcoming UK General Election could result in a change of strategy, a recent parliamentary report has called for the new government to continue the current efforts to create an AI regulatory framework.

The CMA’s AI Strategy

As one of the UK’s key regulators, the CMA has published a report outlining how it plans to approach AI regulation in accordance with UK Government policy. This supplements the CMA’s earlier summary report and detailed technical report on the regulation of AI foundation models, as well as its Initial Report on AI Foundation Models from September 2023.

The CMA acknowledges the “genuinely transformative promise [of AI] for our societies and economies”, but flags that “without fair, open, and effective competition and strong consumer protection” the full potential of AI may not be realised in practice. The CMA is therefore focused on ensuring that businesses can develop and deploy AI systems in a way that complies with existing competition and consumer protection law. In particular, the CMA has highlighted several practices and areas that it views as potentially detrimental to competition, such as the use of AI in choice architecture on websites (including personalisation, default settings and framing). To build the necessary capability, the CMA has established a new Data, Technology and Analytics unit of over 80 employees, which works alongside the CMA’s pre-existing Digital Markets Unit. Together, these units have established a ‘Technology Horizon Scanning Function’, which in December 2023 published its first report on trends in digital markets and how those might develop. The CMA is also contributing to the work of the Digital Regulation Cooperation Forum, a joint initiative of four UK regulators to coordinate on digital regulation.

In light of this increasing interest, businesses should ensure that any partnerships they enter into with other businesses in the AI space comply with competition law. This is especially important where the partnerships relate to important inputs, involve businesses with strong positions in their respective areas, or involve foundation models (“FMs”, also known as general-purpose AI or “GPAI”) with leading capabilities. Businesses should also assess whether any investments or third-party partnerships fall within the scope of the UK merger control regime, or may otherwise attract the interest of the CMA. Finally, businesses should consider actively engaging with the CMA as it continues to educate itself about AI.

Key AI Risks for the CMA

The CMA flags that AI has “significant scope” to pose risks for consumers. These include exposing consumers to significant levels of false and/or misleading information (for example, through subscription traps or fake advertising), as well as facilitating personalised pricing that specifically targets vulnerable consumer groups. The CMA further warns that these risks may be exacerbated where consumers have difficulty distinguishing between AI-generated and human-generated content, as inaccurate or misleading descriptions of a particular good or service could erode public confidence in AI.

The CMA has also described in detail the potential risks that AI poses to otherwise competitive markets. These could arise from the power of AI to underpin recommendations, or to affect which choices consumers are presented with. They could also arise from larger players establishing themselves as critical bottlenecks within this structure, for example by agreeing exclusive contracts with companies in different layers of the industry. The CMA further warns that, under current circumstances, companies that control critical inputs (for example, AI chip manufacturers) could restrict access to their products and technologies to shield themselves from future competition. Alternatively, it is concerned that producers of end products or services could restrict consumer choice of AI products by making their products compatible with only certain forms of AI technology.

The CMA also acknowledges that there is an additional layer of risk when it comes to FMs: models that can perform a wide variety of tasks, are trained using computing power above a specified magnitude, and have high capabilities in certain high-risk areas. The CMA is not alone in grappling with FM regulation; the UK Government’s AI policy paper, the EU AI Act and the U.S. AI Executive Order all contain additional requirements for these systems.

The CMA’s AI Principles for FMs

The CMA has published six principles with which businesses across the FM value chain should align themselves, together with examples of how the principles may be applied in practice:

  • Access: To ensure that AI developers can maintain access to critical inputs;
  • Diversity: To sustain a diversity of business models and FM offerings, so that individuals have adequate freedom to choose which FM systems they use and how they do so;
  • Choice: To ensure that businesses and end-users have sufficient freedom and knowledge so they can decide how to use FMs;
  • Fair Dealing: So that industry partnerships or other forms of integration are not used to insulate firms from effective competition;
  • Transparency: To provide end-users with adequate information about the FM services they use, to allow them to make informed choices; and
  • Accountability: To ensure that FM developers and deployers take responsibility for their relevant inputs into the value chain, and take the positive action required to protect end-users.

While targeted at FMs, these principles are also instructive for other types of AI systems and across the entire AI value chain.

AI-Related Enforcement Is Already Here

The CMA had already stepped up its enforcement activities even before publishing its new position paper. In April 2024, for example, it announced the launch of preliminary enquiries into whether certain commercial partnerships and hiring practices involving Amazon and Anthropic PBC, Microsoft and Inflection AI, and Microsoft and Mistral AI were anticompetitive. Although the CMA concluded in May 2024 that the Microsoft and Mistral AI arrangement did not fall within the scope of the UK merger control regime (since it did not create a dependency between the companies), the decision is nevertheless important because it sets out in detail a framework for how the CMA may assess AI partnerships in the future. This follows the CMA’s invitation for comments on Microsoft’s investment in OpenAI (the developer of ChatGPT) in December 2023. A decision on whether to launch a formal investigation into whether that investment amounted to a notifiable merger is expected imminently.

The CMA’s increasingly assertive approach will likely be strengthened further once it receives its new powers under the Digital Markets, Competition and Consumers Act (“DMCC”). These include the ability both to impose targeted conduct requirements on companies (including AI companies) found to have “strategic market status” in respect of their digital activities, and to levy significant financial penalties for non-compliance. The DMCC was passed as part of the Parliamentary wash-up following the announcement of the impending UK general election and is expected to come into force later in 2024.

The CMA’s imminent new powers under the DMCC demonstrate that the more immediate risk for businesses arises from regulators applying existing, well-tested, technology-neutral laws in the AI sphere, rather than from incoming AI-specific laws, many of which will not come into force until 2026.

How to Prepare

Ultimately, it remains to be seen how the CMA intends to deal comprehensively with the novel competition issues arising from the growth of the AI industry. Its actions to date, however, together with its imminent new powers, suggest that the CMA intends to be proactive in this space.

This increasing trend of regulatory scrutiny over the use of AI reinforces the incentives for businesses to adopt (proportionate) AI governance programmes to mitigate any associated regulatory and reputational risks.

In particular, businesses should ensure that all collaborations they undertake with peer businesses comply with UK competition law and merger control regulation. This includes avoiding personalised AI-generated pricing or offers that unfairly target certain consumer classes. Further, businesses should aim to be consistent and clear in disclosing to consumers when they are interacting with AI and when they are not, a requirement that is also contained in the incoming EU AI Act.


This publication is for general information purposes only. It is not intended to provide, nor is it to be used as, a substitute for legal advice. In some jurisdictions it may be considered attorney advertising.