The Competition and Markets Authority (CMA) has issued new guidance equating a business's responsibility for its AI agents with its responsibility for its employees – presenting pertinent considerations for businesses currently using agentic AI, or planning to do so in future.
What is changing?
On 9 March 2026, the CMA published guidance for businesses on how to use agentic AI when engaging with customers while also ensuring compliance with consumer law. At the same time, the CMA published research reviewing current and anticipated use of agentic AI, its risks, and its benefits. These publications make clear that consumer protection laws, and the CMA's new direct enforcement powers as set out in the Digital Markets, Competition and Consumers Act 2024 (DMCCA), apply to conduct by AI agents in the same way as they do to human conduct.
How could the changes affect your business?
Agentic AI refers to autonomous AI systems which act as agents capable of reasoning, planning, and executing complex tasks to achieve specific goals with minimal human oversight (as opposed to conventional AI systems, which provide specific outputs or create content based on inputs and prompts). Examples include using AI agents to deal with customer queries, process refunds, recommend products, and manage marketing campaigns.
The CMA publications make clear that businesses using agentic AI will remain liable for shortcomings or errors made by the AI, which is consistent with the CMA's stance on AI in relation to competition law. Where a business has a contract in place with the consumers affected by those errors, the AI agent is not considered a party to that contract under UK law, so actions for breach of contract cannot be brought against the AI – they must instead be brought against the business.
Given that the CMA has wide enforcement powers under the DMCCA, and failure to comply with consumer protection provisions could result in fines of up to 10% of global turnover, this guidance presents a timely reminder for consumer-facing businesses that use AI agents to ensure those agents are designed and trained to comply with consumer and competition law. Indeed, the CMA states: ‘Consumer law requires you to treat your customers fairly. It does not matter whether they interact with (or get information produced by) a person or an AI agent. It’s important to remember that you are responsible for what an AI agent does in the same way you are responsible for what an employee does’.
In summary, the CMA acknowledges that AI has the potential to boost economic growth and improve people’s everyday lives, and it is committed to encouraging its use. However, in conjunction with this, it emphasises that AI must be used responsibly, and in compliance with consumer law.
The four key points arising from the CMA’s guidance are:
- Tell your customers if you use an AI agent. Businesses use AI agents in different ways, which could be confusing for consumers – so always be clear and open about when, and how, you are using them in order to build trust. Consumers should be given the information they need to be able to make an informed decision, and should not be misled.
- Train your AI agents to comply with consumer law. The starting point is to consider what your AI agent will be doing, and how this could impact your customers. Think about the data that the AI agent will need to complete its tasks, and how to prompt it to respect a customer’s statutory and contractual rights, avoid misleading customers and obtain all requisite consents. Ensure that rigorous testing of the AI agent’s performance in these regards is undertaken before it is deployed.
- Monitor how your AI agents are performing. An AI agent’s performance must be regularly checked to ensure it is delivering the right results, behaving as intended, and complying with consumer law. This should be done by keeping a human in the loop to actively check that the AI agent is making the correct decisions and generating the expected results while ensuring legal compliance.
- Refine the AI agent quickly if there is a problem. If an AI agent is performing in a way which could result in infringements, you should act quickly to address the problem – particularly where an AI agent interacts with large numbers of consumers, or those who are considered vulnerable.
Using a third-party supplier of AI tools does not remove your risk, as you will remain liable for actions taken on your business's behalf. It is therefore important to consider the above when carrying out due diligence checks on, and negotiating contracts with, such suppliers, ensuring those contracts give you sufficient rights to audit their actions and implement suitable remedies if problems arise.
What steps should you take to prepare?
The CMA guidance contains useful worked examples of common use cases – you should review that guidance if your business is consumer-facing and you want to build agentic AI into processes which your consumers will engage with.
Before launching a new AI agent, carefully consider what it will be doing and how this could impact consumers against the four key compliance points outlined above. It is critical to develop robust processes and policies for training and testing the AI agent at the outset, as well as undertaking ongoing monitoring and refinement once it is deployed. Your processes must provide for the oversight of a human with appropriate experience and knowledge of consumer law, and you should also ensure that all staff who will be involved are trained on these issues.
Finally, you should carry out careful due diligence on any suppliers of agentic AI tools you work with to ensure that they also comply with the CMA guidance, and ensure that contracts you enter into with these suppliers give you the audit rights you need to monitor their ongoing compliance and the right to implement remedies in the event of a problem.
Key takeaways:
- Review the CMA guidance.
- Map your use cases of agentic AI and how they will impact consumers.
- Develop processes and policies to ensure compliance with the four key points set out in the guidance.
- Review due diligence processes for suppliers of AI tools, and your contracts with them, to ensure they are fit for purpose and compliant with consumer protection law.