When it comes to AI governance, the ICO and CMA tend to be highly aligned. While both regulators recognise the significant potential benefits of using AI, they each also underline the key risks that need to be managed carefully and responsibly.
Contributors: Clare Gray and Tori Lethaby.
What is changing?
March was a busy month for the Information Commissioner’s Office (ICO) and the Competition and Markets Authority (CMA) – marked by the ICO’s publication of draft ‘Automated decision-making, including profiling’ guidance (currently out for consultation until 29 May 2026) and the CMA’s publication of ‘Complying with consumer law when using AI agents’ guidance.
Agentic AI systems do more than execute fixed instructions: they learn, reason, problem-solve and make autonomous decisions. Examples include customer support agents and personal AI assistants. If you are considering using, or are already using, agentic AI to make decisions which affect your customers (who are classed as consumers), then it is likely that you will need to comply with both the CMA’s guidance and, once finalised, the ICO’s guidance.
If all three of the following criteria apply to you, your use of agentic AI will fall within the scope of the ICO’s automated decision-making guidance:
- You use a system that makes one or more decisions about someone;
- The decision is a significant decision (meaning that it has legal or similarly significant effects); and
- The decision is solely automated (in other words, there is no meaningful human involvement).
The ICO sets out key measures and safeguards in its draft guidance; while the reforms remain subject to change, it is worth cross-checking your processes against the draft now to ensure that your automated decision-making is fair, lawful and transparent.
How could the changes affect your business?
The key requirements of the ICO draft guidance, and alignments with the CMA guidance, include:
- Accountability: Both regulators emphasise the need for organisations to take proactive and ongoing responsibility for agentic AI used within their businesses, instead of treating it as a one-off compliance tick-box exercise.
The ICO guidance requires you to:
- Assess and document your lawful basis for processing.
- Be responsible for the overall compliance of automated decision-making used (including third-party systems), such as by carrying out Data Protection Impact Assessments and documenting processing activities.
- Have a robust monitoring framework in place for bias and errors.
- Provide individuals with mechanisms to challenge decisions.
The CMA emphasises that you are just as responsible for the actions of an AI agent as you are for those of an employee – and that the requirements of UK consumer law apply to customers in the same way regardless of whether you use AI or human agents (including where a third party designs or provides the AI agent on your behalf).
- Transparency: Being transparent about automated processes is an important aspect of strong AI governance, and both regulators are aligned on this.
The ICO guidance requires you to provide people with information about all automated decision-making you carry out about them, with a strong focus on explaining how and why you reached a decision and what impact it has on them. This is particularly important if an individual is unhappy with an automated decision and contests it. In those circumstances, you need to understand the underlying rules that apply to your automated decision-making, and the factors that have influenced it, so that you can clearly explain the rationale behind the decision about that individual.
The CMA requires you to tell customers where you are using an AI agent, so that it is clear they are dealing with AI rather than a human – for example, during interactions with chatbots. The CMA stresses that being clear and open with customers is a good way to build trust, that you should give customers the information they need to make informed decisions, and that you should not mislead them.
- Monitoring and effective incident response mechanisms: Both regulators are aligned on the need for robust, ongoing monitoring of AI agent performance to diagnose any quality issues, bias or other errors, as well as the need for effective incident response mechanisms.
The ICO guidance requires you to have mechanisms in place to diagnose any quality issues or errors, as well as a documented process for how you intend to resolve them. Further, where someone is unhappy with the automated decision-making you have carried out about them, they must have the right and ability to query the decision, obtain human intervention, or contest it – and you need to tell them how they can do this at the point you provide the decision.
The CMA emphasises that, where it is apparent that an AI agent is not performing as expected, you need to act fast to resolve the issues: ‘If you do not act quickly to address problems, you may end up breaking the law – ultimately, if an AI agent does something illegal, you will be responsible’.
- Human oversight: As AI models can be susceptible to errors and bias, misinterpret data, and ‘hallucinate’ results that are nonsensical or inaccurate, both regulators underline the importance of human oversight.
Human intervention must be substantive and meaningful, and ‘cannot be tokenistic’.
The ICO emphasises that ‘the right to obtain human intervention is a key safeguard in the automated decision-making provisions’. Human reviewers should: (i) be appropriately trained or qualified to review decisions and understand the system’s outputs, limitations and risks; (ii) be able to influence the outcome; and (iii) have the discretion and authority to alter the decision.
The CMA guidance requires you to have a human in the loop to proactively check decisions made by an AI agent, and states that ‘Regular human oversight is important to catch mistakes and ensure that AI agents are completing tasks in a legally compliant way’.
What steps should you take to prepare?
If you use, or want to use, automated decision-making in your business processes, it is imperative that you review the ICO draft guidance and build in meaningful policies and safeguards to ensure that your use of agentic AI meets the requirements around accountability, transparency, monitoring and incident response, and human oversight. If your customers are consumers, you should also review the CMA guidance to ensure compliance from all angles.
Key takeaways
- Review the ICO’s draft ‘Automated decision-making, including profiling’ guidance.
- Formulate meaningful policies and safeguards to ensure compliance.
- Where your customers are consumers, review the CMA’s ‘Complying with consumer law when using AI agents’ guidance.
Please be advised that this is an update which we think may be of general interest to our wider client base. The insights are not intended to be exhaustive or targeted at specific sectors, and whilst we naturally take every care in putting our articles together, they should not be considered a substitute for obtaining proper legal advice on the key issues your business may face.