ICO report on the data protection implications of agentic AI (artificial intelligence)

While agentic AI is a useful tool for optimising processes, it also creates significant opportunities for personal data to be compromised – making it crucial for businesses to understand the risks and how to strike the delicate balance between embracing progress and ensuring protection.

What has the Information Commissioner’s Office said about agentic AI?

The Information Commissioner’s Office (ICO) has published a report highlighting the potential data risks and opportunities associated with using agentic AI. The report outlines how the ICO intends to keep agentic AI’s development and use under review.

Agentic AI systems can independently set goals and execute complex tasks with minimal human intervention. The rapidly expanding potential uses of agentic AI mean these systems are increasingly attractive to businesses looking for efficiencies.

The ICO emphasises that, ‘as developing agentic AI increases the potential for automation, organisations remain responsible for data protection compliance of the agentic AI they develop, deploy or integrate in their systems and processes’.

The ICO’s full report is available on the ICO’s website.

What are the risks of agentic AI that businesses should be aware of?

The ICO’s report highlights novel data protection risks posed by agentic AI, reflecting the pace of development since its recent consultation series on generative AI. These include:

  • Issues around determining controller and processor responsibilities through the agentic AI supply chain; 
  • Rapid automation of increasingly complex tasks resulting in an increase in automated decision-making; 
  • Purposes for agentic processing of personal information being set too broadly to allow for open-ended tasks and general purpose agents; 
  • Agentic systems processing personal information beyond what is necessary to achieve instructions or aims; 
  • Potential unintended use or inference of special category data; 
  • Increased complexity impacting transparency and the ease with which people can exercise their information rights; 
  • New threats to cyber-security resulting from the nature of agentic AI; and 
  • The concentration of personal information required to facilitate Personal Agents (PAs). 

All businesses considering deploying agentic AI must be aware that they are exposed to these risks.

What can businesses do to mitigate the risk of data breaches?

The ICO is not seeking to discourage the use of agentic AI. Rather, it states that it exists to ‘encourage and support data protection-focused opportunities’. The ICO’s intention is to set out clearly the risks of agentic AI’s use, along with the need for organisations to pause and consider those risks in detail before adopting such systems (privacy by design).

The ICO will continue to review the use and development of agentic AI to ensure that innovation is not at the expense of people’s privacy.

Until further regulatory guidance is published, organisations may look to the ICO’s innovation support services, such as Innovation Advice and the ICO’s Regulatory Sandbox, for further information and guidance.

In summary, organisations must guard against overreliance on agentic AI. They must adopt a privacy-by-design approach to be confident that the associated risks are fully assessed and mitigated – both prior to use, and on an ongoing basis.

Looking to prepare and need a helping hand?

Get in touch and let our Data Protection specialists take it from here.


Please be advised that this is an update which we think may be of general interest to our wider client base. The insights are not intended to be exhaustive or targeted at specific sectors, and whilst we naturally take every care in putting our articles together, they should not be considered a substitute for obtaining proper legal advice on key issues which your business may face.