The number of people in the UK proactively using a generative artificial intelligence (AI) tool has almost doubled in the last two years. With increased usage comes increased risk: a simple copy and paste into an AI tool could become a data breach and result in an opponent gaining access to legally privileged documents and communications.
In a recently published case from the Immigration and Asylum Chamber, a solicitor pasted draft client emails explaining Home Office decisions into ChatGPT, asking the tool to improve them, and also uploaded Home Office decision letters to be summarised for his clients. The tribunal confirmed that placing client documents into ChatGPT and other open-source AI tools ‘is to place this information on the internet in the public domain, and thus to breach client confidentiality and waive legal privilege’.
This case echoes a federal decision from the United States earlier this year, in which the defendant entered information he had learned from counsel into ClaudeAI, created documents for the purpose of obtaining legal advice from counsel, and subsequently shared those documents with counsel. The court held that all of these interactions with ClaudeAI, and the documents uploaded to the platform, were not protected by legal privilege and were open to the court (and the prosecutors) to view. Because ClaudeAI (like ChatGPT) collects data from user inputs to train its language model, the communications between the defendant and his legal advisers were no longer confidential – and, by extension, no longer privileged.
In litigation, confidential communications between solicitors and their clients are, for the most part, protected by legal professional privilege when those communications relate to the giving or receiving of legal advice. The protection extends beyond emails to written instructions, strategic documents, and notes of advice.
When confidentiality is waived, privilege is lost and the communications then become disclosable to the opposing party in the litigation.
What steps should you take to ensure confidentiality?
This does not mean AI should never be used when handling sensitive confidential information in litigation. Closed-source AI tools are considered safe, as information uploaded to them is not placed in the public domain. It is open-source AI, which retains, stores and uses user inputs for model training, that lacks the confidentiality and control that privilege requires.
Sharing information via an open-source AI tool can also breach an organisation’s GDPR obligations. Where a breach is likely to result in a risk to individuals’ rights and freedoms, self-reporting to the Information Commissioner’s Office is mandatory and must be made within 72 hours of the data controller becoming aware of the breach.
Key takeaways
- Never put sensitive data into open-source AI tools.
- Organisations should ensure they have an adequate AI Acceptable Use Policy in place, making clear when AI use is acceptable, which AI tools may be used, and what may be input into or uploaded to them.
- Parties to litigation must be careful about the AI tools they use to avoid waiving legal privilege, otherwise they risk key documents becoming disclosable to their opponents.
Looking to prepare and need a helping hand?
Get in touch and let our Data Protection specialists take it from here.
Please be advised that this is an update which we think may be of general interest to our wider client base. The insights are not intended to be exhaustive or targeted at specific sectors, and whilst we naturally take every care in putting our articles together, they should not be considered a substitute for obtaining proper legal advice on the key issues your business may face.