Bar Council Flags AI Pitfalls After High Court Rebuke

(November 27, 2025, 1:29 PM GMT) -- The Bar Council has updated its ethics guidance on the use of generative artificial intelligence at the bar after the High Court issued a sharp reminder earlier this year of the dangers of lawyers relying on the technology in court.

The Bar Council updated its guidance after the High Court rebuked lawyers for failing to check non-existent authorities they had submitted to the court. (iStock.com/Saksit Sangtong)

The representative body for practicing barristers in England and Wales unveiled the latest guidance for its members on Wednesday, having previously published advice in January 2024. The guidance applies to barristers who use ChatGPT or any other generative AI software based on large language models, as well as systems aimed specifically at lawyers.

The guidance makes it clear that there is "nothing inherently improper about using reliable AI tools for augmenting legal services; but they must be properly understood by the individual practitioner and used responsibly, ensuring accuracy and compliance with applicable laws, rules and professional codes of conduct."

Generative AI can create new content by using technology such as large language models, or LLMs, which are trained on large sets of data. 

The Bar Council released the update to its guidance after the High Court referred a barrister and a solicitor to their professional regulators earlier this year for citing cases that do not exist. High Court Judge Victoria Sharp criticized Sarah Forey and Abid Hussain for failing to check non-existent authorities they had submitted to the court in two unrelated cases.

Forey, a pupil barrister at London chambers 3 Bolt Court, gave a court five fake cases that she admitted might have come from AI-generated summaries of search results on Google or Safari.

Hussain, a solicitor with Manchester immigration firm Primus Solicitors, produced a witness statement with 18 fictional cases after relying on research from his lay client.

"Crucially, barristers must understand that LLMs, while sophisticated, are not infallible," the guidance says. "They are predictive tools, prone to generating plausible but entirely false information — a phenomenon known as 'hallucinations.' LLMs are not a substitute for human legal expertise, critical judgment or diligent verification. The ultimate responsibility for all legal work remains with the barrister."

The guidance says that LLMs have not been around long enough and have not been sufficiently tested for it to be clear what tasks they can or should be used for in legal practice.

The experience of the legal profession, like that of other professions, is that general-purpose LLMs such as ChatGPT are unreliable tools for "source-based" research. Hallucinations may be much less frequent in LLM-based legal research tools designed specifically for lawyers, according to the Bar Council, but it says that they still occur.

It is therefore essential that barristers verify that any sources or authorities cited by such systems exist, are accurate and actually support the claims being made, the guidance says.

The guidance notes that people tend to project human-like traits onto LLMs to make the technology feel more familiar and easier to use, even though such systems don't understand concepts, emotions or causality in the way that humans do. It is therefore necessary, the guidance says, for barristers to understand the technical process behind LLMs when they use them.

"LLMs use machine-learning algorithms, first to be 'trained' on text and, based on that 'training' (which involves the application of inter alia mathematical formulae), to generate sequential text," the guidance says. "These programs are now sufficiently sophisticated that the text often appears as if it was written by a human being, or at least by a machine which thinks for itself."

Barristers should keep in mind that the data used to "train" generative LLMs may not be up to date, and that the systems can produce responses that are ambiguous, inaccurate or contaminated with inherent biases, the guidance says.

The guidance notes that AI services such as ChatGPT use the content of users' prompts to continue developing and refining their systems. This means that anything a user types into such a system may be used to train the software and could be repeated verbatim in future outputs.

"This is plainly problematic, not only if the material typed into the system is incorrect, but also if it is confidential or subject to legal professional privilege," the guidance says.

Cybersecurity risks also arise when lawyers use LLMs, according to the Bar Council. It notes that "the increasing integration of LLMs into legal tech platforms introduces new attack vectors" for criminals.

--Editing by Hazel Vidler. 

For a reprint of this article, please contact reprints@law360.com.