AI’s production of non-existent cases in court: simply a ‘cosmetic error’ or a warning to PI insurers?

This article was co-authored by Mya Wilhelm, trainee solicitor.

In the recent judgment in R (Ayinde) v The London Borough of Haringey [2025], Mr Justice Ritchie was shocked to find that the claimant’s barrister had presented written submissions supported by fictitious case citations. Even more surprising, the claimant’s solicitor explained that these made-up cases were merely ‘cosmetic errors’ which could be ‘easily explained’, but did not need to be.

Mr Justice Ritchie stressed that ‘it is the responsibility of the legal team, including the solicitors, to see that the statement of facts and grounds are correct’, and that the act of providing five fake cases ‘qualifies quite clearly as professional misconduct’. Although the underlying judicial review was successful, costs were reduced by £7,000 as a result of the claimant’s legal team’s conduct, and the judge went as far as to order the defendant to send the transcript of the hearing to the Bar Standards Board and the Solicitors Regulation Authority (SRA).

This judgment highlights the ongoing risk of AI systems producing plausible but entirely false information, and the liability and regulatory exposure professionals face if they do not verify citations and authorities through “old school” checks.

This is sure to become a more prominent issue as AI becomes more widely used within the legal industry, as demonstrated by the SRA’s approval of the first AI-driven law firm, Garfield.Law Ltd. The SRA is, though, aligned with the judge’s concerns as to the use of AI and has reported that Garfield.Law Ltd will be required to manage the risk of “AI hallucinations” and that its system will not be able to propose relevant case law, which has been highlighted as a “high-risk area for large language model machine learning”.

The risk of AI fabricating legal authorities is not just a concern for the legal profession; it also represents a significant exposure for the insurance market, particularly insurers of professionals. The SRA recently reported that three quarters of the largest solicitors’ firms were using AI, nearly twice the number from just three years ago, as were 72% of financial services firms. Legal services is anticipated to be one of the sectors most affected by AI over the next ten years.

And, in a tale of utmost irony, a Minnesota judge has criticised an expert specialising in “research on how people use deception with technology” for submitting expert evidence (written with the assistance of ChatGPT) which contained “hallucinated” citations to non-existent academic research (Kohls and Franson v Ellison, Case No 24-cv-03754). If even experts in the subject are not immune to hallucinations, what chance do the rest of us have…

Comment

Insurers must continue to act proactively to understand and manage this rapidly evolving risk.

Whilst some insurers have begun adapting certain policies to cover losses arising from underperforming AI tools, indicating a shift towards recognising and mitigating AI-related liabilities, it remains to be seen how the PI market will treat claims arising from an AI error that a professional has relied upon. It is therefore important for insurers and insureds to discuss the AI-focused products being used by professionals, both prior to renewal and throughout the policy period, to help manage that risk.
