AI presents significant opportunities and risks for solicitors and professional indemnity insurers. It can enhance productivity and decision-making. However, it also exposes solicitors and insurers to new risks, whether arising from breaches of professional obligations, copyright or confidentiality, to name a few.
Over the last year there has been an increasing number of cases before the courts in which practitioners have relied on fictitious cases created by AI or erroneous submissions written by AI. Most concerning for solicitors, in September 2025 a Victorian lawyer was stripped of his right to practise as a principal lawyer as a result of his use of false citations generated by AI in court proceedings. In an Australian first, the Victorian Legal Services Board varied the lawyer’s practising certificate on 19 August 2025. The lawyer is no longer authorised to handle trust money or operate his own law practice. He may only practise as an employee solicitor and must undertake supervised legal practice for a period of two years, with the lawyer and his supervisor reporting to the Board on a quarterly basis during that time.
Implications of the use of AI
Legal practitioners have professional and ethical obligations, including those imposed under the Legal Profession Uniform Law Australian Solicitors’ Conduct Rules 2015 (Conduct Rules). Relevantly, the Conduct Rules include duties to the Court not to deceive or knowingly or recklessly mislead the court, and obligations on those with carriage of a matter to exercise reasonable supervision over solicitors in their charge. It is for this reason that the Courts have been highly critical of lawyers’ unfettered use of AI.

Most recently, in August 2025 the Federal Circuit and Family Court referred a solicitor to the Legal Practice Board of Western Australia for relying on four fictitious cases generated by generative AI programs in written submissions, and then for providing a misleading explanation for their inclusion when questioned by the Court (the solicitor did not initially admit to using generative AI). The solicitor was also ordered to pay costs. This quickly followed a personal indemnity costs order against Massars Briggs Law in July 2025 for using AI-enabled software that produced incorrect citations. Justice Murphy concluded that the “use of AI … has given rise to cost, inconvenience and delay to the parties and has compromised the effectiveness of the administration of justice”.[1]
Barristers are also falling into the AI trap, with Rishi Nathwani KC filing submissions in a murder case that included fake quotes and non-existent case judgments generated by AI. The errors were discovered by the judge’s associates. Justice Elliott commented that “It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified” and that the ability of the court to rely on submissions is “fundamental to the administration of justice”.
Whilst the examples that have come before the Courts arise from litigated matters, the risks are equally prevalent in non-contentious matters, where solicitors are increasingly using AI to review contracts and to carry out due diligence, for example where AI models generate flawed valuations or misjudge liabilities during a transaction. Use of AI also raises the following issues:
- Biased or inaccurate outputs due to limitations in certain AI tools and/or the data they use;
- Bias in AI training data can result in discriminatory outputs and unfair practices, which may initially be difficult to detect and correct;
- Copyright issues, as AI-generated data sets may contain information obtained in breach of copyright laws; and
- Privacy or breach of confidentiality concerns, for example:
  - interactions with AI tools could lead to sensitive data or privileged information being stored by third parties and used to respond to queries from other users; and
  - failure to obtain client consent before using AI to process personal data.
It is only a matter of time before professional indemnity claims are brought as a result of the use of AI, especially noting that a recent survey by Dye & Durham Australia and the Australasian Legal Practice Management Association found that 78% of respondents expect AI to be a regular feature of the legal industry within five years.
Law firms need to remember that whilst AI is a helpful additional tool for carrying out legal research, due diligence and other services, they have no control over the accuracy of the data sourced by the underlying algorithms.
Law firms (and insurers) should also be aware that passing on liability to, or seeking a contribution from, a third-party AI provider for errors arising from the use of its AI tools may be difficult. For example, OpenAI has disclaimed all liability for any loss or damage arising from the use of its AI tools. As AI professional liability litigation emerges, these liability exclusions are likely to be fertile ground for challenge.
To avoid falling foul of AI, law firms should:
- maintain an adequate internal governance framework and policies which heed the practice notes issued by the various court jurisdictions;
- regularly monitor the outputs of AI tools; and
- provide staff with training on the use of AI. Supervisors and managers must be particularly alert to the risks of AI, including when reviewing work for completeness and accuracy.
Professional indemnity insurance
Law firms and their professional indemnity insurers need to continuously review the scope of cover available and the wording offered. Law firms will be concerned about possible coverage gaps. For example, does the policy cover AI-related errors or omissions? Where traditional errors have been caused by AI, do they fall within cover, and does it matter who caused the error: human or machine? If not, law firms will potentially be exposed to costs, damages, and fines and penalties. Depending on the extent of their use of AI, law firms should consider whether they need to take out AI-specific policies.
Insurers will need to review not only the policy wording but also their proposal forms and application and renewal procedures, to include questions seeking further information or declarations from law firms regarding their AI use. Insurers may find that existing wordings cover AI risks (whether AI used in the provision of a service or an AI-related service) that they were never intended to cover. Beyond solicitors’ professional indemnity insurance, we are already starting to see exclusions being used to carve ‘silent AI’ risks out of professional indemnity cover.
Insurers will need to ask more questions about law firms’ use of AI tools and the procedures in place regulating the use of AI at the firm. It is important to understand whether the use of AI occurs within the confines of robust internal policies with clear guidelines. These changes will allow underwriters to adapt to the quickly evolving risks associated with AI and a law firm’s practice. Insurers will, however, need to keep their underwriting procedures under review and adapt as the use of AI develops. Insurers should not only review their wordings and underwriting but also use the data they hold to identify claim patterns and detect trends arising from law firms’ use of AI. This will allow insurers to stay ahead of this ever-changing landscape.
[1] Murray on behalf of the Wamba Wemba Native Title Claim Group v State of Victoria [2025] FCA 731