The speed of AI development will catch out the unwary

This article was co-authored by Joe Cunningham, Product Manager, Kennedys IQ and was originally published in Insurance Day, with thanks to FOIL for the opportunity.

The recent advancement of generative AI has the potential to transform the professional services sector. Insurers must be mindful of the new risks facing their insureds as a result of artificial intelligence, but also the risks of not embracing the technology.

In 1711, Alexander Pope wrote "To err is human, to forgive is divine". But if we remove the human element of decision-making through the use of artificial intelligence (AI), who do we forgive or, more likely, look to for compensation when things go wrong?

The recent advancement of generative AI, which describes models that can produce various types of content, including text, imagery, audio and synthetic data, has the potential to transform the professional services sector. These models provide a platform for businesses to dramatically increase operational efficiency and so reduce costs. The technology is already being widely adopted for key tasks such as undertaking research, drafting contracts and giving advice. These models can produce highly technical and believable text, which is the mainstay of most professional practices.

ChatGPT, probably the most widely known AI product, has already been the subject of much discussion, particularly in the legal profession, for its potential to disrupt the market. In addition to ChatGPT, there is also Google Bard and Prometheus from Microsoft.

These underlying models, referred to as large language models, are available for software developers to incorporate into products beyond the 'chat versions' of recent public discourse. For example, law firm Allen & Overy has adopted an AI product called Harvey.

Billions are being invested in AI and it will be transformational. If we try to resist, it will be like King Canute trying to hold back the tide. But why would we try to hold this back? The benefits are potentially huge. However, the note of caution is that, without careful management and controls, there are risks to the professions.


Those who commentate on and advocate for AI are also quick to caution that there is potential for plagiarism, breach of intellectual property (IP) rights, issues under the GDPR, and unintentional inaccuracy arising from incorrect or incomplete information or instructions entered into the systems. These are all real risks. Like most things in life, you get out what you put in.

Let us take the legal profession as an example. There is much talk (and actual use in practice already) of AI being used to undertake research tasks, draft contracts, predict the likely outcome of cases and provide quantum calculations for injury claims. Those are all tasks that are readily capable of becoming largely process-driven.

However, the recognised risk arises when the lawyer seeks to rely entirely on software, incorporating large language models that have been trained using data from multiple unverified sources. Firms that fail to introduce appropriate safeguards that include 'a human in the loop' to authenticate the AI-generated content can expect potential exposure to claims.

To develop that point further, large language models are prone to 'hallucinations', fabricating answers to the questions posed and citing non-existent legal authorities and research. There are also data privacy and confidentiality issues if the system does not prohibit the entry of sensitive or personal data, which could then be incorporated into responses to other users.


Similarly, because large language models are trained on large amounts of data from the internet, there are real risks of violating the copyright and other IP rights of others. This poses a particular risk to media and IT professionals and, if designs are plagiarised via AI, architects too need to beware. For these reasons, compliance teams must ensure there is a secondary line of sound governance and data quality controls (that is, human input) to add a layer of informed decision-making.

By way of further illustration, last year the Solicitors Regulation Authority warned that AI will make phishing contacts and other false communications more credible, and some cyber 'threat actors' now impersonate individuals by telephone and email.

On a simpler level, misuse of law firms' names in text messages, WhatsApp messages and telephone calls has been reported. Beware, therefore, of sending funds to a threat actor who is credibly posing as the client.

This comes back to the 'Friday fraud' issues of several years ago, when cyber criminals hacked emails and convinced unwitting accounting teams to transfer client funds to fraudulent accounts. Solicitors, accountants, property professionals and anyone handling client money need to beware.

As far as the insurance market is concerned, we can expect to see changes in underwriting practices, the introduction of new conditions and exclusions, possible increases in pricing and new products in response to AI-related risks. One thing you can expect from the insurance market is that it adapts to new risks; that is its purpose.

This February Kennedys and its technology arm, Kennedys IQ, responded to a discussion paper on AI in financial services issued by the Financial Conduct Authority, the Bank of England and the Prudential Regulation Authority.

The proposed AI Maturity Framework offers a scaled approach: the greater the risk presented by a given application of AI, the more risk assessment activity is required. This approach is intended to balance the administrative burden of regulation against the need to avoid stifling innovation.

In this fast-changing world, those who stand still should be ready to be overtaken and to become obsolete very quickly. AI is here to stay, so embracing it is vital, but it requires the right controls.
