Deepfakes in the insurance market – a personal injury perspective

This article was first published by the Chartered Insurance Institute (CII).

AI-generated misleading content is a hot topic of conversation, especially with more than 60 national elections taking place across the globe in 2024. The World Economic Forum ranks disinformation as a top global risk for 2024, with deepfakes among the most worrying uses of AI, posing a significant threat to public trust and information accuracy.

In this article, we consider the risks deepfakes pose in the context of insurance and offer a recent example of the type of manipulated evidence we are seeing in personal injury claims.

What are deepfakes?

Deepfake technology, which uses AI to create realistic synthetic media, can be exploited to fabricate evidence for insurance claims. For instance, a person could create a deepfake video showing property damage or personal injury that never occurred.

Shallowfakes, although less sophisticated, involve simpler manipulation of existing media, such as speeding up, slowing down or editing footage, and can also be used to distort evidence. An example would be altering a payslip to claim a higher loss of earnings award.
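
To make the idea concrete, the Python sketch below shows the kind of first-pass check a claims handler's tooling might run on a scanned document: it reads the image's EXIF metadata and flags tags commonly left behind by editing software. The file name and the list of editor signatures are illustrative assumptions, and metadata can be stripped or forged, so this is a screening heuristic rather than proof of tampering.

```python
from PIL import Image
from PIL.ExifTags import TAGS

# Hypothetical file submitted in support of a loss of earnings claim.
SUSPECT_FILE = "payslip_scan.jpg"

# Values commonly written into the EXIF "Software" tag by image editors.
# Illustrative list only; it is neither exhaustive nor conclusive.
EDITOR_SIGNATURES = ("photoshop", "gimp", "affinity", "pixelmator")

def flag_editing_traces(path: str) -> list[str]:
    """Return human-readable warnings found in the image's EXIF data."""
    warnings = []
    exif = Image.open(path).getexif()
    if not exif:
        return ["No EXIF data at all - it may have been stripped."]
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, str(tag_id))
        if name == "Software" and any(
            sig in str(value).lower() for sig in EDITOR_SIGNATURES
        ):
            warnings.append(f"Software tag names an image editor: {value}")
        if name == "DateTime":
            warnings.append(f"Last-modified timestamp recorded as: {value}")
    return warnings

if __name__ == "__main__":
    for warning in flag_editing_traces(SUSPECT_FILE):
        print("WARNING:", warning)
```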

A recent example

We recently received CCTV evidence in support of an alleged accident. However, on investigation it became evident that the footage had been manipulated to change the date, the time and the registration of the vehicle alleged to have caused the incident.
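
One routine check in a case like this is to compare the date on which the claimant says the footage was recorded with the creation timestamp stored in the video container. The sketch below, in which the file name and claimed date are hypothetical, uses ffprobe from the FFmpeg suite to read the container metadata. A mismatch does not prove fraud on its own, since cameras can be misconfigured, but it is a clear prompt for further investigation.

```python
import json
import subprocess

# Hypothetical inputs: the submitted CCTV clip and the date of the
# alleged accident as stated in the claim.
VIDEO_FILE = "cctv_clip.mp4"
CLAIMED_DATE = "2024-03-15"

def container_creation_time(path: str) -> str | None:
    """Read the creation_time tag from the container using ffprobe."""
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", path],
        capture_output=True, text=True, check=True,
    )
    metadata = json.loads(result.stdout)
    return metadata.get("format", {}).get("tags", {}).get("creation_time")

if __name__ == "__main__":
    created = container_creation_time(VIDEO_FILE)
    if created is None:
        print("No creation_time tag - metadata may have been stripped.")
    elif not created.startswith(CLAIMED_DATE):
        print(f"Mismatch: container says {created}, claim says {CLAIMED_DATE}.")
    else:
        print("Container timestamp is consistent with the claimed date.")
```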

What are the consequences for insurers?

Fraudsters can create fake videos of people to impersonate policyholders, creating challenges in verifying identity, or use deepfakes to falsify medical test results. Manipulated content can also falsely depict individuals or property, for example by faking safety features or downplaying risks in property or casualty insurance. This can make it difficult for insurers to determine the authenticity of claims, increasing the risk of fraudulent payouts.

The rise of manipulated content may require insurers to invest more in tools and processes for verifying the authenticity of claims and documents. In turn, this could increase operational costs and delay claim processing times.

Insurers may need to adopt AI and machine learning tools to help detect deepfake content, bringing additional costs for purchasing the technology and training staff. The human eye is often one of the best mechanisms for identifying deepfakes and shallowfakes, but the technology used to generate them is improving all the time. As fraudulent content becomes harder to detect, insurers may increase premiums to compensate for the greater risk exposure, leading to higher costs for consumers.
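
Detection tooling spans everything from trained neural classifiers to long-standing image forensics techniques. As a minimal sketch of the simpler end, the code below applies error level analysis (ELA): the image is re-saved as a JPEG at a known quality and compared with the original, and regions that recompress very differently from their surroundings can indicate pasted-in or edited content. The file names are hypothetical, and ELA is a triage aid whose output still needs human interpretation, not a definitive detector.

```python
import io
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Re-save the image as JPEG and amplify the pixel-wise difference.

    Bright, blocky regions in the output often correspond to areas
    edited after the original compression pass.
    """
    original = Image.open(path).convert("RGB")

    # Recompress at a known quality into an in-memory buffer.
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    # Pixel-wise difference between the original and the recompressed copy.
    diff = ImageChops.difference(original, resaved)

    # Scale the (usually faint) difference so a reviewer can see it.
    max_diff = max(channel_max for _, channel_max in diff.getextrema())
    scale = 255.0 / max(max_diff, 1)
    return ImageEnhance.Brightness(diff).enhance(scale)

if __name__ == "__main__":
    # Hypothetical file name for a suspect claims photograph.
    error_level_analysis("claim_photo.jpg").save("claim_photo_ela.png")
```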

If deepfakes lead to a significant number of fraudulent claims slipping through the net, insurers could suffer reputational damage. Customers could lose trust in the ability of insurers to manage claims fairly and effectively, impacting customer retention.

However, there is also potential for new insurance products and services. For example, as deepfake technology becomes more widespread, there could be demand for new products that cover businesses or individuals against the financial losses or reputational damage caused by the creation or use of deepfake media. Insurers may also wish to offer deepfake detection services to policyholders, or partner with technology companies to provide them.

Summary

The rise of deepfakes introduces complex challenges for the insurance industry. Fraud detection, underwriting accuracy and trust in digital evidence are all at risk of being undermined by these technologies. Insurers will likely continue to invest in advanced AI-driven tools to detect manipulated content, enhance their risk assessment models and adapt to new regulatory standards. Failing to do so could lead to increased fraud, higher costs and a loss of trust within the insurance ecosystem.