Opportunistic claims fraud on the rise
The persistent challenge that opportunistic claims fraud poses to insurers perhaps comes as no great surprise. The financial pressure on individuals arising from the cost-of-living crisis has inevitably led claimants to exaggerate or embellish claims in pursuit of what they consider to be easy money.
Whilst fraud typologies such as organised claims fraud and “cash for crash” remain significant problems, insurers have also faced an influx of claims that appear genuine at presentation but prove clearly exaggerated on closer scrutiny. This mirrors the experience of Kennedys’ Fraud team, which continues to defeat many outwardly genuine claims by exposing a claimant’s gross exaggeration of their losses.
Deepfakes in the insurance market – a personal injury perspective
AI-generated misleading content is a hot topic of conversation. Deepfake technology, which uses AI to create realistic synthetic media, can be exploited to fabricate evidence for insurance claims. For instance, a person could create a deepfake video showing property damage or personal injury that never occurred.
The rise of deepfakes introduces complex challenges for the insurance industry. Fraud detection, underwriting accuracy and trust in digital evidence are all at risk of being undermined by these technologies. Insurers will likely continue to invest in advanced AI-driven tools to detect manipulated content, enhance their risk assessment models and adapt to new regulatory standards. Failing to do so could lead to increased fraud, higher costs and a loss of trust within the insurance ecosystem.
However, there is also potential for new insurance products and services. For example, as deepfake technology becomes more widespread, there could be demand for new products that protect businesses or individuals against the financial losses or reputational damage caused by the creation or use of deepfake media. Insurers may also wish to offer deepfake detection services to policyholders, or partner with technology companies to provide them.
Ghost broking: the use of social media platforms
Social media platforms cut both ways: they create opportunities for fraudsters while also offering insurers a unique means of educating and protecting users. On the one hand, social media is an inexpensive, far-reaching medium through which fraudsters can access a wide pool of potential victims. Ghost brokers—fraudsters who pose as legitimate insurance providers, often offering suspiciously low premiums to unsuspecting consumers—increasingly operate on these platforms, leveraging the trust placed in influencers and online personalities to advertise and promote their fraudulent services. New or young drivers, who may be more susceptible to these schemes, often follow influencers on social media and make purchasing decisions based on their recommendations. Unfortunately, the lack of regulation and oversight on these platforms allows fraudulent activity to flourish.
On the other hand, social media also offers significant potential for education and user protection. Insurers can utilise these platforms to actively engage with users, promoting awareness about the risks of fraud and providing educational content to help users make informed decisions.
The advent of autonomous vehicles may increase fraud risk
As insurers explore solutions to the questions posed by autonomous vehicle technology, addressing potential fraud risks must form part of that work.
Where there is no apparent system failure, the position on liability and causation could quickly become unclear, and fraudulent claimants may seek to exploit that uncertainty.
If the command and control of a vehicle in autonomous mode can be interfered with, fraudsters could also attempt to stage accidents between vehicles, provided they are able to conceal that interference after the event.
Autonomous vehicles are highly complex machines, relying on advanced artificial intelligence, hardware and vast data inputs. This complexity increases the risk of fraud in the supply chain as well.
As the legal and regulatory framework is, in time, built out from the Automated and Electric Vehicles Act 2018 and the Automated Vehicles Act 2024, insurers will be keen to see what measures are developed to address fraud risks. We anticipate that closer collaboration between key stakeholders, including insurers, manufacturers and software developers, will be crucial.
Fraud in Ireland
Rezmuves v Birney & Ors [16.10.24]
This case demonstrates how difficult it is to succeed in what is known as a ‘section 26 application’, i.e. an application under section 26 of the Civil Liability and Courts Act 2004 to dismiss proceedings where it is alleged that the plaintiff knowingly gave false or misleading evidence.
The plaintiff suffered injuries in three road traffic collisions and claimed that he could not work as a result. During the trial, he delivered a schedule of special damages, verified on affidavit, claiming €185,000 for past loss of earnings and €471,000 for future loss of earnings, notwithstanding that in the five years prior to the first collision he had no income and was receiving social welfare payments. He also claimed €94,000 for a spinal cord stimulator, even though his own consultant neurosurgeon did not support this claim. The plaintiff further claimed for loss of opportunity on the basis that he could no longer run the business he had established two years prior to the first collision. The defendants contended that the plaintiff had misled his actuarial experts about the business and its prospects.
The judge refused to dismiss the proceedings because the defendants had not established to the requisite standard that the plaintiff had acted fraudulently or dishonestly.
This case is also discussed in our travel market insights.