This article originally appeared in Insurance Day, November 2024.
The prospect of autonomous vehicles presents insurers with unique opportunities and challenges, perhaps in equal measure.
As insurers explore solutions to the questions posed by this technology, potential fraud risks must feature in their thinking: where those risks may arise, the practical challenges insurers may face and how fraud concerns may shape later legislation and regulation.
What happens when an accident involves two or more autonomous vehicles operating in autonomous mode and a vehicle under the manual control of a human driver? Where there is no apparent systems failure in either autonomous vehicle, the liability and causation position could quickly become unclear, and fraudulent claimants may seek to exploit that uncertainty.
The transition demand – the autonomous vehicle handing control back to the user in charge (the driver) and vice versa – may be open to exploitation by the driver unless the in-car data is sophisticated enough to detect such exploitation and is released to the relevant insurer expeditiously.
For example, it may be possible to “game” the timing of a particular autonomous vehicle’s transition demand to have it crash in autonomous mode when, in almost all circumstances, the vehicle would have crashed anyway as a result of the human driver’s negligent driving.
Staged accidents
If the command and control of autonomous vehicles in autonomous mode can be altered, this could lead to attempts to stage accidents between vehicles, provided the fraudsters can also obfuscate their interference after the event. It may even be possible to create the illusion of a road accident involving several vehicles, down to alteration of the accelerometer and telematics data, if the data and the background code are not inviolable.
It is also possible fraudsters will target commercial vehicles, given the likelihood they will be insured, using autonomous vehicles to cause the accident and then blaming a failure of the software.
Reliance on software, sensors, GPS and internet connectivity opens the door to cyber security threats. One conceivable risk is that malicious actors gain unauthorised access to a vehicle’s systems and demand payment to restore control, or present false claims to insurers.
Additionally, personal data collected by autonomous vehicles – such as location history, driving patterns, and even biometric data – could be stolen and exploited for identity theft or fraud.
While data created and captured by autonomous vehicles will help clarify liability, it also introduces a new risk: the manipulation of digital evidence to fabricate an innocent narrative. Without proper safeguards, it becomes challenging for insurers, law enforcement and courts to determine the true cause of incidents, increasing the potential for false claims.
Fraud may also be perpetrated simply by failing to disclose deliberate modifications to in-car autonomous vehicle systems, in the hope those modifications are never discovered.
Supply chain fraud
Autonomous vehicles are highly complex machines, relying on advanced artificial intelligence, hardware and vast data inputs. This complexity increases the risk of fraud in the supply chain as well. Components such as sensors, chips or cameras whose quality has been falsified may be introduced fraudulently into long supply chains, where tracking and identification may prove very difficult. A similar concern is the sale of counterfeit parts that fail to meet regulatory standards.
Fraudulent parts, introduced through the supply chain before vehicle production or during repairs or maintenance, could compromise vehicle safety. Where the source of faulty parts is obscured, assigning liability becomes further complicated, creating room for fraudulent claims and counterclaims in legal disputes.
Legislative framework
The UK must introduce clear and robust legislation setting out a framework for cases involving autonomous vehicles. The Automated and Electric Vehicles Act (AEVA) 2018 already lays some groundwork by providing that insurers will cover damage caused by autonomous vehicles when they are operating in autonomous mode.
However, further clarity is needed, especially regarding accidents caused by cyber security breaches, software failures or fraudulent data manipulation.
Given the potential for hacking and data breaches, cyber security regulations and data privacy laws for autonomous vehicles should be a top legislative priority.
Access to in-car telematics, sensor and event data recorder data is essential to insurers in determining liability and identifying potentially fraudulent claims. Similarly, law enforcement agencies will need access to this data when assessing both criminal culpability and appropriate sanctions. So too will regulators, when assessing and reporting on autonomous vehicle safety and performance.
The legislative framework must play an important role in countering the risk of tampered evidence. Blockchain or similar technologies may help ensure data collected from autonomous vehicles is immutable and cannot be altered after the event.
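As a simple illustration of the tamper-evidence principle behind such technologies, each data record can be linked to a cryptographic digest of the record before it, so that any after-the-event alteration breaks the chain and is detectable. The sketch below is illustrative only; the event fields are hypothetical and a real system would involve secure hardware and trusted timestamping.

```python
import hashlib
import json

GENESIS = "0" * 64  # starting value for the first link in the chain

def chain_records(records):
    """Link each record to the previous one via a SHA-256 digest."""
    chained = []
    prev_hash = GENESIS
    for rec in records:
        payload = json.dumps(rec, sort_keys=True)
        digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        chained.append({"record": rec, "prev": prev_hash, "hash": digest})
        prev_hash = digest
    return chained

def verify_chain(chained):
    """Recompute every digest; returns False if any record was altered."""
    prev_hash = GENESIS
    for entry in chained:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

# Hypothetical telematics events around a collision
log = chain_records([
    {"t": "2024-11-01T09:00:00", "speed_kph": 48, "mode": "autonomous"},
    {"t": "2024-11-01T09:00:01", "speed_kph": 0, "event": "collision"},
])
print(verify_chain(log))             # True: chain intact
log[1]["record"]["mode"] = "manual"  # attempted after-the-event alteration
print(verify_chain(log))             # False: tampering detected
```

Because every digest depends on all preceding records, a fraudster cannot quietly rewrite one entry (for example, reclassifying the driving mode at the moment of impact) without invalidating every later link.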
Understandably, the focus of government and key stakeholders has been on ensuring this technology is brought to our roads safely and under a clear and transparent insurance and liability framework.
As the legal and regulatory framework is, in time, built out from the AEVA 2018 and the Automated Vehicles Act 2024, insurers will be keen to see what measures are developed to address fraud risks. We anticipate collaboration between key stakeholders, including insurers, manufacturers and software developers, will be crucial.