Speaking earlier this year, Google’s Sergey Brin warned us of the dark side of artificial intelligence (AI) and said that “serious thought” must be given to a number of issues, including the impact on employment and the challenges of making unbiased algorithms. As AI gains more applications, the insurance industry knows it needs to keep pace in order to address the risks associated with its rise.
AI cuts across every aspect of insurance, but here we concentrate on three sectors where we consider it has particular impact.
The construction industry - which was told to “modernise or die” by the Farmer Review in 2016 - is now embracing all manner of new technologies, with firms (including JCB and Komatsu) investing substantially in the R&D of automated plant and equipment that will replace labour activities on site.
Drones are anticipated to become a common feature on construction sites, replacing tower cranes, helicopters and other surveying equipment to create 3D site models, monitor the works, produce progress reports and conduct building inspections.
The role of construction professionals, contractors, sub-contractors and their suppliers will change. In the future, plant will replace labour and, increasingly, the skills of the on-site plant maintenance technicians, robotics engineers and software and hardware developers will be key to the nature of the risk being insured.
This move comes in tandem with the development of single project professional indemnity policies, covering all professional service providers including contractors, architects, engineers and surveyors. In time, we are likely to see claims arising under these new policies from common errors in the AI across numerous sites, software failures causing damage and failures to adequately survey or consider hazards when creating a 3D site model.
Insurers need to be aware of gaps in cover and the possibility of picking up the substantial liabilities of the software developers; a negligent survey or flawed software has the potential to run up sizeable construction claims a few years down the line.
WannaCry was a key example of the volume - and vulnerability - of healthcare data, and as AI is deployed across multiple systems, its exposure to cyber attack increases. We are now seeing medical devices such as pacemakers with the ability to receive firmware updates remotely, and joint replacements that are capable of being tracked.
However, there is an even greater risk in this industry as interference with medical devices or implants through cyber attacks (bio-hacking) can cause bodily injury, such as remotely reducing the battery life of a pacemaker or intercepting alerts on devices being tracked. This gives rise to some fundamental policy issues as to who would cover the consequences of such an attack.
Whilst cyber insurance products cover first and third party claims, defence costs and expert assistance, they generally do not extend to personal injury. Insurers offering multiple medical products will need to ensure that they are clear as to where these types of attacks sit between their liability, medical malpractice and D&O cover – with the majority of these policies specifically excluding any cyber-physical risk.
Similar issues arise regarding whose advice and expertise is being relied upon - the clinician, the medical device manufacturer and/or the device and software designers - and which cover responds when the AI fails?
As we move closer towards the reality of autonomous vehicles with the passage of the Automated and Electric Vehicles Act 2018 (the Act), key issues arise as to the share of liability between motor insurers and manufacturers.
The Act places first instance liability on the insurer for an accident caused by an automated vehicle. However, it is estimated that by 2040 there is likely to be a shift towards a 50/50 split in risk allocation between personal lines and product liability insurance.
The manufacturer is likely to have responsibility for programming the software that is installed during the manufacture of the vehicle, and will therefore need to ensure that it has back-to-back indemnities from its software and hardware suppliers and their developers. Clear contractual arrangements will allow manufacturers to continue to pass liability down the chain.
While AI may, in many sectors, reduce risk - and may even eradicate human error - the risk will not disappear entirely but will change. If, for example, the collapse of steelwork on site is caused by an error in the software used to program the automated plant, will the project managers’, the manufacturers’ or the software developers’ insurance respond to the claim?
As the level of human touch in programming is continually reduced, not only will questions over liability have to be re-examined but insurers will also have to consider the potential inability to insure against a pre-determined outcome. As AI moves from human input to machines learning from other machines, there may be no way of testing the risk of the final outcome.
AI offers many opportunities, including a reduction of certain risks. Insurers will need to accommodate the shift in allocation of risk and new policies will need to reflect the changes in behaviours to enable the insurance market to benefit from all that AI can offer.