The current and future impacts of AI in the insurance sector

At a recent event held at Kennedys offices and organised by the Forum of Insurance Lawyers (FOIL), FOIL members met to discuss the current and future impacts of AI within the sector. The event served as a platform to explore the transformative potential of AI technologies, with a particular focus on current gaps and limitations in professional indemnity (PI) coverage in the construction sector.

AI is already contributing substantial value to many insurers, delivering savings and efficiencies while acting as a catalyst for growth. Reflecting its undoubted future importance, several use cases continue to gain traction as insurers explore areas of the value chain where a competitive advantage can be gained.

As AI continues to permeate the sector, its implications for PI are becoming increasingly significant. Legal professionals, consultants, and other service providers must, therefore, navigate the complexities introduced by AI, particularly regarding coverage limitations, liability issues, and evolving policy wordings.

The discussion opened with an emphasis on the remarkable capabilities of GenAI, specifically Large Language Models (LLMs), which are transforming the claims process and represent a significant area of investment among insurers. This investment is set to grow, with the ongoing advancements in GenAI holding the promise of deep operational efficiencies and customer experience improvements. However, the rapid pace of AI development also presents challenges, compelling firms to adapt and rethink traditional practices.

This event highlighted the immediate effects of AI and underscored the need for the insurance sector to remain vigilant and proactive in navigating the evolving landscape. Whether AI will ever develop sentient capability is unknown; for now, the duty of care rests with the humans who engage AI and must ensure the integrity of its output.

Coverage limitations and gaps

A primary concern surrounding PI in the context of AI is the potential for coverage limitations or gaps in existing policy wordings. Traditional PI policies are designed to cover errors and omissions resulting from human actions or advice. However, as AI systems become more autonomous and integral to decision-making processes, the question arises whether standard PI policies provide adequate cover against claims arising from AI-generated outputs.

  • AI output errors: Many existing policies may not cover errors arising from AI algorithms or systems. If an engineer or architect relies on an AI tool that produces an incorrect analysis or recommendation, the liability may fall outside the scope of traditional PI coverage.
  • Third-party claims: AI can lead to third-party claims, particularly if the technology inadvertently harms clients or third parties. For instance, if AI is used in a consultancy project and miscalculates data that leads to financial loss for a client, the professional may find themselves without adequate protection.
  • Regulatory compliance: As regulatory frameworks around AI evolve, PI policies must keep pace to ensure their wordings continue to respond to claims arising from potential non-compliance.

Establishing liability

Establishing liability in cases involving AI presents unique challenges. The complexity of AI systems often obscures the line of accountability, making it difficult to establish whether the fault lies with the inputs, the third-party provider, or the AI itself, and therefore to manage the emerging risk.

  • Human oversight: A key factor in determining liability is the level of oversight exercised over AI tools. Professionals are expected to maintain a duty of care, which includes ensuring that AI systems are used appropriately and their outputs are critically assessed. Neglecting this duty exposes professionals to liability for any resulting errors.
  • Contractual agreements: Clear agreements that outline the responsibilities of all parties involved can help establish liability. Contracts with clients and technology providers should explicitly address the use of AI, including responsibilities for oversight, data integrity, and compliance with relevant regulations.
  • Documentation and transparency: Thorough documentation of the decision-making processes involving AI is essential, including records of how AI tools were used, the rationale behind relying on specific outputs, and any human intervention. Such documentation can serve as crucial evidence in establishing liability or defending against claims, as illustrated in the sketch after this list.
  • Professional standards and best practices: Following industry best practices and professional standards when integrating AI into services can mitigate liability risks. Professionals who stay informed on the latest developments in AI technology, and who understand the implications for their practice, can proactively address potential issues.
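
By way of illustration, the sketch below shows what such an audit record might look like in Python. It is a minimal sketch only: the structure and field names are hypothetical, not drawn from any regulatory standard or policy wording.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    """Hypothetical audit entry for one AI-assisted professional decision."""
    tool_name: str            # which AI tool produced the output
    tool_version: str         # model/version used, for reproducibility
    prompt_summary: str       # what the tool was asked to do
    output_summary: str       # what the tool recommended
    rationale: str            # why the professional relied on (or rejected) it
    human_reviewer: str       # who exercised oversight over the output
    overridden: bool = False  # whether the reviewer amended the AI output
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example: documenting reliance on a hypothetical AI structural-analysis tool.
record = AIDecisionRecord(
    tool_name="load-analysis-assistant",
    tool_version="2.1",
    prompt_summary="Check beam loading for floors 3-5",
    output_summary="Loads within tolerance; no redesign needed",
    rationale="Output cross-checked against a manual calculation",
    human_reviewer="j.smith",
)
```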

Work at Lloyd's Lab

Launched as the centre of innovation for the insurance sector, the Lloyd's Lab has been a key enabler for the acceleration of InsurTech start-ups over the past six years. Playing a central role in supporting the development of innovative products and solutions within the insurance market, Lloyd's Lab fosters progress in AI technology across several critical areas, including enhancing decision-making processes, advancing data modelling capabilities, improving data-gathering methodologies, and encouraging creative approaches to solving industry challenges.

Among the Lab's programme participants is Intelligent AI, a GenAI-driven solution delivering near real-time, location-based risk insights designed to support underwriting. Its value was demonstrated when an analysis of the correct sums insured for a portfolio of 300 commercial properties highlighted a coverage gap to the tune of £500M.
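
The arithmetic behind such a gap analysis is straightforward: the shortfall wherever an estimated rebuild cost exceeds the declared sum insured, aggregated across the portfolio. The sketch below illustrates the calculation with invented figures; it is not Intelligent AI's actual methodology.

```python
# Hypothetical portfolio entries: (declared sum insured, estimated rebuild cost), GBP.
portfolio = [
    (2_000_000, 3_500_000),
    (5_000_000, 4_800_000),  # adequately insured; contributes no gap
    (1_200_000, 2_900_000),
]

# Underinsurance gap: sum the shortfall where the estimated rebuild
# cost exceeds the declared sum insured.
gap = sum(max(estimate - insured, 0) for insured, estimate in portfolio)
print(f"Portfolio underinsurance gap: £{gap:,}")  # £3,200,000 for these figures
```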

Another innovation is ZeroEyes, a pioneering AI-powered gun detection technology that can scan billions of CCTV images daily to provide an advanced security solution for vulnerable buildings. Climate risk modelling specialist Vayuh.ai, part of the Lab's 2023 cohort, has developed the capability to build highly accurate weather forecasts and RMS models by combining AI with physics. Its work has led to better catastrophe (cat) risk models linked to climate change, with its new products across various lines of business set to benefit insurers worldwide.

The need for governance

Members agreed that AI requires robust governance to mitigate its associated risks, particularly the 'black box' challenges that arise when there is a lack of transparency around how AI tools are trained. To ensure the reliability of outputs, rigorous testing should be mandated, including verifying the quality of training data and prompts to safeguard the integrity of results.
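
As one illustration of what mandated testing might involve, the sketch below scores a model against a small set of labelled cases and blocks deployment if accuracy falls below a threshold. The test cases, threshold, and classify interface are assumptions made for the example, not an industry standard.

```python
# Minimal regression harness for an AI tool's outputs. classify() is an
# assumed wrapper around the model under test, returning a decision label.
TEST_CASES = [
    ("Water damage claim, burst pipe, fully documented", "accept"),
    ("Duplicate submission of an existing claim", "refer"),
    ("Loss event pre-dates policy inception", "reject"),
]
ACCURACY_THRESHOLD = 0.95  # assumed governance requirement

def run_regression(classify) -> float:
    """Return the model's accuracy against the labelled test cases."""
    correct = sum(classify(text) == expected for text, expected in TEST_CASES)
    return correct / len(TEST_CASES)

# A stand-in model that always answers "accept" fails the gate,
# demonstrating how unreliable outputs would be caught before deployment.
accuracy = run_regression(lambda text: "accept")
if accuracy < ACCURACY_THRESHOLD:
    raise RuntimeError(f"Accuracy {accuracy:.0%} is below the governance threshold")
```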

Equally important is establishing clear responsibilities for ongoing training to ensure firms maintain technical proficiency and the ability to critically assess AI-generated insights. The insurance market needs to keep pace with the evolving nature of AI, developing a deep understanding of its correct use to mitigate exposure to risks, such as increased D&C claims related to construction projects where AI has been used.

A comprehensive strategy should also be developed to integrate AI with human oversight, enabling a system of checks and balances that enhances the defensibility of decision-making. AI “hallucinations”, for example, could lead to incorrect risk assessments, mispricing of policies, or inappropriate claims decisions, subsequently undermining the reliability of underwriting processes, distorting financial forecasts, and causing reputational damage. Combining AI's capabilities with trained human input allows organisations to ensure AI-driven processes are transparent, accountable, and aligned with industry standards before being used to inform critical decisions.
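
A common pattern for such checks and balances is confidence-based routing, in which low-confidence AI outputs are escalated to a human reviewer before being acted upon. The sketch below is a minimal illustration; the threshold and function names are assumptions.

```python
def decide_with_oversight(ai_decision: str, ai_confidence: float,
                          human_review, threshold: float = 0.90) -> str:
    """Route an AI recommendation through human oversight.

    High-confidence outputs pass through (but should still be logged and
    sampled for review); low-confidence outputs are escalated so that a
    human makes the final call.
    """
    if ai_confidence >= threshold:
        return ai_decision
    return human_review(ai_decision)

# Example: a low-confidence pricing recommendation is escalated.
final = decide_with_oversight(
    "increase premium by 12%", ai_confidence=0.62,
    human_review=lambda decision: f"human-reviewed: {decision}",
)
print(final)  # -> human-reviewed: increase premium by 12%
```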

Regulatory landscape

The session concluded with a discussion of current legislation and its suitability for managing the impacts of AI. The UK Government's position is set out in its AI Regulation White Paper, published in 2023, and centres on an outcomes-driven approach built on five cross-sector principles that foster responsible AI development and deployment. Recognising AI's enormous potential while acknowledging the inherent risks, the UK seeks to create an environment where AI can flourish while safeguarding public trust and ethical principles. This approach is characterised by contextual regulation, tailored to specific sectors and applications, and by collaboration with stakeholders to ensure a balanced and evolving regulatory framework.

A key development is the introduction of ISO 42001, the first international standard for AI management systems, which establishes guidelines for the ethical, transparent, and accountable use of AI. Certification offers organisations a way to demonstrate their commitment to robust governance and the responsible use of AI, ensuring they align with best practices for risk management, data integrity, and decision-making.

Within the EU, the AI Act establishes a comprehensive framework for regulating AI and aims to ensure the safety, reliability, and fairness of AI systems. Recognising the emergence of GenAI as evidence of the rapid evolution of AI technology, the Act aims to retain the flexibility to adapt to future developments. However, specific characteristics of the legislation suggest additional standards and guidance may be necessary to facilitate its implementation in the insurance sector.

As AI develops rapidly, frontier systems will continue to push the boundaries of what is possible, and regulatory bodies will need to adopt an increasingly flexible approach to keep pace with emerging risks and opportunities. While regulation will inevitably catch up, insurers must remain proactive in developing their internal policies to manage AI responsibly.

Future outlook

AI is set to continue disrupting the insurance industry across several areas, including customer service, underwriting, pricing, and fraud detection, as well as by creating new products and business lines. Cost containment or reduction is a constant challenge for insurers, and innovation is central to achieving this while improving the customer experience.

The scope of tasks undertaken by AI is constantly expanding, and its wider use introduces further complexity for insurers in trying to assess risk and assign liability. Other factors, such as whether an AI system is proprietary or off-the-shelf, or whether the professional relies on a third-party specialist for AI output, extend the number of stakeholders involved and add further uncertainty.

Currently, insurers are likely to manage AI claims through existing PI policies on a case-by-case basis, but the future may see the scope of coverage under such policies expand to treat AI failures as a standard risk and close any gaps in coverage. Alternatively, the market could see new solutions that specifically cover AI developers against claims brought by third parties against the ultimate user in the event of failure.

The best way to close coverage gaps before claims arise is for all parties to maintain an open dialogue on the type and level of AI usage employed by insureds, allowing insurers to price coverage appropriately. Where the PI exposures lie may remain unclear in the short term, but coverage must keep pace with technological developments to provide clarity on the most suitable policy types and to mitigate uninsured losses.