Silent AI – a new and unintended threat to traditional policies?

This article was first published by Financier Worldwide Ltd. ©2025 Financier Worldwide Ltd. All rights reserved. Permission to use this reprint has been granted by the publisher.

The rapid adoption of artificial intelligence (AI) and its impact on operational resilience remain priorities for businesses and insurers. This is in large part due to the new and potentially unexpected effects AI could have on traditional insurance policies.

The adoption of AI by insurers and insureds brings into focus how risks are managed and how services are delivered to policyholders. Regulation of AI is hotly debated, regulatory philosophies vary across jurisdictions, and the resulting requirements may themselves bring new and unexpected risks.

As with any new technology or risk, insurers will likely be reviewing how those changes interact with existing policies and coverages, including where the resulting cover was never intended.

Silent risks

The insurance industry is examining how AI may impact existing insurance products and policies, including professional liability, general liability, directors’ and officers’ (D&O), product liability, fraud and cyber.

The insurance industry has already learned lessons from ‘silent cyber’: instances where cyber risks were covered in non-cyber lines even though that was not the intention. For example, against a background of continuous technological advances in the aviation and motor industries, as well as in smart-home technology, concern was expressed that insurers in these sectors needed to give further consideration to their management and pricing of silent cyber risk.

This led to Lloyd’s of London mandating that, with respect to first-party property damage policies, “all policies provide clarity regarding cyber coverage by either excluding or providing affirmative coverage”.

Silent AI

Consideration is now being given to the AI revolution. Lessons from silent cyber are under review to avoid the rise of ‘silent AI’: the unintended cover provided for losses arising from the implementation, embedded or otherwise, of AI technologies, and the unexpected risks that come with it.

Embedded AI covers instances where companies knowingly use AI to advance their business practices. However, the results of embedded AI are only as good as the data it relies on, which can lead to unintended results or consequences. Insurers may therefore need to understand how, and to what extent, AI is integrated into the insurable risk.

Alongside this is self-procured AI: AI that employees use without the company’s knowledge. This raises additional issues around source reliability and data privacy, and may be even more difficult to detect and forecast.

Claims and liability impact

If a traditional policy does not contemplate AI-related risks, it could, in some instances, provide unintentional cover in the event of an AI-related loss, even though that loss was never priced into the policy.

Uses of AI include assessment of financial applications, providing professional services, asset management, document review and analysing medical results. It is easy to envision the potential benefits, but also the extent of potential exposure in all of these instances.

If the underlying data is flawed or is manipulated by the tool, there is potential for unintended results and new avenues of liability for insureds. Unreliable AI-generated data could weave its way into strategic decision making, raising reputational risk. Accordingly, it would benefit insurers to know how their insureds are using AI so they can better understand the risks those insureds face.

Furthermore, regulators are increasingly focused on ensuring that automated decision making does not unfairly target certain groups, which may create compliance risks. Another aspect to consider is how to apportion liability between wrongdoers when AI is involved.

It can be unclear where liability would fall when AI fails, for example by giving incorrect advice or failing to identify a threat. Does liability rest with the entity providing the goods or services, or with the software developer that created the AI tool?

Such considerations have already been brought into the spotlight by the advent of autonomous vehicles (AVs). Who is responsible when an accident occurs involving an AV: the car manufacturer or the human driver?

Spotlight on D&O risks

AI is anticipated to add between $2.5 trillion and $4.5 trillion to the global economy per year, according to McKinsey & Company. The race for companies to integrate and use AI is therefore at full speed, heightening the potential for silent AI.

As part of the AI revolution, commentators seem convinced that AI will one day join or replace human directors, running corporations and making decisions autonomously. While there are some current examples of purported ‘AI directors’, these appear to be the exception and, in reality, are not truly autonomous.

However, any increase in AI autonomy in board decision-making processes raises issues around imposing and apportioning liability, the fallibility of the historical data sets used in decision making, and the application of directors’ fiduciary duties, which are not currently tailored to AI involvement.

While truly autonomous ‘AI directors’ remain some way off, reports show that most companies are already using AI in some capacity. Businesses appear to be using generative AI to gather, analyse and synthesise large data sets that can assist in corporate decision making and strategy.

Naturally, some industries are using AI at a higher rate than others. Consumer goods and retail, technology, financial services, professional services and healthcare each reportedly experienced nearly double-digit growth in the use of AI over the last several years. As a result, these industries may experience higher levels of AI-related risk in the near term and, as a corollary, their insurers may face a greater risk of silent AI exposure.

Presently, corporations and their D&Os are facing very real risks related to AI. These current risks can be broken down into two broad categories: (i) traditional AI risks from the actual use or creation of AI products; and (ii) risks related to disclosures about a company’s use of AI.

Traditional risks include a company’s use of data or personal information to train AI systems, or the use of its own confidential information as inputs to a third-party AI system, raising issues around the security and ownership of that data. There have also been examples of inadvertently discriminatory decisions resulting from AI trained on historical data sets.

Furthermore, attribution and intellectual property lawsuits were among the initial wave of AI risks. These traditional risks have primarily been associated with AI developers whose entire business is the creation of AI and, therefore, may have been slightly easier to forecast and prepare for from an insurer’s perspective.

However, more recently, with the rise in companies and their D&Os wanting to be part of the AI revolution, several lawsuits have been filed with allegations centred on companies’ AI disclosures. These disclosure lawsuits affect companies across industries, and the associated level of AI risk and exposure may be harder to detect, increasing the potential for silent AI.

‘AI-washing’, a term coined by the US Securities and Exchange Commission, involves companies touting their use of AI to attract certain investors or attention. The reality may be that the company only uses AI minimally or not at all. There are now several active US securities class action lawsuits, the first of which was filed in early 2024, against companies and their D&Os for their statements surrounding the use of AI.

It is these more indirect AI-related risks that could have the largest and most unintended impact on insurers and their policies.

Responses

Capital and resources are flowing into both AI and non-AI companies, and the pace of technological change is rapid. Insurers may wish to stay closely apprised of the economic and legal trends affecting AI-related claims on traditional policies.

Other policies may be unintentionally impacted by AI risks. Underwriters may also need to explore how their insureds use AI, both internally and externally, to assess those risks.

New insurance products may arise to address AI-specific risks, including products tailored to particular industries. The frequency and severity of such claims will likely drive how the market responds, including how ‘AI claims’ are treated under traditional policies. It is possible that this will include specifically written AI risk policies.

Insurers may also seek to add specific AI exclusions to other policies. Finally, we may even see the creation of a new class of insurance products built specifically to underwrite AI-related risk.