Emerging silent AI risks in Asia-Pacific (APAC)

Understanding emerging silent AI risks

As policyholders increasingly integrate artificial intelligence (AI) into their operations, insurers face the growing challenge of managing new categories of risk. These risks have the potential to trigger claims under traditional insurance policies that were not specifically designed to cover them, for example professional indemnity, business interruption, and directors’ and officers’ (D&O) liability insurance.

Over the past decade or so, the industry has faced "silent cyber" risks, where insurers unknowingly covered cyber incidents under their general policies. We have helped insurers assess whether they have a silent cyber problem and draft specific coverage for cyber risks. We may now be seeing the emergence of “silent AI”, where insurers inadvertently cover AI risks, including financial, operational, regulatory and reputational risks arising out of the deployment and use of AI.

Key legal and regulatory risk factors

It is crucial to analyse the relevant AI risks, and the losses and damages associated with them, to ascertain whether these are covered or excluded. To do so, we need to consider how policyholders use AI, including the level of human supervision over AI-generated outputs. For example, if a product or service is AI-assisted and is defective as a result, liability may well fall on the policyholder as the provider of that product or service, rather than on the AI developer. This allocation of liability becomes critical when assessing policy coverage under professional indemnity or product liability policies.

Data privacy considerations

Significant data privacy issues arise when AI systems are trained on, collect, or generate sensitive personal information. Consent may be necessary before personal data collected for other purposes can be used to train AI models. Ensuring the security of training datasets and outputs is also crucial to avoid unlawful disclosures or breaches.

Algorithmic bias and discrimination

Bias in AI training data can result in discriminatory outputs and unfair practices, which may initially be difficult to detect and correct. Indeed, if an insurer or policyholder uses biased AI in underwriting, recruitment, or customer decision-making, it may face claims for unlawful discrimination or regulatory enforcement.
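One common first-line check for this kind of bias is a disparate impact ratio, which compares favourable-outcome rates across groups. The Python sketch below is a minimal, hypothetical illustration only: the decision data, group labels, and the 0.8 threshold (borrowed from the US "four-fifths" rule of thumb) are assumptions, not requirements of any APAC regulator.

```python
# Minimal sketch of a disparate impact check on model decisions.
# Data, labels, and the 0.8 threshold are illustrative assumptions only.

def disparate_impact(decisions, groups, favourable="approve"):
    """Ratio of the lowest group's favourable-outcome rate to the highest's."""
    rates = {}
    for group in set(groups):
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        rates[group] = sum(d == favourable for d in outcomes) / len(outcomes)
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical underwriting decisions for two applicant groups.
decisions = ["approve", "approve", "decline", "approve", "decline", "decline"]
groups    = ["A",       "A",       "A",       "B",       "B",       "B"]

ratio, rates = disparate_impact(decisions, groups)
print(rates)                                  # per-group approval rates
print(f"disparate impact ratio: {ratio:.2f}") # 0.50 in this toy example
# Ratios well below ~0.8 often prompt closer review of the model and its
# training data, though legal thresholds vary by jurisdiction.
```

A simple ratio like this would only be a triage step; a result that looks problematic would need proper statistical and legal analysis before any conclusions are drawn.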

These are just some of the issues that insurers need to address, whether to ensure they are not providing silent AI cover or to design specific terms and products that do cover AI risks.

Regulatory developments in the Asia-Pacific region

Asia-Pacific (APAC) regulators are actively developing governance frameworks to address the responsible use of AI in financial services and data privacy.

The Monetary Authority of Singapore (MAS) has recently published two notable frameworks:

  1. an Information Paper on AI Model Risk Management, setting out good practices for AI and generative AI model risk management and encouraging all banks and financial institutions to reference it when developing and deploying AI; and
  2. the Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the use of AI and Data Analytics (AIDA) in Singapore’s Financial Sector, developed in partnership with the Singapore Personal Data Protection Commission, which provide firms with foundational principles to consider when using AIDA.

Meanwhile, several data protection authorities – including those in Australia, Hong Kong and South Korea – have also recently published their own AI-specific privacy guidelines to address the risks posed by large-scale data processing, particularly in public-facing AI applications such as chatbots and generative AI.

Claims landscape and insurance considerations

Many types of claims could arise from AI technologies. Potential exposures include:

  • consumer protection claims relating to erroneous, biased, misleading, or defective AI-driven decisions;
  • data protection claims arising from the use of personal data without consent, or from inadequate transparency around how personal data is collected and used;
  • employment disputes resulting from AI-driven decisions in hiring, promotion, or termination; and
  • intellectual property litigation, particularly where generative AI is alleged to infringe copyright.

There is also a temptation at the moment to label any technology as “AI”, whether or not AI is actually involved. This kind of misleading claim, or “AI washing”, certainly risks breaching consumer laws.

Insurers may find that traditional wordings are insufficiently clear as to whether such risks are covered or excluded. Proactive analysis of policy language – particularly exclusions, insuring clauses, and definitions – is essential. Perhaps slightly ironically, AI may itself be part of the solution: it may be able to help insurers identify silent AI exposures by reviewing policy wordings, claims patterns, and emerging risk signals.
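To illustrate in the simplest possible terms how such a review might begin, the Python sketch below flags clauses that mention AI-related terms without any clear exclusion or affirmative grant. The term lists, clause texts, and flagging rule are all hypothetical assumptions; a real policy review would require far more nuanced legal and linguistic analysis.

```python
# Minimal sketch: flag policy clauses that mention AI-related terms but give
# no clear grant or exclusion, i.e. candidate "silent AI" wording.
# Term lists, clause texts, and the flagging rule are illustrative assumptions.
import re

AI_TERMS = re.compile(
    r"\b(artificial intelligence|machine learning|algorithm(ic)?|automated decision)\b",
    re.IGNORECASE,
)
CLARITY_TERMS = re.compile(
    r"\b(exclud\w*|not covered|affirmatively cover\w*)\b", re.IGNORECASE
)

def flag_silent_ai(clauses: dict[str, str]) -> list[str]:
    """Return clause IDs that mention AI without clearly covering or excluding it."""
    flagged = []
    for clause_id, text in clauses.items():
        if AI_TERMS.search(text) and not CLARITY_TERMS.search(text):
            flagged.append(clause_id)
    return flagged

# Hypothetical clauses from a professional indemnity wording.
clauses = {
    "4.2": "Loss arising from any act, error or omission in professional services, "
           "including services assisted by machine learning systems.",
    "7.1": "Claims arising from artificial intelligence systems are excluded.",
}

print(flag_silent_ai(clauses))  # ['4.2'] - mentions AI with no grant or exclusion
```

A keyword pass of this kind could only ever be a first triage step, surfacing clauses for lawyers and underwriters to examine, rather than a substitute for that examination.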