A new liability framework for products and AI

An update on the new EU Product Liability Directive and the proposed AI Liability Directive.

With the EU’s landmark Artificial Intelligence Act (AI Act), the world’s first comprehensive legal framework for AI, having entered into force on 1 August 2024, attention has turned to the EU’s new liability rules that form part of its broader legal framework aiming to address the unique risks and challenges posed by the digital age.

The new EU Product Liability Directive (EU) 2024/2853 (PLD): The new PLD, which imposes strict (‘no-fault’) liability on manufacturers, suppliers and other entities for defective products, will replace its nearly 40-year-old predecessor.

Having been published in the EU Official Journal on 18 November 2024, it came into force on 8 December 2024. Member States will have until 9 December 2026 to implement the new PLD into national laws.

The proposed AI Liability Directive (AILD): The proposed AILD was first published in September 2022, alongside the EU Commission’s proposal for the new PLD. It proposes a fault-based civil liability regime for AI to complement the AI Act and the new PLD. The proposal remains under review by the EU.

As AI and digital technologies play an increasingly fundamental role in our daily lives, including their integration within products used in personal and professional capacities, the potential for consumers and businesses to suffer harm as a result of their use continues to grow.

The EU’s new liability rules will apply to products integrating software and AI, as well as standalone software and AI (with some exceptions), to ensure that users alleging harm caused by these technologies have an effective legal route through which they can seek compensation.  

Overall, the new rules are expected to make it easier for persons harmed by AI systems to bring claims against developers of AI systems and their liability insurers.

The new EU Product Liability Directive

Key takeaways

  • Products placed on the EU market after 9 December 2026 will be subject to the new PLD. Products placed on the market prior to this date will be subject to the existing PLD.
  • It is expected to make it easier for claimants who suffer injury or loss from a defective product to successfully bring claims (including group actions) against a range of entities, including manufacturers, suppliers and online platforms, due to the wider definitions of products and the expansion of the potential target defendants, among other things.
  • Providers of software and digital services will be exposed to product liability risks.
  • Pharmaceutical and medical device manufacturers, which continue to develop products that integrate digital technologies, AI and other software, may be amongst the first to face major test cases under the new regime.
  • The interplay between non-compliance with safety requirements and liability gives rise to an increased litigation risk for businesses already trying to navigate an increasingly complex regulatory environment.  
  • Insurers must consider the impact of these new rules on any product liability cover that they write. They should also engage with their policyholders to ensure that they are familiar with the new rules and that the products that they produce, develop or sell across the EU market comply with relevant product safety laws, so as to mitigate any potential claims risk.

The new PLD: key provisions

While the new PLD retains many of the features of the existing regime, it contains new and potentially far-reaching provisions that could significantly change the litigation risk profile for certain types of products, particularly intangibles such as software and AI-enabled technologies.

Expanded definition of “product” to include digital manufacturing files and stand-alone software including AI.

Expansion of the potential defendants who might be liable for defective products, to include providers of software and digital services, online marketplaces and fulfilment service providers (e.g. warehouses).

Expanded definition of “damage” to include the destruction or corruption of data and medically certified psychological injury. Software and cybersecurity vulnerabilities are likely to become a product liability risk.

A product is considered defective where it does not provide the safety that a person is entitled to expect or that is required under EU or national law. The latter part of this definition is a late addition to the text and introduces the concept of non-compliance with safety requirements constituting a defect.  It is unclear whether this is referring to mandatory and/or voluntary safety requirements.  Further clarification will be required; uncertainty will leave businesses exposed.

New circumstances relevant to safety must be taken into account when assessing defect, including those “required under Union or national law” – meaning that the court will look at a very broad range of circumstances, including (i) product recalls or regulatory interventions relating to product safety; and (ii) compliance with relevant product safety requirements, including mandatory safety requirements and safety-relevant cybersecurity requirements.

The new PLD introduces broader disclosure obligations, with defendants required to provide (at the request of the claimant) “necessary and proportionate” evidence once the claimant has presented a plausible case. Courts will, however, consider the protection of confidential information and trade secrets. A failure to comply with disclosure obligations could give rise to a rebuttable presumption of defect.

Extended ‘longstop’ (expiry) limitation period in latent personal injury cases. Currently, claimants must bring a product liability claim within three years from the date of injury or date of knowledge AND within 10 years from the date a product was put into circulation. A right of action is extinguished after 10 years. 

This still applies under the new PLD. However, in cases of latent personal injury, i.e. where symptoms emerge much later, claimants will be able to bring a claim within 25 years from the date the product in question was placed on the market.

Rebuttable presumptions of defect and/or causation in certain circumstances, such as where it would be excessively difficult for the claimant, particularly in technical or scientifically complex cases, to prove product defect and/or causation.

As these are rebuttable presumptions, a defendant will have an opportunity to produce evidence to rebut the presumption. 

While the EU has maintained that these provisions do not amount to a de-facto reversal of the burden of proof, once a presumption has been made, the burden does shift to the defendant to show that the product is not defective and/or that the defect did not cause the damage/injury complained of.

Under the new PLD, a product will be presumed defective if any of the following applies:

  • The defendant fails to comply with an obligation to disclose relevant evidence;
  • The claimant demonstrates that the product does not comply with mandatory product safety requirements set out in EU or national law; or
  • The claimant demonstrates that the damage was caused by an obvious malfunction of the product during reasonably foreseeable use or under ordinary circumstances.

Causation will also be presumed where:

  • It has been established that the product is defective; and
  • The damage complained of is of a kind “typically consistent” with the defect.

Defences

The new PLD broadly retains the defences that are available under the current regime, including the ‘development risks’ defence. If a defect is established, the defendant must show that the objective state of scientific and technical knowledge at the time the product was placed on the market was such that the defect could not have been discovered.

Next steps

Member States now have two years to implement the new rules into their national laws. It is crucial that businesses that sell and/or supply products in the EU use this time to prepare for the potential impact of the new rules. Businesses should consider:

  • Compliance checks: Regulatory compliance is a key feature of the new PLD. Claimants are likely to scrutinise technical documents and files for evidence of non-compliance with safety requirements.  Businesses should carry out regular internal audits and risk assessments of documentation and quality management systems to ensure compliance.
  • Adequate labelling of products: Businesses should ensure that all product risks, warnings, and potential adverse events are sufficiently described and documented in all relevant product documentation.
  • Preparing for disclosure: Businesses will need to prepare for potentially larger disclosure exercises in jurisdictions (such as France and Germany) where disclosure is usually confined to a limited number of documents. Contracts with suppliers may also need to include more specific document retention clauses, if they do not do so already.
  • Insurance: Businesses will need to ensure that they have adequate insurance cover in place, particularly given that they could be required to defend latent defect claims that arise up to 25 years after the product was placed on the market.

The proposed AI Liability Directive

The AILD was introduced in September 2022 to establish harmonised, non-contractual, fault-based rules specific to damage caused by (i) an output of an AI system; or (ii) the failure of an AI system to produce an output.

Whereas the new PLD provides for a strict liability regime for defective products (i.e. the claimant does not need to prove fault or negligence on the part of the producer or manufacturer), the AILD, in its current form, enables claimants to bring non-contractual fault-based claims (i.e. claims in negligence) in respect of harms caused by AI.

The AILD was proposed because the European Commission considered that current national liability rules do not adequately respond to liability claims for damage caused by AI-based products and services, owing to their inability to account for the complex and autonomous features of AI, including the opaque features of “black box” systems.

Key takeaways

  • Although AI system operators will be subject to national liability rules, the AILD proposes the following features to enable claimants to overcome the complex evidential burdens that typically make establishing a fault-based claim more challenging:
  1. A rebuttable presumption of causality, making it easier for claimants to demonstrate a causal link between an AI system failure and the harm caused.
  2. A right of access to evidence from AI providers or users of high-risk AI systems, provided that the claimant can establish a plausible claim.
  • Despite some industry calls to withdraw the AILD on the grounds that it would create legal uncertainty and discourage innovation, the AILD is now expected to advance following the European Parliamentary Research Service (EPRS) study on the AILD, published on 19 September 2024, which sets out recommendations for changes to the draft proposal and the scope of its application.

Key provisions

A rebuttable presumption of causality

This aims to make it easier for claimants to demonstrate a causal link between an AI system failure and the harm caused.  A claimant must be able to show that:

  • It was reasonably likely that the defendant’s negligence influenced the AI system’s output, or its failure to produce an output; and
  • The output produced by the AI system, or its failure to produce an output, gave rise to the harm complained of.

Crucially, a presumption of causation may arise where providers and deployers of high-risk AI systems have not complied with requirements under the AI Act.  For systems that are not deemed ‘high-risk’ under the AI Act, the presumption would only apply where a national court considered it excessively difficult for the claimant to prove causation. 

The presumption of causality will not apply if a defendant can show that the claimant has sufficient evidence and expertise to prove the causal link between the fault of the defendant and the output/absence of an output that resulted in the harm complained of.

A right of access to disclosure

As the design, development, deployment and operation of high-risk AI systems are expected to make it very difficult for potential claimants to identify the entity liable for the damage caused, the AILD provides a right of access to evidence from AI providers or users of high-risk AI systems. This could include a range of information, including specific documents such as instructions for use and logging requirements.

The disclosure of evidence must be necessary and proportionate to support the claim for damages.  Similar to the new PLD, national courts will be required to consider the legitimate interests of all parties, including the protection of confidential information and trade secrets, or any information that could impact national security. 

A defendant’s failure to comply with a disclosure order will entitle the national court to presume that the defendant has not complied with the relevant duty of care that the requested evidence was intended to demonstrate. Similar to the new PLD, the defendant will have the right to rebut such a presumption.

In July 2024, it was reported that an amended AILD was shared with EU governments and lawmakers for further scrutiny. That draft is reported to align with the AI Act but also significantly expands the liability of those who make a high-risk AI application operational.

The new draft text reportedly includes a new condition triggering the rebuttable presumption, aimed at deployers that did “not monitor the operation of the AI system, or where appropriate, suspend the use of the AI system”. Accordingly, deployers of high-risk AI systems also risk being exposed to a rebuttable presumption of causation where they did not properly monitor the operation of the AI system or take action when a problem arose with the system.

The European Parliamentary Research Service

The EPRS’ study makes a number of recommendations, including:

To make the AILD a regulation that is directly applicable in all Member States. This would be instead of its current form as a directive, which requires transposition into national laws. This would help prevent market fragmentation and avoid potential differences in approach between Member States’ liability frameworks for AI. It would also aim to promote innovation and consumer protection by establishing consistent legal standards across the digital single market. This approach would align with current trends in product safety law, where numerous directives are being replaced by regulations (e.g. the General Product Safety Regulation; the Medical Devices Regulation).

Expanding the scope to include non-AI software. This is to align with the definition of ‘product’ under the new PLD.

The inclusion of intangible harms attracting compensation. An expansion of the types of harm that would attract compensation, namely intangible harms such as discrimination, violations of personality rights and IP rights, and pure economic loss. This approach aims to complement the new PLD, which does not provide for redress in respect of these types of harms.

Establishment of a causal link where there is non-compliance with certain provisions of the AI Act. The establishment of a causal link between the output of an AI system and any consequential damage where there has been non-compliance with the human oversight provisions of the AI Act. So where an AI system has failed to provide for adequate human oversight or supervision, that failure should be presumed to have been the cause of the output of the AI system (or the failure to produce an output) that resulted in the harm complained of.

Allowing claimants to apply for a court order requiring a defendant to disclose any information or evidence necessary for the claimant to bring the claim, by demonstrating (i) harm; (ii) the involvement of an AI system; and (iii) that it is not implausible that the AI system caused the harm.

This would not apply to claimants who are competitors of the defendant, in order to avoid vexatious litigation and to protect trade secrets.  The current version of the AILD requires claimants to provide sufficient evidence to support the plausibility of the claim.

Finally, the study examines whether the AILD should, in future versions, include strict liability, potentially in the context of an impact assessment for a regulation on AI liability. The study observes that “while truly strict liability could simplify compensation processes and regulate activity levels, it must be balanced against its potential to reduce AI innovation and deployment, particularly among SMEs”.

This recommendation appears to align with Mario Draghi’s recent EU report on competitiveness, which considers that the EU’s regulatory approach could stifle innovation. The report acknowledges that EU businesses, especially startups and small and medium-sized enterprises, face significant challenges due to the technology gap between Europe and other jurisdictions such as the US and China.

For further information on the new PLD and the proposed AILD, please contact our experts in our Products Law and Life Sciences team whose contact details can be found above.