Intangible risks of modern products

This article was originally published by Thomson Reuters Practical Law, January 2023.

Modern products are significantly more complex than the traditional products of the past. With rapid advancements in technology affecting businesses across many different product sectors, the risks posed by these products, both in the types of harm arising and in the legal exposure under product laws, are more complex and diverse than was the case until very recently.

The EU and UK mainstay product safety and liability legislative regimes are undergoing one of the biggest shake-ups since their introduction over 30 years ago. This has been triggered by product digitalisation, the introduction of innovative and new technologies, and the increasing use of online platforms (OPs). The concept of what is a "safe" product, both in safety and liability regimes, is expanding beyond physical, tangible items and more "traditional" safety risks.

Legislators are seeking to broaden the scope of existing legislation and capture the more modern concept of risk and safety within foundational definitions. The revised regimes are intended to better tackle the unique challenges and risks arising from rapidly developing new products and modern supply mechanisms.

This article explores the novel risks inherent in modern products. It outlines the less traditional types of damage and loss that can be sustained and the claims to which they can give rise. The article also sets out the favoured mechanisms for pursuing such claims.

New risks, new harms

Modern products and supply mechanisms carry a variety of intangible risks. These range from exposure to psychologically damaging content online and data privacy and cybersecurity breaches (and any resultant physical harm, for example, from online stalking) to reputational and brand risks.

The complexity of the risk profile of these newer products is reflected in the increasing number of claims brought before the UK and EU courts in recent years for pure intangible, non-material damage and loss, such as pure psychological injury or distress. Claims are particularly common in the data privacy sphere and are often large-scale group actions. Also on the rise are claims made in respect of reputational damage and certain types of theoretical or abstract economic damage that cannot be financially determined in a clear and quantifiable way.

Although these types of intangible harm are not new or unique, until recently they had typically been brought alongside or as part of larger claims for material, quantifiable losses, such as property damage, financial losses or personal injury. Further, the prospect of such intangible harms occurring without associated material damage was previously perceived as remote. Now, however, these types of non-material harm are increasingly under the spotlight as product safety and liability legislation evolves and the intangible age progresses.

In an increasingly digitised and interconnected world, cybersecurity risks are a dominant concern for product manufacturers, suppliers and other actors in the supply chain, as well as consumers. Products which use new technologies and are interconnected (for example, smart medical devices and virtual reality (VR) or augmented reality (AR) headsets) are at risk of unauthorised access to data or malicious interference by third parties, including ransomware and malware attacks. This could infringe consumers' privacy rights, cause reputational damage to businesses, and allow access to intimate and varied types of personal data.

These risks are particularly exemplified in the Internet of Children's Things market, where design flaws can leave products vulnerable to hacking. Such risks materialised in 2017, when a German watchdog ordered the destruction of a toy doll following concerns that unauthorised users could eavesdrop on child users' conversations. Similarly, in 2019, a children's smartwatch was recalled by an EU product regulator and withdrawn from the market following concerns that the child user's location could be tracked and their personal data stolen.

In recognition of these cybersecurity risks, EU and UK legislators have proposed to incorporate further cybersecurity provisions into their mainstay general product safety regulatory regimes, as well as to introduce specific, stand-alone pieces of legislation with cybersecurity as their singular focus. These legislative developments are part of a drive to regulate product cybersecurity and to expand the concept of product safety to include cybersecurity as a mainstay consideration.

EU framework

The EU Cybersecurity Act came into force in June 2019. It established rules and requirements in relation to the certification of ICT products, services and processes (see Practice note, EU cybersecurity framework: EU Cybersecurity Act). Additionally, as part of the EU's Cybersecurity Strategy presented in December 2020, which aims to improve the cybersecurity of connected products, particularly Internet of Things (IoT) devices, the European Commission (EC) proposed a Delegated Regulation for the Radio Equipment Directive (2014/53/EU) (Delegated Regulation) (see Legal update, European Commission draft Regulation on cybersecurity of internet-enabled products).

Although the Radio Equipment Directive already contained provisions capable of governing the cybersecurity of products, the Delegated Regulation places specific obligations on product manufacturers to improve the cybersecurity of particular wireless devices with radio capabilities, such as wearables, smartphones, toys, smartwatches and fitness trackers. Medical devices (which have generally led the way in the development of product-based cybersecurity regulations) and motor vehicles do not fall within the scope of the Delegated Regulation as they are subject to their own specific legislation which contains cybersecurity provisions.

It has been proposed that the Delegated Regulation, the Cybersecurity Act and the replacement of the Directive on the security of Network Information Systems ((EU) 2016/1148) (NIS Directive) (with what is known as NIS 2) will be complemented by a new EU Cyber Resilience Act. The Cyber Resilience Act seeks to introduce common cybersecurity rules and standards for manufacturers and vendors of tangible and intangible digital products and ancillary services with a view to creating greater transparency over the cybersecurity of such products (see Cyber Resilience Act: legislation tracker).

NIS 2 (published in the Official Journal on 27 December 2022) seeks to:

  • Extend the scope of the NIS Directive to include all essential entities providing services listed in Annex I.
  • Introduce new requirements for the public and private sectors in relation to incident response, supply chain security, encryption and vulnerability disclosure.

(See Practice note, NIS 2 Directive: overview).

These legislative initiatives aim to tackle the gap in the current EU framework applicable to digital products. To date, that framework has only addressed the cybersecurity of tangible digital products and, where applicable, embedded software concerning those tangible products (see Practice note, EU Cybersecurity framework).

More widely, the EU's mainstay product safety framework, the General Product Safety Directive (2001/95/EC) (GPSD), does not prescribe specific cybersecurity requirements covering the whole product lifecycle. However, the EC's proposal for a General Product Safety Regulation (GPSR) to replace the 20-year-old GPSD (agreed between the European Council and European Parliament on 29 November 2022) identifies various areas of improvement (see Revising and replacing General Product Safety Directive (2001/95/EC) (GPSD): legislation tracker).

These areas include market surveillance, product recalls, online marketplaces and new technologies such as connected products and artificial intelligence (AI). Most of these areas are subject to separate pieces of draft EU legislation which are currently being considered in parallel to the proposed GPSR. The proposed GPSR seeks to update and modernise the framework for the safety of non-food consumer products, including in relation to cybersecurity and privacy risks that are increasingly impacting consumer safety. The proposed changes include:

  • A new definition of "product" to encompass items that are "interconnected or not to other items", which is understood as a reference to IoT products.
  • Free software updates for the consumer as a right of remedy where an economic operator recalls the product.
  • Accounting for the effect a product has when interconnected with another product, and a product's cybersecurity features that protect it from malicious third parties, when assessing product safety.
  • A broad range of standards, including European and international standards, the opinions of recognised scientific bodies, and even reasonable consumer expectations as relevant considerations to assist in assessing product safety.

UK framework

The UK Government launched a National Cyber Strategy in January 2022 which proposes a series of measures to improve the UK's cybersecurity. These include the introduction and implementation of the Product Security and Telecommunications Infrastructure (PSTI) Act, which aims to protect consumer connectable devices such as smart TVs and internet-connectable cameras from cybersecurity attacks (see Legal update, Product Security and Telecommunications Infrastructure Bill receives Royal Assent (coverage of Part 1: cybersecurity of consumer connectable products)).

The PSTI Act is more focused on cybersecurity than other general product safety legislation. It provides a power for ministers to specify security requirements relating to relevant connectable products. The requirements could include a ban on universal, easy-to-guess passwords, and an obligation to inform customers of the minimum period for which a product will receive crucial software updates.

The UK has also taken steps to reform and strengthen its own data protection regime by way of the Data Protection and Digital Information Bill (DP&DI Bill), introduced to Parliament on 18 July 2022 with a view to creating a clearer regulatory environment for personal data use to fuel responsible innovation. However, the second reading of the DP&DI Bill was stalled by the change in Conservative government leadership in September 2022. The Bill will now be subject to further consultation, with a view to returning in some form in 2023, particularly as clarity is required for UK-EU data adequacy.

The intangible risks presented by new technologies are not confined to the products themselves: they can also manifest in the forum in which the products are sold. The sale of products on online marketplaces is by no means a new concept but has become an important focus for product safety legislators in recent years. The COVID-19 pandemic in particular triggered a rapid and exponential growth in online sales.

The proposed EU GPSR seeks to improve the safety of products sold on online marketplaces by regulating the conduct of online marketplaces and laying down specific obligations for the companies that operate them. In a similar vein, the Digital Services Act (DSA), which came into force on 16 November 2022, also lays down rules for intermediary service providers (including online marketplaces), imposing mandatory provisions for removing illegal goods (see Practice note, Online platforms: dealing with consumers and business users: EU Digital Services Act (EU only)).

In tandem with the proposals to amend the GPSR, the Market Surveillance Regulation ((EU) 2019/1020) (MSR), in force from 16 July 2021, brings OPs such as marketplaces within the remit of the EU's product safety framework. It aims to "complement and strengthen" existing EU product regulations by establishing more robust processes for market surveillance, compliance controls and promoting closer cross-border co-operation among enforcement authorities. The MSR applies to products that are subject to at least one of the 70 EU product regulations and directives listed in Annex I, ranging from aerosol dispensers to medical devices, unless such products are subject to other EU legislation which regulates aspects of market surveillance and enforcement in a more specific way.

In the post-Brexit era, the UK is expected to implement similar measures for online and marketplace sales, strengthening its current product safety laws to ensure that they are fit to deal with new technologies. Responses to the UK Product Safety Review call for evidence, which will be used to shape policy proposals for the UK's future product safety regulatory framework, called for the new framework to be adaptable and responsive to new technologies that encompass the physical and virtual worlds, to facilitate safe innovation and to ensure that there are no gaps in enforcement.

The metaverse is the online virtual world where physical, digital, and AR and VR technologies unite. It has yet to be properly considered by EU and UK product regulators and legislators, despite increasing enthusiasm among retailers and manufacturers about its potential opportunities. Existing regulation, such as the landmark proposal for a regulation laying down harmonised rules on AI (also known as the Artificial Intelligence Act (AI Act)) (discussed in Diversity and inclusion) and the General Data Protection Regulation ((EU) 2016/679) (EU GDPR), may be applicable to some extent given the interaction between the metaverse, AI and the processing of personal data. However, either these regimes will need to be adapted or new ones introduced in order to respond to the unique and seemingly unlimited challenges posed by the metaverse.

The metaverse is typically accessed through VR headsets and VR or AR glasses. Its popularity was founded in the gaming arena, although its potential is already being realised on a commercial level. It is increasingly used as a platform for buying and selling goods and non-fungible tokens (NFTs). For example, there has been significant investment in this space in recent years by fashion houses, who have started selling clothing, jewellery and footwear that can only be worn in the metaverse. The trading of NFT artwork, gaming assets and metaverse real estate in metaverse marketplaces is also common. As in the physical world, regulators will need to consider product safety considerations when the time comes to regulate goods and digital assets sold in the metaverse. This will create challenges for legislators considering the novelty of these consumer products and the data generated in connection with them.

Products that are used to access or are sold in the metaverse have real potential to cause psychological damage to the user. These technologies (for example, a VR headset) present a new medium in which to consume content, arguably offering a more life-like experience than interacting with traditional technologies. That experience could expose users to psychological trauma. For example:

  • If a user with arachnophobia enters a virtual environment in the metaverse that hosts a spider.
  • If an electronic device purchased in the metaverse catches fire because of a malfunction.

However implausible they may seem, either of these experiences could cause a user to suffer anxiety or trauma symptoms in the real world, owing to the blurring of the virtual and physical worlds. This could give rise to users bringing legal action against metaverse digital product vendors for psychological harm.

Also, companies are introducing ways to make interactions in the metaverse even more realistic. For example, one start-up has created an armband (for use in tandem with a VR headset) to give feedback while users are in the metaverse. The armband intends to enable users to "feel" things in the metaverse, such as pain and touch, as well as to animate their avatars by real-life movement. Such developments will necessarily blur the lines between the physical and virtual world even more.

Diversity and inclusion

Modern technologies typically comprise hardware and software features which work in tandem, but the functioning of software elements is often driven by the datasets inputted by programmers or the device user. This gives rise to bias and diversity and inclusion (D&I) implications, and grounds on which to pursue claims for non-material damage such as psychological distress.

R (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058 concerned the alleged unlawfulness of live facial recognition (LFR) technology and consequential breaches of the Data Protection Act 1998 (DPA 1998), the Data Protection Act 2018 (DPA 2018) and the Equality Act 2010. It arose from the profiling of the claimant based on his skin colour and his unlawful arrest, causing him significant distress. The case illustrates that new technologies such as LFR that collate biometric data can produce biased outcomes with potentially devastating consequences for the individuals who have suffered harm as a result.

Similar risks arise in the context of AI-powered technologies which are governed by specially programmed algorithms. There are increasing concerns that inadvertent human biases can incorporate unintended discriminatory features into algorithm designs, giving rise to biased outputs. There might be limited human oversight and involvement in the development or implementation of AI technologies, and limited transparency and accountability on how predictive tools reach their decisions. This could make it difficult to identify potential bias.

With AI having the potential to amplify stereotypes, manufacturers have a responsibility to "get it right" when ensuring datasets used in the software recognise and do not unfairly discriminate against certain cohorts of people or impact underrepresented or marginalised populations. As D&I continues to be a priority for manufacturers and suppliers across all sectors, limiting scrutiny to product hardware design and testing is no longer sufficient in the context of modern technologies.

The sharp rise in the use of AI technologies over the last decade has outpaced regulatory scrutiny, although this is now being addressed in the EU and the UK. In April 2021, the EC published the draft AI Act, with supporting guidelines focusing on diversity, non-discrimination and fairness (see Practice note, Legal aspects of artificial intelligence: Proposed AI Act). The AI Act will also be supported by an AI Liability Directive, which will enable an individual who has been harmed by AI or AI-enabled systems to bring a claim for compensatory damages.

Although it has not yet published draft legislation, the UK government has published a policy statement setting out proposals for AI regulation in the UK, with a strong emphasis on establishing a framework that encourages AI innovation while prioritising fairness and transparency (see Practice note, Legal aspects of artificial intelligence: The UK approach).

Bias also has an impact where certain genders and ethnicities are underrepresented in innovation. The risks are that a product may only serve a significantly reduced consumer cohort, and that there may be significant and widespread safety implications for users. Historically, many products across different sectors have been designed with men in mind, placing women at risk of injury or discomfort when using such products.

In the life sciences sector, bias in product design, testing and clinical trials may result in some devices and medicines not being as effective on certain patient groups. This was reflected in the UK Government's recent response to its consultation on the future regulation of medical devices in the UK, published in June 2022. The Medicines and Healthcare products Regulatory Agency (MHRA) pledged to provide extended guidance on how manufacturers of medical devices, including software and AI-based medical devices, can demonstrate and ensure the safety and efficacy of their products across diverse populations.

Failing to respect D&I principles in product design, testing and use may give rise to novel discrimination cases. It may also provide a different angle to "traditional" product liability cases whereby the user's reasonable expectations are relevant.

Claims giving rise to intangible harm

The concept of intangible harm, such as psychological injury, is familiar and understood in the context of personal injury claims, and often accompanies a claim for associated physical harm.

However, pure intangible loss and, in particular, the concept of non-material damage has had a resurgence in recent years. There has been an increasing number of data privacy actions brought in the UK and EU, particularly in the context of cyber breaches and alleged misuse of personal information by data controllers which have given rise to claims for distress. Such claims have not involved a related claim for property damage or physical injury and thus may be regarded as claims for pure non-material loss or damage.

As technologies continue to develop, we can expect to see an increase in these types of claims in the following areas.

Data privacy claims are typically brought under the relevant data protection legislation:

  • In the EU, claims are brought pursuant to the EU GDPR.
  • In the UK, claims are brought (depending on the timing of the alleged breach) pursuant to:
    • The DPA 2018 and the retained EU law version of the General Data Protection Regulation ((EU) 2016/679) (UK GDPR); or
    • The DPA 1998.

In respect of claims for non-material damage, Article 82(1) of the EU GDPR provides that "any person who has suffered material or non-material damage as a result of an infringement of this Regulation shall have the right to receive compensation from the controller or the processor for the damage suffered". The recitals to the EU GDPR indicate that non-material damage may include:

  • Loss of control of personal data or limitation of one's rights.
  • Discrimination.
  • Identity theft or fraud.
  • Financial loss.
  • Loss of confidentiality of personal data.
  • Damage to reputation.

The UK GDPR contains the same provision and preamble.

The concept of non-material damage in respect of privacy actions has come under scrutiny in the UK and EU courts in recent years.

UK cases

While much of the UK case law in this area concerns the DPA 1998, it still provides guidance on the courts' current approach to data privacy actions. Previously, damages for distress arising from breaches of the DPA 1998 were not recoverable unless there was also pecuniary or material damage (Johnson v Medical Defence Union [2007] EWCA Civ 262). However, the decision in Johnson was reversed by the Court of Appeal in Vidal-Hall v Google [2015] EWCA Civ 311, which held that there was no such requirement. More recently, the High Court ruled that in order to bring a successful claim for distress arising from breaches of data protection legislation, the distress or damage suffered must be more than de minimis (Rolfe v Veale Wasbrough Vizards LLP [2021] EWHC 2809 (QB)).

The recoverability of non-material damages in the context of data privacy was further considered by the UK Supreme Court (UKSC) in the landmark case Lloyd v Google [2021] UKSC 50, which concerned allegations relating to the "loss of control" of data. Mr Lloyd brought a group action against Google pursuant to the representative action procedure provided for under Civil Procedure Rule (CPR) 19.6. He acted as the consumer representative on behalf of a class of more than four million iPhone users seeking damages on the basis that the users' data subject rights had been infringed. The claim did not allege that any individual user had suffered distress or material damage; instead, Mr Lloyd claimed a uniform amount by way of damages on behalf of each person in respect of the alleged data breaches. The issues considered by the UKSC included:

  • Whether the DPA 1998 allowed for compensation to be paid out to claimants for loss of control of data on a per capita basis in the absence of distress or material damage, as was the factual context in this matter.
  • Whether Mr Lloyd was entitled to bring a representative action under CPR 19.6 on the basis that members of the class had the "same interest" in the claim.

The UKSC held that to bring a claim for compensatory damages for a breach of the DPA 1998, a claimant must establish that there has been a breach, and that damage, in the form of material damage or distress, has been suffered as a result. As this would involve an assessment of individual damages and loss, the claim could not proceed as a representative action under CPR 19.6 because the "same interest" requirement had not been met. In cases requiring an individual assessment of damages, the UKSC suggested that the representative action procedure could still be used to determine common issues of fact or law, leaving issues that require individual determination to be dealt with subsequently, and that future claims arising under the UK GDPR regime might be decided differently.

Subsequent representative actions have been pursued against Big Tech companies in respect of data privacy breaches. Such an approach was taken in SMO (A Child) v TikTok Inc [2022] EWHC 489 (QB), in which it was alleged that TikTok had failed to be transparent about the extent of children's data it processed and the purposes for which their private information was collected. The representative claimant sought to distinguish the claim from Lloyd, relying on Article 82(1) and recital 85 of the GDPR, where "loss of control over a subject's personal data" is cited as an example of non-material damage giving rise to a right to compensation. Although the claim was later withdrawn due to reported financial concerns and legal uncertainty following Lloyd, recent indications from the Information Commissioner's Office, the UK's data protection regulator, that a fine may be issued could reignite attempts to recommence the action.

EU cases

In the EU, there is limited guidance on the interpretation of Article 82 of the EU GDPR. However, in the Austrian case of UI v Österreichische Post AG (Case C-300/21) EU:C:2022:756, in which the claimant sought compensatory damages for "discomfort", questions were referred to the ECJ in April 2021 to determine whether:

  • An infringement of the EU GDPR in and of itself is sufficient to allow a claimant to seek an award of damages or whether they must have suffered harm.
  • A claim for non-material damage requires the legal infringement to be more than just a mere "annoyance".

Similar to the UKSC's ruling in Lloyd, the Advocate General's (AG) opinion of 6 October 2022 stated that mere infringement of the EU GDPR does not give rise to compensatory damages, and that the claimant must show that they have suffered either material or non-material damage. The AG also opined that mere "annoyance" or "upset" is insufficient to warrant an award of compensation.

Businesses operating in the tech space should keep a close eye on the ECJ ruling which, if it follows the AG's opinion, should prevent the floodgates from opening in respect of claims for compensation arising from mere GDPR infringements.

EU claims

The EU Product Liability Directive (PLD), which came into force in July 1985, remains the law governing redress for defective products. It was transposed into UK law by the Consumer Protection Act 1987. At the time these regimes were introduced, products on the market were simpler in nature and tended to be physical items (such as a dishwasher or car) that were not subject to updates or modifications once they left the factory. The traditional risks of personal injury and property damage associated with these products were tangible and widely understood.

Products have evolved considerably since these regimes came into force more than 30 years ago. The features and characteristics that they now comprise, such as AI algorithms, wireless applications and automated software modifications, have raised questions among legislators about whether the existing liability regimes apply to products exhibiting these features. Of particular concern is whether the definition of "product" extends to intangible products such as software or whether they instead constitute a service.

This longstanding debate culminated in the EC publishing its long-awaited proposal to reform the PLD (Proposal) on 28 September 2022 with a view to ensuring liability rules reflect the nature and risks of products in the digital age and circular economy (see Legal update, European Commission proposal for a revised Product Liability Directive (full update)).

Key aspects of the Proposal include:

  • An expanded definition of "product" to include "all movables, even if integrated into another movable or into an immovable. 'Product' includes electricity, digital manufacturing files and software". This means that intangible products, such as software, AI systems and AI-enabled goods will fall within the scope of the revised PLD.
  • A focus on the manufacturer of a product, rather than the producer. Providers of software and digital services, as well as online marketplaces, will potentially be liable for defective products.
  • An expanded definition of "damage" to include "loss or corruption of data that is not used exclusively for professional purposes", which complements the wider definition of "product" to account for emerging technologies. Notably, damage must comprise material losses which can include "medically recognised harm to psychological health".

Although the Proposal does not provide for a complete overhaul of the entire framework, it has substantially widened its scope. Consequently, if adopted in its current form, it has real potential to drive an increase in compensation claims arising from the use of new technologies, including those brought by way of group litigation. This is facilitated by the provision that a person acting on behalf of one or more injured persons can bring product liability claims and by the EU-wide Directive on representative actions for the protection of the collective interests of consumers ((EU) 2020/1828) (RA Directive). The RA Directive provides a mechanism by which consumers affected by the same alleged infringements of EU law (including the PLD) can bring a representative action for redress or injunctive relief.

In tandem with its proposal to revise the PLD, the EC has also proposed a civil liability regime for AI, known as the Artificial Intelligence Liability Directive (AI Liability Directive) to balance the interests of victims of harm relating to AI systems and of businesses operating in the AI sector (see Legal update, European Commission proposal for a Directive on adapting noncontractual civil liability rules for AI (full update)). The AI Liability Directive would also permit claims for any type of damage covered under national laws, including those arising from discrimination or a breach of a fundamental right such as privacy.

The Proposal runs alongside other legislative proposals relating to product safety, including the draft AI Act, the proposed Machinery Regulation, the proposed GPSR and the proposed Cyber Resilience Act, all of which aim to provide a robust framework to address the risks of today's modern products (see Commercial legislation tracker: Product liability and safety).

UK claims

The UK is also considering reform of its product liability framework. In March 2021, as part of its consideration of potential future law reform projects for its 14th Programme of Law Reform (Programme), the UK's Law Commission sought the public's views on whether the Consumer Protection Act 1987, which implemented the PLD into UK law, should be extended to cover products impacted by software and tech development. It cited the similar issues and challenges considered by the EC as part of its proposed reform of the PLD.

In April 2022, the Law Commission announced that following receipt of more than 500 responses to its consultation, it was extending the timetable for finalising the Programme, having concluded that it was currently unable to finalise a list of projects that would determine a significant proportion of its work over the next four to five years. There is no current indication of when the Programme may be concluded.

Claims for pure psychological damage are not a new phenomenon but are becoming increasingly prevalent in the context of new technologies. Today's society is dominated by business and personal use of social media platforms. Algorithms embedded in the technology use machine learning to generate and sort targeted content based on users' behaviour. There is now increased awareness that these algorithms promote content that is attention-grabbing but also potentially harmful to the user: for example, tragic events may be live-streamed, or other sensitive content, such as that relating to self-harm, may be promoted. Similarly, users can harass or bully one another in a virtual environment. Such content and untoward activity can cause an individual to suffer serious psychological distress and harm.

In the UK, the government has introduced the Online Safety Bill, which is designed to protect online users from harmful content. In the EU, individuals' rights will soon be protected by the DSA. However, the EC has gone one step further by proposing the AI Liability Directive, which would enable users who are negatively affected by AI to claim compensation from its creators.

Impact of class actions and collective redress

The types of claims and consequential intangible loss that may arise from the use of new technologies are often pursued by way of a class action. Often described as a group or collective action, a class action is a procedural mechanism by which a group of individuals with similar or common interests can bring a claim against one or multiple defendants.

Class actions are increasing in the EU in the light of:

  • The legal exposures that have developed in contemporary products.
  • Landmark legislative changes.
  • The proliferation of third-party litigation funders.
  • Experienced US class action law firms entering the EU market.

In particular, with the advent of the new RA Directive, the use of class actions (which can now be brought on a cross-border basis) is expected to expand significantly.

The UK is following suit. The English courts have traditionally managed product-related group claims either informally or by way of a Group Litigation Order. These mechanisms operate on an opt-in basis, meaning that every interested claimant has to take proactive steps to join the proceedings. However, in 2015, an opt-out collective proceedings regime was established by the Consumer Rights Act 2015 for bringing private competition claims in the Competition Appeal Tribunal (CAT). Until recently, this regime had been used relatively infrequently. However, the use of the regime has gained traction since the granting of the first Collective Proceedings Order (CPO) in Merricks v Mastercard [2021] CAT 28 in August 2021, a claim concerning allegedly unlawful interchange fees that impacted more than 46 million consumers. Actions brought against global technology corporations are particularly common, with the CAT having since granted several CPOs.

The CPO regime is currently limited to the competition sphere only. However, there have been growing calls from claimant law firms and consumer action groups for a generic opt-out regime beyond the scope of that available in the CAT. This pressure could intensify further in the light of the EC's draft AI Liability Directive and its Proposal to revise the PLD, which could make it significantly easier for claimants to pursue claims in respect of defective new technologies and AI-enabled products. A generic opt-out regime, if implemented, would be likely to significantly alter the future litigation landscape in the UK, paving the way for claimants harmed by new technologies to bring large-scale group litigation.


As can be seen, there is increasing regulatory and legal focus on the concept of intangible, non-material loss and damage in line with the new technological age. As technology continues to develop at pace, and with the legislative landscape continuing to evolve to address the complex risks arising, we can expect to see consumers in the EU and UK continuing to test the courts' resolve in relation to claims for these types of losses.
