Commercial Brief: innovation market insights - March 2023

There has been a noticeable shift in how individuals and businesses perceive artificial intelligence (AI). For years, it operated quietly in the background, sitting behind the interface of physical products and technology applications, but as ChatGPT and other AI-enabled technologies continue to dominate the headlines in 2023, there is greater visibility of how AI can drive innovation, transformation and value for businesses across all sectors. It might even be fair to say that we are now on the cusp of an AI revolution.

Many have naturally voiced concerns that AI-enabled technologies will change, or replace, certain jobs traditionally performed by humans, or will give rise to bias and discrimination. Conversations about the application, risks and regulation of AI are therefore becoming more relevant than ever.

As Kennedys launches its campaign on Fostering Innovation, we share our insights on:

  • How and to what extent AI is transforming different industries, including a focus on recruitment and the workplace in Australia;
  • The deployment of AI in the workplace and the risks facing UK employers;
  • The status of the development of AI-specific legal and regulatory frameworks in the EU and UK and their potential impact on the life sciences sector; and
  • The novel risks inherent in modern technologies such as AI.

The impact of AI on workplaces: creating a new generation of Luddites or converting reformists?

AI and, in particular, machine learning (a subset of AI) have rapidly accelerated the digital transition, reshaping the way we work by automating tasks with ever-decreasing human input.

Kennedys IQ, the technology arm of Kennedys, combines AI and machine learning with human expertise. The bespoke IQ platform consists of several features that adopt AI in a way that is explainable and demonstrable to its users. This software-driven offering brings efficiencies and insights to multiple stages of the claims handling and underwriting processes, distinguishing Kennedys as the legal profession adopts AI to augment legal practice.

Media content creation is similarly being reconfigured by AI. The CEO of BuzzFeed, the American digital media giant known for its ceaseless lists and quizzes, announced that ChatGPT, an AI chatbot designed by OpenAI, would be used to enhance and personalise its content, with a view to later adopting it within its editorial and business operations. BuzzFeed’s adoption of AI for content creation shows how corporations are turning to AI to automate previously labour-intensive drafting work.

AI is also increasingly shaping HR decision-making in recruitment. For example, Fetcher, a machine learning software tool, assists recruiters by managing key hiring criteria, capturing gender and demographic data, and providing a refined set of profiles to a business for review. In Australia, however, government agencies’ use of AI in decision-making has produced problematic results. Careful consideration needs to be given to the type of AI being deployed, and businesses should assess AI-backed service providers on how their AI is developed and test it against current practices.

AI and automated decision-making are also giving rise to certain risks in the workplace, as exemplified by the unlawful ‘Robodebt’ debt assessment scheme. This involved automated data-matching technology that averaged annual income figures across fortnights, resulting in the incorrect issuing of debt notices and, as a consequence, serious impacts on recipients.
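
To make the flaw concrete, the minimal Python sketch below replays the widely reported income-averaging logic against a hypothetical claimant with intermittent earnings. The figures and the earnings threshold are invented for illustration and are not drawn from the actual scheme.

```python
# Illustrative sketch only: simplified income averaging of the kind reported
# as the core flaw of the Robodebt scheme. All numbers here are hypothetical.

FORTNIGHTS_PER_YEAR = 26
INCOME_FREE_THRESHOLD = 300.0  # hypothetical fortnightly earnings cut-off

def averaged_fortnightly_income(annual_income: float) -> float:
    """Spread a single annual income figure evenly across every fortnight."""
    return annual_income / FORTNIGHTS_PER_YEAR

# A claimant who worked for half the year and was unemployed for the rest:
actual_fortnights = [1000.0] * 13 + [0.0] * 13  # what was really earned
avg = averaged_fortnightly_income(sum(actual_fortnights))  # 500 every fortnight

# Averaging invents earnings in fortnights where the claimant legitimately
# received full benefits, producing a phantom "overpayment" for each one.
for fortnight, actual in enumerate(actual_fortnights, start=1):
    if avg > INCOME_FREE_THRESHOLD and actual <= INCOME_FREE_THRESHOLD:
        print(f"Fortnight {fortnight}: flagged (averaged ${avg:.0f}, earned ${actual:.0f})")
```

Every fortnight in which the claimant earned nothing is wrongly flagged, despite benefits having been correctly paid in each of them.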

The Australian Human Rights Commission (AHRC) has recommended legislative reform for the use of AI. Its December 2022 report calls for a significant overhaul to address the threats AI poses to historically marginalised groups, such as people of colour, women and people with disabilities, where bias (conscious or unconscious) can become entrenched in the data sets on which AI relies.

Contacts: Justin Le Blond, Sophie Fletcher Watson, Jack Kelly and Joe Cunningham

Related item: The impact of AI on workplaces: creating a new generation of Luddites or converting reformists?

AI in the workplace – an invaluable tool or a slippery slope?

The global COVID-19 pandemic brought about many changes in working practices, including the rapid acceleration of the use of AI in employment to allow individuals and businesses to continue operating remotely. AI continues to be a hot topic and, in late 2021, the UK Government published its National AI Strategy, setting out a ten-year plan to make Britain a “global AI superpower”.

However, although AI continues to develop rapidly with advancements in machine learning, no new legislation has yet been introduced in the UK to regulate and control the ways in which it may be used. There is, of course, a raft of employment legislation governing how employers may treat their employees, but it was not designed with such technological developments in mind. This raises the question of whether that legislation is still fit for purpose, and calls for an examination of the risks presented by the use of AI in the workplace, including direct and indirect discrimination, AI’s inability to fully discharge an employer’s obligation to make reasonable adjustments for disabled individuals, and data protection issues.

There is little doubt that AI will continue to transform employment practices and that AI technology can bring significant benefits to employers and individuals alike. However, there are clearly also risks, and employers should be cautious about blindly accepting and relying on AI systems without understanding the technology behind them and how automated decisions are made.

Contacts: Erica Aldridge and Eva Rurlander

Related item: AI in the workplace – an invaluable tool or a slippery slope?

Regulating AI in the life sciences sector

This article first appeared in the January 2023 issue of Financier Worldwide magazine. ©2023 Financier Worldwide. All rights reserved. Reproduced here with permission from the publisher.

AI poses a number of risks to the life sciences sector. Whilst there are different types of AI models, built in different ways, one of the most prominent risks in some of these models is bias, which can give rise to discriminatory outcomes. For example, a system designed to diagnose cancerous skin lesions may prove ineffective where it has been trained on a data set that does not exhibit a diverse range of skin pigmentation.
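
One way such bias surfaces is when a model’s accuracy is reported only in aggregate. The hypothetical Python sketch below evaluates a diagnostic classifier’s accuracy separately for each skin-tone group; the group labels and outcomes are invented for illustration.

```python
# Hypothetical illustration: per-group accuracy reveals data set bias that an
# aggregate figure hides. No real model or patient data is involved.
from collections import defaultdict

def per_group_accuracy(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, predicted, truth in records:
        total[group] += 1
        correct[group] += int(predicted == truth)
    return {group: correct[group] / total[group] for group in total}

# A model trained mostly on lighter skin tones can look accurate overall
# while underperforming badly on the under-represented group:
records = (
    [("light", "benign", "benign")] * 90        # 90 correct
    + [("light", "malignant", "benign")] * 10   # 10 wrong
    + [("dark", "malignant", "malignant")] * 4  # 4 correct
    + [("dark", "benign", "malignant")] * 6     # 6 wrong: missed diagnoses
)

print(per_group_accuracy(records))  # {'light': 0.9, 'dark': 0.4}
```

Aggregate accuracy here is roughly 85 per cent, yet the under-represented group fares far worse, which is precisely the kind of discriminatory outcome described above.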

Automated, adaptive algorithms that learn on their own also pose considerable risks, as they can cause a product to evolve and no longer work as intended, potentially compromising patient safety. Sensitive personal healthcare data is also often vulnerable to third-party hackers. If AI systems are not robust against attack, products may be maliciously hijacked or clinical information stolen, resulting in harm to the individual and reputational damage to the operating company.

In view of these risks, EU and UK regulators are seeking to create regulatory frameworks specifically for AI. In April 2021, the European Commission (EC) published the world’s first proposal for regulation of AI, known as the Artificial Intelligence Act. The EC has adopted a risk-based approach, so that the ‘riskiest’ forms of AI are subject to the most stringent requirements and obligations. For the life sciences industry, this means that high-risk products, for example AI-powered medical devices, will be subject to stringent obligations, including conformity assessments, human oversight and continued maintenance. Companies found to be in breach of the AI Act, once in force, could face significant penalties.

The UK is also proposing to update its mainstay product safety regulatory framework and its medical device regulatory framework to reflect the risks posed by emerging technologies. However, in comparison to the EU, it has taken a more innovation-centric approach to AI regulation, with the stated goal of remaining an “AI and science superpower”.

The EU has also introduced a proposal for a draft AI Liability Directive to provide consumers with a route for compensatory redress in the event of harm caused by AI. This acknowledges that the specific characteristics of AI, including “complexity, autonomy and opacity”, also referred to as the ‘black box effect’, can make it extremely difficult for those affected to identify the liable person and succeed in any subsequent claim that may be brought. The Directive therefore proposes placing more onerous obligations on companies in respect of disclosure, enabling those affected to better understand the AI system and making it easier to identify those potentially liable. With AI being used across the life sciences sector globally, from drug discovery to clinical trials to medical technology, life sciences companies face myriad AI-related liability risks.

Contacts: Samantha Silver, Sarah-Jane Dobson, Charlie Whitchurch and Paula Margolis

Related item: Regulating AI in the life sciences sector

Intangible risks of modern products

This article was originally published by Thomson Reuters Practical Law, January 2023.

Modern products and supply mechanisms carry a variety of intangible risks. These range from exposure to psychologically damaging content online and data privacy and cybersecurity breaches (and any resultant physical harm, for example from online stalking) to reputational and brand risks.

The complexity of the risk profile of these newer products is reflected in the increasing number of claims brought before the UK and EU courts in recent years for pure intangible, non-material damage and loss, such as pure psychological injury or distress. Claims are particularly common in the data privacy sphere and are often large-scale group actions. Also on the rise are claims made in respect of reputational damage and certain types of theoretical or abstract economic damage that cannot be financially determined in a clear and quantifiable way.

Although these types of intangible harm are not new or unique, until recently such claims had typically been brought alongside, or as part of, larger claims for material, quantifiable losses, such as property damage, financial losses or personal injury. Further, the prospect of such intangible harms occurring without associated material damage was previously regarded as remote. Now, however, these types of non-material harm are increasingly under the spotlight as product safety and liability legislation evolves and the intangible age progresses.

Legislators are seeking to broaden the scope of existing legislation and capture the more modern concept of risk and safety within foundational definitions. The revised regimes are intended to better tackle the unique challenges and risks arising from rapidly developing new products and modern supply mechanisms.

Contacts: Sarah-Jane Dobson, Paula Margolis, Katherine Ciclitira and Yaseen Altaf

Related item: Intangible risks of modern products