The future of UK AI regulation: more than just a light touch?

Update

Lord Holmes’ Artificial Intelligence (Regulation) Bill has been dropped following the dissolution of Parliament on 30 May 2024.

As Artificial Intelligence (AI) technologies become increasingly embedded across all industry sectors, lawmakers globally are seeking to tackle and manage the risks that they pose through the introduction of AI policy and regulation. 

In the UK, the regulatory outlook appears uncertain. Recent reports indicate that the government may be backing away from its current “pro-innovation” approach to AI regulation. In tandem, a private members’ bill that provides for AI regulation is also gaining traction.

Clear, consistent and coordinated policy around AI regulation is essential for the UK to fulfil its ambition as an AI superpower and to retain its status as an innovation hub.

For now, businesses and insurers operating in the UK market must ready themselves for the real possibility of future, horizontal AI legislation, alongside the obligations currently mandated by individual industry regulators. 

The state of play

The EU has taken the lead with the strictest rules. The AI Act, due to come into force this year, is the world’s first comprehensive horizontal legal framework for AI. It provides for a risk-based approach to AI regulation that is guided by a set of ethical principles, including data quality, technical robustness and safety, fairness, transparency, human oversight and accountability.  

In stark contrast, the UK government has adopted a light touch, principle-based, “pro-innovation” approach to AI regulation, having established a framework for existing regulators to interpret and apply within their sector-specific domains. The UK Prime Minister endorsed this approach at the AI Safety Summit in November 2023, stating that “The UK’s answer is not to rush to regulate”. 

Is AI regulation in the UK on the cards? 

On 15 April 2024, several sources indicated a potential change in the direction of AI policy. The Financial Times reported that “Officials are exploring moving on regulation for the most powerful AI models” amid regulators’ growing concerns about potential AI harms. In particular, concerns about the large language models that underpin generative AI systems were cited.

This change of heart sits against the backdrop of:

  • The UK Artificial Intelligence (Regulation) Bill (the Bill) – a Private Member’s Bill and the focus of this article, introduced to Parliament on 22 November 2023.
  • The ICO consultation series on how aspects of data protection law should apply to the development and use of generative AI models.
  • International discussions on global AI governance and regulatory alignment. This includes the G7 Code of Conduct and agreements struck between the UK and US, Australia, Singapore and Canada to strengthen collaboration across innovation.

The Artificial Intelligence (Regulation) Bill [HL]

The Bill (starting in the House of Lords) seeks to “put regulatory principles for artificial intelligence into law”. Its main objective is to establish a central ‘AI Authority’ to oversee the regulatory approach to AI with reference to principles of trust, consumer protection, transparency, inclusion, innovation, interoperability and accountability.   

While the success rate for Private Members’ Bills is low, they are often used as a tool to generate debate on important issues, thereby testing opinion in Parliament on areas where legislation might be required.

Lord Holmes’ rationale

The Bill was introduced by Lord Holmes of Richmond and would set up a central, horizontal regulator to coordinate and manage the government’s current sectoral approach.

Lord Holmes advocates for the construction of an agile but comprehensive regulatory framework for AI and considers that:

  • The “right sized regulation will support, not stifle, innovation and is essential for embedding the ethical principles and practical steps that will ensure AI development flourishes in a way that benefits us all – citizens and state”.
  • The government must legislate quickly to preserve and promote the UK’s position on innovation. Pointing to eminent bodies such as the Alan Turing Institute, he noted that the UK has the requisite knowledge and expertise on key issues such as citizens’ rights, consumer protection and IP. 
  • Failure to create regulatory certainty risks businesses aligning with regulatory frameworks outside the UK.

Other peers support the expediting of legislation to create the necessary conditions for innovation and economic success – particularly for those sectors, like life sciences, which thrive in countries where there is strong regulation. Regulation is also seen as necessary to address particular dangers, including copyright infringement. 

AI definition

“Artificial intelligence” and “AI” mean “technology enabling the programming or training of a device or software to (i) perceive environments through the use of data; (ii) interpret data using automated processing designed to approximate cognitive abilities; and (iii) make recommendations, predictions or decisions; with a view to achieving a specific objective.”

AI includes generative AI, meaning deep or large language models able to generate text and other content based on the data on which they were trained.

The establishment of an AI authority

The AI Authority will:

  • Sit above other industry regulators and ensure alignment of approach.
  • Review relevant legislation, such as consumer protection, to assess its suitability in addressing challenges presented by AI. 
  • Conduct horizon-scanning to inform a coherent response to emerging AI technology trends.
  • Support testbeds and sandbox initiatives to enable AI innovators to bring new technologies to market.
  • Promote interoperability with international regulatory frameworks.

Regulatory principles

The AI Authority must have regard to the following principles:

  • Safety, security and robustness.
  • Transparency and explainability.
  • Fairness.
  • Accountability and governance.
  • Contestability and redress.

Designated AI officer

Businesses that deploy or use AI must have a designated AI officer to ensure that AI (and data) are used in a safe, ethical and unbiased manner.

Creation of AI regulatory sandboxes

The AI Authority will be required to collaborate with relevant regulators to construct regulatory sandboxes for AI. A sandbox is defined as an arrangement by one or more regulators that allows businesses to test innovative propositions in the market with real consumers.

Transparency, intellectual property (IP) obligations and labelling

Any person involved in training AI must:

  • Supply the AI Authority with a record of all third-party data and intellectual property used.
  • Ensure all data and IP is used with informed consent.

Greater public engagement

The AI Authority will:

  • Establish a series of programmes for “meaningful, long-term public engagement about the opportunities and risks presented by AI”.
  • Provide further consultative opportunities.

Comment

A principle-based, pro-innovation approach to regulation has appeal. Indeed, when British AI company Wayve secured a $1.05bn investment in March to develop the next generation of AI-powered self-driving vehicles, Wayve’s co-founder cited the UK’s approach as integral to its ability to build AI for assisted and automated driving so quickly.

However, the harms associated with AI cannot be ignored. As AI rapidly advances, we would expect the UK, like other countries, to become increasingly mindful of the need to address issues that could harm public trust in AI and, in turn, undermine the potential gains. The challenge, of course, is striking the right balance: regulation that provides protection yet does not stifle innovation or result in overly complex rules that become meaningless as the technology changes.

We would go so far as to say that developing AI responsibly is probably one of the most significant global issues of today. As AI continues to gather pace, so will attempts at regulation. We can already see how countries are looking to have their own set of rules, which in turn risks further fragmentation of the digital world. 

In the meantime, in the UK, while the current attempt to legislate may not reach the statute book, the call for well-designed regulation should not be ignored. With the Automated Vehicles Bill having just been made law, proactive policies and smart governance can be an enabler of innovation, mitigating potential harms and unlocking investment from more cautious capital. We will continue to monitor the progress of the UK AI Regulation Bill, alongside global developments that impact AI.

Artificial Intelligence timeline: key developments

Visit our timeline