The UK's AI Opportunities Action Plan – innovation catalyst or regulatory gamble?

On 13 January 2025, the UK government issued its response to the AI Opportunities Action Plan, a report commissioned from Matt Clifford CBE on developing AI and incentivising investment in the UK. The government's endorsement of the plan and its recommendations indicates a light-touch approach to AI regulation, with a focus on unleashing AI to boost economic growth.

In this article, we unpack key aspects of the AI Opportunities Action Plan, explore the evolving regulatory landscape, and consider what this means for the healthcare and insurance sectors.

Overview of the AI Opportunities Action Plan

Laying the foundations to enable AI:

The UK's AI progress relies on substantial investment in computational power, with data centres essential for training models and running AI applications. While the UK does not need to own all its compute resources, access to high-performance computing will be crucial for economic growth, innovation, and national security. To secure this, the government should ensure a balanced compute portfolio: sovereign AI compute for national priorities; domestic compute to drive AI-driven economic growth; and international compute, accessed through partnerships, for shared research and capabilities. Strengthening AI infrastructure will attract talent, create jobs, and establish the UK as a leader in AI.

Changing lives by embracing AI:

AI adoption is crucial for achieving the government's five missions, enhancing service delivery, citizen experiences, and productivity. To drive adoption, the government must strengthen AI foundations – data, skills, talent, and assurance – while also leveraging its role as a major user and catalyst for private sector uptake. AI is already proving beneficial across sectors, from automating administrative tasks and drafting reports to improving diagnostics in healthcare and enhancing security. However, scaling these successes requires a strategic "Scan > Pilot > Scale" approach, backed by more robust procurement strategies. The government has recommended regulatory sandboxes to support AI innovation in high-growth areas with regulatory challenges, such as autonomous vehicles, drones, and robotics. Ensuring clear guidance on AI procurement and testing will be key to helping domestic AI startups compete with global tech giants.

Securing our future with homegrown AI:

By 2029, AI is expected to be a key driver of economic performance and national security, making it essential for the UK to have national AI champions. As AI capabilities rapidly advance – surpassing human performance in many areas and expanding into agentic systems – the economic and strategic stakes will be enormous, and the UK must ensure its economic and strategic interests remain protected. The UK has the talent, research strength, and geographic advantage to lead, but success requires a proactive, interventionist approach rather than reliance on market forces alone, such as tax incentives or venture capital support. The government must ensure frontier AI research happens in the UK, maximise economic benefits, and leverage all policy tools to make the UK the best place to build and scale AI companies.

Regulatory landscape

The UK has adopted a pro-innovation, sector-led regulatory approach to AI, distinguishing it from more prescriptive global frameworks like the EU AI Act. Instead of introducing overarching AI-specific laws, the UK relies on existing regulators to apply AI principles within their respective sectors. The government's framework is based on five core principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. This flexible approach is intended to promote AI innovation while ensuring responsible deployment.

Key regulatory developments include:

  • The establishment of the AI Safety Institute to assess advanced AI models and risks.
  • The Regulatory Innovation Office (RIO) to coordinate AI-related regulatory initiatives.
  • The use of regulatory sandboxes to test AI applications in high-growth sectors such as finance, healthcare, and autonomous systems.
  • A commitment to aligning AI governance with the proposed UK Data (Use and Access) Bill, particularly on data-sharing frameworks.
  • Increased funding and guidance for sector regulators to develop AI-specific capabilities.

Healthcare & AI

Recent research commissioned by the General Medical Council (GMC) into doctors' experiences with generative AI and diagnostic/decision-support systems, focusing on associated risks and responsibilities, found that whilst AI can improve efficiency and save time, concerns remain around potential bias or inaccuracy in AI-generated data.

Bias can arise from data that does not adequately represent diverse patient populations, potentially leading to disparities in diagnosis and treatment. Inaccuracies may stem from flawed algorithms, incomplete datasets and a lack of understanding of how the AI decision-making process works. On a human level, the ramifications of misdiagnosis and inappropriate treatment recommendations are significant.

Whilst there are risks associated with patient safety, data privacy and the potential for over-reliance on AI-driven recommendations, there remains scope for significant opportunity: freeing up consultation time, for example, allows better interaction between patient and doctor.

AI is already being used in the NHS for administrative efficiency and clinical decision support. However, the extent to which existing regulatory and liability frameworks sufficiently address AI-driven decision-making in healthcare remains a subject of ongoing discussion.

Careful oversight, clear professional guidelines and ongoing evaluation of AI systems in healthcare remain the safest way forward.

Impact on insurance

The insurance industry stands to benefit significantly from AI. Insurers may be pleased to note that the unlocking of data assets in the public and private sectors forms a central part of the government's approach to laying the foundations that enable AI development. Data will be central to developments that improve customer experience, such as the hyper-personalisation of insurance, which will allow insurers to offer a more tailored and unique experience, delivering better risk assessment and customised policies.

However, insurers will also have to consider potential challenges, such as the risks that may arise if customers find themselves uninsurable due to discriminatory AI outputs. The Financial Conduct Authority (FCA) has raised concerns about AI-driven hyper-personalisation potentially leading to discriminatory practices. The FCA continues to oversee fairness in insurance decision-making, and existing regulatory frameworks, including the Consumer Duty, already require firms to act in customers' best interests. While no AI-specific consumer protection regulations exist, future adaptations may be necessary. The FCA has signalled that insurers using AI for pricing and underwriting should ensure explainability and fairness in decision-making.

Conclusion

Despite AI regulatory developments across various jurisdictions, the regulatory landscape remains highly fragmented and global alignment elusive.

The UK appears to be in the nascent stages of defining its AI regulatory framework. However, regulation is likely to remain light-touch and pro-innovation, given that growth remains the main priority of the Labour Government. The government will have to strike a balance between addressing the risks posed by the development of AI and creating a landscape that makes investment appealing both domestically and internationally. Can the UK's flexible regulatory approach maintain its competitive edge while ensuring alignment with evolving global AI governance frameworks?

Next steps

  • In Spring 2025, the Department of Science, Innovation and Technology (DSIT) has committed to releasing a long-term compute strategy which will also set out how the UK will seek to address the sustainability and security challenges of AI infrastructure.
  • In Spring 2025, the Department for Business and Trade (DBT) intends to provide a public update as part of its wider approach to regulation as to how DBT and DSIT will work together to empower the Regulatory Innovation Office (RIO) to drive regulatory innovation.
  • In Spring 2025, DSIT will work with HM Treasury, DBT and lead departments, to identify opportunities for AI adoption in key industries.
  • In Summer 2025, the government expects to set out further details on the National Data Library and data access policy.
  • In Summer 2025, DSIT will develop frameworks for sourcing AI.
  • In Summer 2025, DSIT will pilot the AI Knowledge Hub, publishing best practice guidance and case studies.
  • In Autumn 2025, DSIT will provide an update on its commitment to appoint an AI lead for each mission to help identify where AI could be a solution.

As AI governance becomes a defining element of global tech policy, the UK’s approach raises a crucial question: Will the UK’s pro-innovation stance become a competitive advantage, or will its regulatory flexibility prove to be a long-term gamble?