Regulating AI: insights on the FCA and ICO’s planned Code of Practice

This article was co-authored by India Tisbury, Solicitor Apprentice. 

The Financial Conduct Authority (FCA) has recently announced the development of a statutory code of practice for regulated firms on the use of AI and automated decision-making. Developed in collaboration with the Information Commissioner's Office (ICO), the initiative aims to strike a balance between supporting good practice and enabling innovation, while also safeguarding privacy. We consider the impact of this announcement for regulated firms.

The current guidance

Despite previous joint efforts between the FCA and ICO to encourage responsible AI adoption, current guidance for firms remains scarce, leaving many uncertain about where they stand. What is in place is, arguably, not fit for purpose. At present, the only tangible support available comes from the ICO, in the form of guidance and a toolkit to help firms assess the risks presented by their own AI systems - resources that many consider inadequate given the pace and complexity of AI development.

The ICO has also published an analysis of how data controllership should be allocated across generative AI supply chains. In contrast, the FCA has so far issued only high-level statements and exploratory discussion papers, outlining firms' potential responsibilities when deploying such technologies, but offering no practical tools or implementation frameworks.

This lack of comprehensive guidance leaves firms in a difficult position, potentially criticised for not using AI where it could have prevented harm, yet equally exposed to liability if AI adoption results in loss. This regulatory ambiguity creates a no-win scenario the FCA now appears ready to address.

What’s changing?

Following a roundtable held with industry leaders in May 2025, the FCA has announced that it will develop a statutory Code of Practice for firms developing or deploying AI and automated decision-making systems, a move aimed at setting clearer expectations and reducing regulatory uncertainty.

Alongside this, the FCA is ramping up its AI Lab initiative, which will help firms to develop, test, and evaluate AI technologies in a controlled and supervised setting. In June 2025, the FCA also launched a new collaboration, giving firms the green light to trial AI in real-world (live) environments, in an attempt to speed up innovation.

Key implications

The FCA's proposals mark a shift from abstract principles to practical support for firms grappling with AI implementation, and a Code of Practice will be welcome news to many. Whilst the scope of the Code remains unclear, it is a decisive step toward regulatory clarity on AI: firms will soon have a clearer rulebook for deploying AI in a compliant way. Firms should also keep in mind that the Code of Practice will provide further protection for consumers.

The FCA’s AI Lab gives firms the opportunity to test and refine AI tools in a controlled environment, helping to identify potential risks and ensure systems are robust before deployment at scale. This will allow for faster adoption of AI, with the added reassurance of regulatory oversight and early intervention if needed.

Together, these developments suggest a more constructive and engaged regulatory approach, aimed at fostering innovation while ensuring that consumer protection, accountability, and trust in AI remain front and centre.

Comment

AI is a complex beast, but something we all have to embrace. It is not surprising that uncertainty and lack of familiarity are (according to the FCA) the main factors causing industry hesitation. There are, of course, benefits to AI, but these must be balanced against the risks: that same unfamiliarity could, for example, lead to an increase in claims.

As we have explained previously, a professional who blindly follows AI recommendations and/or findings, without scrutinising what is being said, risks overreliance and, with it, exposure to claims. There is also a risk that many professional indemnity policies do not explicitly cover AI use, leaving professionals open to coverage arguments.

We anticipate the Code of Practice will be published later this year. In the meantime, firms should continue to make the most of the resources signposted by the FCA to ensure they are following existing guidance, minimising room for criticism.

We await with interest just how far the FCA's Code of Practice will go, how it might help or hinder AI usage, and what the impact will be for firms using AI.