Agentic AI: what businesses need to know to comply in the UK and EU

Agentic AI refers to artificial intelligence systems capable of goal-directed behaviour and autonomous decision-making without direct human intervention. From personalised digital assistants to AI agents that can code, transact, or even manage customer relationships, agentic AI is moving from science fiction into commercial deployment. However, with autonomy comes responsibility: these systems challenge traditional accountability models in both data protection and AI regulation. In this article, we explore what businesses need to know to stay compliant with emerging UK and EU requirements.

This article covers two main areas:

  1. Regulatory classification and legal obligations under the UK and EU AI regulatory frameworks;
  2. Data protection compliance implications of deploying agentic AI systems.

1. Regulatory classification and legal obligations

As AI systems evolve toward higher levels of autonomy, regulatory regimes must grapple with how to classify, monitor, and govern such systems without stifling innovation. Agentic AI, defined by its goal-driven independence and ability to initiate action without direct human input, raises particularly complex compliance issues.

EU AI Act: where agentic AI fits

The EU AI Act adopts a risk-based approach to regulating AI. Agentic AI will likely fall into the "high-risk" or potentially even "prohibited" categories depending on its function and deployment context. For example:

  • High-risk uses under Annex III (e.g., recruitment, credit scoring, biometric identification, education, law enforcement).
  • General-purpose AI (GPAI): where the agentic system is released as a GPAI model usable across many tasks and domains, it is subject to transparency obligations and, for models presenting systemic risk, the additional requirements of Chapter V.
  • Embodied AI: where agentic capabilities are built into physical products (e.g., robots or drones) already regulated under existing EU product safety regimes (Machinery Regulation, Medical Devices Regulation, etc.), dual regulation applies.
  • Prohibited practices: an agentic system that manipulates behaviour in a subliminal manner could fall within the prohibited practices in Article 5.

Although the term "agentic AI" does not appear in the AI Act, the legislation is designed to be technology-neutral and future-proof, which means agentic systems fall within scope.

Businesses must:

  • conduct conformity assessments where required (Article 43);
  • implement a robust risk management system (Article 9);
  • maintain technical documentation (Annex IV) and logs of decisions made by the agentic system (Article 12);
  • ensure effective human oversight, accuracy, and robustness (Articles 14 and 15).

Agentic AI's capability to act independently and continuously may increase the risk profile under Article 6 of the Act. The European Commission may also assess the degree of autonomy as a relevant factor when determining whether a system poses unacceptable risks. Additionally, Article 14 of the Act requires that high-risk systems are subject to effective human oversight. This provision is especially important for agentic AI, given the challenges of maintaining supervisory control over autonomous agents.

UK position: context-specific regulation

The UK does not have an overarching AI law but instead takes a context-specific, pro-innovation approach based on five cross-sector principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

The 2025 AI Opportunities Action Plan and the earlier 2023 White Paper identified "autonomy risks", especially from agentic systems, as requiring further regulatory attention. As a result, sector-specific regulators like the FCA, ICO, and CMA are expected to issue guidance that captures agentic behaviours.

Additionally, the UK’s Code of Practice for the Cyber Security of AI (published by DSIT in January 2025) outlines baseline cybersecurity requirements across the AI lifecycle. This voluntary Code includes principles on human responsibility and auditability that are especially pertinent to agentic AI. The Code’s guidance around secure design, human oversight, and documentation of model behaviour can support compliance in use cases involving proactive and continuously learning agents.

For agentic AI, UK regulators (such as the ICO, FCA, and CMA) expect:

  • sector-specific risk assessments;
  • clear allocation of responsibility for automated decision-making;
  • robust governance, especially where the agent could cause legal or financial harm.

While these expectations are not yet legally binding, the UK government intends to issue statutory guidance, and the ICO's new AI Code of Practice is expected in 2025.

2. Data Protection compliance implications

As agentic AI systems become more embedded in business processes, their compliance with data protection law raises novel and complex questions. Neither the UK GDPR nor the EU GDPR was designed with fully autonomous AI agents in mind. Yet businesses must adapt their governance to account for the unique challenges these systems pose. The first and perhaps most fundamental issue lies in role allocation.

Role allocation and the problem of autonomy

Agentic AI challenges the traditional data controller/processor dichotomy under the UK and EU GDPR. Businesses must carefully consider:

  • Who determines the purpose and means of processing when an AI acts autonomously?
  • How to attribute legal responsibility for decisions taken without direct human intervention?

A recommended approach is to build in dynamic oversight mechanisms and document human supervisory structures. This ensures the business remains the "controller" and avoids claims of relinquishing control to an unaccountable agent.

Regulatory authorities are likely to assess the adequacy of governance structures and will expect documentation showing how agentic behaviour is bounded by human-defined parameters.
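
One way to implement "agentic behaviour bounded by human-defined parameters" is a dispatch boundary that executes only pre-approved actions and escalates sensitive ones for human sign-off. This is a minimal sketch; the action names and policy sets are invented for illustration and would be defined by the business's own governance documentation.

```python
# Human-approved policy: which agent actions run automatically and which
# must be escalated for review. (Illustrative action names only.)
ALLOWED_ACTIONS = {"send_reminder", "update_record"}
ESCALATE_ACTIONS = {"close_account", "issue_refund"}

def dispatch(action: str, execute, escalate):
    """Route an agent-proposed action through the governance boundary.

    `execute` and `escalate` are callables standing in for the real
    automation and human-review workflows.
    """
    if action in ALLOWED_ACTIONS:
        return execute(action)
    if action in ESCALATE_ACTIONS:
        return escalate(action)   # human-in-the-loop sign-off required
    # Anything outside the documented parameters is refused outright.
    raise PermissionError(f"Action '{action}' outside approved parameters")
```

Keeping the allowlists under human change control is what lets the business evidence that it, not the agent, determines the purposes and means of processing.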

Transparency, explainability and the right to information

Under Articles 13 and 14 of the UK and EU GDPR, individuals have the right to be informed about the use of their data. For agentic AI, this becomes complex:

  • What if the system self-generates decisions in ways not foreseen at deployment?
  • How can explanations be meaningful, especially for highly technical or emergent behaviours?

Organisations should develop layered notices and use plain language to explain the logic and scope of the AI’s autonomy. The European Data Protection Board (EDPB) has highlighted that black-box AI cannot justify a failure to comply with transparency requirements. This is especially important for agentic AI, where actions may derive from intermediate steps or model outputs not directly supervised or even understood by human operators.
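
A layered notice can be thought of as a simple nested structure, moving from a plain-language headline to technical detail. The wording and keys below are purely illustrative assumptions, not a regulatory template:

```python
# Illustrative layered transparency notice for an agentic assistant.
layered_notice = {
    "layer_1_summary": "An AI assistant may act on your account autonomously.",
    "layer_2_detail": (
        "The assistant can schedule reminders and update your preferences "
        "without a human operator. It cannot make payments."
    ),
    "layer_3_technical": {
        "autonomy_scope": ["schedule_reminders", "update_preferences"],
        "human_oversight": "Account-changing actions are logged and reviewed.",
    },
}
```

The point of the structure is that each layer stays meaningful on its own: a user who reads only the first line still learns that an autonomous agent is involved.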

Automated decision-making (Article 22 UK and EU GDPR)

Agentic AI may trigger Article 22, which grants individuals the right not to be subject to a decision based solely on automated processing that produces legal or significant effects.

Key compliance actions include:

  • assessing whether human review is meaningful and not merely tokenistic;
  • ensuring individuals can contest decisions, especially in high-impact use cases;
  • documenting safeguards and legal bases for relying on exceptions under Article 22(2).

Where agentic AI is used to conduct complex decision chains (e.g., in financial or employment contexts), organisations must ensure that automated recommendations do not evolve into fully automated final outcomes without legally compliant oversight.
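
A minimal sketch of such a gate, assuming a simple in-process workflow (the `Decision` fields and the `human_review` callable are hypothetical stand-ins for a real case-handling system):

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    outcome: str
    significant_effect: bool  # legal or similarly significant effect (Art. 22)

def finalise(decision: Decision, human_review) -> Decision:
    """Gate preventing significant decisions from being finalised
    by automation alone.

    `human_review` must be a genuine review step that can amend or
    overturn the outcome, not a tokenistic rubber stamp.
    """
    if decision.significant_effect:
        return human_review(decision)
    return decision
```

In a real deployment the review step would route to a queue with trained staff and a documented contest process; the code only illustrates where in the decision chain the gate sits.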

Data minimisation and purpose limitation

Agentic systems often rely on real-time learning, continuous data ingestion, and autonomous goal-seeking behaviour. This raises questions around:

  • compliance with data minimisation and purpose limitation principles;
  • managing drift from original purposes as AI "learns" new patterns.

To mitigate risks, businesses should:

  • define strict purpose boundaries within the model's operational parameters;
  • implement technical constraints and human-in-the-loop protocols;
  • regularly audit for function creep and unintended data uses.
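
The audit step above can be sketched as a check of logged data uses against a declared-purpose set. The purpose labels and log format are assumptions for illustration, not a standard:

```python
# Purposes the organisation has declared to data subjects.
DECLARED_PURPOSES = {"customer_support", "fraud_prevention"}

def audit_function_creep(uses: list[dict]) -> list[dict]:
    """Return logged data uses whose purpose falls outside the declared set."""
    return [u for u in uses if u["purpose"] not in DECLARED_PURPOSES]

# Example: the agent has started using email data for an undeclared purpose.
uses = [
    {"data": "email", "purpose": "customer_support"},
    {"data": "email", "purpose": "marketing"},  # drift from original purpose
]
flagged = audit_function_creep(uses)
```

Running such a check on the agent's own activity log turns "regularly audit for function creep" from a policy statement into a repeatable control.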

Comment: a call for proactive governance

Agentic AI is no longer theoretical. From AI co-pilots to autonomous decision agents, these systems are reshaping how businesses operate and interact with users. But autonomy doesn’t mean freedom from regulation. In fact, agentic AI increases the regulatory burden, especially in demonstrating explainability, accountability, and legal responsibility.

To remain compliant, organisations must:

  • conduct detailed risk assessments under both the EU AI Act and UK’s context-specific principles;
  • maintain control and traceability of AI behaviour;
  • adapt GDPR compliance frameworks to account for the autonomous capabilities of agentic systems.

Proactive governance, not passive oversight, is the only viable strategy. Businesses deploying agentic AI must future-proof their compliance posture, or risk facing enforcement, reputational damage, and legal uncertainty as regulators catch up with technological reality.