As we sip our hot chocolate, we have been thinking about how the global AI roadmap is poised for transformative regulatory shifts in the New Year. Central to these developments is the EU AI Act, the first comprehensive legal framework for AI systems, which began to take shape in 2024. Meanwhile, across the Atlantic, the US faces mounting pressure to establish a cohesive federal AI regulatory framework. The question for 2025 is whether the US will follow the EU’s lead or carve out a regulatory approach that reflects its own priorities and the dynamics of its tech ecosystem.
Here is what lies ahead in the evolving global landscape of AI regulation.
The EU AI Act: a pioneering framework
The EU AI Act, a landmark piece of legislation, aims to create a comprehensive framework for AI regulation, balancing innovation with risk management. Like the GDPR, it is expected to have extraterritorial impact, shaping AI governance well beyond EU borders. Two critical milestones in the EU AI Act’s implementation are set for 2025: the ban on prohibited AI systems and the introduction of rules for General Purpose AI (GPAI) models.
- Prohibited AI systems: safeguarding against harm
Starting in February 2025, the EU AI Act’s ban on prohibited AI systems will come into force. The Act outlaws AI systems deemed inherently harmful, including those deploying subliminal, manipulative, or deceptive techniques, social scoring systems, and AI systems predicting criminal behaviour based solely on profiling or personality traits (we recommend watching the film ‘Minority Report’ over Christmas!). The intent of these prohibitions is clear: to protect fundamental rights and prevent AI systems from causing significant societal harm. However, enforcement will hinge on how regulators interpret terms like “manipulative” and “deceptive.” For example, recent reports suggest that advanced generative AI models, including ChatGPT, may have exhibited deceptive behaviours during testing. Such findings could spark debate about what constitutes manipulation in an AI context.
- General Purpose AI (GPAI) and rules for advanced models
In August 2025, the EU AI Act’s rules on General Purpose AI (GPAI) models and broader enforcement provisions will take effect. GPAI models such as GPT-4 and Gemini Ultra are distinguished by their versatility and widespread applications. The Act divides GPAI models into two categories: standard GPAI, which is subject to general obligations, and systemic-risk GPAI, defined by training compute exceeding 10^25 floating point operations (FLOPs). The latter are subject to enhanced oversight due to their potential for significant societal impact. It remains to be seen whether a legal threshold tied to a specific level of computing power is sensible, or whether it will need periodic revision. While these thresholds apply primarily to leading AI developers like OpenAI and Google, organisations deploying AI systems that incorporate GPAI must also ensure compliance, even if they are not developing the models themselves. Such organisations should anticipate increased compliance costs, particularly if they plan to develop in-house models, even on a smaller scale.
The US perspective: fragmentation or cohesion?
Unlike the EU, the United States has yet to establish a cohesive federal framework for AI regulation. Instead, AI governance in the US is currently shaped by a patchwork of state laws, agency guidance, and voluntary frameworks.
The Trump administration: a turning point for federal regulation?
With Donald Trump returning to office in 2025, the direction of AI regulation remains uncertain. While President Trump’s administration is not traditionally associated with regulatory expansion, ethical concerns and emerging AI risks such as deepfakes may drive bipartisan support for some level of federal oversight. The Federal Trade Commission (FTC), which has historically targeted unfair and deceptive practices, could play a central role. With forthcoming leadership changes, a Trump-appointed FTC chair may prioritise innovation and a lighter regulatory touch, in contrast with the current administration’s stricter antitrust focus.
In the absence of federal regulation, state-level efforts continue to shape the AI landscape. In California, the Governor’s veto of the AI accountability bill highlights the tension between promoting innovation and imposing oversight; future iterations may refine the balance. In New York, meanwhile, a new law regulating AI in employment decisions emphasises transparency and fairness.
Global implications: a converging path?
The EU AI Act sets a high bar for AI regulation and is likely to shape global best practices. Countries such as Canada, Australia, Brazil, and Singapore have already proposed or enacted AI-specific regulatory frameworks that align with the EU’s risk-based approach. As AI becomes a cornerstone of the global economy, the need for interoperable regulatory standards will only grow.

AI regulation is entering a pivotal phase in 2025. The EU AI Act will test the limits of comprehensive governance, while the US grapples with whether and how to regulate AI at the federal level. Businesses must adapt to this rapidly evolving landscape, leveraging compliance as a strategic advantage.
At Kennedys, our global AI and technology team is here to help navigate these challenges, ensuring that your organisation is equipped to thrive in this new regulatory era.
This article was co-authored by Joshua Curzon, Trainee Solicitor.