Key insights into AI regulations in the EU and the US: navigating the evolving landscape

The rapid rise of artificial intelligence (AI) has brought a surge in regulations, frameworks, and policies across the globe. From the comprehensive EU Artificial Intelligence Act (EU AI Act) to the US White House Office of Science and Technology Policy (OSTP) Blueprint for an AI Bill of Rights, these regulations aim to balance fostering innovation with mitigating potential risks. However, the fragmented nature of these frameworks, their varying definitions of AI, and differences in scope and obligations pose significant challenges for international businesses.

This article explores key AI regulations in the EU and the US, identifying common priorities before taking a deeper dive into the EU and US AI frameworks as a practical guide for organisations navigating this complex regulatory environment.

Common priorities across AI regulations

Despite differences in jurisdiction and scope, the EU AI Act and various US AI regulations share a core set of priorities to address the risks and opportunities of AI. These frameworks converge on the need to mitigate harm, ensure transparency, and establish accountability.

Mitigating algorithmic bias and discrimination

The EU AI Act and emerging US regulations emphasise preventing or mitigating bias in AI systems, especially those that affect individuals' fundamental rights in critical areas such as education, financial services, employment, and law enforcement:

  • EU AI Act: High-risk AI systems, such as those used in employment, education, and law enforcement, are subject to strict requirements, including robust data governance and regular monitoring, to minimise discriminatory outcomes.
  • US Regulations: The New York City Bias Audit Law mandates regular audits of automated employment decision tools to identify discrimination against protected groups, while the Colorado AI Act requires annual impact assessments for high-risk AI systems affecting employment, healthcare, and housing.

These measures highlight the global recognition of AI's potential to perpetuate bias and the need for proactive safeguards.

Ensuring transparency and explainability

Transparency is another shared cornerstone, ensuring consumers are informed when AI is used and understand its decision-making process:

  • EU AI Act: Requires that individuals are informed when they interact with an AI system, with additional transparency obligations for high-risk systems, such as disclosing key data sources and providing detailed documentation.
  • US Regulations: By way of example, the Minnesota Consumer Data Privacy Act grants consumers whose data is subject to automated decision-making with significant legal or similar effects the right to be informed of the reasoning behind those decisions, and to obtain guidance on the factors that could lead to a different outcome. The Utah Artificial Intelligence Policy Act, meanwhile, requires companies in certain regulated sectors to disclose to consumers that they are engaging with generative AI.

Both frameworks aim to build trust by empowering individuals to understand and challenge AI-driven decisions.

Accountability and oversight throughout the AI lifecycle

Accountability mechanisms are embedded in the EU AI Act and US regulations, ensuring that organisations developing or deploying AI systems are responsible for their safe and ethical use:

  • EU AI Act: Introduces extensive lifecycle oversight, including risk management systems, conformity assessments, and post-market surveillance for high-risk AI systems. It also establishes the European AI Office and requires providers to register high-risk systems in an EU database.
  • US Frameworks: The White House Blueprint for an AI Bill of Rights and the Colorado AI Act stress the importance of internal governance and external audits to maintain accountability, particularly for high-risk systems.

While the EU AI Act is a comprehensive, extraterritorial regulatory framework, US AI governance remains a patchwork of federal and state-level initiatives. However, both prioritise fairness, transparency, and accountability, underscoring the universal need for responsible AI governance.

Deep dive into the EU and US AI frameworks

While common goals like transparency, accountability, and bias mitigation are universal, specific AI regulatory approaches differ in scope, legal form, and enforcement mechanisms.

We take a closer look at the EU AI Act and key US AI frameworks to understand how these regulations are shaping the global landscape.

The EU AI Act: A comprehensive framework with global impact

The EU AI Act, which entered into force on August 1, 2024, establishes a comprehensive and pioneering regulatory framework for AI, setting a global benchmark for governance. Its risk-based approach categorises AI systems into four risk levels: unacceptable, high, limited, and minimal risk, ensuring proportionality in compliance obligations while safeguarding individuals' rights.  

The majority of the EU AI Act's provisions will become applicable on August 2, 2026. Notably, the EU AI Act has extraterritorial application, which underscores the EU's ambition to shape global AI governance and compels providers and deployers worldwide to evaluate their role and risk classification in order to comply with the Act.

The EU AI Act distinguishes between:

  • providers, which are entities that develop AI systems and place them on the market; and
  • deployers, which are organisations or individuals that use these systems.

The EU AI Act also requires importers to ensure the compliance of third-country AI systems entering the EU, distributors to verify that the AI systems they market meet regulatory requirements, and authorised representatives acting on behalf of non-EU providers to ensure adherence to the Act. Obligations under the EU AI Act vary depending on an entity's role and the risk classification of the AI system.

Classification of AI systems into four risk categories

  1. Unacceptable risk: These systems, such as those involving social scoring by governments, are strictly prohibited due to their potential for significant harm.
  2. High risk: This category includes systems that impact fundamental rights, such as biometric identification or AI used in education, healthcare, and employment. High-risk systems are subject to stringent requirements, including:
     - Data governance to ensure high-quality datasets.
     - Human oversight to mitigate risks.
     - Conformity assessments to verify compliance.
     - Post-market monitoring to address emerging risks.
  3. Limited risk: Systems in this category are subject to transparency obligations, such as informing users when interacting with AI.
  4. Minimal risk: These systems face few or no regulatory obligations, encouraging innovation without unnecessary compliance burdens. A simplified sketch of how this tiering might be encoded appears below.
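To illustrate how an organisation might record this tiering in an internal compliance inventory, here is a minimal Python sketch. The use-case names and tier assignments are hypothetical simplifications for illustration only; classifying a real system turns on the Act's detailed provisions and annexes and calls for legal analysis, which a lookup table cannot replace.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"            # e.g. government social scoring
    HIGH = "stringent obligations"         # e.g. employment, education, biometrics
    LIMITED = "transparency obligations"   # e.g. chatbots must disclose they are AI
    MINIMAL = "no specific obligations"    # e.g. spam filters

# Hypothetical mapping of internal use cases to tiers; a real
# classification exercise requires legal review, not a lookup table.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,
    "exam_grading": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Default unknown systems to HIGH so they are escalated for review."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

for case in ("cv_screening", "customer_chatbot", "unlisted_tool"):
    print(f"{case}: {classify(case).name}")
```

Defaulting unknown systems to the high-risk tier is a deliberately conservative design choice: it forces human review before any new system is treated as low risk.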

Enforcement and penalties

The EU AI Act includes robust enforcement measures, with fines of up to €35 million or 7% of global turnover, whichever is higher. These penalties highlight the EU’s commitment to strict compliance and the protection of individuals' rights.
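As a quick worked example of the "whichever is higher" mechanic, the sketch below computes the maximum exposure for a hypothetical company with EUR 2 billion in worldwide annual turnover, for which the 7% prong (EUR 140 million) exceeds the EUR 35 million floor:

```python
def max_fine_eur(global_turnover_eur: float) -> float:
    """Upper bound for the most serious infringements under the EU AI Act:
    EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * global_turnover_eur)

print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 140,000,000
```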

US frameworks: A patchwork approach

In contrast to the EU's unified and comprehensive regulatory framework, the US takes a fragmented approach to AI governance, relying on a mix of federal initiatives, state-level legislation, and voluntary industry standards. This decentralised approach, not dissimilar to the absence of a comprehensive federal data protection law, leaves businesses to navigate a patchwork of state laws, frameworks, and initiatives.

At the federal level

Executive Order (E.O.) 14110 on Safe, Secure, and Trustworthy Development and Use of AI, issued by the White House in October 2023, outlines key principles for safe, ethical, and responsible AI development. It prioritises areas such as fairness, consumer protection, privacy, and international leadership, and tasks over 50 federal entities with developing policies across eight key policy areas: algorithmic bias mitigation, AI safety, innovation and competition, worker support, consumer protection, data privacy and transparency, federal use of AI, and international leadership. While not legally binding, it establishes a tone for responsible AI practices.

The E.O. builds on prior guidance, including the White House OSTP Blueprint for an AI Bill of Rights, released in October 2022. The Blueprint outlines a framework for the ethical and equitable use of AI systems where those systems affect individuals' rights. It articulates five core principles: safe and effective systems, protection from algorithmic discrimination, data privacy, notice and explanation when AI is used, and human alternatives with the ability to opt out. Although aspirational, the Blueprint provides a roadmap for ethical AI deployment.

On January 23, 2025, President Trump issued Executive Order 14179, "Removing Barriers to American Leadership in Artificial Intelligence", days after the new administration revoked E.O. 14110. The Trump administration appears to be taking a different approach to AI policy, focusing on deregulation and promoting the rapid development of AI technologies.

At the state level

Some states are stepping in to address AI regulatory gaps. For example, the Colorado AI Act, enacted May 17, 2024, is the most comprehensive state-level legislation to date. It is set to take effect in February 2026.

Similar to the EU AI Act, it imposes stringent obligations on developers and deployers of high-risk AI systems, with the aim of protecting consumers from algorithmic discrimination: unlawful differential treatment or impact that disfavours individuals or groups on the basis of protected characteristics, resulting from consequential decisions made by or with AI systems.

The Colorado AI Act defines high-risk AI systems as those that make or are a substantial factor in making consequential decisions in the areas of education, employment, financial services, essential public services, healthcare, housing, and legal services. Similarly, New York City’s Bias Audit Law mandates regular audits of automated employment decision tools to ensure fairness and prevent discrimination.

Key obligations under the Colorado AI Act include annual impact assessments, transparency and disclosure requirements, and notification to consumers of the role of AI in consequential decisions along with an opportunity to correct or appeal. 

Colorado has set the standard for governing AI in the United States, and more states are expected to follow in the absence of federal legislation.

Industry-led initiatives also play a significant role in shaping AI governance. The NIST AI Risk Management Framework, developed by the National Institute of Standards and Technology, provides voluntary guidance to help organisations identify and mitigate AI risks. Such frameworks are becoming essential tools for businesses navigating the decentralised US landscape.
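As a sketch of how the framework's four core functions (Govern, Map, Measure, Manage) might anchor an internal risk register, consider the following; the systems, risks, and mitigations named here are invented for illustration:

```python
from dataclasses import dataclass

# The four core functions of the NIST AI Risk Management Framework.
RMF_FUNCTIONS = ("GOVERN", "MAP", "MEASURE", "MANAGE")

@dataclass
class RiskEntry:
    """One row of a hypothetical internal AI risk register."""
    system: str
    risk: str
    rmf_function: str   # which RMF function the mitigation falls under
    mitigation: str
    is_open: bool = True

register = [
    RiskEntry("resume-screener", "disparate impact on protected groups",
              "MEASURE", "annual third-party bias audit"),
    RiskEntry("support-chatbot", "users unaware they are talking to AI",
              "GOVERN", "mandatory AI-use disclosure banner"),
]

# Summarise open risks by RMF function for a compliance status report.
for fn in RMF_FUNCTIONS:
    open_items = [r for r in register if r.rmf_function == fn and r.is_open]
    print(f"{fn}: {len(open_items)} open risk(s)")
```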

However, the absence of comprehensive federal legislation leaves businesses grappling with inconsistencies and legal uncertainties. Companies must adapt to a patchwork of state laws and federal guidelines, underscoring the need for flexibility in AI governance strategies.

Conclusion

The growing patchwork of AI regulations across the globe presents both challenges and opportunities for businesses. Despite jurisdictional differences, common priorities emerge: transparency, accountability, fairness, and the mitigation of algorithmic bias. These shared principles underline a global recognition of the need to balance innovation with responsibility.

To navigate this rapidly evolving regulatory environment, international businesses must adopt proactive strategies. Key steps include:

  • Conducting risk assessments aligned with the highest regulatory standards.
  • Building AI governance frameworks that can be adapted to jurisdictional requirements.
  • Fostering transparency and trust through clear communication with stakeholders.

By aligning practices with these core principles, organisations can mitigate risks, enhance compliance, and position themselves as leaders in ethical AI innovation.