The EU Artificial Intelligence (AI) Act, which came into force on 1 August 2024, marks the world’s first comprehensive regulatory framework for AI. While most provisions apply from 2 August 2026, some key requirements, including AI literacy obligations, the definition of AI systems, and bans on prohibited practices, took effect on 2 February 2025. These early milestones signal the beginning of a new regulatory era for AI across Europe.
To help businesses navigate these early compliance obligations, in February 2025 the European Commission released two sets of Guidelines: one covering the definition of AI systems (the Guidelines on AI Systems) and one covering prohibited AI practices (the Guidelines on Prohibited AI). Whilst these Guidelines are not binding, they help businesses assess how the rules affect their AI operations and prepare for compliance.
This article outlines the two critical areas businesses need to address in the immediate term: (1) embedding AI literacy as an operational compliance requirement, and (2) understanding how AI systems are defined and what practices are prohibited under the Act.
AI literacy: a foundational compliance requirement
The EU AI Act mandates AI literacy as a fundamental compliance requirement. Organisations deploying AI must ensure that their employees, contractors, and relevant third parties have the necessary skills and knowledge to deploy AI responsibly and manage associated risks.
What AI literacy means in practice
AI literacy is not simply about delivering training programmes. Under the EU AI Act, it is a demonstrable compliance requirement: organisations must ensure that all personnel involved in the deployment and oversight of AI systems understand both the technology and its risks. This shift places a greater onus on businesses to deliver meaningful education programmes that demonstrate comprehension and application beyond one-off training sessions.
One of the biggest challenges businesses face is the fast-evolving nature of AI technology. AI literacy programmes should be updated regularly to keep pace with rapid technological developments and tailored to reflect sector-specific risks. Smaller organisations may also find it difficult to allocate the resources needed for comprehensive AI training.
Governance integration and regulator expectations
Rather than treating AI literacy as a standalone obligation, businesses should integrate it into their existing governance and risk management frameworks. This will not only help organisations build a culture of responsible AI use but also enhance AI oversight, improve decision-making, and strengthen stakeholder trust. While failure to implement AI literacy does not attract direct penalties, regulators may take it into account when determining fines for broader breaches of the AI Act.
Scope and prohibited AI practices: understanding the boundaries
The AI system definition is a key pillar of the AI Act, determining which technologies fall under its scope.
Defining an AI system under the AI Act: what businesses need to know
The EU AI Act provides a lifecycle-based definition of AI, encompassing both the development (building) phase and the deployment (use) phase. The Guidelines on AI Systems confirm that, given the wide variety of AI applications, it is not possible to provide a definitive list of AI systems. Instead, an AI system is defined through seven key elements:
- A machine-based system
- Designed to operate with varying levels of autonomy
- That may exhibit adaptiveness after deployment
- Operating for explicit or implicit objectives
- Inferring from inputs to generate outputs
- Producing predictions, content, recommendations, or decisions
- Influencing physical or virtual environments
However, not all seven elements need to be present at all times for a system to qualify as AI under the Act. The definition is intended to reflect the complexity and diversity of AI systems while ensuring alignment with the AI Act’s objectives.
Organisations should note that this definition is not to be applied mechanically: each system must be assessed individually based on its specific characteristics. While many AI systems will meet the definition set out in the AI Act, not all will be subject to regulation. Ultimately, the Court of Justice of the European Union will be responsible for authoritative interpretations of AI system classification.
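As a purely illustrative aid (not a legal tool), the seven definitional elements above can be captured as a simple self-assessment checklist. The field and function names below are our own shorthand, not terms defined in the Act, and the thresholds are an assumed triage heuristic only: as noted, not every element must be present and each system requires individual assessment.

```python
# Illustrative (non-legal) sketch: the seven definitional elements of an
# "AI system" summarised in the Guidelines on AI Systems. Names and
# thresholds are our own shorthand, not terms from the Act itself.
from dataclasses import dataclass, fields


@dataclass
class AISystemElements:
    machine_based: bool                    # a machine-based system
    varying_autonomy: bool                 # designed to operate with varying levels of autonomy
    adaptive_after_deployment: bool        # may exhibit adaptiveness after deployment
    explicit_or_implicit_objectives: bool  # operates for explicit or implicit objectives
    infers_from_inputs: bool               # infers from inputs to generate outputs
    produces_outputs: bool                 # predictions, content, recommendations, or decisions
    influences_environments: bool          # influences physical or virtual environments


def screening_summary(e: AISystemElements) -> str:
    """Rough triage only: the Guidelines stress that not all seven elements
    need to be present at all times, and each system needs individual
    assessment. A count is never a substitute for legal review."""
    met = sum(getattr(e, f.name) for f in fields(e))
    total = len(fields(e))
    if met == total:
        return "All seven elements present: likely within the definition; seek legal review."
    return f"{met}/{total} elements present: the definition may still apply; assess individually."
```

Running `screening_summary` over a chatbot-style system with every element ticked would flag it for review, whereas a partially matching system still prompts an individual assessment rather than a conclusion.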
Prohibited AI practices: what’s off-limits?
Article 5 of the AI Act outlines AI practices that pose unacceptable risks to fundamental rights, public safety, and democratic values. These prohibitions will be reviewed annually by the European Commission, allowing the list to evolve alongside technological developments.
While some prohibitions mainly target governments and law enforcement, others have direct implications for businesses. Two of the most significant restrictions affecting commercial AI applications are the exploitation of vulnerabilities and social scoring.
- Exploiting Vulnerabilities (Article 5(1)(b))
The Act strictly prohibits AI systems that intentionally exploit individuals’ vulnerabilities based on age, disability, or socioeconomic status, especially those of children or other at-risk individuals, in a manner that results in significant harm. The Guidelines on Prohibited AI define vulnerabilities broadly, covering cognitive, emotional, and physical susceptibilities.
A key example is AI-powered toys designed to manipulate children into engaging in risky behaviour, such as spending excessive time online or making unsafe decisions. Another example includes addictive AI-driven mechanisms, such as reinforcement schedules that exploit dopamine loops to increase user engagement.
- Social Scoring (Article 5(1)(c))
Social scoring refers to AI systems that evaluate or classify individuals based on their social behaviour, personal characteristics, or inferred traits, leading to detrimental or disproportionate treatment.
This prohibition applies in two cases: (1) when social scoring results in negative consequences in an unrelated context, such as using an individual’s financial spending habits to determine their employability; and (2) when the consequences of social scoring are disproportionate to the behaviour being assessed.
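The two triggers above can be sketched as a minimal screening check. This is an illustrative assumption about how a compliance team might triage systems, not the legal test itself; the parameter names are our own shorthand for the two cases described in the Guidelines on Prohibited AI.

```python
# Illustrative (non-legal) screening sketch for the social scoring
# prohibition in Article 5(1)(c). Parameter names are our own shorthand
# for the two cases described in the Guidelines on Prohibited AI.
def social_scoring_concern(affects_unrelated_context: bool,
                           treatment_disproportionate: bool) -> bool:
    """Flag a scoring system for legal review if either trigger applies.

    A True result does not mean the practice is prohibited; it means the
    data use and scoring criteria warrant individual assessment.
    """
    return affects_unrelated_context or treatment_disproportionate


# Example from the text: using financial spending habits to determine
# employability scores in an unrelated context would raise a flag.
flagged = social_scoring_concern(affects_unrelated_context=True,
                                 treatment_disproportionate=False)
```

Either trigger alone is enough to warrant scrutiny, which is why the check uses `or` rather than requiring both conditions.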
The Guidelines on Prohibited AI and recent case law, such as SCHUFA (C-634/21) and Dun & Bradstreet (C-203/22), illustrate how these prohibitions will be applied in practice. Profiling individuals using AI-driven evaluation systems may fall within the prohibition if all the necessary conditions are met. However, while most business AI-powered scoring and evaluation models will fall outside the scope, organisations should scrutinise their data use and scoring criteria to avoid unintentional breaches.
For example, a company such as Dun & Bradstreet, which specialises in business-related data analytics, credit scoring, and risk assessments, is unlikely to be considered to be engaging in prohibited social scoring under the AI Act: its models assess the financial health of companies and individuals for legitimate commercial purposes, rather than evaluating individuals based on social behaviour or personality traits. On the other hand, as highlighted in the Guidelines on Prohibited AI, an insurance company that collects banking transaction data unrelated to life insurance eligibility and uses it to adjust premium pricing could be engaging in prohibited social scoring. The Guidelines recognise that, while certain scoring practices are prohibited, many AI-enabled evaluation models do not meet the criteria for social scoring, meaning most business-oriented scoring practices will fall outside the scope of this prohibition.
Conclusion: preparing for compliance
Businesses should assess whether their AI systems fall within the scope of the AI Act, evaluate their AI literacy programmes, and review their AI-powered tools for risks related to the exploitation of vulnerabilities or social scoring. Given the annual review of the list of prohibited practices, businesses will also need to monitor regulatory developments closely to stay compliant.
While the AI Act presents new regulatory challenges, it also offers a framework for responsible AI governance. Businesses that take a proactive approach to compliance — by integrating AI literacy into governance frameworks, evaluating AI risk, and ensuring responsible deployment — will not only mitigate legal exposure but also achieve AI governance maturity, strengthen consumer trust and enhance their competitive positioning in an AI-driven economy.