In August 2024, the EU Artificial Intelligence Act (EU AI Act) came into force. The EU AI Act intersects with existing data legislation, particularly the EU's General Data Protection Regulation (GDPR), presenting businesses with new compliance challenges and opportunities. The aim of both frameworks is to ensure that technology advances societal interests and protects individual rights, albeit with distinct focuses and methodologies.
The GDPR requires organisations to conduct Data Protection Impact Assessments (DPIAs) for processing activities which are likely to result in high risks to individuals' rights and freedoms. This specifically includes activities involving automated decision-making, profiling, and large-scale processing of sensitive data. DPIAs enable businesses to identify risks, implement mitigations, and demonstrate compliance.
The EU AI Act introduces the concept of Fundamental Rights Impact Assessments (FRIAs) for high-risk AI systems. These assessments evaluate risks to fundamental rights, including privacy, non-discrimination, and societal impacts. While DPIAs focus on personal data, FRIAs encompass broader ethical and human rights dimensions.
Examples of high-risk AI systems that will commonly require compliance under both frameworks include:
- AI-driven recruitment tools analysing personal data to make hiring decisions.
- Credit scoring algorithms using profiling and automated decision-making to determine access to financial products and services.
- Facial recognition systems used in public spaces.
Organisations deploying AI systems that process personal data will see an overlap between the requirements for DPIAs and FRIAs. Controllers and providers of high-risk AI systems must ensure compliance with both frameworks and should leverage their existing DPIAs when conducting FRIAs. Organisations should document risk assessments, embed privacy by design, and implement safeguards to mitigate the risks identified under both frameworks.
Practical takeaways for businesses
- Unified Assessments: Consider a single process integrating DPIAs and FRIAs to address overlapping requirements efficiently.
- Collaborate Across Roles: Build relationships between data protection officers (DPOs), compliance teams, and AI development teams to bridge legal and technical compliance gaps.
- Accountability Mechanisms: Establish clear governance frameworks for monitoring the compliance of AI systems with both the GDPR and the EU AI Act, including periodic reviews.
- Prepare for Validation: Expect supervisory authorities or notified bodies to validate your assessments, particularly for high-risk AI systems.
The relationship between the GDPR and the EU AI Act underscores the importance of responsible AI development. Although we would welcome an AI system that could suggest the perfect gift for our Secret Santa, it might be disproportionate to input three years of their internet history into the system to achieve that outcome! Businesses should proactively align their compliance strategies both to meet regulatory demands and to foster trust and innovation in an AI-driven future.
This article was co-authored by Joshua Curzon, Trainee Solicitor.