GDPR and AI Act: what should DPOs prepare for by 2025?
From 2025, DPOs will have to articulate the GDPR and the AI Act for all high-risk AI systems, updating their DPIAs, registers and internal procedures, and guaranteeing traceability, transparency and human oversight to ensure integrated, secure compliance.

From 2025, DPOs will have to deal with a new regulatory reality: the joint application of the GDPR and the European regulation on artificial intelligence (AI Act).
Automated processing, high-risk systems, cross-cutting documentation: requirements are intensifying and call for adapted governance.
This article walks you through the links between these two texts, the obligations to anticipate, and the steps to implement for aligned, secure compliance.
Key takeaways:
- The GDPR and the AI Act often apply jointly, especially to high-risk automated processing
- The DPO acts as a facilitator: GDPR impact assessments, coordination with AI compliance teams
- From 2025, DPIAs, processing registers and internal procedures must be updated to integrate AI requirements
- Cross-cutting, structured compliance brings consistency and efficiency
- Many gray areas remain: cross-team monitoring and dialogue are essential
Understanding the relationship between the GDPR and the AI Act
Since the GDPR became applicable in May 2018, the regulation of personal data has been a reference framework for any organization handling sensitive information. The entry into force of the AI Act (EU Regulation 2024/1689) in August 2024, with gradual application through 2026-2027, adds a complementary layer of regulation aimed specifically at artificial intelligence systems. For DPOs, the question is not which of the two texts to choose, but how these regimes fit together, where they intersect, and how to integrate them into a single compliance strategy.
Two logics, one common ground
The GDPR applies as soon as personal data is processed, regardless of the technology used. It follows a logic of protecting fundamental rights, built on the principles of fairness, transparency, minimization and purpose limitation.
The AI Act, by contrast, regulates artificial intelligence systems, whether or not they process personal data. Its approach is based on the level of risk the system presents to health, safety or fundamental rights. An AI used to screen job applications, assess medical pathways or grade students will thus be considered "high-risk" and subject to strict requirements.
In practice, the two texts apply simultaneously in the majority of professional use cases: high-risk AI systems generally process personal data to profile, predict or make automated decisions.
Comparison table: GDPR vs AI Act
For more explanations, see our article AI Act: Everything you need to know about the new European AI law.
What are the points of convergence?
Despite their structural differences, the two regulations complement each other on several levels. They both require:
- A prior risk assessment (DPIA under the GDPR, conformity assessment under the AI Act)
- Documentation of the processing/system
- Traceability of automated decisions
- Transparency towards the persons concerned
- A logic of "privacy and ethics by design"
In addition, the GDPR already prohibits, with some exceptions, exclusively automated decisions with legal or similarly significant effects (Article 22). The AI Act reinforces and clarifies this principle by requiring specific human oversight measures for high-risk systems.
In practice, compliance teams will need to identify the processing operations where the GDPR and the AI Act apply simultaneously, document them jointly, and ensure consistency between DPIA and AI Act conformity assessment procedures.
The evolving role of the DPO in the face of regulated AI
The entry into force of the AI Act requires the role of the Data Protection Officer (DPO) to evolve. Although the AI Act does not assign responsibilities directly to the DPO, the DPO becomes a key facilitator in the governance of artificial intelligence systems processing personal data, in close collaboration with business units, AI project teams and technical departments.
Evaluate the GDPR impact of existing AI processing
The first challenge for DPOs is to map what already exists from a GDPR point of view. Many applications or tools already in place in the organization incorporate AI components without their data protection implications having been fully evaluated.
It is therefore necessary to identify:
- Automated processing that may fall within the scope of the AI Act and involves personal data
- AI tools provided by third parties (SaaS, APIs, GPAI platforms) and their GDPR implications
- Processes based on profiling or automated decisions already subject to Article 22 of the GDPR
A review of impact assessments (DPIAs) and processing registers will make it possible to identify regulatory overlaps and the additions required.
Facilitating the governance of AI projects
The DPO must now intervene upstream of projects integrating an AI component to assess the GDPR aspects. This involves:
- Participating in the assessment of risks to people's rights and freedoms
- Recommending appropriate technical and organizational measures on the GDPR side
- Supporting the project team in compiling the GDPR compliance file
- Ensuring that individuals' rights (information, access, rectification, objection) remain enforceable
- Coordinating with the teams in charge of AI Act compliance
More generally, the aim is to build an integrated compliance culture, just as a GDPR culture has developed over recent years.
New reflexes to adopt
- Participate in AI project committees from the scoping phase to cover GDPR aspects
- Establish a GDPR/AI Act cross-evaluation grid
- Work in coordination with the AI Act teams and the CISO
- Strengthen the link with legal teams to secure contractual clauses with AI suppliers
- Train business teams to identify use cases involving personal data and AI
Collaborating in an ecosystem of shared responsibilities
AI-GDPR compliance cannot be carried out by the DPO alone. The AI Act introduces specific roles (provider, deployer, distributor) with their own obligations. The DPO has a coordinating role to:
- Facilitate the understanding of GDPR issues in AI projects
- Clarify data protection responsibilities
- Initiate shared compliance reviews (enriched GDPR register, integrated procedures)
- Create internal frameworks aligned with both the GDPR and the AI Act
The DPO becomes a facilitator of the ethical and technical compliance of AI systems processing personal data.
AI project governance example
Project: development of a recruitment aid tool integrating algorithmic scoring. Actors: HR (business owner), data science (model), CISO (infrastructure), DPO (GDPR compliance), AI Act team (AI compliance).
Role of the DPO:
- GDPR risk assessment (profiling + impact on individuals)
- Design review: can candidates exercise their rights?
- Updating the processing register and the DPIA
- Informing candidates about the processing of their data
- Coordinating with the AI Act team to keep procedures consistent
New obligations to be anticipated as early as 2025
The gradual application of the AI Act between 2024 and 2027 introduces new obligations that complement the existing GDPR framework. For compliance teams, it is becoming imperative to anticipate these developments by identifying the systems concerned and preparing the required documents and processes. What is at stake? Guaranteeing the compliance of automated processing, avoiding sanctions and, above all, strengthening user trust.
High-risk AI systems: identification and documentation
The core of the AI Act's regulatory system is the classification of AI systems by risk level. Those considered "high-risk" (listed in Annex III) are subject to specific requirements.
In particular, the following are considered high-risk:
- AI used for recruiting, selecting and evaluating candidates
- Creditworthiness scoring or credit assessment tools
- Decision-support systems for access to essential services
- Remote biometric surveillance devices
- Assessment systems in education
In these cases, the deployer of the AI system must:
- Ensure that the system is compliant before putting it into service
- Carry out a fundamental rights impact assessment (Article 27)
- Maintain deployment documentation
- Put in place appropriate human oversight measures
- Guarantee the traceability and monitoring of the system
GDPR/AI Act cross-obligations
Sanctions and legal risks
The AI Act provides for a graduated sanctions regime, with each fine capped at a fixed amount or a share of worldwide annual turnover:
- Prohibited practices: up to €35 million or 7% of worldwide annual turnover
- Non-compliance of high-risk systems: up to €15 million or 3% of turnover
- Breaches of information obligations: up to €7.5 million or 1.5% of turnover
These sanctions are in addition to existing GDPR sanctions, creating a risk of cumulative penalties where compliance obligations overlap.
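As a rough illustration, the tier structure above can be expressed as a small lookup. This sketch assumes the general rule that the higher of the fixed amount and the turnover percentage applies (a more favorable cap applies to SMEs); the tier names are illustrative, not official terminology.

```python
# Illustrative AI Act fine tiers: (fixed cap in EUR, share of worldwide
# annual turnover). Tier names are assumptions for this example.
TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_non_compliance": (15_000_000, 0.03),
    "information_obligations": (7_500_000, 0.015),
}

def fine_ceiling(tier: str, annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a tier, in euros,
    applying the general 'whichever is higher' rule."""
    fixed_cap, turnover_share = TIERS[tier]
    return max(fixed_cap, turnover_share * annual_turnover_eur)

# A group with €2bn worldwide turnover facing a prohibited-practice
# infringement: 7% of turnover (€140m) exceeds the €35m fixed amount.
print(fine_ceiling("prohibited_practice", 2_000_000_000))  # → 140000000.0
```

For smaller organizations the fixed amount dominates: with €100m turnover, a high-risk non-compliance fine is capped at €15m, since 3% of turnover is only €3m.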
Transparency, traceability, human control: the new standards
The AI Act imposes automatic record-keeping for high-risk systems (Article 12). This includes:
- Automatic logging of decisions taken or recommended
- Documentation of the input data used
- Traceability of significant events during operation
This requirement complements the existing GDPR transparency obligation. For example, a candidate evaluated by an AI system should be informed of:
- The use of an AI system (AI Act transparency)
- The processing of their personal data (GDPR information)
- Their rights (GDPR access, rectification and objection + AI Act human oversight)
Human oversight becomes more precise than under Article 22 of the GDPR: it must be effective, appropriate and allow for meaningful intervention.
DPIA and GDPR register: update required
For DPOs, this implies adapting existing compliance tools:
- Impact assessments (DPIAs) must include an assessment of the risks associated with the use of AI
- The processing register should mention the use of AI systems and their risk level
- Periodic review procedures for AI systems will have to be put in place
It is advisable to maintain a register specific to AI systems, in coordination with the AI Act teams, documenting:
- The aim and purpose of the system
- The AI Act risk level identified
- The GDPR implications (legal basis, rights of individuals)
- The safeguards put in place
- Roles and responsibilities (provider, deployer, DPO)
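As a minimal sketch, such a register entry could be modeled as a simple record combining the GDPR and AI Act fields listed above. All field names and example values here are illustrative assumptions; neither regulation prescribes a particular schema.

```python
# Illustrative AI-system register entry; field names are assumptions,
# not terminology prescribed by the GDPR or the AI Act.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    purpose: str                     # aim and purpose of the system
    ai_act_risk_level: str           # e.g. "high", "limited", "minimal"
    gdpr_legal_basis: str            # e.g. "legitimate interest", "consent"
    data_subject_rights: list[str] = field(default_factory=list)
    safeguards: list[str] = field(default_factory=list)
    roles: dict[str, str] = field(default_factory=dict)  # provider, deployer, DPO

record = AISystemRecord(
    name="CV screening tool",
    purpose="Pre-selection of job applications",
    ai_act_risk_level="high",
    gdpr_legal_basis="legitimate interest",
    data_subject_rights=["information", "access", "rectification", "objection"],
    safeguards=["human review of every rejection", "bias testing"],
    roles={"provider": "Vendor X", "deployer": "HR department", "dpo": "J. Doe"},
)
```

A structured entry like this makes it straightforward to export the same facts into both the GDPR processing register and the AI Act documentation file.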
Implementing GDPR/AI Act cross-compliance
With the application of the AI Act, organizations must now think of compliance as an integrated system, capable of meeting the requirements of both the GDPR and the European AI regulation. It is not about piling up obligations, but about sharing processes, tools and responsibilities to build coherent governance.
Tools, processes and shared documentation
The first step is to align existing approaches with the new requirements:
- The GDPR processing register must include an AI component (systems used, purpose, AI Act risk level)
- DPIAs must be enriched with an analysis of AI-specific risks: bias, explainability, oversight, model evolution
- Internal policies must integrate AI clauses (security, ethics, purchasing, contracting)
- Centralized technical documentation must be created, containing the elements required by the AI Act
The aim is to produce integrated proof of compliance, usable in the event of a CNIL inspection or an audit by an AI Act supervisory authority.
Typical process: integrating AI into GDPR compliance
- Identification of the AI use case (tool, project, supplier)
- GDPR risk assessment (personal data, automated decision-making, profiling)
- AI Act risk level assessment (Annex III, systemic risk)
- Cross-documentation (enriched register + AI Act file + adapted DPIA)
- Establishment of safeguards: human oversight, exercise of rights, traceability
- Multidisciplinary review: DPO, AI Act team, CISO, legal, business
This process guarantees complete documentary traceability and makes it possible to quickly identify discrepancies or non-conformities.
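The steps above can be sketched as a simple checklist function. The use-case categories and step labels below are assumptions for the example, not official AI Act terminology, and a real triage would rest on a legal reading of Annex III.

```python
# Illustrative triage: which compliance steps does an AI use case trigger?
# Category labels loosely mirror Annex III areas; they are examples only.
ANNEX_III_AREAS = {"recruitment", "credit scoring", "essential services",
                   "biometric surveillance", "education"}

def assess_use_case(use_case: str, processes_personal_data: bool) -> list[str]:
    """Return the ordered compliance steps triggered by an AI use case."""
    steps = ["Identify the AI use case"]
    if processes_personal_data:
        steps.append("GDPR risk assessment (DPIA if high risk)")
    if use_case in ANNEX_III_AREAS:
        steps.append("AI Act high-risk obligations (Annex III)")
        steps.append("Fundamental rights impact assessment (Article 27)")
    steps.append("Cross-documentation (register + AI Act file)")
    steps.append("Multidisciplinary review (DPO, AI Act team, CISO, legal)")
    return steps

print(assess_use_case("recruitment", processes_personal_data=True))
```

A low-risk internal chatbot that processes no personal data would trigger only the identification, documentation and review steps, which is exactly the proportionality the process aims for.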
As a reminder, our Adequacy solution includes an AI module to manage these specific compliance aspects.
Compliance example: application screening system
A company uses an AI system to pre-screen resumes.
- GDPR: the system is subject to Article 22 (automated decision) + an appropriate legal basis + a DPIA + candidates' rights
- AI Act: high-risk system (Annex III, employment) → deployer obligations, fundamental rights impact assessment, human oversight
By crossing the requirements, teams can:
- Conduct an integrated impact assessment (GDPR + fundamental rights)
- Update the register with AI specificities
- Define human oversight procedures that satisfy both frameworks
- Train recruiters on the cross-obligations
- Contract clearly with the AI supplier
Integrating AI into the organization's compliance culture
Beyond regulatory obligations, it is essential to create a culture of AI and data risk management. This involves:
- Raising awareness among business departments of the intersecting AI-GDPR impacts
- Involving governance bodies in trade-off decisions
- Establishing annual integrated compliance reviews
The DPO plays a facilitating role here: no longer limited to managing GDPR aspects, the DPO takes part in an extended governance combining technology, regulation and ethics.
The gray areas: what remains to be clarified for DPOs?
Even if the AI Act provides a structuring legal framework, many gray areas remain. Between technical uncertainties, regulatory tensions and imprecision about roles, the 2025-2027 period will be one of adaptation and adjustment. It is therefore crucial to identify the topics that are still unclear, in order to organize active monitoring and anticipate future trade-offs.
GPAI and generative AI: complex articulation with the GDPR
General-purpose AI models (GPAI) are subject to a specific regime under the AI Act:
- Standard GPAI: minimum documentation and transparency requirements
- GPAI with systemic risk (threshold of 10²⁵ FLOPs): model evaluations, adversarial testing, reinforced cybersecurity measures
The difficulty for DPOs lies in the coordination with the GDPR:
- How should the professional use of ChatGPT or Claude be legally qualified?
- What is the legal basis for training data?
- How can GDPR rights be exercised over AI-generated content?
- Who is liable if problematic content is generated?
Limits of responsibility and coordination of roles
The AI Act introduces a chain of responsibilities: provider, distributor, importer, deployer. The GDPR maintains its concepts of data controller and processor.
The practical questions are numerous:
- Is an AI deployer automatically a GDPR data controller?
- How do the AI provider's obligations articulate with those of a GDPR processor?
- Who is liable in case of discriminatory bias: the model provider or the deployer?
The DPO will have to work closely with legal teams to clarify these interfaces in contracts.
Major regulatory uncertainties to watch out for
- GPAI thresholds and criteria: how the systemic-risk thresholds evolve and their impact on obligations
- Cascading liability: who answers for what in a complex provider-integrator-deployer chain
- Articulation with labor law: the validity of discriminatory HR AI despite technical compliance
- Exercise of GDPR rights over AI systems: portability, rectification, erasure in complex models
What is the future doctrine of the supervisory authorities?
Neither the CNIL nor the future AI Act supervisory authorities have yet published a unified doctrine on the joint application of the two regimes.
Key questions remain open:
- Can the DPIA and the fundamental rights impact assessment be combined?
- How can legitimate interest be applied to high-risk AI processing?
- What approach should be taken for AI systems developed in-house?
The DPO must therefore organize proactive monitoring and participate in professional working groups to anticipate interpretations.
FAQ - GDPR and AI Act
Is the AI Act replacing the GDPR?
No. The GDPR remains fully applicable to all personal data processing. The AI Act complements this framework by specifically regulating AI systems, whether or not they process personal data. The two texts apply cumulatively.
Can an AI system be deployed without a DPIA?
No, not if it processes personal data and poses a high risk to rights and freedoms. The AI Act adds its own fundamental rights impact assessment requirement for high-risk systems, which can be coordinated with the DPIA.
Who is liable if an AI system processing personal data is non-compliant?
It depends on the nature of the breach:
- AI Act: the provider for the system's compliance, the deployer for its appropriate use
- GDPR: the data controller for the compliance of the data processing
The same organization can combine several roles and responsibilities.
Should a DPO become an AI expert?
The DPO must develop a sufficient understanding of AI issues to assess their GDPR impact, but does not have to become a technical expert. Above all, the DPO must know how to coordinate with specialized teams (data scientists, AI Act teams, the CISO).
How should generative AI be handled in business?
GPAI-based tools used in business require a case-by-case analysis:
- Evaluate whether the use constitutes processing of personal data (legal basis, information, rights)
- Check the terms of use of the GPAI provider
- Document the oversight and control measures
- Anticipate the risks of problematic content being generated


