AI Act: limited risk AI systems, transparency, and compliance
The limited-risk category under the AI Act aims to provide a framework for innovation while protecting users. Although these systems do not pose a critical threat to fundamental rights, their use is subject to specific transparency obligations. This article details how professionals (providers and deployers) must ensure traceability and user information, in particular through rigorous documentation and light monitoring, both essential principles of AI Act compliance within organisations.

{{newsletter}}
Definition: what is limited risk AI according to the AI Act?
Having explored prohibited and high-risk AI, it is now time to discuss limited-risk AI. While these systems do not pose a direct threat to fundamental rights or human safety, their use still requires transparency and accountability.
The objective is simple: to enable innovation while ensuring users understand they are interacting with an AI system and do not suffer unexpected consequences. The distinction between high-risk and limited-risk uses therefore rests primarily on the potential impact on individuals.
Understanding risks through concrete use cases
To visualise what limited-risk AI is, imagine systems that, if mistaken, do not cause major harm. These include:
- A customer support chatbot that provides answers to frequently asked questions in a company or community
- Recommendation tools for films, books or music
- Writing assistance or language correction systems
In all these cases, the risk is not zero, but remains manageable provided the principles of transparency and documentation are respected. This is the key to AI Act compliance for this category.
Main obligations for limited-risk AI systems
For limited-risk AI, the AI Act primarily imposes transparency and governance measures, alleviating the compliance burden compared to high-risk systems. These obligations are fundamental for DPOs and CISOs responsible for implementation:
- Inform users that they are interacting with an AI system, and specify its limitations and capabilities
- Document the model used and the use cases, to maintain traceability in the event of an anomaly or future inspection. This process is part of a shared [GDPR / AI Act documentation](Link to GDPR documentation or traceability article) approach
- Implement a light monitoring process, enabling the detection of misuse or recurrent errors
The advantage of limited-risk AI is that the implementation of these obligations remains relatively simple, but it must be proactive and systematic.
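To make this concrete, here is a minimal sketch, in Python, of how a deployer might combine the three obligations for a support chatbot: disclosing that the user is interacting with an AI system, referencing the documented model and use case, and keeping a lightweight monitoring trail. The function, field names and model identifier are illustrative assumptions, not terms imposed by the AI Act.

```python
from datetime import datetime, timezone
import json
import logging

# Illustrative only: the AI Act does not prescribe any particular code structure.
AI_DISCLOSURE = (
    "You are chatting with an automated assistant (AI system). "
    "Answers may be incomplete; a human agent is available on request."
)

monitoring_log = logging.getLogger("ai_light_monitoring")

def answer_user(question: str, generate_answer) -> dict:
    """Wrap a chatbot reply with an AI disclosure and a lightweight monitoring record."""
    answer = generate_answer(question)

    # Light monitoring: keep just enough context to spot misuse or recurrent errors later.
    monitoring_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "use_case": "customer_support_faq",   # must match the documented use case (assumed name)
        "model": "faq-assistant-v1",          # documented model identifier (assumed name)
        "question_length": len(question),     # avoid storing personal data unnecessarily
        "flagged": False,                     # set to True by reviewers when an error recurs
    }))

    # Transparency: the user is explicitly told they are interacting with an AI system.
    return {"disclosure": AI_DISCLOSURE, "answer": answer}
```

In practice, these monitoring records would feed the periodic review mentioned above, so that recurrent errors can be flagged and traced back to the documented model and use case.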
Clarification of roles and responsibilities
Even for limited-risk AI, it is useful to clarify responsibilities. These were specified in the previous article on [the roles defined by the AI Act](Link to article defining roles in the AI Act).
In the context of limited-risk AI, the roles remain the same as those defined by the AI Act (provider, deployer, importer and distributor), but their obligations are considerably reduced.
- The provider must ensure a minimum level of transparency regarding how their AI system works and provide the information needed for its proper use
- The deployer, for its part, becomes the guardian of proper use: it informs users, monitors the effects of the system, and documents use cases in its compliance file
- Importers and distributors must ensure that this information is communicated and accessible throughout the deployment chain
The spirit of the regulation remains the same: each party must understand where their responsibility ends and that of others begins, so that trust in AI rests on clear, shared traceability.
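As an illustration of what clear, shared traceability can look like in a compliance file, the sketch below models one entry for a limited-risk AI system, covering the four roles and their reduced obligations. The record structure and field names are assumptions made for this example; the AI Act does not prescribe any particular format.

```python
from dataclasses import dataclass, field

# Illustrative only: field names are assumptions, not terms imposed by the AI Act.
@dataclass
class LimitedRiskAIRecord:
    system_name: str
    provider: str                 # who places the AI system on the market
    deployer: str                 # who operates it and informs users
    intended_use: str
    user_information: str         # how users are told they are interacting with an AI
    monitoring_process: str       # how misuse or recurrent errors are detected
    distribution_chain: list[str] = field(default_factory=list)  # importers / distributors

# Example entry for a compliance file (all names hypothetical)
record = LimitedRiskAIRecord(
    system_name="FAQ chatbot",
    provider="ChatVendor Ltd",
    deployer="Acme Support Team",
    intended_use="Answering frequently asked customer questions",
    user_information="Banner stating the assistant is an AI system, shown before the first reply",
    monitoring_process="Monthly review of flagged conversations",
    distribution_chain=["EU importer X", "Reseller Y"],
)
```

Keeping one such entry per AI system makes it straightforward to show, in the event of an inspection, who informs users, who monitors the system and how information flows along the deployment chain.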
Transparency, accountability, and anticipation of framework evolution
Limited-risk AI fosters innovation while safeguarding users through straightforward transparency and documentation requirements. Providing clear information to users, tracking use cases and monitoring the proper functioning of the AI system ensures responsible and controlled use.
However, the framework is still evolving. The FRIA (Fundamental Rights Impact Assessment) methodology will soon enable the formal assessment of the impact of AI systems on fundamental rights. Even limited-risk AI could see its obligations evolve depending on the context of use. Regulatory monitoring is therefore essential for DPOs and CISOs.
Adequacy enables you to comply with the AI Act now by centralising your use cases, clearly defining the roles and obligations of each AI system and model, and documenting your compliance file in a structured, operational manner. Request a demo!
{{newsletter}}
FAQ: limited risk AI and compliance (DPO/CISO)
What distinguishes high-risk AI from limited-risk AI?
The distinction is based on the potential impact on fundamental rights and human safety. High-risk AI is involved in critical areas (health, recruitment, justice) and is subject to very heavy obligations. Limited-risk AI (chatbots, simple recommendation tools) primarily requires transparency obligations to inform the user.
Does the GDPR apply to limited-risk AI?
Yes, as soon as an AI system processes personal data, the GDPR applies in full, even for limited-risk AI. The AI Act adds specific AI obligations (transparency, documentation) that complement, but do not replace, the GDPR requirements.
What are the main obligations for the deployer of limited-risk AI?
The deployer is the guardian of proper use. Their main obligations are to clearly inform users that they are interacting with an AI, monitor the effects of the system, and document specific use cases in their AI Act compliance file.


