Artificial intelligence
AI Act: limited risk AI systems, transparency, and compliance
The limited-risk category of AI systems under the AI Act aims to frame innovation while protecting users. Although these systems do not pose a critical threat to fundamental rights, their use is subject to strict transparency obligations. The article details how professionals (providers and deployers) must ensure traceability and user information, in particular through rigorous documentation and light-touch monitoring, which are essential principles for AI Act compliance within organizations.

Artificial intelligence
High-risk AI systems under the AI Act: how to identify them?
Unlike prohibited systems, the high-risk AI systems defined by the AI Act (automated recruitment, public services, biometrics) are authorized, but subject to strict obligations. Identifying, documenting, and securing each use case is now mandatory to build a robust compliance file.

Artificial intelligence
AI ranking: French, European and global — Choosing a solution that complies with the GDPR and the AI Act
Artificial intelligence is omnipresent, but behind the innovation lies a major challenge of digital sovereignty and of compliance with the GDPR and the recent AI Act. This article is intended for Legal, IT, and Compliance departments, guiding them through a ranking of AI solutions (French, European, or global) and helping them structure the use of these technologies in a responsible and sustainable way.

Artificial intelligence
AIs prohibited by the AI Act: when the use becomes risky and unacceptable
The European AI Act strictly prohibits uses of AI deemed to pose an "unacceptable risk", such as social scoring, cognitive manipulation (subliminal messaging), or the exploitation of vulnerabilities. It is imperative for any organization to identify and block these prohibited practices, and to document their absence.

Artificial intelligence
AI Act: Everything you need to know about the new European law on artificial intelligence
The AI Act marks a historic turning point for Europe. It promises to strengthen trust in AI while compelling organizations to reconsider how they use it and to ensure compliance.