AI Act: Everything you need to know about the new European law on artificial intelligence

The AI Act marks a historic turning point for Europe. It aims to strengthen trust in AI while requiring organizations to rethink how they use it and to ensure compliance.

By Rémy Bozonnet

The AI Act, the world’s first comprehensive legislation regulating artificial intelligence, was published in the EU’s Official Journal on July 12, 2024 and entered into force on August 1, 2024.

Adopted by the European Union, the law aims to promote the development of ethical, transparent, and secure AI systems while fostering innovation.

What is the AI Act?

The AI Act (Artificial Intelligence Regulation) is a European legal framework that governs the marketing, development, and use of AI systems. It is based on a risk-level approach that imposes obligations proportional to the potential impact of AI systems.

Why is Europe regulating artificial intelligence?

The rise of generative AI tools, such as ChatGPT and Midjourney, has highlighted new challenges, including the risks of discrimination, opacity, manipulation, and violations of fundamental rights. The AI Act complements the GDPR by specifically targeting the use of AI. Its objective is to strengthen public trust while ensuring that innovation is properly regulated.

The AI Act defines four risk levels

1. Unacceptable risks (prohibited)

Certain uses of AI are strictly banned because they directly threaten citizens’ rights. Examples include:

  • State-run social scoring
  • Mass biometric surveillance in public spaces
  • Cognitive manipulation of children
  • Emotion recognition in workplaces and schools

2. High risks (strictly regulated)

These systems are permitted, but they are subject to stringent requirements. They include AI used in:

  • Critical infrastructure (transportation and healthcare)
  • Education and automated assessments
  • Recruitment (CV screening)
  • Credit or insurance decisions
  • Judicial administration
  • Public security

Mandatory obligations include:

  • Risk assessment
  • Comprehensive technical documentation
  • Data traceability and retention
  • Human oversight
  • Model robustness and security

Remote biometric identification systems are subject to strict conditions, with narrow exceptions such as searching for missing children, preventing terrorist threats, and investigating serious crimes.

3. Limited Risks (Transparency Required)

This category covers AI systems that could potentially mislead users. Clear disclosure is mandatory when:

  • Interacting with a chatbot
  • Content is generated by AI (text, audio, images, or videos)
  • Deepfakes are shared with the public

4. Minimal risk (free use)

Low-risk AI systems are already widely used and may continue to be used without restriction. These systems include:

  • Spam filters
  • Product recommendation engines
  • AI in video games
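The four tiers above amount to a simple classification: each use case maps to a tier, and each tier to a regulatory consequence. A minimal Python sketch of that mapping (the tier names and examples come from the article; the type and function names are illustrative, not from any official tooling):

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, from most to least restricted."""
    UNACCEPTABLE = "prohibited"           # e.g. state-run social scoring
    HIGH = "strictly regulated"           # e.g. CV screening, credit decisions
    LIMITED = "transparency required"     # e.g. chatbots, AI-generated content
    MINIMAL = "free use"                  # e.g. spam filters, video games

# Illustrative mapping of example use cases to tiers (examples from the article).
EXAMPLES = {
    "state-run social scoring": RiskTier.UNACCEPTABLE,
    "cv screening": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> str:
    """Return the regulatory consequence attached to a tier."""
    return tier.value

print(obligations(EXAMPLES["cv screening"]))  # strictly regulated
```

In practice, the same system can fall under several tiers at once (a recruitment chatbot is both high-risk and subject to transparency duties), so a real assessment is richer than a single lookup.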

Penalties for noncompliance with the AI Act

The AI Act provides for significant fines in cases of noncompliance:

  • Up to €35 million or 7% of a company's total worldwide annual turnover for prohibited AI practices
  • Up to €15 million or 3% of turnover for breaches of most other obligations
  • Up to €7.5 million or 1% of turnover for supplying incorrect or misleading information to the authorities
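For companies, the regulation defines each ceiling as a fixed amount or a share of worldwide annual turnover, whichever is higher. A minimal sketch of that arithmetic (the function name and the figures in the example are hypothetical):

```python
def fine_ceiling(fixed_cap_eur: float, turnover_share: float,
                 annual_turnover_eur: float) -> float:
    """Applicable fine ceiling: the higher of the fixed cap
    and the given share of worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_share * annual_turnover_eur)

# Hypothetical example: a cap of €10 million or 2% of turnover,
# for a company with €2 billion in worldwide annual turnover.
print(fine_ceiling(10_000_000, 0.02, 2_000_000_000))  # 40000000.0
```

The turnover-based branch is what makes the caps bite for large groups: for a small company the fixed amount dominates, while for a multinational the percentage does.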

How can businesses prepare for the AI Act?

Businesses should anticipate compliance by taking the following steps:

  1. Identify AI systems in use
  2. Assess their risk levels
  3. Document models and algorithms
  4. Ensure traceability and human oversight
  5. Update governance policies
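The inventory-and-assessment steps above boil down to keeping a structured record per AI system: what it does, its risk tier, its documentation, and whether a human oversees it. A hypothetical Python sketch of such a registry entry (all field names and the example system are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an internal AI system registry (illustrative fields)."""
    name: str
    purpose: str
    risk_tier: str                  # "unacceptable" | "high" | "limited" | "minimal"
    documentation: list = field(default_factory=list)  # model cards, data lineage
    human_oversight: bool = False   # is a human in the loop for decisions?

def needs_strict_controls(record: AISystemRecord) -> bool:
    """High-risk systems carry the heaviest obligations under the AI Act."""
    return record.risk_tier == "high"

# Example: a recruitment screening tool, which the AI Act treats as high-risk.
cv_screener = AISystemRecord(
    name="cv-screener",
    purpose="recruitment: automated CV screening",
    risk_tier="high",
    documentation=["model card", "training-data lineage"],
    human_oversight=True,
)
print(needs_strict_controls(cv_screener))  # True
```

Keeping such records up to date directly supports the documentation, traceability, and human-oversight obligations listed for high-risk systems.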

A solution that helps you meet AI Act requirements

Our software helps you structure compliance with the AI Act by providing:

  • An AI system registry with integrated risk analysis
  • Mapping of data, models, and assets
  • Monitoring of regulatory obligations
  • Centralization of technical documentation

The AI Act provides companies with an opportunity to adopt a clear, structured framework.
It fosters the development of responsible AI that aligns with European values. Achieving compliance now safeguards the long-term viability of your products and strengthens user trust.

On September 25 at 11:00 a.m., join Alessandro Fiorentino, Product Owner, and Rémy Bozonnet, Account Executive at Adequacy, to learn about Adequacy and its governance approach to GDPR and the AI Act.
