AI systems prohibited by the AI Act: when a use becomes an unacceptable risk

The European AI Act strictly prohibits uses of AI deemed to pose an “unacceptable risk”, such as social scoring, cognitive manipulation (subliminal messages) or the exploitation of vulnerabilities. It is imperative for any organization to identify and block these prohibited practices, and to document their absence.

By Anne-Angélique de Tourtier

Understanding the European framework

Imagine that you are an innovation manager in a company, a local authority or a sensitive sector, and that you are about to deploy an AI system. Until now, your focus has been on performance, user experience and innovation. But since the AI Act entered into force on August 1, 2024, everything has changed.

This European regulation classifies AI systems according to their level of risk and imposes obligations adapted to each category. Some practices are prohibited outright, others are considered high risk, and still others present limited risk and are subject to transparency requirements.

To set the scene: this article focuses on prohibited AI, that is, uses presenting an unacceptable risk. The next two articles will deal with high-risk and lower-risk AI, allowing you to visualize the entire framework and to situate yourself according to your projects and sectors.
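As a rough mental model of this tiered approach, the three categories mentioned above can be sketched as a simple mapping. The tier names and obligation summaries below are illustrative shorthand for this article, not the regulation's legal wording:

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers of the AI Act, as summarized in this article (illustrative)."""
    UNACCEPTABLE = "prohibited outright; the use must be blocked"
    HIGH = "allowed only with strict compliance obligations"
    LIMITED = "allowed, subject to transparency requirements"

def obligation(tier: RiskTier) -> str:
    # Look up the headline obligation attached to a tier.
    return tier.value
```

For example, `obligation(RiskTier.UNACCEPTABLE)` returns the "prohibited outright" summary: for that tier there is no path to compliance, only a ban.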

Understanding risks through use cases

To assess where you stand, ask yourself: “If I use this AI system, what are the impacts for users and for society?”

Cognitive manipulation and vulnerability exploitation

Imagine a video game for children in which a chatbot pushes players to make purchases via subliminal messages. Even if the intention is playful, this type of cognitive manipulation is prohibited. An advertising campaign targeting vulnerable populations, such as elderly people or people in financial difficulty, is likewise prohibited, because it exploits their vulnerabilities.

Prohibitions specific to the public sector (social scoring, facial recognition)

In the public sector, some ideas look appealing on paper but are forbidden in practice:

  • A social scoring system awarding points to the most “respectful” citizens in order to influence their access to public services is an unacceptable use, whatever its purpose
  • Likewise, real-time facial recognition in public places is strictly regulated: only a few very limited exceptions are authorized

Other categories of AI that are strictly prohibited (emotion inference)

Other uses also fall into the category of prohibited AI:

  • The inference of emotions in schools or in the workplace
  • Biometric categorization based on sensitive characteristics
  • The creation of massive facial recognition databases without consent, which also violates the GDPR

The key principle is that risk is determined by the actual impact on individuals and society, not by the technology itself.

Identifying prohibited AI and documenting your decisions

Even if an AI system seems promising, it is essential to:

  • Clearly identify the models on which the AI system is based, and the use cases that could place it in the unacceptable-risk category
  • Block or abandon any purpose or project falling into this category; such use is strictly prohibited, with no general exceptions
  • Document your decisions in an AI compliance file, to show that you have assessed the risks and taken the necessary actions
  • Raise your teams' awareness of the prohibitions and principles of the AI Act and the GDPR, to avoid any unintentional breach
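The identify/block/document steps above can be sketched as a minimal screening routine. This is a hypothetical illustration: the category labels, the `UseCase` fields and the log format are assumptions made for the example, not terms from the regulation.

```python
from dataclasses import dataclass

# Illustrative labels for the prohibited-practice categories described above.
PROHIBITED_CATEGORIES = {
    "social_scoring",
    "subliminal_manipulation",
    "vulnerability_exploitation",
    "emotion_inference_school_or_work",
    "biometric_categorisation_sensitive",
    "untargeted_facial_database",
}

@dataclass
class UseCase:
    name: str
    category: str

def screen(use_cases: list[UseCase]) -> list[dict]:
    """Return one compliance-log entry per use case: block it if it matches
    a prohibited category, otherwise send it on for further review."""
    log = []
    for uc in use_cases:
        blocked = uc.category in PROHIBITED_CATEGORIES
        log.append({
            "use_case": uc.name,
            "category": uc.category,
            "decision": "block" if blocked else "review further",
        })
    return log
```

Keeping the resulting log, even for use cases that were blocked, is precisely the documentation step: it shows the risk was assessed and acted upon.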

The objective here is not to “bring into compliance”, as with high-risk AI, but to prevent any prohibited use, and to detect and document it.

What remains to be clarified

The AI Act clearly sets out prohibited uses of AI, such as subliminal manipulation, social scoring, or biometric recognition without consent. But several points remain unclear: how it applies to general-purpose models outside the EU, the monitoring of emerging uses (social networks, online games), and the tightly regulated exceptions for public safety.

Adequacy now makes it possible to comply with the AI Act by centralizing your use cases, clearly defining your roles and obligations for each AI system and each model, and documenting your compliance file in a structured and operational manner (ask for a demo!).

FAQ: frequently asked questions about prohibited AI

What is a use presenting an unacceptable risk under the AI Act?

A use presenting an unacceptable risk refers to an AI system whose deployment is considered contrary to the fundamental values of the EU, because it presents a clear risk of manipulation, discrimination or violation of fundamental rights. These systems are strictly forbidden and must not be deployed.

What are the concrete examples of AI prohibited by the regulation?

The AI Act explicitly prohibits the social scoring of citizens, the use of subliminal techniques to manipulate individuals, the exploitation of the vulnerabilities of specific groups (children, people with disabilities), and, with very few exceptions, the use of real-time remote biometric recognition in public places.

If my AI system is banned, can I make it compliant?

No. Unlike high-risk AI, a system classified as “prohibited” cannot be brought into compliance. It must be blocked, abandoned or deleted, because its use is strictly prohibited, with no general compliance exception. The only required action is to document the prohibition in your compliance file.
