High-risk AI systems under the AI Act: how to identify them?

Unlike prohibited systems, the high-risk AI systems defined by the AI Act (automated recruitment, public services, biometrics) are authorized, but subject to strict obligations. Identifying, documenting and securing each use case is now mandatory to build a robust compliance file.

By Anne-Angélique de Tourtier

After exploring prohibited AI systems in our first article, it is time to focus on high-risk AI. The AI Act, which came into force on 1 August 2024, singles out high-risk systems because their use can have a significant impact on fundamental rights or the safety of individuals.

These systems are not prohibited, but they are subject to strict requirements. Placing them on the market, or using them, means meeting a set of obligations: centralizing them in a compliance file, following control procedures and documenting use cases.

What is a high-risk AI system? Definition and examples

To understand what constitutes high risk, ask yourself: “Can my AI significantly affect fundamental rights, security or privacy?”

Each use must be evaluated according to its potential impact, the exposure of the people concerned and the criticality of the automated decisions.

Automated recruitment systems

Automated recruitment systems that assess and sort candidates at scale, which can introduce discriminatory bias.

AI used in public services

AI used in public services, such as the allocation of social benefits or the management of access to aid, where an error can have a direct impact on citizens' rights.

Biometric authentication tools

Biometric authentication or access control tools in companies and administrations, where any failure or bias can compromise the security and confidentiality of data.

General-purpose models in certain sectors

General-purpose models in certain sectors, such as health or finance, when they are used to produce diagnoses, financial recommendations or official documents.

Roles and responsibilities under the AI Act

The regulation clearly distinguishes between actors and their responsibilities:

Provider

  • You develop, or have developed, an AIS and place it on the market or put it into service under your own name or trademark, whether for payment or free of charge
  • You market, under your own name or trademark, a high-risk AIS already placed on the market or put into service
  • You make a substantial modification to a high-risk AIS already placed on the market or put into service, in such a way that it remains a high-risk AIS
  • You change the intended purpose of an AIS, including a general-purpose AIS, that has not been classified as high-risk and has already been placed on the market or put into service, in such a way that the AIS concerned becomes a high-risk AIS

Providers must give deployers all the information needed to demonstrate the compliance of the AIS and of the model(s) on which it is based.

Deployer

You use an AIS in the course of a professional (non-personal) activity.

Authorised representative

You are located or established in the EU and have received and accepted a written mandate from a provider of an AIS or a general-purpose AI model to carry out the established obligations and procedures on its behalf.

Importer

You are located or established in the EU and place on the market an AIS bearing the name or trademark of a natural or legal person established in a third country.

Distributor

You are neither a provider nor an importer, but you make an AIS available on the EU market.

The aim is for each actor to know their responsibilities and to be able to justify their decisions in the event of a check by the authorities.

Best practices for high-risk AI

To manage your high-risk AIS:

  • Systematically assess each use case before putting it into service
  • Build a comprehensive, scalable AI compliance file covering models, datasets, tests and mitigation measures
  • Set up post-deployment monitoring and control mechanisms to quickly detect and correct any bias or anomaly
  • Ensure transparency and documentation of decisions to facilitate audits and controls

The aim is to reduce risks while remaining fully compliant with the AI Act and with GDPR obligations where datasets contain personal data.
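As an illustration, the assessment and documentation steps above could be sketched as a simple record structure. This is only a hedged sketch: the field names, triage criteria and messages are hypothetical and do not come from the AI Act, which requires classification against its own annexes and legal review.

```python
from dataclasses import dataclass, field

@dataclass
class UseCaseAssessment:
    """Illustrative entry in an AI compliance file.

    All field names are hypothetical, not legal terminology.
    """
    name: str
    affects_fundamental_rights: bool  # e.g. recruitment, social benefits
    affects_safety: bool              # e.g. biometric access control
    processes_personal_data: bool     # also triggers GDPR obligations
    mitigation_measures: list[str] = field(default_factory=list)

    def triage(self) -> str:
        """Rough first-pass triage; a real classification must follow
        the AI Act's annexes and be confirmed by legal counsel."""
        if self.affects_fundamental_rights or self.affects_safety:
            return "high-risk: full compliance file required"
        return "to review: confirm classification with legal counsel"

# Example: documenting an automated CV-screening tool
screening = UseCaseAssessment(
    name="automated CV screening",
    affects_fundamental_rights=True,
    affects_safety=False,
    processes_personal_data=True,
    mitigation_measures=[
        "bias testing before deployment",
        "post-deployment monitoring",
    ],
)
print(screening.triage())
```

The point of such a structure is simply that every use case gets assessed and recorded before it goes into service, so that each entry can later be justified during an audit.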

Anticipation and expected clarifications: FRIA and GDPR

High-risk AI systems are not prohibited, but their deployment requires rigor and documentation. Some uncertainties remain, in particular regarding new generative applications, convergence with GDPR rules, and harmonization between Member States.

In France, actors are still awaiting the full publication of the FRIA (Fundamental Rights Impact Assessment) methodology, which will make it possible to assess more precisely the risk an AI system poses to fundamental freedoms, much as a DPIA does for personal data. This methodology should become a key tool for deployers and providers to document and justify their decisions.

Adequacy now makes it possible to comply with the AI Act by centralizing your use cases, clearly defining your roles and obligations for each AI system and each model, and documenting your compliance file in a structured, operational manner. (Ask for a demo!)

FAQ - Frequently asked questions about high-risk AI

What is the difference between prohibited AI and high-risk AI?

A prohibited AI is considered to pose an unacceptable risk to fundamental rights and must be stopped. A high-risk AI is allowed, but its high potential impact requires strict documentation, testing (before and after deployment) and a comprehensive compliance file.

Who is responsible for the compliance of a high-risk AI?

The AI Act clearly distinguishes roles. The provider (who places the AIS on the market) and the deployer (who uses it) have specific obligations. Authorised representatives, importers and distributors also have defined responsibilities.

What are the main obligations for high-risk AI?

They include establishing a rigorous AI compliance file, post-deployment monitoring mechanisms, sound management of datasets (quality and security) and increased transparency towards users.
