
The AI Act: A Practical Guide for Businesses


The AI Act represents a regulatory revolution in the European digital landscape. This regulation, adopted by the European Parliament and the Council of the European Union, establishes a harmonized legal framework for artificial intelligence across all Member States.

Unlike a directive, which would require transposition into national law, the AI Act applies directly in France as in all other Member States of the European Union.

This approach ensures consistent application and avoids the regulatory fragmentation that could hinder innovation.

France has not yet officially designated its National Competent Authority (NCA) to oversee the application of the AI Act. By default, the Directorate-General for Enterprises (DGE) takes on this role. However, given the potential implications for personal data, the CNIL should also play a significant role.

Finding the balance between innovation and regulation

Despite initial fears, the AI Act was designed with particular attention to the needs of SMEs (small and medium-sized enterprises) and ETIs (intermediate-sized companies). While the GDPR (General Data Protection Regulation) has sometimes been perceived as disproportionately heavy for small organizations, the AI Act includes measures specifically intended to ease their burden.

The approach is pragmatic and takes into account the economic realities of small businesses. Five concrete measures illustrate this search for balance:

  • Regulatory sandboxes: controlled environments giving SMEs priority, free access to test their innovations, including in real-world conditions.
  • Reduced compliance costs and fees: assessment costs will be proportionate to the size of SMEs, and the Commission will regularly assess compliance costs and work to reduce them.
  • Simplified documentation: technical forms adapted to small businesses and targeted training courses to understand the regulation.
  • Dedicated communication: specific channels to answer questions from SMEs, with a practical rather than theoretical approach.
  • Proportionality: obligations adapted for providers of general-purpose AI models, with distinct performance indicators for SMEs.

The categories and their obligations

Provider or deployer?

Your obligations vary radically depending on your role. The majority of obligations fall on the providers (developers) of high-risk AI systems; if you are a professional user (deployer), they are largely limited to information requirements.

External SaaS solution: you are a deployer with limited liability

As a simple user of a solution developed by others, your responsibilities are lighter, but legal obligations remain:

  • Verify your SaaS provider's compliance with the AI Act (ask them for their documentation)
  • Clearly inform users that they are interacting with an AI
  • Clearly inform users when synthetic content is produced
  • Train your teams on the capabilities and limitations of the AI

Solution created by a service provider: shared responsibility

When you commission a custom system, the line of responsibility becomes blurrier:

  • Clarify by contract who assumes the role of “provider” within the meaning of the AI Act.
  • Collaborate on the risk analysis to determine whether the system qualifies as high-risk.
  • Share responsibility for the technical documentation and compliance measures.

Without contractual clarification, you could be considered a co-provider, with all the obligations that come with it.

Solution developed in-house: you are a provider with full responsibility

Creating your own solution puts you at the forefront of regulatory requirements:

  • Rigorously evaluate whether your tool falls under Annex III in particular (see below).
  • Comprehensively document your technical choices and your risk analysis.
  • Notify the competent authority if necessary and register your system in the European database.
  • Implement all the technical safeguards required for high-risk systems.

In many cases, an AI system may be exempt from high-risk obligations, but you will need to document this analysis.

In all cases: transparency and vigilance

Whatever your role, certain obligations remain the same:

  • Clearly indicate the use of AI
  • Mark AI-generated content
  • Train your users
  • Follow the evolution of regulatory interpretation

The AI Act is not just a constraint: it is an opportunity to build trust in your solutions. In business, that trust can become your best selling point.

What is an AI system, and what are the general obligations?

The AI Act defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

This includes, of course, AI agents, but also most tools that use generative AI or predictive AI.

Concretely, a CV analysis system used by a large company to pre-select candidates, a customer support chatbot deployed by a mid-sized company (ETI) to answer frequently asked questions, and a personalized recommendation application developed by a startup all fall into this category.

General obligations

All AI systems, regardless of their risk classification, are subject to fundamental transparency and literacy (training) requirements. In concrete terms, this means that:

  • People interacting with an AI system should be clearly informed about this interaction (unless it is obvious in the context).
  • Synthetic or manipulated content must be clearly identified as such, in a manner detectable by machines.
  • Users should be provided with sufficient information to understand the capabilities and limitations of the system.

These obligations apply to all AI systems, whether you are a provider or a deployer.
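The AI Act requires the marking of synthetic content to be detectable by machines but does not mandate a specific format (content-provenance standards such as C2PA are emerging for media files). As a minimal sketch of the idea, assuming a plain JSON envelope around a chatbot's text output:

```python
import json
from datetime import datetime, timezone

def label_synthetic_content(text: str, model_name: str) -> str:
    """Wrap AI-generated text in a machine-readable provenance envelope.

    Illustrative only: the AI Act requires machine-detectable marking
    but does not mandate this particular format.
    """
    envelope = {
        "content": text,
        "provenance": {
            "ai_generated": True,    # the machine-detectable flag
            "generator": model_name, # hypothetical model identifier
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(envelope, ensure_ascii=False)

# Example: a chatbot reply carries its provenance alongside the text.
print(label_synthetic_content("Here is your answer...", "acme-support-bot"))
```

The point of the design is that the flag travels with the content itself, so downstream systems can detect it without human inspection.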

Prohibited systems

The AI Act outright prohibits certain AI systems that present unacceptable risks to fundamental rights, regardless of their performance.

The following are prohibited:

  • Social scoring systems classifying individuals according to their behavior.
  • Technologies manipulating vulnerable people.
  • Real-time biometric identification in public spaces (with rare safety exceptions).
  • Emotion inference systems at work or in education (except for medical or safety reasons).
  • The creation of facial recognition databases through massive extraction of online images or video surveillance footage.

Foundation models

Foundation models form a specific category in the AI Act, where they are referred to as general-purpose AI models. They are AI models trained on vast amounts of data, capable of performing a wide variety of tasks, and designed to be integrated into different applications.

Their providers must produce detailed technical documentation, inform downstream developers, respect copyright, and publish a summary of the training content. Open-source models benefit from lighter obligations unless they present systemic risk.

Models trained using more than 10^25 floating-point operations (FLOP) of compute, such as GPT-4o, Mistral Large 2, or Gemini 1.0 Ultra, are presumed to present systemic risk. They must undergo additional evaluations, including adversarial testing, and implement incident tracking and reporting systems.
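To get a feel for this threshold, a common back-of-the-envelope heuristic, which is not part of the AI Act itself, estimates training compute as roughly 6 FLOP per model parameter per training token. A minimal sketch, where the model sizes are assumptions chosen purely for illustration:

```python
# Back-of-the-envelope training-compute estimate: C ≈ 6 * N * D,
# a widely used heuristic (6 FLOP per parameter per training token).
# The model sizes below are assumptions for illustration only.
SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOP threshold set by the AI Act

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOP."""
    return 6 * params * tokens

examples = {
    "hypothetical 7B-parameter model, 2T tokens": training_flops(7e9, 2e12),
    "hypothetical 400B-parameter model, 15T tokens": training_flops(400e9, 15e12),
}

for name, flops in examples.items():
    side = "above" if flops > SYSTEMIC_RISK_THRESHOLD else "below"
    print(f"{name}: ~{flops:.1e} FLOP ({side} the 1e25 threshold)")
```

Under this heuristic, only very large training runs cross the line, which is consistent with the short list of frontier models cited above.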

High-risk systems: understanding the challenges and obligations

The category of high-risk systems is the core of the AI Act and deserves particular attention. A system is considered to be at high risk if it meets one of the following two criteria:

  • it is used as a safety component of a product covered by EU harmonization legislation (listed in Annex I), or
  • it corresponds to one of the use cases detailed in Annex III.

Annex III covers eight sensitive areas where AI poses significant risks:

  • Biometric systems (except those prohibited).
  • Critical infrastructures (road traffic management, water supply, gas, electricity).
  • Education and vocational training (admission, assessment of learning outcomes).
  • Employment and management of workers (recruitment, promotion, dismissal).
  • Access to essential public and private services (creditworthiness assessment, health insurance).
  • Law enforcement (crime risk assessment).
  • Management of migration, asylum and borders.
  • The administration of justice and democratic processes.

Let's take a concrete example:

A mid-sized company (ETI) is developing an AI system to analyze candidates' CVs. This system falls under the “employment and management of workers” category in Annex III and would therefore be considered high-risk.
Likewise, a startup that creates an AI application to assess eligibility for bank loans would fall into the “access to essential services” category.

Exemption from obligations

However, the AI Act provides significant exemptions. A system is not considered high-risk, even if it is listed in Annex III, if it:

  • performs a narrow, well-defined task within a larger process
  • improves the result of a task previously performed by a human
  • performs a preparatory task for an assessment relevant to the Annex III use cases.

If a provider considers that its Annex III system is not high-risk because of these exemptions, it must document this assessment and provide it to the competent national authority upon request. It must also register the system in the EU database before placing it on the market.

For example, an AI tool that simply helps format resumes or detect spelling mistakes would not be considered high-risk, even in a recruitment context.

For a medical AI system that analyzes radiology images, an exemption could apply if it is limited to preprocessing the images and flagging abnormalities, while the final diagnosis remains with a physician who reviews the results independently. Here, the AI improves the process without replacing human medical judgment.

This distinction illustrates the difference between AI assistant and AI agent. An assistant provides information that humans evaluate to make decisions. An agent makes decisions or acts independently, with little or no human intervention. This nuance is fundamental to the AI Act, as it determines the level of obligations applicable to the system.

Obligations for high-risk systems

The requirements for high-risk systems are substantial and include:

  • A risk management system maintained throughout the life cycle.
  • Data governance guaranteeing representative and error-free data sets.
  • Detailed technical documentation.
  • Traceability and automatic recording of events built into the system (see the logging sketch after this list).
  • Instructions for use provided to downstream deployers.
  • Human oversight enabled by the system's design.
  • Appropriate levels of accuracy, robustness, and cybersecurity.
  • A quality management system.
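The AI Act does not prescribe how these event logs must be stored. As a minimal sketch of automatic, machine-readable event recording, assuming a simple append-only JSON-lines file:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_events.log")  # assumed location; any append-only store works

def record_event(system_id: str, event: str, details: dict) -> None:
    """Append a timestamped, machine-readable record of a system event.

    Minimal illustration of automatic event logging for traceability;
    the AI Act requires logging for high-risk systems but does not
    mandate this format.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "event": event,
        "details": details,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")

# Example: trace an automated CV pre-screening decision and its human review.
record_event("cv-screener-v2", "prediction", {"candidate_id": "anon-123", "score": 0.82})
record_event("cv-screener-v2", "human_review", {"candidate_id": "anon-123", "outcome": "shortlisted"})
```

In practice you would also need to address retention periods and access controls, but even this minimal structure makes decisions traceable after the fact.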

AI systems dealing with sensitive areas or personal data generally involve substantial obligations. If you think you are exempt, caution is required: document your assessment carefully so that you can justify it to the authorities in the event of an inspection.

Key dates of the AI Act: application calendar

Timeline

  • Official publication: July 12, 2024, in the Official Journal of the EU.
  • Entry into force: August 1, 2024.
  • Prohibitions and AI literacy requirements: applicable since February 2, 2025.
  • Codes of practice: finalization scheduled for May 2, 2025.
  • General-purpose AI models and governance: applicable from August 2, 2025.
  • High-risk systems and the rest of the rules: applicable from August 2, 2026.
  • Full application (Article 6(1)): August 2, 2027.

Important dates for 2025

  • Today (March 2025): the prohibitions on unacceptable-risk systems are already in force, as is the literacy (training) obligation.
  • By May 2025: publication of the codes of practice by the Commission.
  • August 2025: designation of the national competent authorities by the Member States.
  • August 2025: entry into application of the provisions on general-purpose AI models.

Action plan for businesses

With the AI Act entering into application progressively, businesses must put in place a structured strategy to ensure their compliance. Here are the priority actions to take now:

  • Inventory your AI systems: list all the tools and applications using AI in your organization (a minimal register sketch follows this list).
  • Assess the risk level: determine whether your systems fall under Annex III and document your analysis.
  • Identify your role: clarify whether you are a provider, a deployer, or both for each system.
  • Prioritize the immediate obligations: implement the transparency and literacy requirements already applicable.
  • Prepare the technical documentation: progressively build up your compliance files.
  • Monitor regulatory developments: stay informed of the clarifications issued by national authorities.
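As a starting point, the inventory can be a simple structured register that captures each system's role, Annex III category, and any exemption rationale. A minimal sketch, where the field names and example entries are assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in a hypothetical AI system compliance register."""
    name: str
    role: str                        # "provider", "deployer", or "both"
    annex_iii_category: str | None   # e.g. "employment", or None if out of scope
    high_risk: bool
    exemption_rationale: str = ""    # document why an Annex III system is exempt
    notes: list[str] = field(default_factory=list)

# Illustrative entries; the names and assessments are assumptions.
register = [
    AISystemRecord("support-chatbot", role="deployer",
                   annex_iii_category=None, high_risk=False,
                   notes=["transparency notice shown to users"]),
    AISystemRecord("cv-screener", role="provider",
                   annex_iii_category="employment", high_risk=True,
                   notes=["technical documentation in progress"]),
]

for record in register:
    status = "HIGH RISK" if record.high_risk else "standard obligations"
    print(f"{record.name} ({record.role}): {status}")
```

A register like this also doubles as the documented analysis you may need to show an authority on request.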

Compliance with the AI Act is a strategic investment that, beyond the regulatory aspect, can become a real competitive advantage by strengthening the trust of your clients and partners in your AI solutions.

The AI Act represents a major turning point in the regulation of artificial intelligence in Europe. Rather than a mere regulatory burden, it constitutes a structuring framework that will strengthen trust in AI technologies and foster their adoption by users and businesses.

  • The regulation has been in force since February 2025. The lead authority in France has not yet been officially designated, but it should be the DGE together with the CNIL.
  • It concerns you if you design an AI tool or have one developed, even for internal use.
  • The transparency and information obligations apply to everyone:
    • Natural persons: obligation to indicate that they are interacting with a machine,
    • Synthetic content: obligation to state that the data is artificially generated.
  • For high-risk systems, which process personal data or directly impact individuals, you will have to document your work, including the justifications for any exemptions.

Ultimately, the key to success lies in anticipation:

  • Map your existing and future AI systems.
  • Assess their risk level against the precise criteria of the AI Act.
  • Progressively implement the necessary compliance measures.

This proactive approach will not only allow you to avoid potential sanctions, but will also help position European AI as a model of responsible innovation worldwide.

Useful resources

To support you in your compliance journey:
