Written by

Alexis Laporte
2x Cofounder - 1x Board Member - Tech Specialist
The AI Act represents a regulatory revolution in the European digital landscape. This regulation, adopted by the European Parliament and the Council of the European Union, establishes a harmonized legal framework for artificial intelligence across all Member States.
Unlike a directive, which would require transposition into national law, the AI Act applies directly in France, as in all other Member States of the European Union.
This approach ensures consistency of application and avoids regulatory fragmentation that could hinder innovation.
France has not yet officially designated its National Competent Authority (NCA) to oversee the application of the AI Act. By default, the Directorate-General for Enterprises (DGE) takes on this role. However, given the potential implications on personal data, the CNIL should also play a significant role.
Despite initial fears, the AI Act was designed with particular attention to the needs of SMEs (small and medium-sized enterprises) and ETIs (intermediate-sized companies). While the GDPR (General Data Protection Regulation) has sometimes been perceived as disproportionately burdensome for small structures, the AI Act includes measures specifically intended to lighten their load.
The approach is pragmatic and takes into account the economic realities of small businesses. Six concrete measures illustrate this desire for balance:
- simplified technical documentation forms designed for SMEs;
- priority, free-of-charge access to regulatory sandboxes;
- conformity assessment fees proportionate to company size;
- dedicated communication channels with the authorities;
- targeted awareness-raising and training activities;
- representation of SME interests in the standardization process.
The majority of obligations fall on the providers (developers) of high-risk AI systems, and your obligations vary radically depending on your role: they are largely limited to information and usage requirements if you are a professional user (deployer).
As a simple user of a solution developed by others, your responsibilities are lighter, but there are legal obligations nonetheless:
- use the system in accordance with the provider's instructions for use;
- entrust human oversight to competent, properly trained staff;
- monitor the system's operation and report serious incidents or risks to the provider and the authorities;
- keep the logs automatically generated by the system;
- inform employees and affected persons when a high-risk system is used on them.
When you order a custom system, the line of responsibilities becomes more blurred:
Without contractual clarification, you could be considered a co-supplier, with all the obligations that come with it.
Creating your own solution puts you at the forefront of regulatory requirements: as a provider, you bear the full set of obligations described later in this article for high-risk systems, from risk management to conformity assessment.
In many cases, an AI system may be exempt from high-risk obligations, but you will need to document this analysis.
Whatever your role, certain obligations remain the same: transparency toward the people who interact with your system and AI literacy (training) for your staff, both detailed below.
The AI Act is not just a constraint. This is an opportunity to build trust in your solutions. When it comes to business, that trust can become your best selling point.
According to the AI Act, an AI system is defined as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."
This includes, of course, AI agents, but also most tools that use generative AI or predictive AI.
Concretely, a CV analysis system used by a large company to pre-select candidates, a customer support chatbot deployed by an ETI to answer frequently asked questions, or a personalized recommendation application developed by a startup - all fall into this category.
All AI systems, regardless of their risk classification, are subject to fundamental transparency and literacy (training) requirements. In concrete terms, this means that:
- people must be informed when they interact with an AI system (such as a chatbot), unless it is obvious from the context;
- AI-generated or AI-manipulated content, including deepfakes, must be labeled as such;
- organizations must ensure that staff who operate or use AI systems have an adequate level of AI literacy.
These obligations apply to all AI systems, whether you are a provider or a deployer.
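As an illustration, here is a minimal Python sketch of one way a deployer might surface the "you are talking to a machine" disclosure in a chatbot backend. The function name and message wording are our own; the regulation requires the disclosure, not any particular implementation.

```python
# Illustrative only: surfacing the AI-interaction disclosure required
# for chatbots. The message text and function are our own assumptions.

AI_DISCLOSURE = (
    "You are chatting with an AI assistant, not a human. "
    "Its answers may contain errors."
)

def wrap_chatbot_reply(reply: str, first_turn: bool) -> str:
    """Prepend the AI disclosure on the first turn of a conversation."""
    if first_turn:
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply

print(wrap_chatbot_reply("Hello! How can I help you today?", first_turn=True))
```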
The AI Act clearly prohibits certain AI systems that present unacceptable risks for fundamental rights, regardless of their performance.
The following are prohibited:
- subliminal or deliberately manipulative techniques that materially distort a person's behavior and cause significant harm;
- exploitation of vulnerabilities linked to age, disability, or social or economic situation;
- social scoring leading to unjustified or disproportionate detrimental treatment;
- predicting the risk of criminal offences based solely on profiling or personality traits;
- untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases;
- emotion recognition in the workplace and in education (except for medical or safety reasons);
- biometric categorization to infer sensitive attributes such as race, political opinions, religious beliefs, or sexual orientation;
- real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, outside narrowly defined exceptions.
Foundation models (called general-purpose AI models in the AI Act) represent a specific category. They are AI models trained on vast amounts of data, capable of performing a wide variety of tasks and of being integrated into different applications.
Their providers must draw up detailed technical documentation, provide information to downstream developers, put in place a policy to respect copyright, and publish a summary of the content used for training. Open-source models benefit from lighter obligations unless they present systemic risk.
Models trained with cumulative compute greater than 10^25 FLOPs (such as GPT-4o, Mistral Large 2, or Gemini 1.0 Ultra) are presumed to present systemic risk. They must undergo additional evaluations, including adversarial testing, and implement incident tracking and reporting systems.
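As an order-of-magnitude check, the sketch below estimates training compute using the common 6 x N x D heuristic (roughly 6 FLOPs per parameter per training token) and compares it with the 10^25 FLOP threshold. The heuristic comes from the scaling-law literature, not from the AI Act itself, and the example model is hypothetical.

```python
# Rough training-compute estimate against the AI Act's 10^25 FLOP
# systemic-risk presumption. The 6 * N * D rule of thumb is a common
# heuristic from the scaling-law literature, not a legal formula.

SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25

def estimate_training_flop(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * n_params * n_tokens

# Hypothetical model: 70B parameters trained on 15T tokens -> ~6.3e24,
# just below the presumption threshold.
flop = estimate_training_flop(70e9, 15e12)
print(f"Estimated compute: {flop:.2e} FLOP")
print("Presumed systemic risk" if flop >= SYSTEMIC_RISK_THRESHOLD_FLOP
      else "Below the 10^25 FLOP presumption threshold")
```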
The category of high-risk systems is the core of the AI Act and deserves particular attention. A system is considered high risk if it meets one of the following two criteria:
- it is a safety component of a product (or is itself a product) covered by EU harmonization legislation and subject to third-party conformity assessment; or
- it falls under one of the use cases listed in Annex III of the regulation.
Annex III covers eight sensitive areas where AI poses significant risks:
- biometrics (remote biometric identification, biometric categorization, emotion recognition);
- critical infrastructure (energy, water, transport, digital infrastructure);
- education and vocational training;
- employment and management of workers;
- access to essential private and public services (credit scoring, insurance, social benefits, emergency services);
- law enforcement;
- migration, asylum and border control;
- administration of justice and democratic processes.
Let's take a concrete example:
An ETI is developing an AI system to analyze the CVs of candidates. This system falls under the “employment and management of workers” category in Annex III and would therefore be considered to be at high risk.
Likewise, a startup that creates an AI application to assess eligibility for bank loans would fall into the “access to essential services” category.
However, the AI Act provides significant exemptions. A system is not considered high risk, even if it is listed in Annex III, if it:
- performs only a narrow procedural task;
- is intended to improve the result of a previously completed human activity;
- detects decision-making patterns or deviations from prior patterns without replacing or influencing the human assessment; or
- performs a purely preparatory task to an assessment.
Profiling of natural persons, however, always remains high risk.
If a provider considers that its Annex III system is not high risk because of the exemptions above, it must document this assessment and provide it to the national competent authority upon request. It must also register the system in the EU database before placing it on the market.
For example, an AI tool that simply helps format resumes or detect spelling mistakes would not be considered high-risk, even in a recruitment context.
For a medical AI system that analyzes radiology images, an exemption could apply if the system is limited to preprocessing images and flagging abnormalities, while the final diagnosis is established by a physician who reviews the results independently. Here, AI improves the process without replacing human medical judgment.
This distinction illustrates the difference between AI assistant and AI agent. An assistant provides information that humans evaluate to make decisions. An agent makes decisions or acts independently, with little or no human intervention. This nuance is fundamental to the AI Act, as it determines the level of obligations applicable to the system.
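To make this classification logic concrete, here is a minimal, illustrative Python sketch of the decision just described. The domain list and exemption flags are simplified stand-ins for Annex III and the Article 6(3) exemptions; none of the names come from the regulation, and this is not legal advice.

```python
# Illustrative sketch of the Annex III / exemption decision described
# above. Domain names and exemption flags are simplified stand-ins.

from dataclasses import dataclass

ANNEX_III_DOMAINS = {
    "biometrics", "critical_infrastructure", "education",
    "employment", "essential_services", "law_enforcement",
    "migration", "justice",
}

@dataclass
class AISystem:
    name: str
    domain: str
    narrow_procedural_task: bool = False   # simplified Art. 6(3) flags
    preparatory_task_only: bool = False
    human_makes_final_decision: bool = False

def is_high_risk(system: AISystem) -> bool:
    """Annex III domain with no applicable exemption => high risk."""
    if system.domain not in ANNEX_III_DOMAINS:
        return False
    exempt = (system.narrow_procedural_task
              or system.preparatory_task_only
              or system.human_makes_final_decision)
    # Even when exempt, the assessment must be documented and the
    # system registered in the EU database before market placement.
    return not exempt

cv_screener = AISystem("CV pre-selection", "employment")
spell_checker = AISystem("CV spell-checker", "employment",
                         narrow_procedural_task=True)
print(is_high_risk(cv_screener))    # True: Annex III, no exemption
print(is_high_risk(spell_checker))  # False: exempt, but document why
```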
The requirements for high-risk systems are substantial and include:
- a risk management system maintained throughout the system's life cycle;
- data governance ensuring the quality and representativeness of training, validation, and testing data;
- detailed technical documentation and automatic record-keeping (logs);
- transparency and instructions for use provided to deployers;
- effective human oversight;
- an appropriate level of accuracy, robustness, and cybersecurity;
- conformity assessment, CE marking, and registration in the EU database before the system is placed on the market.
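As one concrete illustration of the record-keeping requirement in the list above, here is a minimal sketch of an audit log for automated decisions. The field names and file format are our own assumptions; the regulation requires traceable logs, not this particular schema.

```python
# Illustrative sketch of record-keeping for a high-risk system: each
# automated decision is logged with enough context to reconstruct it.
# Field names and the JSONL format are our own assumptions.

import json
import time
import uuid

def log_decision(model_version: str, inputs: dict, output: str,
                 human_reviewer: str | None = None) -> dict:
    """Append one audit record for an AI-assisted decision."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "human_reviewer": human_reviewer,  # human-oversight trace
    }
    with open("ai_audit_log.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return record

log_decision("cv-screener-1.3", {"candidate_id": "c-102"},
             "shortlisted", human_reviewer="hr.manager@example.com")
```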
AI systems dealing with sensitive areas or personal data generally involve substantial obligations. If you think you are exempt, caution is required: document your assessment carefully so that you can justify it to the authorities in the event of an inspection.
As the AI Act progressively enters into force, companies must put in place a structured strategy to ensure compliance. Here are the priority actions to take now:
- inventory the AI systems you develop or use;
- determine your role (provider or deployer) for each of them;
- classify each system's risk level and document any exemption analysis;
- set up the required transparency measures;
- train your teams to meet the AI literacy requirement.
Achieving compliance with the AI Act is a strategic investment that, beyond the regulatory aspect, can become a genuine competitive advantage by strengthening your clients' and partners' trust in your AI solutions.
The AI Act marks a major turning point in the regulation of artificial intelligence in Europe. Rather than a mere regulatory burden, it provides a structuring framework that will strengthen trust in AI technologies and encourage their adoption by users and businesses.
Ultimately, the key to success lies in anticipation: map your AI systems, clarify your role for each of them, document your analyses, and train your teams without waiting for the enforcement deadlines. This proactive approach will not only allow you to avoid potential penalties, but will also help position European AI as a global model of responsible innovation.