Regulation of AI: Europe at a Crossroads


Scheduled to enter into force in mid-2025, the AI Act is intended to regulate the use of artificial intelligence in the European Union. When the draft was presented in April 2021, the founding principle of the future regulation had the virtue of simplicity: it rested on a classification of AI systems by risk level in the critical areas of health, safety and the law.

AI systems that pose an “unacceptable” risk, such as facial-recognition-based mass surveillance, are outright prohibited. Matching tools used in recruitment, bank credit scoring or predictive justice, which are liable to produce discrimination, are classified as high risk and are due to be supervised.

Then ChatGPT arrived…

The arrival over the past year of ChatGPT, and of generative AI more generally, has changed the picture. Beyond the risk-based approach, the AI Act now also intends to regulate “foundation models”: as Wikipedia explains, large-scale models trained on very large volumes of unlabeled data, generally through self-supervised learning.

As it did with the Digital Markets Act (DMA), the European Union plans not only to supervise uses but also to regulate the providers of large language models (LLMs) such as OpenAI/Microsoft’s ChatGPT, Google’s Bard and Meta’s LLaMA. Because of their power, these tools, which can perform a wide range of tasks, are deemed high-impact and potentially a source of systemic risk.

Classified as “high-performing foundation models”, they could be subject to additional obligations and regular audits. Other AI categories are also envisaged depending on the number of users, such as general-purpose AI (GPAI).

France, Germany and Italy lying in wait

As the text enters its final legislative phase, known as the trilogue – in which the Council of the EU, the European Commission and the European Parliament meet to hammer out a compromise – this tiered approach is hotly contested. Three founding member states, France, Germany and Italy, are asking the current Spanish Presidency of the Council of the EU to abandon it.

According to the news site Euractiv, the trio considers that rules applying to foundation models “would go against the technology-neutral, risk-based approach of the regulation on artificial intelligence, which is supposed to preserve both innovation and safety.”

The argument that innovation would be held back has been taken up by the tech industry. Digital Europe, which brings together some thirty national organizations including France’s Afnum, fears in a press release that the regulation will nip in the bud the development of new models, “many of which were born here in Europe.”

For the lobby group, “the risk-based approach must remain at the heart of the AI law.” The regulatory framework must stay technologically neutral and focus “on truly high-risk use cases” rather than classifying certain AI models as high-risk by default.

Finally, Digital Europe highlights the cost of compliance for companies in the sector. Bringing a single AI-based product to market would cost more than 300,000 euros for an SME with 50 employees, according to the Commission’s own figures.

In France, another high-profile lobbyist, Cédric O, is playing the go-between. The former minister for digital affairs sits on the committee of experts tasked with advising the government on its national artificial intelligence strategy, while also advising Mistral AI, the French flagship in the field.

A mistake with serious consequences

Arthur Mensch, CEO of Mistral AI, makes no secret of his opposition to the current version of the AI Act. He considers it to have strayed far from its original spirit, which aimed to make regulation proportionate to the risk level of a use case. Wanting to regulate foundation models, that is, “the engine behind some AI applications,” is in his view a mistake.

“We cannot regulate an engine devoid of uses,” he argues. “We do not regulate the C language because it can be used to develop malware. Instead, we ban malware and strengthen network systems (we regulate the use).”

For the young startup, the regulation as currently proposed favors “incumbent companies that can afford to face heavy compliance requirements.” New entrants who “don’t have an army of lawyers,” on the other hand, will be penalized.

If it is not amended in the coming weeks, the AI Act could, more broadly, curb innovation and undermine European sovereignty in AI by weakening its players in the face of their American and Chinese competitors.
