
EU reaches "historic agreement" on world's first AI regulation, setting global precedent

Matthias Bastian
Image: EU flag blended into a digital data stream | DALL-E 3 prompted by THE DECODER

The European Union has reached a political agreement on the world's first law regulating artificial intelligence.

Negotiations between the European Parliament and the Council of the EU were concluded on Friday evening after lengthy discussions.

The AI law is designed to more strictly regulate the use of AI in the EU, introducing different risk classes for AI systems while promoting innovation. The rules set out obligations for AI systems based on their potential risk and impact.
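As a rough illustration of this risk-based approach, the sketch below uses the Act's widely reported four-level classification (unacceptable, high, limited, minimal); the obligation summaries are simplified paraphrases for illustration, not legal text:

```python
# Minimal sketch of the AI Act's risk-based approach. The four tiers reflect
# the widely reported classification; the obligation strings are simplified
# placeholders, not the wording of the legislation.
RISK_OBLIGATIONS = {
    "unacceptable": "prohibited outright",
    "high": "conformity assessment and fundamental rights impact assessment",
    "limited": "transparency duties, e.g. disclosing AI-generated content",
    "minimal": "no additional obligations",
}

def obligations_for(risk_class: str) -> str:
    """Look up the (simplified) obligations attached to a risk class."""
    try:
        return RISK_OBLIGATIONS[risk_class]
    except KeyError:
        raise ValueError(f"unknown risk class: {risk_class!r}")

# e.g. obligations_for("unacceptable") -> "prohibited outright"
```

The point of the tiered design is that obligations scale with potential harm: most everyday AI systems fall into the lower tiers and face few or no extra requirements.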

EU Internal Market Commissioner Thierry Breton called the agreement "historic." The legislation, which the Commission first proposed in April 2021, could serve as a model for AI regulation around the world through the so-called Brussels effect, whereby tech companies optimize their products for the most heavily regulated market.

Image: Thierry Breton via X

Risky AI systems and prohibited uses

The legislation bans certain AI applications that could pose a threat to civil rights and democracy. These include biometric categorization systems that use sensitive characteristics, untargeted scraping of facial images from the internet or CCTV footage, emotion recognition in the workplace and in educational institutions, social scoring, and AI systems that manipulate human behavior or exploit people's vulnerabilities.

There are exceptions for real-time biometric identification systems in public spaces for law enforcement purposes. Such use is permitted only with prior judicial approval, only for a defined list of serious crimes such as murder, terrorism, and abuse, and only to target a person convicted of or suspected of committing such a crime.

For AI systems classified as high-risk, the agreement mandates a fundamental rights impact assessment. These rules also apply to the insurance and banking sectors.

Citizens have the right to lodge complaints about AI systems and receive explanations for decisions based on high-risk AI systems that affect their rights.

National support for "regulatory sandboxes" and real-world testing is intended to help small and medium-sized companies develop AI applications without pressure from dominant industry giants, according to the EU Parliament.

Transparency requirements for general-purpose AI models

In addition, large international companies such as OpenAI, Microsoft, and Google must meet high transparency standards for their foundation models. Among other things, they must disclose what data was used to train the technology and how copyright is protected. Copyright in particular is the subject of numerous lawsuits, including in the US.

Because AI systems can perform a wide variety of tasks and their capabilities are expanding rapidly, the negotiators agreed that general-purpose AI (GPAI) systems, and the GPAI models they are built on, must comply with the transparency requirements originally proposed by the Parliament.

GPAI models that pose systemic risk are subject to more stringent obligations: conducting model evaluations, assessing and mitigating systemic risks, performing adversarial testing, reporting serious incidents to the Commission, ensuring cybersecurity, and reporting on their energy efficiency.

Non-compliance can result in fines of up to 35 million euros or 7 percent of global annual turnover, whichever is higher, for the most serious violations, down to 7.5 million euros or 1.5 percent of turnover for lesser breaches, depending on the type of infringement and the size of the company.
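As a back-of-the-envelope illustration of how these fine caps scale, the sketch below assumes the reported "whichever is higher" rule for the turnover-based cap; the tier names are invented labels for this example, not terms from the Act:

```python
# Illustrative sketch (not legal advice) of the reported fine caps: each tier
# pairs a fixed amount in euros with a share of global annual turnover, and the
# applicable cap is assumed to be whichever is higher. Tier names are invented
# labels; the figures are those reported above. Shares are stored as basis
# points (1 bp = 0.01%) so the arithmetic stays exact in integers.
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 700),   # 35M EUR or 7% of turnover
    "incorrect_information": (7_500_000, 150),   # 7.5M EUR or 1.5% of turnover
}

def fine_cap(tier: str, global_turnover_eur: int) -> int:
    """Return the maximum possible fine for a breach tier, assuming the
    'whichever is higher' rule applies."""
    fixed, bps = FINE_TIERS[tier]
    return max(fixed, global_turnover_eur * bps // 10_000)

# e.g. fine_cap("prohibited_practices", 2_000_000_000) -> 140_000_000
#      (7% of 2B EUR exceeds the 35M EUR fixed cap)
# e.g. fine_cap("incorrect_information", 100_000_000) -> 7_500_000
#      (1.5% of 100M EUR is below the 7.5M EUR fixed cap)
```

For large companies the turnover-based component quickly dominates, which is why the percentages, not the fixed amounts, are what matter to the likes of OpenAI, Microsoft, and Google.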

The new rules also mean that the recent push by Germany, Italy, and France for self-regulation of foundational AI models has failed.

The agreed legislative text must now be formally adopted by the Parliament and the Council to become EU law. The European Parliament's Internal Market and Consumer Protection (IMCO) and Civil Liberties, Justice and Home Affairs (LIBE) committees will vote on the agreement at a forthcoming meeting. The vote is a formality.

Implementation of the law will be crucial. The necessary regulation must not become overly bureaucratic, and certification processes in particular must not overwhelm small companies and start-ups. This is especially true for the EU, which lags far behind the US and China in AI market penetration and therefore needs to act quickly and flexibly.