
The European Union has reached a political agreement on the world's first law regulating artificial intelligence.

Negotiations between the European Parliament and the Council of the EU were concluded on Friday evening after lengthy discussions.

The AI law is designed to more strictly regulate the use of AI in the EU, introducing different risk classes for AI systems while promoting innovation. The rules set out obligations for AI systems based on their potential risk and impact.

EU Internal Market Commissioner Thierry Breton called the agreement "historic." The legislation, which the Commission first proposed in April 2021, could serve as a model for AI regulation around the world through the so-called ripple effect of EU regulation, with tech companies optimizing their AI products for the most heavily regulated market.

Image: Thierry Breton via X

Risky AI systems and prohibited uses

The legislation bans certain AI applications that could pose a threat to civil rights and democracy. These include:

  • biometric categorization systems that use sensitive characteristics,
  • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases,
  • emotion recognition in the workplace and educational institutions,
  • social scoring based on social behavior or personal characteristics,
  • AI systems that manipulate human behavior to circumvent free will,
  • and AI that exploits human weaknesses.

There are exceptions for real-time biometric identification systems in public spaces for law enforcement purposes. Their use is permitted only for a defined list of serious crimes (such as murder, terrorism, and abuse), requires prior judicial approval, and must target a person convicted of or suspected of committing such a crime.

For AI systems classified as high-risk, MEPs secured a mandatory fundamental rights impact assessment. These rules also apply to the insurance and banking sectors.

Citizens have the right to lodge complaints about AI systems and receive explanations for decisions based on high-risk AI systems that affect their rights.

National support for "regulatory sandboxes" and real-world testing is intended to help small and medium-sized companies develop AI applications without pressure from dominant industry giants, according to the EU Parliament.

Transparency requirements for general-purpose AI models

In addition, large international companies such as OpenAI, Microsoft, and Google must meet high transparency standards for their foundation models. Among other things, these companies must disclose what data was used to train the technology and how copyright is protected. The latter question is the subject of numerous court cases, including in the US.

Because AI systems can perform a wide variety of tasks and their capabilities are expanding rapidly, the negotiators agreed that general-purpose AI (GPAI) systems and the underlying GPAI models must meet the transparency requirements proposed by the Parliament.

GPAI models that pose systemic risk are subject to more rigorous obligations: conducting model evaluations, assessing and mitigating systemic risks, performing adversarial testing, reporting serious incidents to the Commission, ensuring cybersecurity, and reporting on their energy efficiency.

Non-compliance with the rules can result in fines ranging from 7.5 million euros or 1.5 percent of global turnover up to 35 million euros or 7 percent of global turnover, depending on the type of violation and the size of the company.


The new rules also mean that the recent push by Germany, Italy, and France for self-regulation of foundation models has failed.

The agreed legislative text must now be formally adopted by the Parliament and the Council to become EU law. The European Parliament's Internal Market and Consumer Protection (IMCO) and Civil Liberties, Justice and Home Affairs (LIBE) committees will vote on the agreement at a forthcoming meeting. The vote is a formality.

Implementation of the law will be crucial. The necessary regulation must not become bureaucratic and, in particular, the certification processes must not be overwhelming for small companies and start-ups. This is especially true for the EU, which lags far behind the US and China in terms of AI market penetration and therefore needs to act quickly and flexibly.

Summary
  • The European Union has reached political agreement on the world's first law regulating artificial intelligence, which introduces different risk classes for AI systems while promoting innovation.
  • The law bans certain AI applications that could threaten civil rights and democracy, such as biometric categorization systems, emotion recognition in the workplace, and AI that exploits human weaknesses.
  • It introduces high transparency standards for foundation models, especially for large international companies such as OpenAI, Microsoft, and Google, which must disclose, among other things, what data was used to train the technology and how copyright is protected.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.