
The EU AI Act aims to regulate artificial intelligence in Europe and could also influence legislation outside the EU. Here are the key points.

In April 2021[1], the European Commission published a first draft of a regulation establishing harmonized rules on artificial intelligence (hereinafter: AI-Reg-E)[2]. The goal of this regulation is to create a consistent legal framework for trustworthy AI while incentivizing research and development in Europe.

The proposal follows a risk-based approach and is intended to apply to all actors developing or operating an AI system in the EU. While AI systems with unacceptable risks will be banned from the outset, high-risk AI systems will have to comply with mandatory requirements before they can enter the market. Low-risk AI systems will only be subject to specific transparency requirements, while all other AI systems will be considered to pose minimal risk and will not be subject to any specific regulation.

Subject of regulation: AI systems

The term AI system is defined in Art. 3 No. 1 AI-Reg-E. According to this, an AI system is "software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with."


The concepts and techniques covered by Annex I currently include: machine learning (lit. a); logic and knowledge-based approaches, including knowledge representation, inductive (logic) programming, inference and deduction engines, reasoning and expert systems (lit. b); and statistical approaches, Bayesian estimation, search and optimization methods (lit. c).

Addressees of the regulation

In terms of personal scope, the regulation is addressed to all actors who are providers or users within the meaning of the regulation.

A provider is, according to Art. 3 No. 2 AI-Reg-E, any natural or legal person, public authority, agency or other body that develops an AI system, or that has an AI system developed, with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge.

A user is, according to Art. 3 No. 4 AI-Reg-E, any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.

From a geographical point of view, the regulation follows the so-called "marketplace principle". According to Art. 2 I, the regulation applies to providers who place AI systems on the market or put them into operation in the Union, regardless of whether these providers are established in the Union or in a third country (lit. a), as well as to users who are established in the Union (lit. b). Also affected are providers and users of AI systems established or located in a third country, provided that the result ("output") produced by the system is used in the Union (lit. c).


With the increasing use of the cloud, it is becoming easier to move an AI system to a third country, which is why lit. c is likely to be of particular practical relevance. What is covered by "output" is currently unclear.
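For illustration only, the territorial scope of Art. 2 I lit. a) to c) can be pictured as a simple decision rule. The following sketch uses hypothetical parameter names and is, of course, no substitute for a legal assessment of the individual case.

```python
# Illustrative sketch of the territorial scope of Art. 2 I lit. a)-c) AI-Reg-E.
# Parameter names are hypothetical simplifications, not legal criteria.

def regulation_applies(provider_places_on_eu_market: bool,
                       user_established_in_eu: bool,
                       output_used_in_eu: bool) -> bool:
    """Rough approximation of the 'marketplace principle' in Art. 2 I."""
    return (
        provider_places_on_eu_market   # lit. a: placing on the market / putting into service in the Union
        or user_established_in_eu      # lit. b: users established in the Union
        or output_used_in_eu           # lit. c: output produced by the system is used in the Union
    )

# Example: a provider in a third country whose system's output is used in the EU
print(regulation_applies(False, False, True))  # True
```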

Risk-based approach

The draft regulation adopts the risk-based approach of the White Paper on Artificial Intelligence[3] and divides AI systems into four categories. In addition to the function of the AI, the classification of risk is also based on its purpose and the circumstances of its use.
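Purely as a schematic overview, the four categories and their principal legal consequences, as discussed in the following sections, can be summarized as follows (a non-authoritative sketch):

```python
# Schematic mapping of the four risk categories of the draft regulation
# to their principal legal consequence (simplified, non-authoritative).

RISK_TIERS = {
    "unacceptable": "prohibited (Art. 5)",
    "high":         "mandatory requirements and conformity assessment (Art. 8-51)",
    "low":          "specific transparency and labeling obligations (Art. 52)",
    "minimal":      "no specific obligations; voluntary codes of conduct (Art. 69)",
}

for tier, consequence in RISK_TIERS.items():
    print(f"{tier}: {consequence}")
```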

  1. Unacceptable risk: ban on certain AI applications

Certain applications are banned from the outset under Art. 5 AI-Reg-E because, in the Commission's understanding, they pose an unacceptable risk to the fundamental rights and values of the Union. They can basically be divided into three groups:

AI systems of behavioral manipulation, Art. 5 I a), b) AI-Reg-E:


The first group includes AI systems that use subliminal techniques outside of a person's awareness, or exploit a person's vulnerabilities due to age or physical or mental disability, to significantly influence that person's behavior in a way that is likely to cause physical or psychological harm to that person or others.

The presumed intent ("in order to materially distort a person’s behaviour") is a notable qualification. It remains to be seen how this will play out in practice; based on the current wording, the standard seems somewhat narrow in light of the intended purpose of protecting users' autonomy.

AI systems of so-called "social scoring", Art. 5 I c) AI-Reg-E:
Another group concerns AI systems used by or on behalf of public authorities to assess and classify an individual's social behaviour (so-called social scoring), where this leads to a worse position or disadvantage in social contexts unrelated to the circumstances under which the data were generated or collected, or in a way that is unjustified or disproportionate to their social behaviour or its scope.

It is worth noting that, at this stage, the prohibition applies only to public or governmental entities and not to private operators.

Biometric real-time remote identification, Art. 5 I d) AI-Reg-E:
A major concern of the Commission was also the regulation of AI systems for real-time biometric remote identification for law enforcement purposes in public spaces. However, no general prohibition is provided for here. First, exceptions apply to certain uses, in particular to search for victims of crime or missing children, to prevent threats to life and limb, and to prevent terrorist attacks. Second, paragraph 4 contains an opening clause allowing Member States to decide on the permissibility of these systems under certain conditions.

  2. High risk: high-risk AI systems

At the heart of the regulation are the requirements for high-risk AI systems in Art. 8-51 AI-Reg-E. What qualifies an AI system as high-risk is set out in Art. 6, which basically requires two conditions to be met:

  1. The AI system is intended to be used as a safety component in a product subject to the provisions of Annex II or is itself such a product. These are primarily product safety regulations, including the Machinery Directive, the Toy Safety Directive or the Medical Devices Directive.
  2. In addition, the AI system itself, or the product of which the AI system is a safety component, shall be subject to third party conformity assessment in accordance with the provisions of Annex II.

Notwithstanding the above, AI systems intended to be used in an area listed in Annex III will generally be classified as high-risk AI systems. Essentially, this refers to use in security-relevant areas or similarly fundamental rights-sensitive areas of application. These include, for example, use for biometric identification and categorization of natural persons (No. 1), use in education and training (No. 3), access to private and public services (No. 5), and law enforcement (No. 6) and judicial purposes (No. 8).
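Read together, Art. 6 and Annex III amount to a two-pronged test. The following sketch is a deliberate simplification with a hypothetical, abbreviated excerpt of the Annex III areas; the actual classification always depends on the intended purpose and the full annexes.

```python
# Schematic sketch of the high-risk classification in Art. 6 AI-Reg-E.
# ANNEX_III_AREAS is an abbreviated, hypothetical excerpt of Annex III.

ANNEX_III_AREAS = {
    "biometric identification",      # No. 1
    "education and training",        # No. 3
    "access to essential services",  # No. 5
    "law enforcement",               # No. 6
    "administration of justice",     # No. 8
}

def is_high_risk(is_safety_component_of_annex_ii_product: bool,
                 requires_third_party_conformity_assessment: bool,
                 intended_area_of_use: str) -> bool:
    # First alternative: safety component of (or itself) an Annex II product
    # that is subject to third-party conformity assessment.
    if is_safety_component_of_annex_ii_product and requires_third_party_conformity_assessment:
        return True
    # Second alternative: intended use in an area listed in Annex III.
    return intended_area_of_use in ANNEX_III_AREAS

print(is_high_risk(False, False, "law enforcement"))  # True
```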

In addition, the Commission is empowered to amend the list in Annex III in accordance with Art. 7 AI-Reg-E.

Requirements for high-risk AI systems

The requirements for high-risk AI systems have largely been developed from the recommendations of the High-Level Expert Group on AI and its Assessment List for Trustworthy Artificial Intelligence[4]; principles such as risk and quality management or transparency obligations are already familiar from other product safety regulations.

Risk management systems, Art. 9 AI-Reg-E:

The first basic requirement is a documented risk management system with concrete, state-of-the-art risk management measures designed to keep the residual and overall risk of the AI system as low as possible.

Data governance, Art. 10 AI-Reg-E:

High-risk systems can only be trained and operated with data that meets specific data quality requirements. This could be referred to as upstream data compliance, which must be maintained throughout the development phase. The training, validation, and test data sets underlying the AI must be representative, error-free, and complete - which can be difficult, especially with large data sets.
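As a rough illustration of what such upstream data compliance checks might look like in practice, the following sketch flags missing and obviously invalid values in a training data set. The field names and checks are hypothetical; representativeness in particular cannot be reduced to a simple test like this.

```python
# Minimal sketch of data quality checks in the spirit of Art. 10 AI-Reg-E:
# completeness and error-freeness of a training data set. Hypothetical example only.
import math

def basic_data_quality_report(records: list[dict], required_fields: list[str]) -> dict:
    """Report missing fields and obviously invalid numeric values."""
    missing = sum(
        1 for r in records for f in required_fields if r.get(f) is None
    )
    invalid = sum(
        1 for r in records for v in r.values()
        if isinstance(v, float) and math.isnan(v)
    )
    return {"records": len(records), "missing_values": missing, "invalid_values": invalid}

sample = [{"age": 34, "income": 52000.0}, {"age": None, "income": float("nan")}]
print(basic_data_quality_report(sample, ["age", "income"]))
# {'records': 2, 'missing_values': 1, 'invalid_values': 1}
```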

Technical documentation, Art. 11 AI-Reg-E:

Even before a high-risk AI system is placed on the market or put into operation, the technical documentation of the system must be prepared in accordance with Art. 11. The documentation must be designed to demonstrate that the AI system meets the relevant requirements. This includes in particular the minimum requirements set out in Annex IV.

The technical documentation is primarily used for certification procedures and possible inspections by market surveillance authorities.

Obligation to keep records, Art. 12 AI-Reg-E:

According to Art. 12, high-risk AI systems must be developed and designed in such a way that it is possible to document and log the operations of the AI system during its operation (so-called "logs"). At the same time, the storage of the processing operations must comply with the requirements of the GDPR.
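A minimal sketch of such automatic record-keeping might look as follows; the field names are hypothetical, and retention periods and GDPR-compliant storage are deliberately left out.

```python
# Illustrative sketch of automatic record-keeping ("logs") in the sense of
# Art. 12 AI-Reg-E: each operation of the system is logged with a timestamp.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("ai_system_audit")

def log_operation(input_reference: str, output_reference: str, model_version: str) -> None:
    """Append one structured log entry per processing operation."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input": input_reference,      # a reference, not raw personal data
        "output": output_reference,
        "model_version": model_version,
    }
    logger.info(json.dumps(entry))

log_operation("request-4711", "decision-approved", "credit-model-1.3.0")
```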

Transparency requirement, Art. 13 AI-Reg-E:

The design and development of a high-risk system must be such that the operation of the system is sufficiently transparent so that the user can make appropriate use of the system and interpret the results. What exactly is meant by "sufficiently transparent" is currently unclear. For some AI systems, this requirement could become problematic, as many AI systems are currently unable to justify decisions in a comprehensible manner.

Human oversight, Art. 14 AI-Reg-E:

According to Art. 14 I, high-risk AI systems must be designed so that they can be supervised by humans as long as the AI is in use.

This is concretized in paragraph 4, which states that the capabilities and limitations of the high-risk AI system must be fully understood and its operation duly monitored (lit. a). In addition, it must be possible to "intervene on the operation of the high-risk AI system or interrupt the system through a 'stop' button or a similar procedure" (lit. e).

Accuracy, robustness and cybersecurity, Art. 15 AI-Reg-E:

Finally, Art. 15 sets minimum requirements for accuracy, robustness and cybersecurity. AI systems must be protected against errors, unauthorized third-party access, and data manipulation, and must be backed up with solutions such as fail-safes or backup plans.
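What a simple fail-safe arrangement could look like in code is sketched below; the decision labels and the fallback value are hypothetical examples, not requirements taken from the regulation.

```python
# Hypothetical illustration of a "fail-safe" arrangement as mentioned in
# Art. 15 AI-Reg-E: if the model fails or returns an implausible result,
# the system falls back to a conservative default instead of crashing.

def predict_with_fallback(model, features, default="refer_to_human"):
    """Return the model output, or a safe default if the model errors out."""
    try:
        result = model(features)
    except Exception:
        return default          # backup plan: degrade gracefully
    if result not in {"approve", "reject", "refer_to_human"}:
        return default          # guard against implausible outputs
    return result

# Example with a deliberately failing model
print(predict_with_fallback(lambda f: 1 / 0, {"score": 0.7}))  # refer_to_human
```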

Obligations for providers and users

Ensuring compliance with these requirements is basically the responsibility of the providers of the systems (Art. 16 a) AI-Reg-E).

In particular, according to Art. 19 I 1 AI-Reg-E, they must subject the AI system to the conformity assessment procedure according to Art. 43 AI-Reg-E. The regulation provides for different conformity assessment procedures depending on the type of AI system: an internal control by the provider himself according to Annex VI, or control by notified bodies according to Annex VII. For the majority of high-risk AI systems, the AI-Reg-E provides for the internal control procedure.

Externally, this conformity assessment must be indicated by a CE marking and a declaration of conformity (Art. 19 I, 48, 49 AI-Reg-E). Providers are also obliged to set up a quality management system in accordance with Art. 17 AI-Reg-E and to take corrective action if the AI system no longer meets the requirements (Art. 21 AI-Reg-E).

Users are obliged to use the system in accordance with the instructions to be provided by the providers (Art. 29 I AI-Reg-E) and, insofar as they have control over the input values, to ensure that the data used are in accordance with the intended use (Art. 29 III AI-Reg-E). In addition, they must monitor the operation of the AI system (Art. 29 VI AI-Reg-E).

Providers and users are subject to mutual reporting and notification obligations. If the AI system poses a risk within the meaning of Art. 65 I AI-Reg-E, providers must report this to the competent authorities (Art. 22 AI-Reg-E). In addition, serious incidents and malfunctions must be reported if they lead to a violation of fundamental rights protected under EU law (Art. 62 I 1 AI-Reg-E). In these cases, the user's obligations towards the provider also apply.

According to Art. 3 No. 44 AI-Reg-E, serious incidents are events that are likely to lead, directly or indirectly, to death or serious damage to health, property or the environment, or to a disruption of the operation of critical infrastructure.

  3. Low risk: Special labeling requirements

The special labeling requirements apply to certain AI systems that interact with humans or generate content that appears authentic. They apply regardless of whether an AI system has been classified as a high-risk system; the only exceptions are for law enforcement.

According to Art. 52 I AI-Reg-E, users should be informed when they interact with an AI system, unless this is already obvious from the circumstances of use.

What is meant by "interaction" is not described in detail; based on an average understanding of language, it includes reciprocal, interrelated actions[5]. In many cases, however, the exception of external recognizability will also be met. The labeling requirement will therefore apply primarily to communications in which it is unclear whether the other party is a person or a machine (telephone or text).

According to Art. 52 II AI-Reg-E, the same applies to the use of emotion recognition systems or a biometric categorization system.

If image, audio or video content is generated or manipulated with an AI system so that it resembles real persons, objects, places or other entities and events and would appear authentic to a person (synthetic media or deep fakes), it must be disclosed that the content has been artificially generated or manipulated (Art. 52 III AI-Reg-E). Certain exceptions for freedom of expression and artistic freedom should apply.
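As an illustration of such a disclosure obligation, the following sketch bundles generated content with a machine-readable notice; the metadata schema is a made-up example, since the draft does not prescribe a specific format.

```python
# Sketch of a labeling step in the spirit of Art. 52 III AI-Reg-E: artificially
# generated or manipulated content carries an explicit disclosure. Hypothetical schema.

def attach_disclosure(content: bytes, generator: str) -> dict:
    """Bundle generated content with a machine-readable disclosure notice."""
    return {
        "content": content,
        "metadata": {
            "artificially_generated": True,
            "generator": generator,
            "notice": "This content has been artificially generated or manipulated.",
        },
    }

labeled = attach_disclosure(b"<video bytes>", generator="example-model")
print(labeled["metadata"]["notice"])
```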

  4. Minimal risk: Code of Conduct

All other AI systems are considered to pose only minimal risk. They are not subject to specific regulations. However, providers have the option of adopting voluntary codes of conduct (Art. 69 AI-Reg-E). These may be based on the requirements for high-risk systems, but may also pursue objectives such as accessibility or sustainability.

According to the Commission, the vast majority of AI systems currently in use in the EU are low-risk[6], which is why the Regulation states that the establishment of such codes of conduct should be "encouraged". What this "encouragement" looks like is not clear from the draft (at this stage).

Promoting innovation through regulatory sandboxes

In order to bridge the gap between product safety and research, Art. 53, 54 AI-Reg-E provide for the establishment of so-called "regulatory sandboxes", i.e. controlled environments for the development and testing of AI systems - the Commission intends to define the detailed modalities by means of an implementing act. Member State authorities and the European Data Protection Supervisor will be responsible for setting up and organizing the sandboxes.

With regard to access requirements, a distinction is made according to the type of participant, with facilitated access for small providers and small users (Art. 55 AI-Reg-E). Importantly for the use of these sandboxes, participants remain liable for any damage caused to third parties as a result of their activities in the experimental environment (Art. 53 IV AI-Reg-E).

Art. 54 AI-Reg-E then regulates the further processing of originally lawfully collected personal data for the purpose of developing and testing AI systems. The requirements are presumably high; among other things, there must be a significant public interest in the specific application, the data must be necessary for compliance with the regulation and not replaceable by synthetic data, and all personal data should be deleted after the procedure is completed.

It is questionable to what extent data can be completely deleted when recovery from trained systems is in principle possible.

Governance and control institutions

At the national level, the Member States are to designate their own supervisory authorities in accordance with Art. 59 I AI-Reg-E. In addition, a European Artificial Intelligence Board, consisting of the supervisory authorities of the Member States and the European Data Protection Supervisor, will be established (Art. 56 f. AI-Reg-E); it will primarily perform advisory activities and publish recommendations and opinions.

With regard to market surveillance, Art. 63 I of the draft regulation refers to the European Market Surveillance Regulation[7]. The market surveillance authority, which is likely to be identical to the supervisory authority, is entitled, among other things, to access all training, validation and test data and, if necessary, the source code of the AI system, and to carry out tests (Art. 64 V AI-Reg-E). In the case of an AI system presenting a risk, the market surveillance authority may verify compliance with the requirements of the AI-Reg-E. Furthermore, the authority is authorized to oblige the operator to take measures to restore compliance with the requirements of the regulation or to withdraw the system from the market (Art. 65 II subpara. 2 AI-Reg-E).

In the event of violations, the regulation provides for strikingly high fines. A violation of the provisions of the regulation can be punished with a fine of up to EUR 20,000,000 or 4% of the company's worldwide annual turnover, Art. 71 IV AI-Reg-E. In the case of violations of the prohibited applications of Art. 5 AI-Reg-E or the data quality provisions of Art. 10 AI-Reg-E, the fine is up to EUR 30,000,000 or 6% of worldwide annual turnover, Art. 71 III AI-Reg-E.
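Assuming, as in the Commission draft, that the higher of the fixed amount and the turnover-based amount forms the ceiling, the upper limits can be computed as a simple maximum. The turnover figure below is hypothetical.

```python
# Worked example of the upper fine limits in Art. 71 III/IV AI-Reg-E, assuming
# the higher of the fixed cap and the turnover-based amount applies.

def max_fine(annual_turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    return max(fixed_cap_eur, pct * annual_turnover_eur)

turnover = 2_000_000_000  # hypothetical worldwide annual turnover of EUR 2 bn
print(max_fine(turnover, 20_000_000, 0.04))  # 80,000,000 -> general violations (Art. 71 IV)
print(max_fine(turnover, 30_000_000, 0.06))  # 120,000,000 -> Art. 5 / Art. 10 violations (Art. 71 III)
```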

Concluding Remarks

On June 14, 2023, the European Parliament published its final position on the "AI Act"[8]. With the Commission's draft and the Council of the European Union's position[9], all drafts are now available, and the final phase of negotiations is underway.

However, fundamental changes are not expected; the proposed amendments generally provide for adjustments to the scope of the regulation, an expanded list of prohibited AI applications, and minor changes in the area of high-risk AI systems. In addition, "generative AI" will be explicitly included in the regulation, and the list of fines will be adjusted downwards.

In view of the upcoming EU Parliamentary elections in 2024, it can be assumed that a decision will be made quickly - an agreement is expected by the end of the year.

[1] A Europe for the digital age: Artificial intelligence

[2] EUR-Lex - 52021PC0206 - EN - EUR-Lex (europa.eu)

[3] eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52020DC0065

[4] AI HLEG - Assessment List for Trustworthy Artificial Intelligence (ALTAI) | Futurium (europa.eu)

[5] Interaction ᐅ spelling, meaning, definition, origin | Duden

[6] https://eur-lex.europa.eu/legal-content/DE/TXT/?uri=CELEX%3A52021PC0206

[7] EUR-Lex - 32019R1020 - EN - EUR-Lex (europa.eu)

[8] TA (europa.eu)

[9] eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CONSIL:ST_15698_2022_INIT


Marvin Rosenstengel is a law student at the Justus Liebig University in Giessen, Germany, majoring in public law.
