In the first part of this article, we presented an overview of the latest draft of the EU AI Act, aiming to clarify what this regulation will represent when it comes into force, its definitions, scope of action, and classification system. Part one also highlighted some additional topics recently included in the draft, such as the inclusion of generative AI systems and AI general principles in the regulation. In this article, we delve into more detail on the Act’s AI classification system and its related processes.

Data Leaders members can read the full post and connect with peers via the Data Leaders Hub.

As previously stressed, the EU AI Act adopts a risk-based approach to classifying AI systems. It regulates AI systems according to the level of risk they can pose to people’s health, safety or fundamental rights. The classification system includes four risk tiers: unacceptable, high, limited and minimal.

Unacceptable risk

AI systems considered to pose unacceptable risks will be prohibited in the EU (article 5). In the original draft of the proposal, those were the AI systems that:

  1. Deploy subliminal techniques beyond a person’s consciousness to distort their behaviour, causing or being likely to cause physical or psychological harm;
  2. Exploit people’s vulnerabilities due to their age or physical or mental disability, distorting their behaviour and causing or being likely to cause physical or psychological harm;
  3. Evaluate or classify the trustworthiness of natural persons based on their social behaviour and known or predicted personal or personality characteristics, for use by public authorities (government-run social scoring);
  4. Use ‘real-time’ remote biometric identification in publicly accessible spaces. Exceptions are listed in article 5(1)(d) of the Act.

This was a controversial topic during the negotiations on the Act. The agreement reached by the European Parliament made several changes to the list of banned AI practices. For instance:

  • The prohibition on the exploitation of the vulnerabilities of a person or a specific group of persons (2) has been extended to cover vulnerabilities related to personality traits, social or economic situations;
  • The prohibition on social scoring by public actors (3) has also been extended to private actors;
  • The use of AI systems that infer the emotions of natural persons (emotion recognition) is now banned in the areas of law enforcement, border management, the workplace, and education institutions;
  • ‘Post’ remote biometric identification systems are now among the banned AI systems unless they are subject to prior judicial authorisation and linked to a serious criminal offence;
  • The use of ‘real-time’ remote biometric identification in publicly accessible spaces (4) is now fully banned, as all the exceptions have now been removed from the draft;
  • Purposeful manipulation and deceptive techniques used by AI systems are now included as prohibited practices (1), despite concerns that intentionality might be difficult to prove.

High risk 

AI systems considered to pose high risks are permitted in the EU as long as they comply with the requirements established in Chapter 2 of the AI Act.

According to the Act (article 6), AI systems are classified as high-risk if they are intended to be used as a safety component of a product (AI component of a product), or are a product themselves (stand-alone AI product), and are both:

  1. Covered by the Union harmonisation legislation listed in Annex II (e.g., the EU Directive on machinery, the EU Directive on the safety of toys, the EU Regulation on medical devices); AND
  2. Required to undergo a third-party conformity assessment before being placed on the market or put into service, in accordance with the Union harmonisation legislation listed in Annex II.

In addition to the high-risk AI systems referred to above, the Act’s original proposal also presented a list of critical areas and use cases in Annex III as high-risk. The following are the areas listed in Annex III:

  1. Biometric identification and categorisation of natural persons;
  2. Management and operation of critical infrastructure;
  3. Education and vocational training;
  4. Employment, workers management and access to self-employment;
  5. Access to and enjoyment of essential private services and public services and benefits;
  6. Law enforcement;
  7. Migration, asylum and border control management;
  8. Administration of justice and democratic processes. 

After discussions on the Act’s proposal, MEPs introduced an extra layer to this risk classification. The agreement says that AI systems that fall under Annex III’s categories will only be deemed high-risk if they pose a significant risk of harm to health, safety or fundamental rights. A significant risk is defined as one that is significant as a result of the combination of its severity, intensity, probability of occurrence and duration of its effects, and its ability to affect an individual, a plurality of persons or a particular group of persons.

Finally, two other categories of AI systems were included as high-risk in the latest version of the AI Act (the sketch after the list below shows how these classification rules combine):

  1. AI systems used to manage critical infrastructure, like energy grids and water management systems, if they entail a severe environmental risk; and
  2. Recommender systems of very large online platforms, as defined under the Digital Services Act (DSA).
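
Taken together, these rules amount to a decision procedure. As a rough illustration, the sketch below models that procedure in Python. It is a simplified, hypothetical reading of the tests described above: the field names (annex_ii_safety_component, annex_iii_area, significant_risk, and so on) are invented for this example and are not terms from the Act.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystem:
    # Annex II route: safety component of (or itself) a regulated product
    # that requires a third-party conformity assessment
    annex_ii_safety_component: bool = False
    requires_third_party_assessment: bool = False
    # Annex III route: listed critical area plus the MEPs' "significant risk" filter
    annex_iii_area: Optional[str] = None       # e.g. "employment"
    significant_risk: bool = False
    # Additions in the latest draft
    critical_infra_with_severe_env_risk: bool = False
    vlop_recommender_system: bool = False      # recommender of a very large online platform (DSA)

def is_high_risk(system: AISystem) -> bool:
    """Simplified reading of the high-risk classification rules described above."""
    annex_ii_route = system.annex_ii_safety_component and system.requires_third_party_assessment
    annex_iii_route = system.annex_iii_area is not None and system.significant_risk
    extra_route = system.critical_infra_with_severe_env_risk or system.vlop_recommender_system
    return annex_ii_route or annex_iii_route or extra_route

# Example: a CV-screening tool (Annex III area 4, employment) judged to pose a significant risk
print(is_high_risk(AISystem(annex_iii_area="employment", significant_risk=True)))  # True
```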

All high-risk AI systems shall undergo the conformity assessment procedure described by the Act before they are placed on the market or put into service (article 43). Providers of high-risk AI systems will need to demonstrate the compliance of their AI systems with requirements related to:

  1. Risk management (article 9);
  2. Data and data governance (article 10);
  3. Technical documentation (article 11);
  4. Record keeping (article 12);
  5. Transparency and provision of information (article 13);
  6. Human oversight (article 14);
  7. Accuracy, robustness and cybersecurity (article 15).

If the high-risk AI system complies with all the requirements mentioned above, set out in Chapter 2 of the Act, the provider shall draw up an EU declaration of conformity (article 48) and affix the CE marking of conformity to the AI system (article 49). The CE marking certifies that a product has met EU health, safety and environmental requirements, which ensure consumer safety.
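
In practice, this amounts to a gating check: the declaration of conformity can only be drawn up once every Chapter 2 requirement is satisfied. The sketch below illustrates that idea; the requirement keys and the ready_for_ce_marking helper are invented for illustration and do not reflect any official tooling or terminology.

```python
# Hypothetical compliance checklist mirroring the requirements in articles 9-15.
CHAPTER_2_REQUIREMENTS = [
    "risk_management",               # article 9
    "data_and_data_governance",      # article 10
    "technical_documentation",       # article 11
    "record_keeping",                # article 12
    "transparency_and_information",  # article 13
    "human_oversight",               # article 14
    "accuracy_robustness_security",  # article 15
]

def ready_for_ce_marking(status: dict) -> bool:
    """True only if every Chapter 2 requirement is marked as satisfied."""
    return all(status.get(requirement, False) for requirement in CHAPTER_2_REQUIREMENTS)

status = {requirement: True for requirement in CHAPTER_2_REQUIREMENTS}
status["human_oversight"] = False
print(ready_for_ce_marking(status))  # False: no declaration of conformity or CE marking yet
```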

In addition to these obligations, the Act’s draft proposes that providers of high-risk AI systems shall (article 16):

  1. Have a quality management system in place that complies with the regulation (article 17);
  2. Comply with the registration obligations related to stand-alone AI systems (article 51);
  3. Take the necessary corrective actions if the AI system is not in conformity with the requirements set out in Chapter 2 after its deployment (corrective actions – article 21);
  4. Inform the national supervisory authorities of the Member States of the non-compliance and any relevant corrective actions taken (duty of information – article 22); and
  5. Cooperate with competent authorities, the AI Office and the Commission upon information requests (article 23).

After discussions on the Act’s draft, MEPs added two more obligations related to the sustainability of high-risk AI systems. The new additions propose that:

  • Providers keep records of those AI systems’ environmental footprints; and
  • Foundation models must comply with European environmental standards.

Furthermore, the version of the Act agreed by the European Parliament makes additional clarifications on the roles and responsibilities of AI system providers.

Finally, the Act also lists specific obligations for importers (article 26), distributors (article 27), and deployers of high-risk AI systems. The Regulation proposes that deployers of high-risk AI systems shall (article 29):

  1. Use AI systems in accordance with the instructions of use;
  2. Implement the human oversight measures indicated by the provider;
  3. Ensure that input data is relevant and sufficiently representative for the intended purpose of the AI system;
  4. Monitor the operation of the AI system on the basis of the instructions of use;
  5. Keep the logs automatically generated by the AI system that are under their control;
  6. Inform the provider or distributor and suspend the use of the AI system if they have identified a potential risk, a serious incident or malfunctioning;
  7. Conduct a fundamental rights impact assessment; and
  8. Comply with existing legal obligations set by other EU legislation, such as the GDPR.

Limited risk 

The Act imposes transparency obligations on (article 52):

  1. Providers of AI systems intended to interact with natural persons; 
  2. Users of emotion recognition or biometric categorisation systems that are not prohibited; and
  3. Users of AI systems that generate deep fakes. 

This means that natural persons shall be informed when they:

  1. interact with an AI system; 
  2. are subject to emotion recognition or biometric categorisation systems; and
  3. are exposed to deep fake content artificially generated or manipulated by an AI system. 

Additionally, in the latest draft, users must also obtain the prior consent of natural persons subject to emotion recognition or biometric categorisation systems that are not prohibited.
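
Purely as an illustration of how these limited-risk duties stack up, the snippet below maps a hypothetical system category to the disclosure and consent steps described above. The category names and the limited_risk_duties function are invented for this example.

```python
def limited_risk_duties(category: str) -> list:
    """Map a limited-risk system category to the transparency duties described above (illustrative)."""
    duties = {
        "interacts_with_persons": [
            "inform persons that they are interacting with an AI system",
        ],
        "emotion_or_biometric_categorisation": [
            "inform the persons concerned",
            "obtain their prior consent (latest draft)",
        ],
        "deep_fake_generation": [
            "disclose that the content is artificially generated or manipulated",
        ],
    }
    return duties.get(category, [])

print(limited_risk_duties("emotion_or_biometric_categorisation"))
```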

As highlighted in Part I of this article, the draft adopted by the European Parliament also added new rules regarding foundation models. However, the negotiated text makes it clear that these AI systems are still not classified as ‘high-risk systems’; they can therefore be positioned as ‘limited risk systems’.

The adopted draft of the Act defines the following as obligations of providers of foundation models:

  1. Demonstrate the identification, reduction and mitigation of reasonably foreseeable risks to health, safety, fundamental rights, the environment and democracy and the rule of law prior to and throughout development, as well as the documentation of remaining non-mitigable risks after development;
  2. Process and incorporate only datasets that are subject to appropriate data governance measures;
  3. Seek to achieve appropriate levels of performance, predictability, interpretability, corrigibility, safety and cybersecurity throughout the AI system’s lifecycle;
  4. Make use of applicable standards to reduce energy use, resource use and waste, as well as to increase energy efficiency, and the overall efficiency of the system;
  5. Draw up extensive technical documentation and intelligible instructions for use;
  6. Establish a quality management system; and
  7. Register the foundation model in the EU database.

If the foundation model is a generative AI system, providers shall also:

  1. Comply with the transparency obligations outlined in Article 52;
  2. Ensure adequate safeguards against the generation of content in breach of Union law; and
  3. Document and make publicly available a sufficiently detailed summary of the use of training data protected under copyright law.

Minimal risk

All AI systems not classified as ‘unacceptable risk’, ‘high-risk’ or ‘limited risk’ fall into the minimal risk category. Minimal-risk AI systems are generally not regulated. The Act suggests that EU Member States, the Commission and the AI Office should encourage and facilitate the drawing up of codes of conduct by individual providers of AI systems or by organisations representing them, including users and stakeholders. The codes of conduct are intended to foster the voluntary application (article 69):

  1. by non-high-risk AI systems of the requirements set out for high-risk AI systems; and
  2. by all AI systems (including high-risk and limited-risk ones) of requirements related to environmental sustainability, accessibility for persons with disability, stakeholders’ participation in the design and development of AI systems and diversity of development teams. 

Although Member States will need to lay down the rules on penalties applicable to infringements of the EU AI Act, the non-compliance penalties proposed by the regulation are significant. The highest fines are proposed for non-compliance with the rules on prohibited AI systems (unacceptable risk) and with the rules on data and data governance for high-risk AI systems. These fines can reach up to €40 million or, if the offender is a company, 7% of its total global annual turnover in the first case, and up to €20 million or 4% of total global annual turnover in the second. Submitting incorrect, incomplete or misleading information to competent authorities and bodies can result in fines of up to €10 million or, if the offender is a company, 2% of its total global annual turnover (article 71).
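
To put these caps in perspective, the short calculation below sketches the maximum exposure for a company of a given size. It assumes, as in comparable EU regimes such as the GDPR, that the higher of the fixed amount and the turnover percentage applies to companies; the tier names and the max_fine helper are invented for this example.

```python
from typing import Optional

# Fine caps as described above: (fixed amount in EUR, share of global annual turnover)
FINE_TIERS = {
    "prohibited_practices": (40_000_000, 0.07),     # unacceptable-risk violations
    "data_governance": (20_000_000, 0.04),          # high-risk data and data-governance rules
    "misleading_information": (10_000_000, 0.02),   # incorrect or misleading information to authorities
}

def max_fine(tier: str, annual_turnover_eur: Optional[float] = None) -> float:
    """Maximum possible fine for a tier; assumes the higher cap applies to companies."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    if annual_turnover_eur is None:  # offender is not a company
        return fixed_cap
    return max(fixed_cap, turnover_share * annual_turnover_eur)

# A company with €2bn in global annual turnover breaching the prohibited-practices rules:
print(f"€{max_fine('prohibited_practices', 2_000_000_000):,.0f}")  # €140,000,000
```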

FINAL REMARKS

The way data is used and processed by AI systems is already regulated by existing EU legislation, such as the GDPR. Still, the AI Act will add to the protection of data subjects while seeking to encourage innovation and boost the opportunities AI can create.

At Data Leaders, we are sharing the latest developments in AI Regulation and promoting peer exchanges on the subject to support our community in dealing with the uncertainties of the theme. Watch this space for more on AI Governance and Regulation, and let us know which AI topics you would like us to focus on.

This article was originally published exclusively for members of the Data Leaders service. Our service gives unique access to trusted, timely, relevant insights from Chief Data and Analytics Officers from across the world.

To find out how you and your organisation can benefit from becoming part of our service, contact us here.

If you are a Data Leaders Member, log in here to access more related content:

Membership

Make better informed decisions by assessing your data and analytics capability and use community intelligence to close the gap between strategy and execution.

Newsletter

The Essential Data monthly newsletter is where you can discover limited-availability articles, guides and frameworks otherwise only accessible to our members.