In this two-part article, we bring an overview of the draft of the EU AI Act, which was agreed upon in April 2023 and then voted on and adopted by the European Parliament on 14 June 2023. The regulatory proposal has now moved to the next stage of negotiations before it comes into force.


Although the beginnings of modern AI can be traced back to 1956, recent years have seen a significant boom in the development and deployment of AI systems. The lack of proper regulation of artificial intelligence has fuelled a race to market, creating great opportunities but also risks.

In pursuit of effective AI Governance, several private and public organisations have created their own AI guidelines and codes of conduct. At the same time, data leaders are exchanging their own experiences to take AI Governance to the next level.

At the international level, organisations such as the OECD and UNESCO have brokered intergovernmental agreements outlining global standards on AI to guide the ethical and safe development and deployment of the technology. However, proper regulation at the regional and national levels is still needed.

In April 2021, the European Commission proposed the first draft of the Artificial Intelligence Act. The legislation has since been under discussion among different stakeholders and, more recently, under negotiation in the European Parliament, which reached a provisional political deal on the AI Act on 27 April 2023 and voted to adopt its position on 14 June 2023. Until the very last moments, EU lawmakers were still negotiating some of the most controversial parts of the proposal and adding new rules for emerging AI technologies, such as generative AI systems.

In the next stage, known as ‘trilogues’, interinstitutional negotiations will take place between representatives of the European Parliament, the Council of the EU and the European Commission to finalise the text of the law. At this stage, the proposal may still be subject to adjustments as lawmakers negotiate sticking points and revise proposals. Trilogues can vary in length, especially when dealing with complex subjects such as AI.

The EU AI Act, once passed into law, will be a landmark regulation of AI with real global impact. Organisations and professionals developing and deploying AI systems, such as CDAOs, should therefore start to understand the Act, even before it comes into force, to guide internal processes. AI actors, mainly providers, will need to bear in mind the obligations, requirements and fines proposed by the Act when developing these systems.

Therefore, in part one of this article, we give an overview of the latest draft of this important regulatory proposal. In part two, we will explore in more detail the AI classification system and processes proposed by the EU AI Act. More focused discussions on specific themes of the Act, including the proposed incentives for AI development, will follow in upcoming articles.

WHAT IS THE EU AI ACT?

The EU AI Act is the first AI legal framework proposed by a major regulator, the European Union, and is considered a main pillar of the EU digital single market strategy. It sets out rules for all industries on the development, modification and use of AI-driven products, services and systems within the territory of the EU. AI systems developed or used exclusively for military purposes are excluded from the scope of the Regulation. The Act aims to codify the EU’s trustworthy AI paradigm from the lab to the market, requiring AI to be legally, ethically and technically robust while respecting democratic values and fundamental rights.

‘On Artificial Intelligence, trust is a must, not a nice to have.’ – Margrethe Vestager, Executive Vice-President for a Europe fit for the Digital Age

The proposed legislation focuses primarily on strengthening rules around data quality, transparency, human oversight and accountability. It also aims to address ethical questions and implementation challenges in various sectors. Like the EU’s General Data Protection Regulation (GDPR), the EU AI Act could become a global standard, determining to what extent AI has a positive rather than negative effect on people’s lives.

EU DEFINITION OF AI

The AI Act’s original draft defined an AI system as a:

‘software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with’ (Article 3).

This definition has been the subject of extensive discussion to ensure it provides sufficient clarity to distinguish AI from more classical software systems. Aligning with the OECD’s AI definition, the European Parliament therefore agreed to narrow the definition down to systems developed through machine learning and/or logic-based approaches to generate predictions, recommendations or decisions. The revised text also clarifies that an AI system can be designed to operate with varying levels of autonomy, with some human input.

SCOPE OF ACTION

When in force, this Regulation will have a global impact as it applies to (Article 2): 

  1. Providers placing AI systems on the market or putting them into service in the EU’s territory, even if those providers are not established in the EU;
  2. Deployers of AI systems established or located within the Union;
  3. Providers and deployers of AI systems established or located outside the EU, where the output produced by the AI system is intended to be used in the EU’s territory.

EU RISK-BASED APPROACH TO AI

The EU AI Act adopts a risk-based approach to classifying AI systems. It regulates AI systems according to the level of risk they can pose to people’s health, safety or fundamental rights. The classification system includes four risk tiers: unacceptable, high, limited and minimal.

In part two of this article, we will dive into each area of risk to examine the implications for data teams.

ADDITIONAL TOPICS

The deal recently reached on the AI Act also added two new topics to the Regulation’s draft: one related to generative AI tools and one concerning general principles of AI Governance.

General Purpose AI

In the adopted version of the Regulation, the European Parliament confirmed stricter obligations on foundation models, a sub-category of General Purpose AI (systems that do not have a single specific purpose), which includes tools such as ChatGPT. The Act’s draft proposes that providers of generative AI models must assess and mitigate possible risks and register their models in the EU database before releasing them on the EU market. They also have to comply with transparency requirements. An important consequence is that companies developing these generative AI tools would now have to disclose whether they have used copyrighted material in their systems and provide detailed summaries of the copyrighted data used for training.

General principles

The new article on AI general principles is not meant to create new obligations. However, these principles will have to be incorporated into technical standards and guidance documents. The principles include:

  1. Human agency and oversight,
  2. Technical robustness and safety,
  3. Privacy and data governance,
  4. Transparency,
  5. Social and environmental well-being,
  6. Diversity, non-discrimination and fairness.

If you are interested in better understanding the risk-based approach proposed by the EU AI Act draft, go to the second part of this article.

This article was originally published exclusively for members of the Data Leaders service. Our service gives unique access to trusted, timely and relevant insights from Chief Data and Analytics Officers from across the world.

To find out how you and your organisation can benefit from becoming part of our service, contact us here.
