Although the beginnings of modern AI can be traced back to 1956, recent years have seen a significant boom in the development and deployment of AI systems. The absence of proper regulation of artificial intelligence has created a race to the market, opening great opportunities but also significant risks.
In pursuit of effective AI Governance, several private and public organisations have therefore created their own AI principles and codes of conduct. At the same time, data leaders are exchanging their experiences to take AI Governance to the next level.
At the international level, organisations such as the OECD and UNESCO have brokered intergovernmental agreements outlining global standards to guide the ethical and safe development and deployment of AI. However, proper regulation at the regional and national levels is still needed.
In April 2021, the European Commission proposed the first draft of the Artificial Intelligence Act. The legislation has since been discussed among different stakeholders and, more recently, negotiated in the European Parliament, whose lead committees reached a provisional political deal on the AI Act on 27 April 2023 and voted to adopt it on 11 May 2023. Until the very last moments, EU lawmakers were still negotiating some of the most controversial parts of the proposal and adding new rules for emerging AI technologies, such as generative AI systems.
In the next stage, known as ‘trilogues’, interinstitutional negotiations will take place between representatives of the EU Parliament, the EU Council and the EU Commission to finalise the law. At this stage, the text of the Act may still be subject to adjustments as lawmakers negotiate sticking points and revise proposals. Trilogues can vary in length, especially for complex subjects such as AI, but the Act is expected to go to a plenary vote in mid-June.
The EU AI Act, once passed into law, will be a landmark AI regulation with genuine global impact. Organisations developing and deploying AI systems, and professionals such as CDAOs, must therefore start to understand the Act, even before it comes into force, to guide internal processes. AI actors, mainly providers, will need to bear in mind the obligations, requirements and fines proposed by the Act when developing these systems.
Therefore, in part one of this article, we provide an overview of the latest draft of this important regulatory proposal. In part two, we will explore in more detail the AI classification system and processes proposed by the EU AI Act. More focused discussions of specific themes of the Act, including the proposed incentives for AI development, will follow in upcoming articles.
The EU AI Act is the first proposed AI legal framework from a major regulator, the European Union, and is considered a main pillar of the EU digital single market strategy, setting out rules for all industries on the development, modification and use of AI-driven products, services and systems within the territory of the EU. AI systems developed or used exclusively for military purposes are excluded from the scope of the Regulation. The Act aims to codify the EU’s trustworthy AI paradigm from the lab to the market, requiring AI to be legally, ethically and technically robust while respecting democratic values and fundamental rights.
‘On Artificial Intelligence, trust is a must, not a nice to have.’ Margrethe Vestager, Executive Vice-President for a Europe fit for the Digital Age
The proposed legislation focuses primarily on strengthening rules around data quality, transparency, human oversight and accountability. It also aims to address ethical questions and implementation challenges in various sectors. Like the EU’s General Data Protection Regulation (GDPR), the EU AI Act could become a global standard, determining to what extent AI has a positive rather than negative effect on people’s lives.
The AI Act’s original draft defined an AI system as a:
‘software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with’ (Article 3).
This definition has been the subject of extensive discussion to ensure it distinguishes AI with sufficient clarity from more classical software systems. Aligning with the OECD’s AI definition, the European Parliament therefore agreed to narrow the definition to systems developed through machine learning and/or logic-based approaches that generate predictions, recommendations or decisions. The revised text also clarifies that an AI system can be designed to operate with varying levels of autonomy, with some human input.
When in force, this Regulation will have a global impact as it applies to (Article 2):
- Providers placing on the market or putting into service AI systems in the EU’s territory, even if those providers are not established in the EU;
- Deployers of AI systems established or located within the Union;
- Providers and deployers of AI systems established or located outside the EU, where the output produced by the AI system is intended to be used in the EU’s territory.
The EU AI Act adopts a risk-based approach to classifying AI systems. It regulates AI systems according to the level of risk they can pose to people’s health, safety or fundamental rights. The classification system includes four risk tiers: unacceptable, high, limited and minimal.