Preparing for the EU Regulation of AI
As 2024 gets underway, AI remains top of mind for technology leaders, and the EU's first regulation of the technology is likely to become law in the first half of the year. Although the regulation isn't final, we already know a great deal about what European regulators are thinking. At the same time, AI has been evolving at breakneck speed over the last year, and nearly every business is investing in or building AI solutions at an incredible rate, a pace that will continue through 2024. This puts technology leaders in a precarious position: they cannot afford to fall behind in the fast-paced world of AI, but they also cannot afford to waste time and money on something that could soon be banned or further regulated. In this post, we provide a crash course on what we know ahead of the final regulations being published, as well as how technology leaders can prepare. This does not constitute legal advice.
Am I Affected?
Based on what we know today, the proposed legislation is likely to be broad in scope. The current best guess is that it will affect the obvious organizations, namely those building, deploying, or selling AI systems in the EU regardless of where they are located, but also any organization whose AI system outputs are used within the EU. There will likely be some exclusions for AI systems that are open source, used solely for research or scientific discovery, or built for military and defense purposes. Further details will be published in the short term, and organizations should review them closely once the regulations are finalized.
Similarly, we don’t know exactly how the final regulation will define an AI system, but it is likely to be very close to the definition from the Organisation for Economic Co-operation and Development (OECD):
“An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”
This definition is also very broad and would likely include traditional machine learning algorithms and more advanced analytics approaches, not just the generative AI models dominating the conversation today.
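To make that breadth concrete, here is a minimal sketch, assuming a Python and scikit-learn setup that is purely illustrative and not drawn from the regulation, showing that even a conventional classifier infers predictions from inputs and would likely meet an OECD-style definition of an AI system:

```python
# Minimal sketch: a conventional classifier, not a generative model, that
# still "infers, from the input it receives, how to generate outputs such as
# predictions" that can influence decisions. The dataset and model choice
# are purely illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scale the features, then fit a plain logistic regression.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# A prediction like this, if used to prioritize follow-up decisions, is an
# output "that can influence physical or virtual environments."
print(model.predict(X_test[:5]))
```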
What is Required?
The requirements for those leveraging AI systems vary based on both the risk level of the system and the sophistication of the underlying model.
Risk Levels
The risk levels of these systems are likely to be split into four classifications based on their potential impact on society, ranging from explicitly prohibited to minimal risk. There are exceptions within each category that we won't go into here, since they are likely to change slightly in the final regulations.
| Classification | Description | Example Systems | Regulatory Requirements | Enforcement Timeframe |
| --- | --- | --- | --- | --- |
| Prohibited | Pose an unacceptable risk to the safety, security, and fundamental rights of people | Social scoring, real-time remote biometric identification in public spaces | Explicitly prohibited, with limited exceptions | 6 months |
| High Risk | Systems that serve as a safety component or that are explicitly listed in the regulation | Systems used in employment, credit scoring, critical infrastructure, education | Permitted with heavy compliance obligations that depend on whether you build or deploy the system, including risk and quality management programs, data governance for inputs, technical documentation, user transparency, human oversight, and registration | 24 months |
| Limited Risk | Systems that interact with humans directly | Chatbots, AI-generated content | Permitted with requirements related to transparency | 24 months |
| Minimal Risk | Everything else that meets the definition of AI but isn't prohibited or high risk | Spam filters, recommendation engines | Permitted with lighter compliance requirements | 24 months |
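As a practical starting point, many organizations begin with a simple inventory of their AI systems mapped to a provisional risk tier. The sketch below is a hypothetical Python example; the record fields, entries, and tier assignments are illustrative assumptions, not taken from the regulation, and show one way to structure that inventory against the tiers above:

```python
# Hypothetical sketch of an internal AI-system inventory keyed to the draft
# risk tiers above. The tier names and timeframes mirror the table; the class
# names and example entries are illustrative only.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "Prohibited"   # enforcement ~6 months after entry into force
    HIGH = "High Risk"          # enforcement ~24 months
    LIMITED = "Limited Risk"    # enforcement ~24 months
    MINIMAL = "Minimal Risk"    # enforcement ~24 months


@dataclass
class AISystemRecord:
    name: str
    owner: str
    used_in_eu: bool
    provisional_tier: RiskTier
    notes: str = ""


# Illustrative entries only; classifications must be confirmed against the
# final regulation text, ideally with legal counsel.
inventory = [
    AISystemRecord("resume-screening-model", "HR Tech", True, RiskTier.HIGH,
                   "Employment-related systems are expected to be high risk."),
    AISystemRecord("support-chatbot", "Customer Success", True, RiskTier.LIMITED,
                   "Interacts directly with users; transparency duties likely."),
    AISystemRecord("internal-spam-filter", "IT", True, RiskTier.MINIMAL),
]

for record in inventory:
    if record.used_in_eu:
        print(f"{record.name}: {record.provisional_tier.value} - {record.notes}")
```

Treat any such classification as provisional: it is a planning aid to scope compliance work, not a legal determination.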
General Purpose AI
There are also special requirements for models that perform ‘generally applicable functions,’ a classification intended to cover generative AI and foundation models. These models will largely be held to standards similar to the high-risk classification regardless of their application, although some requirements vary slightly or do not apply.
Looking Ahead
The regulation of AI was inevitable, and it isn’t a surprise that the EU will likely be the first to formally regulate it. There is still a lot to be finalized, and I am certain there will be extensive debate over the exact details. Even so, there is plenty that technology leaders can do now to prepare for the regulation in the EU and beyond.
References
Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, April 21, 2021
Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts - Annexes, April 21, 2021
Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts - General approach, November 25, 2022
Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts - Preparation for the trilogue, October 17, 2023