Artificial Intelligence is changing the world. It’s helping to make us more productive, creative, and efficient – transforming countless industries and environments. But it can be dangerous too – particularly when it’s implemented and used incorrectly. That’s why regulatory bodies are beginning to create frameworks to govern AI use – like the EU AI Act.
As the use cases for AI continue to evolve, we're beginning to rely on this technology for far more, from customer service to self-driving cars. Yet we still don't know how far we can really trust artificial intelligence, or rely on it to keep human beings safe.
Initiatives like the European Union Artificial Intelligence Act hope to give us a more “concrete” way to regulate artificial intelligence. The EU AI Act is the world’s first overarching strategy for making AI more trustworthy, ethical, and responsible in the years ahead. Here’s everything you need to know about the components of the Act – and how you can prepare to adhere to it going forward.
What is the EU AI Act? The Basics
The Artificial Intelligence Act for the European Union (the EU AI Act) is a new law introduced to govern the development and use of artificial intelligence across EU member states. It takes a "risk-based" approach to regulating AI technologies.
Although other regulators and countries are beginning to work on similar frameworks to guide the creation of responsible and safe AI, the EU AI Act is considered the first “comprehensive” framework for AI usage worldwide. It even prohibits specific uses of AI outright.
The act also establishes rules for the creation of general-purpose artificial intelligence models and imposes fines on those who break them. Penalties range from around EUR 7.5 million (or 1% of worldwide turnover) to EUR 35 million (or 7% of worldwide annual turnover).
Experts believe that just as the EU’s GDPR guidelines inspired other nations to create similar data privacy laws, the EU AI Act will drive the development of global AI governance standards.
The EU AI Act: Who Does it Apply To?
The EU’s Artificial Intelligence Act applies to various groups in the AI supply chain, including providers, importers, deployers, manufacturers, distributors, and representatives. For instance:
- Providers: The person or organization that develops an AI system or general-purpose AI model (or has one developed on their behalf) and places it on the market. AI systems are systems that process inputs to generate outputs with some level of autonomy. General-purpose AI models (GPAI) can perform a wide range of tasks and commonly power chatbots and generative AI tools for creating content.
- Deployers: This term refers to anyone who uses an AI system, such as an organization that might use an AI chatbot to handle customer service tasks or an individual user who uses a chatbot to ask questions about a topic for a work-based task.
- Importers: People or organizations located in the European Union that bring AI systems created by a party outside of the EU onto the European market.
Notably, the AI Act also applies to deployers and providers outside of the EU if their systems or the outputs of their systems are used in the EU. For instance, imagine a firm in the EU sends data to an AI developer outside of the EU, and the developer processes that data with a model and sends it back to the original company. The developer would also be bound by the AI Act.
The Requirements of the EU AI Act
As mentioned above, the EU AI Act regulates the use and development of AI systems according to their perceived “risk” level. It prohibits certain practices that are considered to pose an “unacceptable” risk, sets rules for general-purpose AI models, and imposes standards for deploying high-risk systems.
To adhere to the EU AI Act rules, companies need to carefully assess the inventory of AI solutions they’re using, and classify their potential “risk”. The risk levels identified by the EU AI Act include:
Unacceptable Risk
Systems that fall into the “unacceptable risk” category are prohibited completely. For instance, using or creating an AI system that intentionally manipulates people into making dangerous choices is prohibited.
Some other examples of systems that pose unacceptable risks include:
- Social scoring tools that classify individuals based on their social behavior, leading to potential discrimination or detrimental treatment of specific groups.
- Emotion recognition systems used in educational institutions and workplaces, unless they serve medical or safety purposes.
- AI tools used to exploit specific vulnerabilities caused by a person’s age, medical condition, or another specific factor.
- AI systems used to scrape facial images from CCTV footage and the internet to build facial recognition and biometric identification databases.
- Biometric categorization systems that classify individuals based on sensitive characteristics, such as specific facial features or skin color.
- Tools used by law enforcement for real-time remote biometric identification – unless an exception applies.
High Risk
The EU AI Act allows for developing and using “high-risk” systems. However, these systems need to comply with various requirements and undergo specific conformity assessments. They also need to be registered in an EU database so they can be monitored and tracked.
Examples of high-risk systems usually involve those related to the operation of critical infrastructure and systems used in hiring processes, credit scoring, insurance claims processing, and more. For instance, an AI system used to recruit candidates or evaluate people for promotions would be a high-risk system.
Systems used to influence the outcome of elections, and AI tools used in certain medical devices, educational environments, critical infrastructure management, and essential public or private services environments can also be considered “high risk”.
Exceptions do exist for specific high-risk AI systems on this list if they don’t pose a significant threat to individuals’ rights, safety, or health. However, providers need to document and explain why and how the system isn’t truly “high risk”.
Requirements for High-Risk Systems
If an AI system is considered high-risk, it needs to comply with requirements like:
- Implementing continuous risk management practices to monitor the system's outputs and behavior for dangers and irregularities throughout its lifecycle.
- Using strict data governance practices to ensure that training, validation, and testing data sets meet specific quality criteria. For instance, companies need to collect data responsibly and mitigate bias.
- Maintaining technical documentation covering the system's design, the capabilities and limitations of the model, and the compliance practices implemented.
There are also specific obligations linked to AI transparency that certain types of AI need to adhere to. For instance, AI systems that interact with human beings need to inform users that they’re talking to artificial intelligence. Plus, AI systems used to generate images, text, and other forms of content need to be able to mark those outputs as “AI generated” or manipulated. This is part of an effort to reduce the risks caused by deepfakes.
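To make the disclosure idea concrete, here's a minimal sketch of attaching an "AI generated" flag and basic provenance metadata to a model's output before it's shown or stored. The Act points toward machine-readable marking techniques such as watermarking; this simplified example, with hypothetical field names, only illustrates the principle.

```python
# Minimal sketch: attach an "AI generated" disclosure to model output.
# The field names are illustrative assumptions, not a format defined by the AI Act.
import json
from datetime import datetime, timezone


def wrap_generated_output(text: str, model_name: str) -> str:
    """Package generated text with provenance metadata before display or storage."""
    return json.dumps({
        "content": text,
        "ai_generated": True,          # explicit disclosure flag
        "model": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    })


print(wrap_generated_output("Here is a summary of your invoice...", "example-model"))
```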
Other Requirements for AI Providers and Deployers
The rules that need to be followed for high-risk systems vary depending on a person’s relationship with the tool. For instance, providers of high-risk AI tools need to comply with the requirements mentioned above and have a clear quality management system in place. They also need to commit to constant post-market monitoring of their tools.
Deployers, conversely, need to commit to taking the right technical and organizational measures to ensure they’re using systems safely. They also need to maintain system logs to track AI usage. Deployers using high-risk systems to provide essential services (such as government bodies), also need to conduct a fundamental rights impact assessment before initial use.
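As one possible approach to the log-keeping obligation, a deployer could record each interaction with a high-risk system as an append-only log entry that can be audited later. The fields below are assumptions made for illustration, not a format the Act prescribes.

```python
# Sketch of a simple append-only usage log for an AI system.
# The record structure is an assumption for illustration purposes only.
import json
from datetime import datetime, timezone


def log_ai_usage(log_path: str, system_name: str, user_id: str,
                 input_summary: str, output_summary: str) -> None:
    """Append one JSON line per interaction so AI usage can be audited later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system_name,
        "user": user_id,
        "input_summary": input_summary,
        "output_summary": output_summary,
    }
    with open(log_path, "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(record) + "\n")


# Hypothetical example: an analyst using a credit-scoring model.
log_ai_usage("ai_usage.jsonl", "credit-scoring-model", "analyst-42",
             "loan application #1001", "score: 0.73")
```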
Limited or Minimal Risk
The remaining systems identified by the EU AI Act fall into the "limited" or "minimal risk" categories. Limited-risk systems face lighter, transparency-focused obligations. For instance, providers of AI chatbots must ensure that end users know they're interacting with AI.
The AI Act’s rules don’t currently govern those using and creating minimal-risk systems. For instance, anyone can create an AI solution that helps to filter “spam” from an inbox. However, the AI Act notes that rules may change going forward.
The Rules Around General Purpose AI Models
Notably, the EU AI Act also outlines rules for general-purpose models. Providers creating general-purpose models must establish policies that respect EU copyright law. They'll also need to publish a sufficiently detailed summary of the content used to train their models.
If a GPAI model is considered to pose a systemic risk, there are additional obligations to consider. A "systemic risk" is a risk that could significantly impact the EU market because of the model's reach or its potential negative effects.
The Act highlights training compute as a criterion for identifying systemic risk. For instance, if the cumulative computing power used to train a model exceeds 10^25 floating-point operations (FLOPs), it is presumed to pose a systemic risk.
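As a rough illustration of how a provider might sanity-check this threshold, the sketch below estimates training compute with the common "6 x parameters x tokens" rule of thumb (a heuristic from the ML community, not a method defined by the Act) and compares it against the 10^25 FLOP presumption. The function names and figures are hypothetical.

```python
# Hypothetical sketch: checking a GPAI model against the EU AI Act's
# 10^25 FLOP training-compute presumption for systemic risk.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # threshold named in the Act


def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate using the ~6 * N * D heuristic.

    This approximation is an industry rule of thumb, not an AI Act method;
    a real assessment would use measured cumulative compute.
    """
    return 6.0 * parameters * training_tokens


def presumed_systemic_risk(total_flops: float) -> bool:
    """True if cumulative training compute meets or exceeds the threshold."""
    return total_flops >= SYSTEMIC_RISK_FLOP_THRESHOLD


# Illustrative (made-up) model: 70B parameters trained on 15T tokens.
flops = estimate_training_flops(parameters=70e9, training_tokens=15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Presumed systemic risk:", presumed_systemic_risk(flops))
```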
Those creating models that pose a systemic risk need to document and report serious incidents to the EU AI Office and implement advanced cybersecurity measures for their models.
The EU AI Act: When Does it Take Effect and What are the Fines?
The EU AI Act officially entered into force on August 1, 2024. However, the European Union outlined that the law would be rolled out in stages. From February 2, 2025, the prohibitions on "unacceptable risk" AI take effect, along with AI literacy obligations; the rules governing general-purpose AI models apply from August 2, 2025.
Providers of GPAI models placed on the market before that date will have until August 2, 2027, to ensure compliance. On August 2, 2026, the rules for high-risk AI tools begin to apply. On August 2, 2027, the rules for AI systems that are products or safety components of products regulated by other EU laws start to apply.
For non-compliance with the prohibited AI practices, organizations can be fined up to 7% of worldwide annual turnover or EUR 35,000,000 (whichever is higher). For most other violations, the fines are up to 3% of worldwide annual turnover or EUR 15,000,000.
Fines for supplying incorrect or misleading information to the authorities can be as high as 1% of worldwide annual turnover or EUR 7,500,000. Notably, for smaller startups and SMEs, the applicable cap is the lower of the two amounts.
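To make the "whichever is higher" logic concrete, here's a minimal sketch that combines the caps above with a company's worldwide annual turnover. The tier names and figures come straight from this article; the function itself is only an illustration and is not legal guidance.

```python
# Illustrative sketch of the EU AI Act penalty caps described above.
# Not legal advice: actual fines are determined case by case by the authorities.

FINE_TIERS = {
    # violation type: (fixed cap in EUR, share of worldwide annual turnover)
    "prohibited_practice": (35_000_000, 0.07),
    "other_violation": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}


def maximum_fine(violation: str, worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    """Return the applicable cap: the higher of the two amounts,
    or the lower of the two for smaller startups and SMEs."""
    fixed_cap, pct = FINE_TIERS[violation]
    turnover_based = pct * worldwide_turnover_eur
    return min(fixed_cap, turnover_based) if is_sme else max(fixed_cap, turnover_based)


# Hypothetical company with EUR 2 billion worldwide annual turnover.
print(maximum_fine("prohibited_practice", 2_000_000_000))            # ~EUR 140 million (7% of turnover)
print(maximum_fine("misleading_information", 2_000_000_000))         # ~EUR 20 million (1% of turnover)
print(maximum_fine("prohibited_practice", 50_000_000, is_sme=True))  # ~EUR 3.5 million (lower of the two)
```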
How to Prepare for the EU AI Act
Although there's a while to go before every element of the EU AI Act takes effect, companies should start preparing now. The best first step is to learn as much as you can about the AI Act and how it affects your organization. From there:
1. Establish an AI Governance Team
Mitigating the challenges of the EU AI Act will require companies to leverage legal, technical, and operational expertise. The firms that fare best will likely be those with a designated team of governance specialists. Create a cross-functional team with varied expertise to guide your company.
2. Build an AI Inventory
Currently, the EU AI Act regulates specific AI-related technologies. To ensure you're implementing the right governance standards, you need an inventory of your AI solutions.
Ensure you have clear documentation outlining whether each AI system was developed internally or by a third party, and who is responsible for its actions. Record what the tool is designed to do and where it will be used. Additionally, keep track of the type of content or "input" used by the AI model, and what the model produces as "output".
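One lightweight way to capture all of this is a structured record per system. The sketch below shows what a single inventory entry might look like; the field names are our own suggestions, not something mandated by the AI Act.

```python
# Minimal sketch of an AI inventory record. The fields mirror the points above
# (origin, ownership, purpose, inputs/outputs); the names are not mandated by the Act.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class AIInventoryEntry:
    name: str
    developed_internally: bool                  # built in-house or by a third party?
    vendor: Optional[str]                       # supplier, if third-party
    responsible_owner: str                      # person or team accountable for the system
    intended_purpose: str                       # what the tool is designed to do
    deployment_context: str                     # where it will be used
    input_types: list = field(default_factory=list)   # e.g. "customer chat messages"
    output_types: list = field(default_factory=list)  # e.g. "text responses"


# Fictional example entry:
support_bot = AIInventoryEntry(
    name="Support chatbot",
    developed_internally=False,
    vendor="ExampleVendor (hypothetical)",
    responsible_owner="Customer Experience team",
    intended_purpose="Answer routine customer-service questions",
    deployment_context="Public website chat widget",
    input_types=["customer chat messages"],
    output_types=["text responses"],
)
print(support_bot)
```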
3. Assess AI Act Classifications
After you have your AI inventory, you’ll need to determine whether your technology is in the “scope” of the AI Act. You’ll also need to figure out whether you’re acting as a provider, importer, distributor, or deployer.
Plus, you'll need a clear view of whether each AI system you're using qualifies as prohibited ("unacceptable risk"), high-risk, limited-risk, minimal-risk, or general-purpose AI. Since the prohibited AI rules take effect first, it's worth prioritizing a governance framework for these. However, remember to think long-term.
Ensure you have a risk management framework for high-risk AI and a strategy for monitoring model output. For general-purpose models, prepare technical documentation in advance.
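A simple starting point for this triage is to map each inventory entry to a provisional risk tier and revisit the mapping as official guidance evolves. The tiers below follow the categories discussed in this article; the keyword-based mapping is purely illustrative and no substitute for a proper legal assessment.

```python
# Illustrative first-pass classification of inventoried systems into the
# AI Act's risk tiers. A real assessment needs legal review; this only
# organizes the triage.

# Hypothetical mapping of use-case keywords to a provisional tier.
PROVISIONAL_TIERS = {
    "social scoring": "prohibited",
    "recruitment": "high",
    "credit scoring": "high",
    "customer chatbot": "limited",
    "spam filter": "minimal",
    "general-purpose model": "gpai",
}


def provisional_tier(use_case: str) -> str:
    """Return a provisional tier for a use case, defaulting to 'high'
    so unrecognized systems get reviewed rather than ignored."""
    for keyword, tier in PROVISIONAL_TIERS.items():
        if keyword in use_case.lower():
            return tier
    return "high"  # conservative default pending review


print(provisional_tier("Customer chatbot for billing questions"))  # limited
print(provisional_tier("CV screening for recruitment"))            # high
```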
4. Build Your AI Governance Framework
Next, with your team, begin developing and implementing internal policies. Ensure you can align how you create and use AI with ethical, transparency, security, and privacy guidelines. The AI Act can help dictate some of the elements of this framework, but it’s worth considering other compliance obligations that your company might be subject to.
Research emerging laws and pre-existing laws, such as GDPR, that might affect how you collect and manage data to train AI models.
5. Constantly Invest in AI Education
Starting February 2, 2025, the EU AI Act requires covered organizations to promote AI literacy among their staff. They'll need to ensure these professionals have a sufficient understanding of how AI systems work, how to mitigate their risks, and how to meet upcoming compliance requirements.
Start implementing standardized, organization-wide training initiatives as soon as possible. You could even implement these training initiatives into onboarding strategies for new employees. Remember to consider role-based training options for anyone likely to interact with AI tools more often.
The EU AI Act and the Future of AI Rules
The EU AI Act will ultimately challenge companies, AI developers, and users. However, it also ensures that Europeans can more effectively trust in the systems they’re using. While many AI systems pose minimal or limited risks, some can be extremely dangerous when used incorrectly.
The EU AI Act ensures that we have valuable safety rails as we move into a new era of AI. With these guidelines, we can more effectively mitigate the risks posed by deepfakes and dangerous AI systems.
There's still a way to go before we have a global framework for keeping AI safe, ethical, and responsible. Fortunately, the EU AI Act is a valuable first step.