A comprehensive EU AI Act Summary [2024]
Summary
The EU AI Act is the first law of its kind regulating the use of AI, and one that will impact hundreds of thousands of businesses developing or implementing AI solutions in their operations.
The EU AI Act aims to make AI safer and more secure for public and commercial use, mitigate its risks, ensure it remains under human control, reduce its negative impacts on the environment and society, keep our data safe and private, and ensure transparency in almost all forms of AI use.
The Act has defined four key risk categories into which different types of AI and their associated risks are grouped. Businesses need to be aware of each risk category, how their own AI systems might be categorized, and the regulatory implications for each system.
- Unacceptable-risk AI systems
- High-risk AI systems
- Limited-risk AI systems
- Minimal-risk AI systems
EU AI legislation will be made applicable via a phased approach, meaning that businesses operating in the EU may be required to comply with parts of the EU AI Act as early as February 2025. For example, from 2 February 2025, organizations operating in the European market must ensure adequate AI literacy among employees involved in the use and deployment of AI systems, and AI systems that pose unacceptable risks will be banned.
The need for the EU AI Act
Artificial Intelligence (AI) is an emerging technology that is changing the face of international business and, on a broader scale, everyday life, both online and off. AI has already been implemented across a diverse range of sectors, from predictive text, data processing, and analysis to game design, climate prediction, and industrial automation.
Indeed, the evolution of AI has been met with a degree of positivity from most business leaders. Around 64% of leaders believe that AI will improve their company’s productivity. Additionally, the AI market is expected to grow at an annual rate of 37.3% through 2030, and will likely be worth upwards of USD 407 billion by 2027.
At the same time, business owners are unsure of the risks involved in integrating AI into their operations. While a majority of CIOs report AI as part of their innovation plan, fewer than half feel their companies are prepared to handle the associated risks. On top of this, 75% of CROs interviewed by the World Economic Forum reinforced this fear, reporting that the use of AI poses a reputational risk to their organization.
Indeed, poor-quality AI systems can present numerous threats, including risks to privacy, data misuse, and the undermining of individual autonomy. These concerns extend to broader social and environmental impacts, which has prompted international regulatory bodies to take action.
The EU AI Act is one such landmark piece of legislation, the first of its kind in the world, aimed at regulating AI use in the European Union to build trust and safety around this new technology.
For IT leaders, compliance with regulations like the EU AI Act will be not only mandatory, but also key to the safe, secure, and beneficial integration and development of AI in your business.
For those operating within or trading with the EU, this article will tell you everything you need to know about the EU AI Act and will guide you through the steps necessary to comply with this groundbreaking AI legislation.
What is the EU AI Act?
The European AI Act, adopted by the European Parliament in March 2024 and approved by the Council in May 2024, is the first comprehensive law regulating artificial intelligence (AI) in the world.
Before the European Union AI Act, the EU had already established the Ethics Guidelines for Trustworthy AI, a set of non-binding guidelines for safe and ethical AI implementation introduced in 2019. These provided a framework for developing and deploying AI systems in a manner aligned with European values, and emphasized seven key requirements, including human oversight, privacy, and non-discrimination.
The EU AI Act builds upon these principles by making them legally binding.
The act classifies AI systems based on the risk they pose to users, with corresponding levels of regulation to “make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly… [and that] AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes.”
The EU AI Act risk categories
By emphasizing a risk-based approach, the European AI Act classifies AI systems by the potential risk they pose to individuals and society, and applies corresponding regulations to each category to enhance safety and compliance.
The Act defines four key risk categories into which different types of AI and their associated risks are grouped. Businesses need to be aware of each risk category, how their own AI systems might be categorized, and the regulatory implications for each system.
- Unacceptable-risk AI systems
- High-risk AI systems
- Limited-risk AI systems
- Minimal-risk AI systems
Category 1: Unacceptable-Risk AI (e.g., social scoring by governments)
Chapter 2 of the EU AI Act defines the ‘Unacceptable Risk’ category for AI systems and the regulations applied to them.
Definition: Systems considered a threat to individuals, such as those manipulating vulnerable groups, engaging in social scoring, or using biometric identification and categorization.
Regulation: These systems are banned, including real-time and remote biometric identification such as facial recognition, except under strictly defined law-enforcement conditions that require prior court approval.
Category 2: High-Risk AI (e.g., healthcare applications)
Chapter 3 of the EU AI Act defines the ‘High Risk’ category for AI systems and the regulations applied to them.
Definition: Systems impacting safety or fundamental rights, including AI in toys, medical devices, critical infrastructure, education, employment, essential services, law enforcement, migration, and legal interpretation.
Regulation: These systems must be registered in an EU database, thoroughly risk-assessed, and regularly reported on to ensure strict compliance and oversight. Assessments will have to be conducted before high-risk AI systems are put on the market and then throughout their lifecycle. People will also have the right to lodge complaints about AI systems with their designated national authorities.
Obligations: Businesses must group any high-risk AI they employ into one of two subcategories: high-risk systems used in products covered by the EU’s product safety legislation and those which fall into other specific areas, such as those listed in the definition above.
Category 3: Limited-Risk AI (e.g., AI systems with transparency obligations)
Chapter 4 of the EU AI Act defines the ‘Limited Risk’ category for AI systems and the regulations applied to them.
Definition: The limited-risk category includes generative AI models like ChatGPT. These are not high-risk per se but must meet transparency requirements and comply with EU copyright law.
Regulation: Providers of limited-risk AI models and applications must disclose to users that their content is AI-generated, must prevent illegal content generation, and must also publish summaries of copyrighted data used for training. High-impact AI models must undergo thorough evaluations and report serious incidents to the European Commission. AI-generated or modified content (e.g., deepfakes) must be clearly labeled as such.
Category 4: Minimal-Risk AI (e.g., AI used in games or spam filters)
Chapter 5 of the EU AI Act defines the ‘Minimal Risk’ or ‘General Purpose’ category for AI systems and the regulations applied to them.
Definition: These applications are already widely deployed and make up most of the AI systems we interact with today. Examples include spam filters, AI-enabled video games, and inventory-management systems.
Regulation: Most AI systems in this category face no obligations under the AI Act, but companies can voluntarily adopt additional codes of conduct. Primary responsibility will be shouldered by the “providers” of AI systems, though any business that utilizes them should remain vigilant about its compliance obligations.
What this means for business owners and executives: As a business owner utilizing minimal-risk AI from a third-party vendor, you should still responsibly source, employ, and risk-assess each new AI system you adopt, even though the EU AI Act imposes no specific obligations on these systems.
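To make the four tiers concrete, below is a minimal, hypothetical Python sketch of how an organization might inventory its AI systems by risk tier and look up a one-line summary of each tier’s regulatory consequence. All class, system, and obligation names are illustrative, and the obligation strings compress the Act’s requirements heavily; this is a starting point for an internal inventory, not a legal classification tool.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """The four EU AI Act risk tiers described above."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


# One-line summaries of each tier's regulatory consequence, compressed from
# the category descriptions above. Simplified for illustration only.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited: may not be placed on the EU market.",
    RiskTier.HIGH: "Register in the EU database; assess risk before market entry and throughout the lifecycle.",
    RiskTier.LIMITED: "Meet transparency requirements: disclose AI-generated content and label deepfakes.",
    RiskTier.MINIMAL: "No mandatory obligations; voluntary codes of conduct may apply.",
}


@dataclass
class AISystem:
    """A single entry in an internal AI-system inventory (hypothetical)."""
    name: str
    purpose: str
    tier: RiskTier

    def obligation(self) -> str:
        return OBLIGATIONS[self.tier]


# Hypothetical inventory entries; real classification requires legal review.
inventory = [
    AISystem("resume-screener", "candidate ranking in hiring", RiskTier.HIGH),
    AISystem("support-chatbot", "customer-facing generative chat", RiskTier.LIMITED),
    AISystem("spam-filter", "inbound email filtering", RiskTier.MINIMAL),
]

for system in inventory:
    print(f"{system.name}: {system.tier.value} risk -> {system.obligation()}")
```

In practice, classifying a system requires legal review; a structure like this simply keeps the inventory and its regulatory mapping in one auditable place.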
The EU AI Act timeline
EU AI legislation will be made applicable via a phased approach, with different regulatory requirements triggered at intervals of six to 36 months from when the act entered into force.
When does the EU AI Act take effect?
Different parts of the act will apply six, 12, 18, 24, and 36 months from the act’s entry into force on 1 August 2024, meaning that businesses operating in the EU may be required to comply with parts of the EU AI Act as early as 2 February 2025.
Here’s an overview of when the different phases of the EU AI Act become effective; a short code sketch after the timeline recaps these dates.
2 February 2025
- The ban on AI systems that pose unacceptable risks. This applies six months after the act entered into force (i.e., 2 February 2025).
- As of 2 February 2025, the EU AI Act mandates that organizations operating in the European market ensure adequate AI literacy among employees involved in the use and deployment of AI systems.
2 August 2025
- Obligations for providers of general-purpose AI (GPAI) models and provisions on penalties, including administrative fines, will begin to apply 12 months from when the act entered into force (i.e., 2 August 2025).
- Rules governing general-purpose AI systems that need to comply with transparency requirements will also begin to apply from 2 August 2025.
2 August 2026 and 2 August 2027
- The EU’s AI legislation will begin applying to high-risk AI systems 24 and 36 months after the act entered into force (i.e., 2 August 2026 and 2 August 2027).
Note: For the most up-to-date information, including developments after this article’s publication date, we advise you to visit the official website of the European Parliament.
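To keep the phase-in dates above in one place, here is a small illustrative Python helper that lists them and reports which provisions already apply on a given date. The dates mirror the timeline above; the descriptions are simplified, and this is a convenience sketch, not legal guidance.

```python
from datetime import date

# Phase-in milestones as described in the timeline above
# (the act entered into force on 1 August 2024).
MILESTONES = [
    (date(2025, 2, 2), "Ban on unacceptable-risk AI systems; AI-literacy requirement"),
    (date(2025, 8, 2), "Obligations for GPAI providers; penalties; transparency rules"),
    (date(2026, 8, 2), "Rules for high-risk AI systems (24-month phase)"),
    (date(2027, 8, 2), "Rules for remaining high-risk AI systems (36-month phase)"),
]


def provisions_in_effect(as_of: date) -> list[str]:
    """Return the milestones that already apply on the given date."""
    return [description for deadline, description in MILESTONES if as_of >= deadline]


print(provisions_in_effect(date(2026, 1, 1)))
# ['Ban on unacceptable-risk AI systems; AI-literacy requirement',
#  'Obligations for GPAI providers; penalties; transparency rules']
```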
Complying with the EU AI Act is a must for almost any business operating in the EU that incorporates AI somewhere in its value chain. Compliance will typically involve the following steps (sketched in code after this list):
- Identifying the categories of AI your organization utilizes
- Assessing their risk levels
- Implementing robust AI-governance frameworks
- Ensuring transparency in AI operations
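As a concrete illustration of these four steps, here is a hypothetical Python checklist structure an organization might use to track them per AI system. The field names and the pass/fail logic are illustrative only; they do not encode the Act’s actual legal tests.

```python
from dataclasses import dataclass


@dataclass
class ComplianceChecklist:
    """Tracks the four compliance steps above for one AI system (hypothetical)."""
    system_name: str
    risk_tier_identified: bool = False   # step 1: AI category identified
    risk_assessed: bool = False          # step 2: risk level assessed
    governance_in_place: bool = False    # step 3: governance framework implemented
    transparency_ensured: bool = False   # step 4: transparency measures in place

    def open_items(self) -> list[str]:
        """Return human-readable names of the steps still outstanding."""
        labels = {
            "risk_tier_identified": "Identify AI category",
            "risk_assessed": "Assess risk level",
            "governance_in_place": "Implement AI-governance framework",
            "transparency_ensured": "Ensure transparency",
        }
        return [label for attr, label in labels.items() if not getattr(self, attr)]


checklist = ComplianceChecklist("support-chatbot", risk_tier_identified=True)
print(checklist.open_items())
# ['Assess risk level', 'Implement AI-governance framework', 'Ensure transparency']
```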
By prioritizing AI compliance, businesses can not only mitigate legal risks but also strengthen trust and reliability in their AI systems, positioning themselves as leaders in a technology-driven future.
Adequate AI literacy (required for all EU organizations from 2 February 2025)
As part of the phased compliance rollout, the EU AI Act emphasizes the importance of AI literacy among employees to ensure safe and compliant AI usage.
Starting 2 February 2025, the EU AI Act requires organizations in the European market to ensure that employees involved in AI use and deployment have adequate AI literacy. This applies to both providers and users of AI systems.
According to Article 4 of the EU AI Act:
“Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.”
What is AI literacy?
AI literacy, or artificial intelligence literacy, refers to the ability to understand, use, monitor, and critically reflect on AI applications.
What does this AI literacy requirement mean for your organization?
In layman’s terms, organizations must ensure their staff is sufficiently educated in the operation and use of AI systems. This emphasis on education goes beyond regulatory alignment. It also helps mitigate risks such as unauthorized data exposure or biased outputs.
For example, engineers who aren’t aware of security risks might accidentally share sensitive code with external AI systems, or HR staff using AI in hiring may overlook potential biases in the AI’s recommendations.
Are there fines if your organization doesn’t comply?
AI literacy forms a crucial part of a robust AI governance framework.
While there are no direct fines for non-compliance with Article 4, ensuring AI literacy may influence the severity of penalties in cases of other violations.
How can your organization prepare for the upcoming AI literacy requirements?
As the EU AI Act’s February 2025 deadline approaches, taking a proactive approach to AI literacy and upskilling is essential for compliance and risk mitigation.
Recently, we released our AI readiness guide for organizations. This guide provides practical steps for leaders on AI governance, risk management, development, and security.
Upskilling and establishing a learning organization is one of the key steps highlighted in this guide.
5 key strategies to improve AI literacy in your organization
Building on recommendations from our recently released AI Readiness Guide, here are 5 key strategies to elevate organizational AI literacy.
- Define essential AI skills: Identify the skills and knowledge required to support AI initiatives.
- Assess the workforce’s current skills: Evaluate the current level of AI knowledge and skills within your workforce. Where necessary, invest in education, recruitment, or contracting to fill knowledge gaps.
- Develop tiered training programs: Provide AI education at multiple levels—from foundational AI literacy for all staff, covering AI policy, ethics, and security, to advanced training for technical roles such as data scientists and engineers.
- Establish a community of practice: Create a cross-disciplinary learning community where employees from diverse functions—strategy, ethics, privacy, and technical AI—can share insights, best practices, and advancements.
- Access external knowledge: Establish partnerships with legal, ethical, and technical AI experts to stay informed on industry standards and emerging best practices.
Together, these steps build a foundation of AI literacy and governance, supporting both compliance and responsible AI innovation within your organization.
International trends and themes in AI regulation
While the European Union is pioneering AI regulation with its AI Act, similar rules are being developed across the world.
According to the September 2023 Global AI Legislation Tracker, other countries around the world are also developing AI governance legislation to match the rapid growth and diversity of the technology. Efforts include creating comprehensive laws, drafting specific regulations for particular use cases, and implementing voluntary guidelines and standards.
Stanford University has noted a significant increase in the number of countries with AI-related laws, growing from 25 countries in 2022 to 127 in 2023. While individual regions, including the EU and the US, are advancing their own frameworks, multilateral coordination is also on the rise. This includes adopting AI principles from the Organization for Economic Co-operation and Development, and discussions within the United Nations and G7. The Centre for Strategic & International Studies highlights that these efforts aim to balance the potential risks of AI against the benefits it offers.
McKinsey reports that different countries are taking varied approaches to AI regulation, which is why it is so pressing for IT organizations around the world to consult their legal teams about their AI compliance requirements.
Despite regional differences in AI regulation, certain common themes are emerging globally. Understanding these themes can help businesses prepare for future compliance across various markets. Below, we briefly define these key trends in AI regulation:
Human agency and oversight
AI systems should empower people, uphold human dignity, and remain under human control. Regulators emphasize the need for appropriate human oversight to ensure AI serves humanity’s best interests.
Accountability
There is a demand for mechanisms that ensure responsibility and accountability for AI systems. This includes top management commitment, organization-wide education, and clear individual responsibilities.
Technical robustness and safety
AI systems must be robust, stable, and capable of correcting errors. They should include fallback mechanisms and be resilient against malicious attacks or manipulation.
Diversity, non-discrimination, and fairness
Ensuring that AI systems are free from bias and do not cause discrimination or unfair treatment is a top priority.
Privacy and data governance
AI systems should comply with existing privacy and data protection laws—such as GDPR in the European Union—ensuring high standards of data quality and integrity.
Transparency
Regulators are pushing for AI systems to produce clear, traceable outputs. Users should be informed when interacting with AI, understand their rights, and be aware of the system’s capabilities and limitations.
Social and environmental wellbeing
AI should contribute to sustainability and be environmentally friendly, benefiting society at large. Continuous monitoring of AI’s long-term effects on individuals, communities, and democracy is essential.
Conclusion
On 1 August 2024, the EU AI Act, the world’s first legislation governing the use of AI in both public and private sectors, entered into force. The act aims to mitigate the various potential risks associated with AI while ensuring that it is safer and more secure for businesses operating within the EU.
In addition to the EU AI Act, it is clear that other countries, including the UK and the USA, are poised to introduce their own AI legislation. Compliance with these regulations will be mandatory for all affected businesses and may prove both costly and complex.
Fortunately, business leaders can take several actionable steps now to facilitate future compliance and fully leverage the benefits of safe and secure AI adoption. Now is the ideal time to review your AI strategies and ensure they align with both current and anticipated regulatory requirements.
As AI legislation evolves, is your organization prepared? Our AI readiness guide, authored by Rob van der Veer, a leading expert on ISO/IEC standards and the EU AI Act, helps businesses navigate the complexities of AI compliance and more. With 19 practical steps covering governance, risk, compliance, security, and IT development, this guide will help you not only meet regulatory requirements but also leverage AI’s benefits responsibly.
Stay ahead of the new AI regulations. Get your copy of our AI readiness guide and ensure your organization is prepared for the future of AI compliance.