
The EU AI Act Explained [2024]

7 min read

Written by: Software Improvement Group


Summary

The EU AI Act is the first of its kind: a law regulating the use of AI, and one that will impact hundreds of thousands of businesses developing or implementing AI solutions in their operations.

The EU AI Act aims to make AI safer and more secure for public and commercial use, mitigate its risks, ensure it remains under human control, reduce any negative impacts of AI on the environment and society, keep our data safe and private, and ensure transparency in almost all forms of AI use.

The Act has defined four key risk categories into which different types of AI and their associated risks are grouped. Businesses need to be aware of each risk category, how their own AI systems might be categorized, and the regulatory implications for each system.

  1. Unacceptable-risk AI systems
  2. High-risk AI systems
  3. Limited-risk AI systems
  4. Minimal-risk AI systems

The EU AI Act will become applicable via a phased approach, meaning that businesses operating in the EU may be required to comply with parts of the act as early as 2 February 2025.

Introduction

Artificial Intelligence (AI) is an emerging technology which is changing the face of international business and, on a broader scale, everyday life—both online and off. AI has already been implemented across a diverse range of sectors, from predictive text and data processing and analysis to game design, climate prediction, and industrial automation.

Indeed, the evolution of AI has been met with a degree of positivity from most business leaders. Around 64% of leaders believe that AI will be able to improve their company’s productivity. Additionally, the AI market is expected to grow at an annual rate of 37.3% through 2030 and will likely be worth upwards of USD 407 billion by 2027.

At the same time, business owners are unsure of the risks involved in integrating AI into their operations. While a majority of CIOs report AI as part of their innovation plan, less than half of them feel their companies are prepared to handle the associated risks. On top of this, 75% of CROs interviewed by the World Economic Forum reinforced this concern, reporting that the use of AI poses a reputational risk to their organization.

Indeed, poor-quality AI systems can present numerous threats, including risks to privacy, the misuse of data, and the undermining of individual autonomy. These concerns extend to broader social and environmental impacts, which has prompted international regulatory bodies to take action.

The EU AI Act is one such landmark piece of legislation—the first of its kind in the world—aimed at regulating AI use in the European Union to the benefit of trust and safety in this new technology.

For IT leaders, compliance with regulations like the EU AI Act will be not only mandatory, but also key to the safe, secure, and beneficial integration and development of AI in your business.

For those operating within or trading with the EU, this article will tell you everything you need to know about the EU AI Act and will guide you through the steps necessary to comply with this groundbreaking AI legislation.


What is the EU AI Act?

The European AI Act, adopted by the European Parliament in March 2024 and approved by the Council in May 2024, is the first comprehensive law regulating artificial intelligence (AI) in the world.

Before the EU AI Act, the EU had already established the Ethics Guidelines for Trustworthy AI, a set of non-binding guidelines for safe and ethical AI implementation introduced in 2019. These provided a framework for developing and deploying AI systems in a manner aligned with European values, and emphasized seven key requirements, including human oversight, privacy, and non-discrimination.

The EU AI Act builds upon these principles by making them legally binding.

The act classifies AI systems based on the risk they pose to users, with corresponding levels of regulation to “make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly… [and that] AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes.”


 

The four risk categories of the EU AI Act explained

By emphasizing a risk-based assessment approach, the European AI Act ensures AI systems are classified by the potential risk they pose to individuals and society, and imposes corresponding regulations on each category to enhance safety and compliance.

The Act has defined four key risk categories into which different types of AI and their associated risks are grouped. Businesses need to be aware of each risk category, how their own AI systems might be categorized, and the regulatory implications for each system.

  1. Unacceptable-risk AI systems
  2. High-risk AI systems
  3. Limited-risk AI systems
  4. Minimal-risk AI systems

Category 1) Unacceptable-Risk AI (e.g., social scoring by governments)

Chapter 2 of the EU AI Act defines the ‘Unacceptable Risk’ category for AI systems and the regulations applied to them.

Definition: Systems considered a threat to individuals, such as those manipulating vulnerable groups, engaging in social scoring, or using biometric identification and categorization.

Regulation: These systems are banned, including real-time and remote biometric identification such as facial recognition, except under strict law-enforcement conditions that first gain court approval.

Category 2) High-Risk AI (e.g., healthcare applications)

Chapter 3 of the EU AI Act defines the ‘High Risk’ category for AI systems and the regulations applied to them.

Definition: Systems impacting safety or fundamental rights, including AI in toys, medical devices, critical infrastructure, education, employment, essential services, law enforcement, migration, and legal interpretation.

Regulation: These systems must be registered in an EU database, thoroughly risk-assessed, and regularly reported on to ensure strict compliance and oversight. Assessments will have to be conducted before high-risk AI systems are put on the market and then throughout their lifecycle. People will also have the right to lodge complaints about AI systems with their designated national authorities.

Obligations: Businesses must group any high-risk AI they employ into one of two subcategories: high-risk systems used in products covered by the EU’s product safety legislation and those which fall into other specific areas, such as those listed in the definition above.


Category 3) Limited-Risk AI (e.g., AI systems with transparency obligations)

Chapter 4 of the EU AI Act defines the ‘Limited Risk’ category for AI systems and the regulations applied to them.

Definition: The limited-risk category includes generative-AI models like ChatGPT. These are not high-risk per se but must meet transparency requirements and comply with EU copyright law.

Regulation: Providers of limited-risk AI models and applications must disclose to users that their content is AI-generated, must prevent illegal content generation, and must also publish summaries of copyrighted data used for training. High-impact AI models must undergo thorough evaluations and report serious incidents to the European Commission. AI-generated or modified content (e.g., deepfakes) must be clearly labeled as such.
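To illustrate what such a transparency obligation could look like in practice, here is a minimal sketch in Python, assuming a provider wants every generated output to carry an explicit AI-generation disclosure. The field names and wording are our own hypothetical choices; the Act does not prescribe a specific labeling format.

```python
from dataclasses import dataclass

@dataclass
class LabeledOutput:
    """Generated content paired with an explicit AI-generation disclosure."""
    content: str
    ai_generated: bool = True
    disclosure: str = "This content was generated by an AI system."

def label_output(generated_text: str) -> LabeledOutput:
    # Wrap raw model output so every downstream consumer sees the disclosure.
    return LabeledOutput(content=generated_text)

result = label_output("Here is a summary of your contract...")
print(f"{result.disclosure}\n---\n{result.content}")
```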

Category 4) Minimal-Risk AI (e.g., AI used in games or spam filters)

Chapter 5 of the EU AI Act defines the ‘Minimal Risk’ or ‘General Purpose’ category for AI systems and the regulations applied to them.

Definition: These applications are already widely deployed and make up most of the AI systems we interact with today. Examples include spam filters, AI-enabled video games, and inventory-management systems.

Regulation: Most AI systems in this category face no obligations under the AI Act, but companies can voluntarily adopt additional codes of conduct. Primary responsibility will be shouldered by the “providers” of AI systems, though any business that utilizes them should remain vigilant about its compliance obligations.

What this means for business owners and executives: As a business owner utilizing minimal-risk AI from a third-party vendor, you will still need to responsibly source, employ, and risk-assess each new AI system you adopt, even though you will not be required to comply with the EU AI Act.
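To tie the four categories together, the sketch below shows one way a business might inventory its AI systems against the Act’s risk tiers. The tier names follow the Act, but the obligation summaries and the example systems are illustrative assumptions on our part, not legal guidance.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act, with shorthand obligation notes."""
    UNACCEPTABLE = "banned outright (e.g., social scoring by governments)"
    HIGH = "EU database registration plus pre-market and lifecycle risk assessment"
    LIMITED = "transparency duties: disclose AI-generated content, label deepfakes"
    MINIMAL = "no obligations under the Act; voluntary codes of conduct"

# Hypothetical inventory -- these systems and their tier assignments are
# illustrative examples, not classifications taken from the Act itself.
inventory = {
    "resume-screening model (employment)": RiskTier.HIGH,
    "customer-facing generative chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for system, tier in inventory.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```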

When will the EU AI Act be in effect?

EU AI legislation will become applicable via a phased approach, with different regulatory requirements triggered at six- to twelve-month intervals after the act entered into force.

Different parts of the act will apply six, 12, 18, 24, and 36 months from the act’s entry into force on 1 August 2024, meaning that businesses operating in the EU may be required to comply with parts of the EU AI Act as early as 2 February 2025.

Timeline: when the different parts of the EU AI Act become legally binding.
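As a rough guide to that timeline, the following sketch computes the approximate start of each phase from the entry-into-force date. Note that the Act’s own text (Article 113) pins each phase to an exact calendar date, such as 2 February 2025 for the first one, so treat the computed dates as approximations.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    # Minimal month arithmetic; safe here because the start date is the 1st.
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

ENTRY_INTO_FORCE = date(2024, 8, 1)  # the EU AI Act entered into force on 1 August 2024

for months in (6, 12, 18, 24, 36):
    phase_start = add_months(ENTRY_INTO_FORCE, months)
    print(f"{months:>2} months: provisions begin applying around {phase_start:%d %B %Y}")
```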

Note: For the most up-to-date information, including anything published after this article, we advise you to visit the official website of the European Parliament.

Complying with the EU AI Act is a must for almost any business operating in the EU and incorporating AI somewhere in its value chain.

By prioritizing AI compliance, businesses can not only mitigate legal risks but also strengthen trust and reliability in their AI systems, positioning themselves as leaders in a technology-driven future.

International trends and themes in AI regulation

While the European Union is a pioneer with its AI Act, AI regulation is being developed across the world.

According to the September 2023 Global AI Legislation Tracker, other countries around the world are also developing AI governance legislation to match the rapid growth and diversity of the technology. Efforts include creating comprehensive laws, drafting specific regulations for particular use cases, and implementing voluntary guidelines and standards.

Stanford University has noted a significant increase in the number of countries with AI-related laws, growing from 25 countries in 2022 to 127 in 2023. While individual regions, including the EU, are advancing their own frameworks, multilateral coordination is also on the rise. This includes adopting AI principles from the Organization for Economic Co-operation and Development, and discussions within the United Nations and G7. The Centre for Strategic & International Studies highlights that these efforts aim to balance the potential risks of AI against the benefits it offers.

McKinsey reports that different countries are taking varied approaches to AI regulation, which is why it is so pressing for IT organizations around the world to consult their legal teams about their AI compliance requirements.

Despite regional differences in AI regulation, certain common themes are emerging globally. Understanding these themes can help businesses prepare for future compliance across various markets. Below, we briefly define these key trends in AI regulation:

Human agency and oversight

AI systems should empower people, uphold human dignity, and remain under human control. Regulators emphasize the need for appropriate human oversight to ensure AI serves humanity’s best interests.

Accountability

There is a demand for mechanisms that ensure responsibility and accountability for AI systems. This includes top management commitment, organization-wide education, and clear individual responsibilities.

Technical robustness and safety

AI systems must be robust, stable, and capable of correcting errors. They should include fallback mechanisms and be resilient against malicious attacks or manipulation.

Diversity, non-discrimination, and fairness

Ensuring that AI systems are free from bias and do not cause discrimination or unfair treatment is a top priority.

Privacy and data governance

AI systems should comply with existing privacy and data protection laws—such as GDPR in the European Union—ensuring high standards of data quality and integrity.

Transparency

Regulators are pushing for AI systems to produce clear, traceable outputs. Users should be informed when interacting with AI, understand their rights, and be aware of the system’s capabilities and limitations.

Social and environmental wellbeing

AI should contribute to sustainability and be environmentally friendly, benefiting society at large. Continuous monitoring of AI’s long-term effects on individuals, communities, and democracy is essential.

 


Conclusion

On 1 August 2024, the world’s first legislation governing the use of AI in both the public and private sectors, the EU AI Act, entered into force in the European Union. This act aims to mitigate the various potential risks associated with AI while ensuring that it is safer and more secure for businesses operating within the EU.

In addition to the EU AI Act, it is clear that other countries, including the UK and the USA, are poised to introduce their own AI legislation. Compliance with these regulations will be mandatory for all affected businesses and may prove both costly and complex.

Fortunately, business leaders can take several actionable steps now to facilitate future compliance and fully leverage the benefits of safe and secure AI adoption. Now is the ideal time to review your AI strategies and ensure they align with both current and anticipated regulatory requirements.

Author:

Software Improvement Group

