The EU AI Act: Your Essential Guide to New Regulations

The EU AI Act is finally here! Published in the Official Journal of the EU on July 12, 2024, this groundbreaking regulation comes into force today, August 1, 2024. But don’t worry: the changes won’t hit you all at once. A detailed schedule lays out who needs to comply, and when.

Key Points You Need to Know

1. Immediate Impact Across the EU: The EU AI Act directly affects all EU member states, regulating every AI system and model out there. But it’s not a one-size-fits-all approach; the obligations vary based on the risk associated with each AI system.

2. Generative AI Gets Special Attention: With the rise of technologies like ChatGPT, the Act introduces specific regulations for General Purpose AI (GPAI). These rules were added later in the drafting process, given the rapid development of generative AI.

Breaking Down the Risk Classes

Prohibited AI Applications

Mark your calendars: on February 2, 2025, the first batch of prohibited AI practices will be banned. This includes controversial social scoring systems and real-time biometric identification in public spaces by law enforcement. However, there are some exceptions, which have sparked criticism from civil rights groups. For example, predictive policing is banned where it assesses a person’s risk of offending based solely on profiling, and emotion recognition is banned in workplaces and schools, except in certain scenarios like detecting pilot fatigue. Untargeted scraping of facial images to build recognition databases is also a no-go.

High-Risk Applications

AI applications that could harm human safety or fundamental rights are classified as high risk. These systems must meet stringent requirements for data quality, robustness, documentation, transparency, and human oversight. These rules kick in on August 2, 2026. To put a high-risk AI system on the market, you’ll need a conformity assessment to prove it meets all standards, along with a solid quality and risk management system.

Minimal Risk Applications

Good news for most developers: around 80% of AI systems fall into this category and can be developed and used without special requirements. Think spam filters and search algorithms – these will continue to operate freely.

Spotlight on General Purpose AI (GPAI)

GPAI, like the powerful language models behind ChatGPT, can pose significant risks, including severe accidents or extensive cyberattacks; the EU Commission calls these “systemic risks.” From August 2, 2025, GPAI will face strict regulations, including transparency requirements. Users must be informed when they are interacting with AI applications like chatbots, and providers of large AI models must give downstream users the information they need to comply with the law themselves.

AI Provider

Becoming an AI provider is straightforward but comes with responsibilities. If you take models from providers like OpenAI or Google and substantially modify them, you become an AI provider yourself and must adhere to the EU AI Act. This means:

• Informing users when they interact with AI.

• Keeping detailed records to ensure compliance.

• Implementing measures to minimize bias.

• Disclosing energy consumption of your AI models.

Intellectual Property and Training Data

GPAI providers must avoid copyright infringement when training their models, though this remains a gray area with differing opinions on what constitutes a violation.
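
The Act doesn’t prescribe a technical mechanism here, but one widely discussed building block is honoring machine-readable opt-outs when gathering training data. Here’s a minimal Python sketch using the standard library’s robots.txt parser; the crawler name and URL are illustrative, and robots.txt is just one opt-out signal, not a complete answer to the copyright question.

```python
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

# Illustrative crawler name; "ExampleTrainingBot" is made up.
CRAWLER_USER_AGENT = "ExampleTrainingBot"

def may_fetch_for_training(url: str) -> bool:
    """Check the site's robots.txt before fetching a page for a training corpus."""
    parsed = urlparse(url)
    robots = RobotFileParser()
    robots.set_url(f"{parsed.scheme}://{parsed.netloc}/robots.txt")
    robots.read()  # downloads and parses robots.txt
    return robots.can_fetch(CRAWLER_USER_AGENT, url)

# robots.txt is one machine-readable opt-out signal; it does not settle
# the copyright question on its own.
if may_fetch_for_training("https://example.com/articles/some-page"):
    print("Allowed by robots.txt; fetch the page.")
else:
    print("Disallowed by robots.txt; skip this page.")
```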

Classifying AI Applications

AI systems are categorized based on their purpose, with specific product safety regulations in place. The Commission maintains a list to help determine where an application fits. For GPAI models, a controversial threshold is set: those trained with computing power exceeding 10^25 FLOPs are presumed to pose systemic risk, which currently affects models like GPT-4 and likely Gemini.
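
To make that threshold concrete, here’s a minimal sketch of how a provider might estimate whether a model crosses it. The 6 × parameters × training tokens rule of thumb is a common heuristic from the scaling-law literature, not something the Act prescribes, and the model figures below are made up.

```python
# The Act's threshold for presuming systemic risk in a GPAI model.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimate_training_flops(num_parameters: float, num_training_tokens: float) -> float:
    """Rough compute estimate: ~6 FLOPs per parameter per training token
    (a scaling-law heuristic, not part of the Act)."""
    return 6 * num_parameters * num_training_tokens

def presumed_systemic_risk(training_flops: float) -> bool:
    """Models at or above the threshold are presumed to pose systemic risk."""
    return training_flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical model: 1 trillion parameters, 10 trillion training tokens.
flops = estimate_training_flops(num_parameters=1e12, num_training_tokens=1e13)
print(f"Estimated training compute: {flops:.1e} FLOPs")            # 6.0e+25
print(f"Presumed systemic risk: {presumed_systemic_risk(flops)}")  # True
```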

Requirements for Systemic Risk Models

Providers of these models must evaluate risks, report serious incidents, conduct thorough evaluations, ensure cybersecurity, and disclose energy consumption. This has been a sensitive issue for companies like OpenAI and Google, as AI development is highly resource-intensive.

Emphasis on Sustainability

The EU AI Act mandates high environmental protection standards, requiring providers to improve resource efficiency. The Commission will evaluate if enough is being done. Providers of systemic risk models must disclose their energy needs.

Exceptions for Biometric Remote Identification

While real-time biometric remote identification is generally banned for law enforcement, exceptions exist for 16 specific crimes, including kidnapping, human trafficking, and terrorism. These uses require prior authorization and a fundamental rights impact assessment.

Enforcement and Penalties

Each EU member state must designate a national authority to enforce the AI Act, with Germany considering its data protection authorities or the Federal Network Agency. An AI Office in Brussels, under DG Connect, will oversee GPAI models, supported by a scientific committee of independent experts.

Penalties for violations are steep: up to 35 million euros or 7% of global annual turnover, whichever is higher, for severe breaches. Lesser violations can still result in fines of up to 15 million euros or 3% of turnover, and even supplying misleading information to authorities can attract fines of up to 7.5 million euros or 1.5%. Complaints can be filed with national authorities by anyone.
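
A quick sketch of how those ceilings work out in practice for a company; the tier names and the example turnover figure are illustrative:

```python
# Fine ceilings for companies: the higher of a fixed amount and a share
# of global annual turnover. Tier names here are illustrative labels.
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),    # severe breaches
    "other_obligations": (15_000_000, 0.03),       # lesser violations
    "misleading_information": (7_500_000, 0.015),  # misleading info to authorities
}

def max_fine_eur(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the ceiling for a tier: max(fixed cap, turnover share)."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * global_annual_turnover_eur)

# Example: a company with EUR 2 billion in global annual turnover.
print(f"EUR {max_fine_eur('prohibited_practices', 2e9):,.0f}")  # EUR 140,000,000
```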

Key Dates to Remember

February 2, 2025: Ban on prohibited AI practices takes effect.

August 2, 2025: Regulation of GPAI begins.

August 2, 2026: Full compliance required for high-risk AI applications, with a longer transition, until August 2, 2027, for high-risk systems embedded in regulated products.
