EU AI Act: A First Look at Europe’s AI Regulation

By: Etienne Cussol CIPP/E, CIPM | Updated on: October 18, 2024

The European Parliament passed the EU Artificial Intelligence Act (EU AI Act) in March 2024, marking the first comprehensive set of legal guidelines and compliance requirements for the use of AI systems.

Companies worldwide that use even basic AI systems may be affected, so how can you prepare?

Below, I walk you through the EU AI regulation and explain its scope, impact, penalties for non-compliance, and how it affects businesses and consumers. 

Table of Contents
  1. What Is the EU AI Act?
  2. Important Dates Regarding the EU AI Act
  3. How Does the EU AI Act Define AI Systems?
  4. What Kind of AI Does the EU Regulate and How?
  5. Who Is Impacted by the EU AI Act and How?
  6. How Will the EU AI Act Be Enforced?
  7. Potential Penalties and Fines Under the EU AI Act
  8. AI and Data Privacy: The Connection, the Pros, and the Cons
  9. How Can Businesses Prepare for AI Regulation?
  10. Why Is Europe Regulating AI?
  11. How Various European Countries Are Responding to AI
  12. What About ChatGPT?
  13. Summary

What Is the EU AI Act?

The European Union’s Artificial Intelligence Act (EU AI Act) is a formal regulation passed by the European Parliament in March 2024.

It entered into force in August 2024.

It was proposed by the European Commission and is one of the first pieces of legislation that regulates and creates a legal framework for using and developing artificial intelligence. 

Important Dates Regarding the EU AI Act

Let’s briefly review some important dates regarding the development of Europe’s AI regulation and consider the potential future timeline of this regulation. 

April 2021

The European Commission initially presented a comprehensive AI package in April 2021, which included a proposal for the AI Act.  

November 2022

In November 2022, after a year of discussion among EU lawmakers and nearly five iterations of the regulation’s text, a final compromise version of the AI Act was unanimously agreed upon by the Council of the EU and submitted to the Transport, Telecommunications and Energy (TTE) Council.

March 2023

The European Parliament was scheduled to vote on the final version of the AI Act by the end of March 2023. 

Following the vote, discussions between the member states, the Parliament, and the Commission commenced in April.

March 13, 2024

The European Parliament officially adopted the final version of the EU AI Act with an overwhelming majority of favorable votes (523-46).

August 2024

The EU AI Act partially entered into force, launching the AI Pact, an initiative by the European Commission that encourages companies to pledge to keep AI development safe, trustworthy, and in line with the EU AI Act as it takes effect in phases over the next three years.

February 2025

The prohibitions on unacceptable-risk AI systems enter into force.

August 2025

Requirements for providers of general-purpose AI models enter into force, and member states must appoint their national competent authorities.

The Commission will also perform its first annual review of the Act and propose possible amendments as needed.

August 2026

Obligations on the use of high-risk AI enter into force, primarily those expressed in Annex III, which covers AI systems in the following sectors:

  • Biometrics
  • Education
  • Critical infrastructure
  • Employment
  • Public service access
  • Law enforcement

All member states must have implemented rules on penalties under the Act, including administrative fines, and established at least one operational regulatory sandbox.

The Commission will perform another annual review and propose amendments if necessary.

August 2027

All remaining obligations for high-risk AI systems enter into force, including those covering AI intended to be used as a safety component of a product or service and the third-party assessment requirements described in other existing EU laws.

How Does the EU AI Act Define AI Systems?

The EU AI Act defines artificial intelligence and AI systems as follows: 

A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

This definition includes software that is developed with one or more techniques, such as:

  • Machine learning approaches
  • Logic and knowledge-based approaches
  • Statistical approaches 

What Kind of AI Does the EU Regulate and How?

The EU AI Act regulates AI according to four levels of risk, and the severity of the risk level determines the requirements and obligations a company must follow.

  • Unacceptable risk (Act Title II): AI that uses subliminal manipulation or exploits vulnerable groups such as children or persons with disabilities, as well as AI used for general-purpose social scoring and remote biometric identification for law enforcement, with some exceptions.
  • High risk (Act Title III, Annex III): Safety component of regulated products and AI systems with applications in Human Services like hiring, employee management, education, public services, law enforcement, and border control.
  • Limited risk (Act Title IV): AI systems intended to interact with natural persons, such as chatbots, and AI systems that generate or manipulate image, audio, or video content resembling existing persons, objects, places, entities, or events in a way that would falsely appear authentic or truthful to a person (i.e., deep fakes).
  • Minimal risk: Defined by omission, this category includes all AI systems that do not fall into the previous three categories.

Under the AI Act, any technology in the unacceptable-risk category is not permitted for use in the EU. It’s up to the business to prove which risk level its technology falls under; the sketch below shows one way to represent these tiers in practice.
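To make the four tiers concrete, here is a minimal sketch of how a compliance team might represent them in code. The tier names mirror the act’s categories, but the enum itself and the example label are hypothetical illustrations, not anything the act prescribes:

```python
from enum import Enum

class AIRiskTier(Enum):
    """The EU AI Act's four risk tiers (hypothetical code representation)."""
    UNACCEPTABLE = "unacceptable"  # Title II: prohibited outright in the EU
    HIGH = "high"                  # Title III / Annex III: heavily regulated
    LIMITED = "limited"            # Title IV: transparency obligations
    MINIMAL = "minimal"            # Defined by omission: codes of conduct encouraged

# Example: tagging a customer-service chatbot with its self-assessed tier.
chatbot_tier = AIRiskTier.LIMITED
print(f"Chatbot risk tier: {chatbot_tier.value}")
```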

High Risk AI

According to the European Union’s AI Act, high-risk AI is the most heavily regulated category.

One requirement is that technical documentation for a high-risk AI system must be drawn up before the system is placed on the market or put into service and must be kept up to date, with the required contents described in detail in Annex IV.

The act also lays out the following additional high-risk AI requirements:

  • Data sets used for training, validating, and testing AI systems must be subject to appropriate data governance and management practices
  • AI systems designed and developed with capabilities enabling the automatic recording of events while the system operates must conform to recognized standards or common specifications (a logging sketch follows the lists below)
  • Providers must give users transparent information on how the AI system is designed and developed, how it is intended to be used, and its characteristics, capabilities, and limitations
  • Providers must enable human oversight through interface tools, either built into the AI system or implementable by the user

There are numerous other obligations for providers of high-risk AI covering the following:

  • Quality management
  • Technical documentation
  • Conformity assessments
  • Automatically generated logs
  • Corrective actions
  • The duty of information
  • Cooperation with authorities
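As a concrete illustration of the automatic event-recording requirement mentioned above, here is a minimal sketch of an audit-logging wrapper around a model’s decisions. The function names, log format, and placeholder decision logic are all hypothetical; a conformity-ready setup would follow the recognized standards the act references:

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit logger for a high-risk AI system: every decision is
# appended to a log file with a timestamp, model version, input, and output.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO, format="%(message)s")

def predict_with_audit(model_version: str, features: dict) -> str:
    # Placeholder decision logic standing in for a real model.
    decision = "approved" if features.get("score", 0) >= 0.5 else "rejected"
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input": features,
        "output": decision,
    }))
    return decision

print(predict_with_audit("v1.2.0", {"score": 0.73}))  # "approved", plus one log line written
```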

Limited Risk AI

The EU AI Act outlines transparency obligations for limited-risk AI, especially where there is a risk of manipulation (e.g., chatbots) or deceit (e.g., deep fakes).

It states that natural persons should be made aware that they’re interacting with an AI system and makes room for some law enforcement exceptions.

Minimal Risk AI

Under the current version of the AI Act, providers of AI services in the minimal-risk category are encouraged to adopt codes of conduct covering practices like fairness, non-discrimination, and human oversight.

Minimal-risk AI includes anything that doesn’t fall into the other higher-risk categories.

Who Is Impacted by the EU AI Act and How?

The EU AI Act has a broad impact beyond the EU member states because of its extraterritorial scope, meaning its enforcement reach can extend beyond the traditional territorial boundaries of the EU.

Currently, it impacts: 

  • Companies placing AI systems on the market or into service in the EU, even if those providers aren’t physically located within the EU itself
  • Any users of AI systems if they are located within the EU
  • Both providers and users of AI systems that are located in a country outside of the EU, where the output produced by the system is used inside of the EU

Under this European AI regulation, EU users of AI gain rights and knowledge over how those systems work, whereas businesses producing or relying on AI technology are expected to follow various obligations and guidelines.

However, the act doesn’t apply to private or non-professional use. It also exempts any AI systems exclusively developed and used for military purposes.

How Will the EU AI Act Be Enforced?

Enforcement of the EU AI Act is left to national market surveillance authorities, while a dedicated AI Board supervises the act itself.

The AI Board specifically oversees consistent application and enforcement of the act across the 27 member states.

Potential Penalties and Fines Under the EU AI Act

The AI Act outlines an enforcement framework similar to the GDPR, the EU’s primary data privacy legislation, but with heftier fines.

Penalties include administrative fines of up to €35,000,000 or, if the offender is a company, up to 7% of its total worldwide annual turnover, whichever is higher, for non-compliance with the prohibited AI practices referred to in Article 5.

For most other infringements, including non-compliance with the data and data governance requirements for high-risk AI referred to in Article 10, expect administrative fines of up to €15,000,000 or up to 3% of total worldwide annual turnover.

Additionally, supplying incorrect, incomplete, or misleading information to authorities is subject to a fine of up to €7,500,000 or up to 1% of total worldwide annual turnover.
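For each tier, the operative ceiling is the fixed amount or the turnover percentage, whichever is higher (lower caps apply to small and medium-sized enterprises). A quick sketch of the arithmetic, using an invented turnover figure:

```python
def fine_cap(fixed_cap_eur: float, pct: float, turnover_eur: float) -> float:
    """Upper bound of an administrative fine: the fixed cap or the
    turnover percentage, whichever is higher."""
    return max(fixed_cap_eur, pct * turnover_eur)

# Invented example: a company with EUR 2 billion in worldwide annual turnover.
turnover = 2_000_000_000
print(f"Prohibited practices (Art. 5): up to EUR {fine_cap(35_000_000, 0.07, turnover):,.0f}")
print(f"Most other infringements:      up to EUR {fine_cap(15_000_000, 0.03, turnover):,.0f}")
print(f"Misleading information:        up to EUR {fine_cap(7_500_000, 0.01, turnover):,.0f}")
```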

AI and Data Privacy: The Connection, the Pros, and the Cons

AI systems are intrinsically linked to data privacy laws because some software can re-identify individuals who otherwise should have been unidentifiable in a given dataset.

Machine learning, in general, relies on enormous amounts of data, and some of those data sets contain details that legally qualify as personal information under privacy regulations like the:

  • General Data Protection Regulation (GDPR)
  • California Consumer Privacy Act (CCPA)
  • Virginia Consumer Data Protection Act (CDPA)

Undoubtedly, AI will change how businesses comply with data privacy regulations and vice versa. So we’ll be here to keep you updated on laws like the EU AI Act and help make the transition go as smoothly as possible for companies of any size.

The Pros and Cons of AI for Data Privacy

AI technology has many potential pros. It helps streamline mundane activities, enhances the user experience, and limits a company’s reliance on sensitive personal data.

But, it can also be used for opposite purposes. 

Cyber attackers often rely on AI-driven tools that adapt their malware to evade detection, and it’s difficult to accurately determine whether the results of personal information processing by AI systems contain biases.

People around the world are still debating the pros and cons of AI technology. But legislation like the European AI regulation hopes to protect the rights of individuals while still allowing for the development and implementation of new, groundbreaking artificial intelligence software.

How Can Businesses Prepare for AI Regulation?

To prepare for the EU AI Act, I recommend businesses establish an AI risk management assessment procedure by following these steps.

Step One: Run an Inventory of All AI Systems Your Business Uses

Your business must know what AI systems it uses, how, why, and where the AI gets deployed. 

So create a process for identifying and inventorying your AI systems. Be transparent and thorough during this step so you don’t accidentally leave anything out.
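One lightweight way to start is a structured record per system. The fields below are an illustrative sketch of what an inventory entry might capture, not a format the act mandates:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One inventory entry per AI system the business uses (illustrative fields)."""
    name: str
    provider: str        # built in-house or sourced from a third party
    purpose: str         # what the system is used for
    deployment: str      # where it runs and who interacts with it
    personal_data: bool  # whether it processes personal data
    risk_tier: str = "unclassified"  # assigned in Step Two

inventory = [
    AISystemRecord("resume-screener", "in-house", "rank job applicants", "HR portal", True),
    AISystemRecord("support-chatbot", "third-party", "answer customer questions", "public website", True),
]
for record in inventory:
    print(record)
```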

Step Two: Create an AI Risk Classification System 

Once you have an inventory of all AI systems your business uses, separate the systems into risk categories parallel to the classification system outlined by the EU AI Act. 

Determine which risk level each AI system falls into, and identify whether you have any high-risk or unacceptable-risk systems in place.
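Continuing the sketch from Step One, a first-pass triage function might map each inventoried system to a tier using simplified proxies for the act’s criteria. The rules below are deliberately crude assumptions; the actual classification is a legal determination that needs counsel review:

```python
# Simplified proxies for the act's categories; a real assessment needs legal review.
ANNEX_III_AREAS = {"biometrics", "education", "critical infrastructure",
                   "employment", "public services", "law enforcement"}

def triage_risk_tier(purpose_area: str, interacts_with_people: bool,
                     social_scoring: bool) -> str:
    if social_scoring:
        return "unacceptable"  # prohibited outright under Title II
    if purpose_area in ANNEX_III_AREAS:
        return "high"          # Annex III sector: heavily regulated
    if interacts_with_people:
        return "limited"       # transparency obligations apply (e.g., chatbots)
    return "minimal"           # everything else: codes of conduct encouraged

print(triage_risk_tier("employment", True, False))  # high (e.g., a resume screener)
print(triage_risk_tier("marketing", True, False))   # limited (e.g., a chatbot)
```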

Step Three: Implement AI Risk-Mitigation Procedures and Independent Audits

Build out a system of risk-mitigation procedures and a method for performing audits to ensure your business can identify and regulate high and unacceptable-risk AI systems. 

For example, start budgeting for the appropriate risk-mitigation tools your business might need as the EU AI Act’s obligations take effect.

Step Four: Adopt an Internal AI Governance Structure

Businesses can prepare for the EU AI Act by implementing an internal organizational system that parallels the act’s AI governance structure. In other words, use parts of the act to shape and inspire the internal risk-management guidelines you implement.

We recommend creating an internal committee responsible for your AI risk management, made up of cybersecurity, legal, and data privacy professionals.

This will help you achieve the previous steps, particularly Step Three, as it ensures your company is adequately educated and prepared for the potential legal implications.

Why Is Europe Regulating AI?

Europe is regulating the use of AI systems to establish protections over the fundamental rights, health, and safety of EU citizens and to set itself up as a global leader in the development of AI services and software.

The goal is to allow for the rapid continuation of AI software development while preserving human rights, helping to ensure AI has a positive effect on people’s lives, not a negative one.

The Brussels Effect and the EU AI Act 

Many lawmakers reference something known as the Brussels Effect when talking about the impact of the EU AI regulation because it has the potential to unilaterally sway how the rest of the globe regulates AI.

There are two parts to the phenomenon known as the Brussels Effect.

  1. De Facto: Companies start to universally follow EU requirements to standardize products or services;
  2. De Jure: Other countries pass formal legislation aligning with the EU regulation to prevent conflicts with the recently standardized process.

The EU’s data privacy legislation, the General Data Protection Regulation (GDPR), is often cited as the poster child of the Brussels Effect, as it significantly impacts business obligations and privacy legislation worldwide.

While it’s uncertain if the EU AI Act will have the same scope, many lawmakers suggest it’s a strong possibility, so this act is worth watching no matter where you’re headquartered.

How Various European Countries Are Responding to AI

With the emergence of AI services marketed to the general public, European regulators moved their focus to AI, so in this next section, I compare how different European countries have responded.

The UK’s ICO

In November 2022, the UK’s Information Commissioner’s Office (ICO) published a framework for auditing AI systems, focusing on best practices for data protection and compliance.  

The audits are voluntary and completed by the ICO’s Assurance Department, and the framework can be applied whether the organization designs its own AI system or uses AI as a service.

This came after an incident in 2022, when UK authorities fined Clearview AI £7.5 million and issued an enforcement notice to the company for illegally collecting over 20 billion images of individuals’ faces.

Norway’s Datatilsynet

In December 2022, Norway’s Datatilsynet, the country’s Data Protection Authority, submitted a report on transparency in AI to advise organizations on how to inform users about their use of AI.

Datatilsynet noted that the law on personal data requires transparency on the use of AI, which means that organizations must decide not only what information on the use of AI they need to share but also when and how to inform users.

The Netherlands AP

In December 2022, the Dutch Data Protection Authority, Autoriteit Persoonsgegevens (AP), started monitoring and supervising AI algorithms for:

  • Transparency
  • Discrimination
  • Fairness 

The AP secured an additional €1 million in funding to assist with the algorithm supervision. 

The initial stage will focus on identifying high-risk algorithms, pooling knowledge, and further shaping the collaboration.

France’s CNIL

France’s Data Protection Authority, the Commission Nationale Informatique & Libertés (CNIL), announced in 2023 that it would create a service to evaluate the privacy risks posed by AI.

The service aims to help professionals and individuals understand how AI systems function and to consolidate the CNIL’s expertise in identifying and preventing privacy risks related to the implementation of these systems.

What About ChatGPT?

OpenAI, the maker of ChatGPT, released A Primer on the EU AI Act and announced in September 2024 that it has signed up for the core commitments of the EU AI Pact.

It appears OpenAI considers ChatGPT to be a minimal-risk, general-purpose AI system under the Act, but what does ChatGPT have to say about all of this?

In March 2023, it reportedly told POLITICO that it and other large language models should be regulated by the EU as “high-risk” due to their “potential to create harmful and misleading content” — a direct quote from the controversial AI itself!

Summary

The EU AI Act proves that AI technology is here to stay, and businesses need to prepare for legal compliance. 

Any company using AI services should create a professional committee and perform independent audits to determine what risk category the AI technology might fall into.

Businesses can use Termly’s Privacy Policy Generator to add an AI use clause that adequately discloses the AI systems they use and how those systems process consumer personal data.

Written by Etienne Cussol CIPP/E, CIPM

Etienne is an Information Privacy professional and compliance analyst for Termly. He has been with us since 2021, managing our own compliance with data protection laws and participating in our market research. His fields of expertise and interest include data protection (GDPR, ePrivacy Directive, CCPA), tracking technologies (third-party cookies, fingerprinting), and new forms of privacy management (GPC and the Google Privacy Sandbox). Etienne studied International Economic Affairs at the University of Toulouse and graduated with a Master’s in 2017.
