EU AI Act: A First Look at Europe’s AI Regulation

by Etienne Cussol CIPP/E, CIPM

March 10, 2023

Thanks to rapid advancements in artificial intelligence, we’ve got self-driving cars, chatbots that can answer your questions, and incredibly accurate predictions about what products and services are most relevant to consumers.

But with the European Union’s proposed Artificial Intelligence Act (EU AI Act) — expected to pass by the end of the year — we may also soon have legal guidelines and compliance requirements regarding the use of AI services.

If the EU AI Act passes as predicted, companies worldwide using even basic AI systems may be subject to its regulations.

So how can you prepare?

This guide covers what we currently know about the EU AI regulation, including its potential scope, penalties for non-compliance, and how it’s likely to affect businesses small and large around the globe.

Table of Contents
  1. What Is the EU AI Act?
  2. Important Dates Regarding the EU AI Act
  3. How Does the EU Define AI?
  4. Why Is Europe Regulating AI?
  5. What Kind of AI Will the EU Regulate and How?
  6. What About ChatGPT?
  7. Who Is Impacted by the EU AI Act and How?
  8. How Will the EU AI Act Be Enforced?
  9. Potential Penalties and Fines Under the EU AI Act
  10. AI and Data Privacy: The Connection, the Pros, and the Cons
  11. How Can Businesses Prepare for AI Regulation?
  12. Summary

What Is the EU AI Act?

The European Union’s Artificial Intelligence Act — or EU AI Act — is a formal regulation proposed by the European Commission to create a legal framework for the use of artificial intelligence.

This is the first proposed legislation of its kind attempting to manage artificial intelligence, and if passed, it would likely have a global impact.

In other words, its extraterritorial scope means it would regulate not only how the 27 EU member states use AI services but also affect how AI is used in the rest of the world.

Important Dates Regarding the EU AI Act

Let’s briefly review some important dates regarding the development of Europe’s AI regulation and consider the potential future timeline of this proposed regulation. 

April 2021

The European Commission initially presented a comprehensive AI package in April 2021, which included a proposal for the AI Act.  

This historic date marks the unveiling of the world’s first-ever legal framework for AI technology. 

November 2022

In November 2022, after a year of discussion amongst EU lawmakers and nearly five iterations of the regulation’s text, a final compromise version of the AI Act was unanimously agreed to by the Council of the EU and submitted to the Transport, Telecommunications and Energy (TTE) Council.

March 2023

As of this writing, the European Parliament is scheduled to vote on its version of the AI Act by the end of March 2023.

Following the vote, discussions between the member states, the Parliament, and the Commission should commence in April.

A final version of the act is expected to be adopted by the end of the year. 

How Does the EU Define AI?

While the Council of the EU is still debating the exact definition of AI and AI systems, the proposed regulation defines an AI system as software developed with one or more techniques, such as:

  • Machine learning approaches
  • Logic and knowledge-based approaches
  • Statistical approaches 

AI systems, when given a set of human-defined objectives, can generate outputs such as:

  • Content
  • Predictions
  • Recommendations 

They can also output decisions that influence the environments they interact with.

However, this definition is still under consideration at the EU level, notably regarding how the objectives of AI systems are framed. The Council also argues that certain existing software should be excluded from the definition.

Why Is Europe Regulating AI?

Europe is regulating the use of AI systems for several reasons:

  • To establish protections over the fundamental rights, health, and safety of EU citizens 
  • To set up the EU as a global leader in the development of AI services and software

The goal is to allow for the rapid continuation of AI software development while preserving human rights, helping to ensure AI has a positive effect on people’s lives instead of a negative one.

Next, let’s discuss the possible scope of the EU AI Act on the rest of the world and examine how some European countries have already begun responding to AI.

The Brussels Effect and the EU AI Act 

Many lawmakers reference something known as the Brussels Effect when talking about the potential impact of the EU AI regulation because the act could unilaterally sway how the rest of the globe regulates AI.

There are two parts to the phenomenon known as the Brussels Effect:

  • De Facto: Companies universally follow EU requirements to standardize products or services
  • De Jure: Other countries pass formal legislation aligning with the EU regulation to prevent conflicts with the recently standardized process

The EU’s data privacy legislation, the General Data Protection Regulation (GDPR), is often cited as the poster child of the Brussels Effect, as it significantly impacts business obligations and privacy legislation worldwide.

While it’s uncertain if the EU AI Act will have the same broad scope, many lawmakers suggest that it’s a strong possibility, so this act is worth watching no matter where your business is headquartered.

How Various European Countries Are Responding to AI

With recent developments regarding the European AI Act and the emergence of AI services marketed to the general public, European regulators have shifted their focus to AI, so let’s compare how different European countries are responding, including:

  • The United Kingdom
  • Norway
  • The Netherlands
  • France

The UK’s ICO

In November 2022, the UK’s Information Commissioner’s Office (ICO) published a framework for auditing AI systems, focusing on best practices for data protection and compliance.  

The audits are voluntary and completed by the ICO’s Assurance department.

The framework can be applied whether the organization designs its own AI system, implements it from a third party, or uses AI as a service. 

This comes after an incident in May 2022, when UK authorities fined Clearview AI, an American facial recognition company, £7.5 million and sent the group an enforcement notice for illegally collecting over 20 billion images of individuals’ faces.

Clearview AI collected data from publicly available information on the internet and social media platforms all over the world to create an online database without informing the individuals of the collection, a violation of the UK GDPR.

Norway’s Datatilsynet

In December 2022, Norway’s Datatilsynet — the country’s national Data Protection Authority — submitted a report on transparency in artificial intelligence.

The report aims to help organizations inform users about their use of AI.

In particular, Datatilsynet noted that personal data law requires transparency about the use of AI, which means organizations must decide not only what information on their use of AI to share but also when and how to inform users.

The Netherlands’ AP

December 2022 was a busy month for AI legal talk as the Dutch Data Protection Authority — Autoriteit Persoonsgegevens (AP) — started monitoring and supervising AI algorithms for: 

  • Transparency
  • Discrimination
  • Fairness 

The AP secured an additional €1 million in funding to assist with the algorithm supervision. 

The initial stage will focus on identifying high-risk algorithms, pooling knowledge, and further shaping the collaboration.

France’s CNIL

France’s Data Protection Authority — Commission Nationale Informatique & Libertés or CNIL — announced in January 2023 that it would create a service to evaluate the privacy risks posed by AI. 

The main missions are to:

  • Facilitate the understanding of the functioning of AI systems within the CNIL for professionals and individuals
  • Consolidate the expertise of the CNIL in the knowledge and prevention of privacy risks related to the implementation of these systems
  • Prepare for the entry into force and application of the EU AI Act

What Kind of AI Will the EU Regulate and How?

The EU AI Act defines several levels of AI risk, which determine the requirements and obligations that companies will be subject to.

Currently, the proposed levels are:

  • Unacceptable risk (Act Title II): AI that uses subliminal manipulation or exploits children or mentally disabled persons, AI used for general social scoring, and remote biometric identification for law enforcement — with some exceptions.
  • High risk (Act Title III, Annex III): Safety components of regulated products and AI systems with applications in human services like hiring, employee management, education, public services, law enforcement, and border control.
  • Limited risk (Act Title IV): AI systems intended to interact with natural persons — i.e., deep fakes, chatbots, AI systems that generate or manipulate images, audio, or video content that resembles existing persons, objects, and places, or other entities or events and would falsely appear to a person to be authentic or truthful.
  • Minimal risk: Defined by omission, this category includes all AIs that are not in the previous three categories.

Under the AI Act, any AI technology in the unacceptable-risk category would not be permitted for use in the EU. 

Let’s discuss what we currently know about the proposed AI risk categories in greater detail. 

High Risk AI

According to the current text of the European Union’s AI Act, high-risk AI systems will be subject to numerous risk management requirements.

One requirement is that technical documentation of a high-risk AI system shall be drawn up before that system is placed on the market or put into service and must be kept up-to-date. This is described in detail in Annex IV, which states in part that the documentation should contain:

  • The AI system’s intended purpose
  • How the AI system interacts with the hardware it runs on or connects to
  • The intended use and instructions for use where applicable 
  • The description of how the AI system is placed on the market
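
In practice, teams often track requirements like these in a living template. Below is a minimal sketch in Python, where the keys simply paraphrase the bullet points above; Annex IV itself is far more detailed, and these are not official field names.

# A hypothetical skeleton for Annex IV technical documentation.
# The keys paraphrase the items listed above and are illustrative only.
technical_documentation = {
    "intended_purpose": "What the AI system is meant to do.",
    "hardware_interaction": "Hardware the system runs on or interacts with.",
    "instructions_for_use": "Intended use and user instructions, where applicable.",
    "market_placement": "How the system is placed on the market or put into service.",
}

# Versioning the record helps satisfy the duty to keep it up-to-date.
technical_documentation["last_reviewed"] = "2023-03-01"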

The act also lays out the following additional high-risk AI requirements: 

  • Data sets used for training, validating, and testing AIs must be subject to appropriate data governance and management practices
  • AI systems designed and developed with capabilities for automatically recording events while the system is operating must conform to recognized standards or common specifications
  • Providers must give users transparent information on how the AI system is designed and developed, how it’s intended to be used, and its characteristics, capabilities, and limitations
  • Human oversight must be enabled through interface tools, either built into the AI system or implementable by the user

The act then sets numerous other obligations on high-risk AI providers, covering all of the following:

  • Quality management
  • Technical documentation
  • Conformity assessments
  • Automatically generated logs
  • Corrective actions
  • The duty of information
  • Cooperation with authorities

Limited Risk AI

The current version of the EU AI Act also outlines the transparency obligations for limited-risk AIs where there is a risk of manipulation (e.g., chatbots) or deceit (e.g., deep fakes).

The act states that natural persons should be made aware that they’re interacting with an AI system unless it’s otherwise evident from the circumstances and the context of the use. 

But the EU AI Act makes room for some law enforcement exceptions.

Minimal Risk AI

Providers of AI services that fall in the minimal-risk category are encouraged to adopt codes of conduct under the current version of the AI Act. Minimal-risk AIs include anything that doesn’t fall into the other, higher-risk categories.

Some concerns have been voiced over the scope of the definition of AI as proposed by the potential EU AI law. Primarily, it’s been questioned whether the act would cover types of AI that haven’t been invented yet, and what category such technology might fall into.

What About ChatGPT?

While we’re still waiting to see what category of AI ChatGPT — the artificial intelligence chatbot developed by OpenAI — would fall under if the EU AI Act passes, it appears to be considered a general-purpose AI system under the current draft rules.

But because this technology can be used for multiple purposes, some harmless and others higher-risk, it’s causing many lawmakers to scratch their heads.

A few of the lead lawmakers on the EU AI Act have suggested it should be placed in the high-risk category because of its ability to generate complex texts without any human oversight. The argument is that it has the potential to spread misinformation on a larger scale.

Still, other lawmakers pushed back, claiming that such a decision would mean numerous AI technologies would then be considered high-risk, even those that aren’t actually risky in any way.

But what does ChatGPT have to say about all of this? It reportedly told POLITICO that it and other large language models should be regulated by the EU as “high-risk” due to their “potential to create harmful and misleading content” — a direct quote from the controversial AI itself!

Who Is Impacted by the EU AI Act and How?

If it comes into force, the EU AI Act could have an impact well beyond the EU member states because of its extraterritorial scope.

In other words, the enforcement abilities would expand beyond the traditional territorial boundaries of the EU. 

Therefore, it would impact: 

  • Companies placing AI systems on the market or into service in the EU, even if those providers aren’t physically located within the EU itself
  • Any users of AI systems if they are located within the EU
  • Both providers and users of AI systems that are located in a country outside of the EU, where the output produced by the system is used inside of the EU

Under this European AI regulation, EU users of AI gain rights and knowledge over how those systems work, whereas businesses producing or relying on AI technology will be expected to follow various obligations and guidelines.

However, the act wouldn’t apply to private or non-professional use. The act also exempts any AI systems exclusively developed and used for military purposes.

How Will the EU AI Act Be Enforced?

Under the current version of the European AI Act, enforcement will be left up to national market surveillance authorities, but a dedicated AI Board will supervise the act itself.

The proposed AI Board would specifically oversee consistent application and enforcement of the act across the 27 member states.  

Potential Penalties and Fines Under the EU AI Act

The AI Act outlines an enforcement framework similar to the GDPR, the EU’s primary data privacy legislation, but with heftier fines.

Penalties include administrative fines of up to €30,000,000 or, if the offender is a global company, up to 6% of its total worldwide annual turnover for:

  • Non-compliance with the prohibition of artificial intelligence with unacceptable risks referred to in Article 5
  • Non-compliance of the AI system with the data and data governance requirements related to high-risk AIs referred to in Article 10

For other infringements, expect administrative fines of up to €20,000,000 or, if the offender is a global company, up to 4% of its total global annual turnover.

Additionally, supplying incorrect, incomplete, or misleading information to authorities is subject to a fine of up to €10,000,000 or, if the offender is a global company, up to 2% of its total worldwide annual turnover.
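
To make these ceilings concrete, here’s a minimal sketch of how the two-part formula combines, assuming the draft follows the GDPR convention of “whichever is higher” (the text above only says “up to”); the function name and turnover figure are illustrative.

def max_administrative_fine(fixed_cap_eur: int, turnover_pct: float,
                            worldwide_turnover_eur: int) -> float:
    """Upper bound of a fine tier: the fixed cap or a share of worldwide
    annual turnover, whichever is higher (assumed GDPR-style convention)."""
    return max(fixed_cap_eur, turnover_pct * worldwide_turnover_eur)

# Hypothetical example: an Article 5 infringement by a company with
# €2 billion in worldwide annual turnover.
ceiling = max_administrative_fine(30_000_000, 0.06, 2_000_000_000)
print(f"Maximum fine: €{ceiling:,.0f}")  # -> Maximum fine: €120,000,000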

AI and Data Privacy: The Connection, the Pros, and the Cons

AI systems are intrinsically related to data privacy laws and regulations because some software can re-identify an individual who should otherwise have been unidentifiable in a given dataset.

Machine learning, in general, relies on enormous amounts of data, and some of those data sets contain details that legally qualify as personal information under privacy regulations like the:

  • General Data Protection Regulation (GDPR)
  • California Consumer Privacy Act (CCPA) as amended by the California Privacy Rights Act (CPRA)
  • Virginia Consumer Data Protection Act (CDPA)

Undoubtedly, AI will change how businesses comply with data privacy regulations and vice-versa. So we’ll be here to keep you updated on potential laws like the EU AI Act and help make the transition go as smoothly as possible for companies of any size.

The Pros and Cons of AI for Data Privacy

AI technology has many potential pros. It can help streamline data privacy compliance, enhance security protection measures, and limit a company’s reliance on sensitive personal data.

But, it can also be used for opposite purposes. 

Cyber attackers often rely on AI-driven tools that adapt their malware to evade detection. It’s also difficult to determine whether the results of personal information processing by AI systems contain biases.

People around the world are still debating the pros and cons of AI technology. But legislation like the proposed European AI regulation hopes to protect the rights of individuals while still allowing for the development and implementation of new, groundbreaking artificial intelligence software.

How Can Businesses Prepare for AI Regulation?

Clearly, the world is prepping for AI legislation, and businesses can set themselves up for a successful transition by preparing for the regulations in advance. 

We recommend businesses establish an AI risk management assessment procedure by following these steps:

  • Step one: Run an inventory of all AI systems your business uses
  • Step two: Create a risk classification system parallel to the EU AI Act proposed categories
  • Step three: Implement risk-mitigation procedures and independent audits
  • Step four: Adopt an internal AI governance structure that parallels the EU AI Act’s proposed guidelines

Let’s discuss each step in greater detail in the next sections. 

Step One: Run An Inventory Of All AI Systems Your Business Uses

Your business must know what AI systems it uses and how, why, and where they get deployed.

So create a process for identifying and inventorying your AI systems. Be transparent and thorough during this step, so you don’t accidentally leave anything out.  
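
As a starting point, a simple structured record per system keeps the inventory consistent and auditable. Here’s a minimal sketch in Python; the field names and example entries are hypothetical, since the act doesn’t prescribe an inventory format.

from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an internal AI inventory. All field names are
    illustrative; the EU AI Act does not prescribe a format."""
    name: str                   # e.g., "support chatbot"
    source: str                 # in-house, third party, or AI-as-a-service
    purpose: str                # why the system is used
    deployment: str             # where and how it gets deployed
    personal_data: list[str] = field(default_factory=list)  # data it touches

inventory = [
    AISystemRecord("support chatbot", "third party", "customer service",
                   "public website", ["names", "email addresses"]),
    AISystemRecord("resume screener", "in-house", "hiring",
                   "HR department", ["employment history"]),
]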

Step Two: Create An AI Risk Classification System 

Once you have an inventory of all AI systems your business uses, separate the systems into risk categories parallel to the classification system outlined by the EU AI Act. 

Determine what risk level each AI system you use falls into, and flag any high-risk or unacceptable-risk systems already in place.
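
For illustration, the act’s four proposed tiers could be modeled as a simple enum with a use-case lookup. This is a sketch only; the mappings below are assumptions drawn from the categories described earlier in this guide, not a legal determination.

from enum import Enum

class RiskTier(Enum):
    """Tiers mirroring the EU AI Act's proposed risk categories."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright (Title II)
    HIGH = "high"                  # extensive obligations (Title III)
    LIMITED = "limited"            # transparency duties (Title IV)
    MINIMAL = "minimal"            # voluntary codes of conduct

# Hypothetical mapping of use cases to tiers; a real classification
# needs legal review against the act's annexes.
USE_CASE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "hiring": RiskTier.HIGH,
    "employee management": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    # Unmapped use cases fall back to MINIMAL here; in practice an
    # unknown use case should trigger a manual review instead.
    return USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)

print(classify("hiring"))  # RiskTier.HIGH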

Step Three: Implement AI Risk-Mitigation Procedures And Independent Audits

Build out a system of risk-mitigation procedures and a method for performing independent audits to ensure your business can promptly identify and regulate high and unacceptable-risk AI systems. 

For example, start budgeting for the appropriate risk-mitigation tools your business might need to rely on if and when the EU AI Act passes. 

Step Four: Adopt An Internal AI Governance Structure

Businesses can prepare for the EU AI Act by implementing an internal organizational system that parallels the proposed AI governance structure. In other words, use parts of the act to shape and inspire how you build the internal risk-management guidelines you implement.

We recommend creating an internal committee responsible for your AI risk management, made up of cybersecurity, legal, and data privacy professionals.

This will help businesses achieve the previous steps, particularly Step 3, as it ensures your company is adequately educated and prepared for the potential legal implications. 

Summary

Proposals like the EU AI Act prove that AI technology is here to stay, and businesses need to start preparing for legal compliance. 

While we don’t know what the final version of the act will look like, we know enough to help businesses get ready for the potential global impacts.  

For example, any company using AI services should create a professional committee and perform independent audits to determine what risk category the AI technology might fall into.

Here at Termly, we’re always up to date, but this story is still in progress. So check back in the future because we’ll keep this guide updated as information about the EU AI Act unfolds. 


Written by Etienne Cussol CIPP/E, CIPM

Etienne is an Information Privacy professional and compliance analyst for Termly. He has been with us since 2021, managing our own compliance with data protection laws and participating in our marketing research. His fields of expertise and interest include data protection (GDPR, ePrivacy Directive, CCPA), tracking technologies (third-party cookies, fingerprinting), and new forms of privacy management (GPC and the Google Privacy Sandbox). Etienne studied International Economic Affairs at the University of Toulouse and graduated with a Master’s in 2017.
