News
December 14, 2023

EU AI Act: The EU Strikes a Historic Agreement to Regulate AI

The EU's AI Act establishes rules for AI use and development, focusing on ethical standards and safety. It categorizes AI systems, highlights high-risk uses, and sets compliance requirements. This legislation, a first in global AI governance, signals a shift towards responsible AI innovation in Europe.

Javier Canales Luna

Following the landmark agreement in Brussels, Europe will become the first region in the world to govern AI with a comprehensive and legally binding regulatory framework

On 9 December 2023, following three days of intense and dramatic debate, the Council of the EU and the European Parliament reached a provisional agreement on the final version of the EU regulation on artificial intelligence (AI), the so-called EU AI Act. The deal is a major milestone in the EU's effort to become the first major regulator to set rules for the development and use of AI.

This blog post analyses the main elements of the provisional agreement, explains how it differs from previous versions of the Act, and maps out some open questions about the implications for AI companies and the future development of AI. Let’s dive in!

Background: The EU's Battle to Regulate Foundation Models

The EU AI Act reached the last stage of the legislative process amid unprecedented uncertainty. The first proposal of the Act, drafted by the European Commission in 2021, was presented at a time when generative AI tools like ChatGPT didn’t yet exist. Subsequent versions of the Act, drafted by the Council of the EU and the European Parliament, have tried to address the rise and widespread adoption of these powerful tools.

How to regulate so-called foundation models (i.e. models trained on a vast range of data and capable of performing a wide variety of general tasks, such as OpenAI’s GPT-4 and Google’s Gemini) was the central question in the last round of negotiations in Brussels.

The stakes were high, hence the tension during the negotiations. Diverging opinions among institutions and member states put the legislation in real danger of failing. On the one hand, the majority of the Parliament and a large group of member states argued that foundation models should be subject to strict rules. On the other hand, countries with strong AI industries, including France, Germany, and Italy, favoured industry self-regulation as the best way to foster innovation and harness the benefits of foundation models.

In the end, the provisional agreement aims to reconcile these two opposing views by incorporating new requirements to protect citizens while providing flexibility in certain areas and scenarios, as we explain in the following sections.

Understanding the EU AI Regulation: Definitions and Scope

One of the criticisms of the Commission's original proposal was its definition of AI, perceived as so vague that even simple software could fall within the scope of the Act.

The European Parliament addressed this issue by adopting the OECD definition of an AI system in its compromise text, published in May 2023 and analysed by Giskard in a separate article.

This definition has also been adopted in the provisional agreement. As a result, the EU AI Act now defines an AI system as:

“a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.

The territorial scope of the Act remains the same as in the original proposal. This means the Act applies to providers placing an AI system on the European market, irrespective of where the provider is located, as well as to providers and users in third countries where the system's output is intended for use in the EU. This extraterritorial reach follows the logic of other EU regulations, such as the GDPR.

By contrast, the material scope of the Act has been slightly narrowed. The final agreement states that the Act only applies to areas covered by EU law and that it should not affect the power of member states regarding national security. As such, the Act doesn’t apply to AI systems developed solely for military or defence purposes. Further, some clauses have been added to foster innovation: for example, AI systems developed specifically for scientific research and development now fall outside the scope of the regulation.
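
For illustration, these scope rules can be compressed into a short decision sketch. The following Python snippet is our own simplification (hypothetical function and parameter names, and certainly not legal advice), but it makes the applicability logic explicit:

```python
# Purely illustrative encoding of the scope rules described above, to make
# the applicability logic explicit. Our own simplification; hypothetical
# names, not an official decision procedure.
def act_applies(
    placed_on_eu_market: bool,
    output_intended_for_eu_use: bool,
    solely_military_or_defence: bool,
    solely_scientific_rnd: bool,
) -> bool:
    """Rough first-pass check of whether an AI system falls within the Act's scope."""
    # Exclusions from the material scope take precedence.
    if solely_military_or_defence or solely_scientific_rnd:
        return False
    # Territorial scope: EU market placement, or output intended for use in
    # the EU, regardless of where the provider is located.
    return placed_on_eu_market or output_intended_for_eu_use

# Example: a non-EU provider whose system's output is intended for EU users.
assert act_applies(False, True, False, False)
```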

Classifying the Risks of AI Models in the New EU AI Regulation

The most significant element of the EU AI Act, as proposed by the Commission in its first draft, is its horizontal, risk-based approach to classifying AI systems. Under this approach, the higher the risk of an AI system, the more requirements it must comply with to access the EU market.

In the provisional agreement, the four risk-based categories of AI systems remain, although there are substantial changes in the requirements, especially as regards high-risk systems. Moreover, a new two-tier approach has been created to classify foundation models (see below).


Four categories of AI systems in the EU AI Act. Source: European Commission
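
To make the tiered logic concrete, here is a minimal, purely illustrative sketch in Python. The tier names follow the Commission's four categories shown above; the obligation lists are simplified summaries drawn partly from the Commission's public materials rather than this article, and should not be read as legal text:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned practices (see next section)
    HIGH = "high"                  # strict requirements before EU market access
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # essentially unregulated

# Simplified, non-exhaustive summaries of what each tier entails; the
# limited-tier disclosure duty comes from the Commission's public materials.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskTier.HIGH: [
        "compliance requirements before market access",
        "technical documentation and data quality controls",
        "fundamental rights impact assessment",
    ],
    RiskTier.LIMITED: ["disclose to users that they are interacting with AI"],
    RiskTier.MINIMAL: [],  # no mandatory obligations
}

def requirements_for(tier: RiskTier) -> list[str]:
    """Return the (simplified) obligations attached to a risk tier."""
    return OBLIGATIONS[tier]
```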

Furthermore, the agreement clarifies the allocation of responsibilities and roles of the different actors involved in the value chains of AI systems. Equally, the text refines the relationship between responsibilities under the Act and other pieces of EU legislation.

Banned Practices and Exemptions in the EU AI Act

The provisional agreement keeps the category of banned AI practices. This list includes uses such as:

  • social scoring;
  • cognitive behavioural manipulation;
  • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
  • emotion recognition in the workplace and educational institutions;
  • some uses of predictive policing for individuals;
  • biometric categorisation to infer sensitive data;
  • ‘real-time’ remote biometric identification systems in publicly accessible spaces.

Despite the prohibition on the use of remote biometric identification (RBI) systems in publicly accessible spaces, the agreement allows some exceptions for the use of real-time and retrospective RBI for law enforcement purposes, subject to prior judicial authorisation and limited to a restricted list of serious crimes.

Requirements for High-Risk AI Systems

High-risk AI systems are defined as those that are used in sectors where the failure or misuse of the AI system could have serious negative consequences for individuals, society, or the environment.

The Act explicitly identifies high-risk applications and prescribes extensive disclosure and rigorous controls to ensure such systems are robust and trustworthy. This ambitious and detailed regulatory framework has drawn criticism from the AI industry, especially SMEs, which deem it complex, costly, and time-consuming.

To address these criticisms, the provisional agreement clarifies and adjusts the requirements to make them more technically feasible and less burdensome for AI providers to comply with, for example regarding the quality of data used to train AI systems or the technical documentation required from SMEs.

Yet easing the requirements could make risks more likely to materialise. To counter this, the Parliament managed to include a mandatory fundamental rights impact assessment, among other requirements. In addition, AI systems used to influence the outcome of elections and voter behaviour have been added to the list of high-risk systems.

Furthermore, some clauses have been included to increase transparency. Under the new provisions, citizens will have the right to lodge complaints about AI systems and to receive explanations of decisions based on high-risk AI systems. Equally, following the demands of the Parliament in its earlier compromise text, the agreement states that high-risk AI systems (as well as foundation models) will have to report on their energy and resource use.

Foundation Models in the EU AI Act: A Separate Category with Nuanced Rules

The crucial question during the negotiations was how to integrate the so-called foundation models into the AI Act.

Among the three previous versions of the Act (the Commission's first proposal, the Council's compromise text, and the Parliament's compromise text), only the Parliament advanced a differentiated, comprehensive framework for foundation models.

In particular, the Parliament proposed a two-tier approach under which providers of all kinds of foundation models would need to comply with strict technical obligations to ensure robust protection of fundamental rights, health and safety, the environment, democracy, and the rule of law. On top of these requirements, providers of foundation models used in generative AI systems would also need to comply with transparency measures.

The AI industry and countries like France and Germany have criticised the regime for foundation models envisioned by the Parliament for being too burdensome. In the end, the EU institutions have agreed on a more flexible approach intended to foster innovation in the field of generative AI.

Drawing on the Parliament’s text, the provisional agreement advances a separate, two-tier approach for general-purpose AI (or foundation) models that is overall less stringent than the regime for high-risk AI systems. Under the new regime, foundation models will only have to adhere to transparency measures, including drawing up technical documentation, complying with EU copyright law, and providing summaries of the data used to train the models.

So-called ‘high-impact’ foundation models (i.e. models like GPT-4, trained on large amounts of data and with complexity, capabilities, and performance well above the average, which can propagate systemic risks along the value chain) will have to comply with more stringent measures. In particular, providers of models in this category will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report serious incidents to the Commission, ensure cybersecurity, and report on their models' energy efficiency.
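
This two-tier regime can be summarised as a simple mapping. The sketch below paraphrases the obligations listed in the provisional agreement; the structure and names are our own illustration:

```python
# Illustrative summary of the two-tier regime for foundation models,
# expressed as a simple mapping. The obligation lists paraphrase the
# provisional agreement; the structure and names are our own.
BASELINE_OBLIGATIONS = [
    "draw up technical documentation",
    "comply with EU copyright law",
    "publish summaries of the data used for training",
]

HIGH_IMPACT_OBLIGATIONS = BASELINE_OBLIGATIONS + [
    "conduct model evaluations",
    "assess and mitigate systemic risks",
    "conduct adversarial testing",
    "report serious incidents to the Commission",
    "ensure cybersecurity",
    "report on energy efficiency",
]

def obligations(high_impact: bool) -> list[str]:
    """Obligations for a foundation model provider under the agreement."""
    return HIGH_IMPACT_OBLIGATIONS if high_impact else BASELINE_OBLIGATIONS
```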

Balancing AI Regulation with Fair Penalties in Europe

The AI Act lays down a three-level sanction structure, with different fines depending on the severity of the infringement. In the provisional agreement the sanctions have been slightly reduced overall, resulting in the following caps:

  • Non-compliance with the prohibition of certain AI practices: €35 million or 7% of global turnover, whichever is higher.
  • Infringements of the Act’s obligations: €15 million or 3% of global turnover, whichever is higher.
  • Supplying incorrect, incomplete or misleading information: €7.5 million or 1.5% of global turnover, whichever is higher.

In addition, the provisional agreement provides for more proportionate caps for infringements by SMEs and start-ups.
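
The ‘whichever is higher’ rule is easy to check with a quick calculation. The helper below is a hypothetical illustration using the figures from the provisional agreement:

```python
# Hypothetical helper illustrating the "whichever is higher" cap logic.
# The figures come from the provisional agreement; the function itself is
# our own illustration, not part of any official tool.
FINE_SCHEDULE = {
    "prohibited_practices": (35_000_000, 0.07),    # €35M or 7% of turnover
    "obligation_infringement": (15_000_000, 0.03), # €15M or 3%
    "misleading_information": (7_500_000, 0.015),  # €7.5M or 1.5%
}

def maximum_fine(infringement: str, global_turnover_eur: float) -> float:
    """Return the applicable cap: the higher of the fixed amount and the turnover share."""
    fixed_amount, turnover_share = FINE_SCHEDULE[infringement]
    return max(fixed_amount, turnover_share * global_turnover_eur)

# A company with €1bn global turnover facing a prohibited-practice fine:
# 7% of €1bn = €70M, which exceeds €35M, so the cap is €70M.
print(maximum_fine("prohibited_practices", 1_000_000_000))  # 70000000.0
```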

Conclusion: Law-Making is (Nearly) Over, the Time for AI Quality and Governance Has Come

The agreement on the AI Act is an important milestone in the EU's efforts to position itself in the global AI race.

As occurred with the GDPR, the EU wants the AI Act to become a global standard for the regulation of this technology. The text will still require some work at the technical level before it can be formally approved by the EU institutions, but a positive outcome seems much more plausible after the agreement. Moreover, now that competitors like the US and China are also moving to regulate AI within their own borders, the incentives to pass the Act are even more pressing.

Once the Act is approved, the provisional agreement provides for a two-year transition period before it becomes applicable. During that period, AI providers will have to prepare themselves to comply with the regulatory framework set out in the Act.

While the stringency of the requirements will vary depending on the category of the AI system and the size of the provider, testing will be one of the cornerstones of the proposed regulatory framework. In practice, every AI provider will need to perform tests to comply with the EU AI Act.

This is where AI-quality software like Giskard enters the scene. Giskard is an open-source, collaborative platform that helps AI developers and providers ensure the safety of their AI systems, mitigate the risks of AI bias, and build robust, reliable, and ethical AI models. Have a look at our product and get ready to be fully compliant with the upcoming EU AI Act.
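
As a taste of what such testing looks like in practice, here is a minimal sketch of Giskard's automated vulnerability scan on a toy tabular classifier. It is based on the open-source library's scan API at the time of writing; exact signatures may differ between versions, and the model and data are placeholders:

```python
# Minimal sketch of Giskard's automated vulnerability scan on a toy tabular
# classifier. Exact signatures may differ between library versions, and the
# model and data below are placeholders, not a real use case.
import giskard
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "age": [25, 40, 33, 58],
    "income": [30_000, 52_000, 41_000, 75_000],
    "approved": [0, 1, 0, 1],
})
clf = LogisticRegression().fit(df[["age", "income"]], df["approved"])

model = giskard.Model(
    model=lambda d: clf.predict_proba(d[["age", "income"]]),
    model_type="classification",
    classification_labels=[0, 1],
    feature_names=["age", "income"],
)
dataset = giskard.Dataset(df, target="approved")

# The scan probes for bias, robustness, and performance issues; real scans
# need a representative dataset, not four toy rows.
report = giskard.scan(model, dataset)
report.to_html("scan_report.html")  # shareable evidence for compliance files
```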

In the meantime, if you want to learn more about the state of AI regulation, we highly recommend checking out our dedicated articles.
