October 4, 2023

Towards AI Regulation: How Countries are Shaping the Future of Artificial Intelligence

In this article, we present the challenges and approaches to AI regulation in major jurisdictions, including the European Union, the United States, China, Canada and the UK. We also explore the growing impact of AI on society and how AI quality tools like Giskard help ensure reliable models and compliance.

Javier Canales Luna

Governments around the world are moving steadily to regulate the development of AI and mitigate its potential risks.  

The release of ChatGPT took the world by storm in late 2022. Developed by OpenAI, ChatGPT is a next-generation AI chatbot capable of almost any task you can think of, from writing essays and developing end-to-end marketing strategies to summarising complex texts and composing compelling poems. With such a wide range of capabilities, it’s not surprising that the tool rapidly became the fastest-growing web application in history.

But ChatGPT was only the starting point of the ongoing AI revolution. In 2023, the number of tools and applications based on so-called generative AI (a type of AI specifically designed to create content) has kept growing. Many social and economic activities are likely to experience deep changes as a result of the massive adoption of these powerful tools.

Despite their impressive capabilities and potential benefits, these tools also come with significant societal, economic, and environmental risks that need to be addressed carefully. However, controlling this rapidly evolving technology is a difficult task, and the way to do it is contested. 

For example, in March 2023, a group of scientists and tech gurus signed an open letter calling for a six-month pause on the development of new, more powerful AI systems in order to evaluate their risks. Six months on, the impact of the letter seems limited.

Despite the voices in the tech industry that advocate for a self-governance approach, the case for AI regulation is gaining momentum. From the EU and the US to China and the UK, more and more governments around the world are currently debating regulatory frameworks to control the development of AI and mitigate its associated risks and challenges. 

In this post, we will analyse the virtues of AI regulation, as well as the challenges to establishing effective legal standards. We will also have a look at the state of AI regulation in some of the most advanced economies. Finally, we will discuss the role of AI quality tools like Giskard in future regulatory frameworks.  Let’s start!

Why should AI be regulated? The importance of AI Regulation in addressing AI risks


As society becomes more familiar with AI tools, concerns over the potential dangers of AI are growing rapidly. Notwithstanding the important role of the tech industry in addressing these challenges, the involvement of public authorities seems indispensable, given the profound, long-term implications that the widespread adoption of AI has for individuals and societies.

Below you can find a list of the most compelling reasons for advancing AI regulation.

  • Impact on fundamental rights. Companies and authorities are already using AI systems to support all kinds of decision-making processes, such as mortgage lending, public benefits granting, and law enforcement activities. These AI-based decisions can have a significant impact on fundamental rights, including the right to life, privacy, equality and non-discrimination.
  • A danger to democracy. AI has been used to create fake news and misinformation campaigns. As generative AI tools become more accessible and ubiquitous, malicious uses of AI can pose a severe risk to democracies, furthering polarisation and social unrest.
  • Biased results. There is already ample evidence of the discriminatory harm that AI can cause to minority and marginalised groups. This is associated with the problem of algorithmic bias, which occurs when the data used to train AI systems is not representative.
  • AI accountability. By providing clear rules on how to develop, deploy and use AI, AI regulation can ensure a level playing field in terms of AI safety, trust, and accountability. 
  • AI transparency. AI systems are often powered by so-called black-box models, which are difficult to interpret and evaluate. Furthermore, opacity seems to be the norm in a sector where everyone is competing fiercely to take the lead. AI regulation could ensure AI companies adopt the transparency measures required to audit AI systems and assess their costs and impact on society.
  • AI and climate change. Connected with the previous point is the relationship between AI and climate change. As powerful tools like ChatGPT become mainstream, there are increasing concerns about the resources (namely, electricity and water) required to develop and run these systems. Rules on environmental transparency could be a first step towards bringing more scrutiny to this pressing issue.

Balancing innovation with AI risk management: challenges in implementing AI Regulation

Despite the strong case for AI regulation, controlling the development of emerging technologies like AI is an extremely complex task for lawmakers. Here is a list of the most significant challenges to advancing effective AI regulation.

  • The protection vs. innovation dilemma. The mission of public authorities is to protect society from the negative effects of technologies like AI. However, there is the risk that putting AI companies under heavy regulatory pressure may hinder innovation, thereby limiting the potential benefits of AI. 
  • Choosing the right time. While emerging technologies develop freely, in step with scientific progress, law-making is normally a slow process involving many parties and stakeholders. It can take years for a proposal to make it to the end of the legislative process, and by then the technology may already have evolved, making the process even lengthier and more complex. For example, when the European Commission released its proposal for the EU AI Act in 2021 (see more in the next section), generative AI tools like ChatGPT didn’t even exist, leaving policy-makers with the additional challenge of updating the text mid-process.
  • The globalised and fragmented dimension of AI. States have historically used their sovereignty to regulate issues that take place within their borders. However, this approach falls short when it comes to handling technologies that are intrinsically global, such as AI. A question that a user prompts to ChatGPT will probably be processed in a data centre located in the US, with thousands of microchips manufactured in Taiwan from minerals extracted in Argentina, China and Congo. Addressing the highly fragmented and cross-border dimension of AI without stepping into the jurisdiction of other countries often requires global cooperation between states, companies and other stakeholders, adding another layer of complexity to the law-making process.

From the EU AI Act to Worldwide perspectives: AI Regulations across different jurisdictions

In recent months, more and more governments have started to take action to regulate AI. In this section, we analyse the state of AI regulation around the world. This is a non-exhaustive list, covering only the latest developments by major regulators.

European Union

In 2021, the European Commission presented its proposal to regulate AI. The so-called EU AI Act aims to exploit the benefits of AI while mitigating its potential risks. 

Considered the first AI regulation proposed by a major regulator, the Act’s most significant feature is the adoption of a horizontal, risk-based approach to classifying AI systems. In particular, the Commission proposed four categories of AI systems depending on their risk:

  • Unacceptable risk AI systems. These systems are forbidden.
  • High-risk AI systems. Systems in this category have to comply with a series of requirements before being placed on the EU market. The core of the Commission’s proposal focuses on this type of AI system.
  • Limited-risk AI systems. These systems only have to comply with certain transparency obligations.
  • Low-risk AI systems. These systems are allowed without restrictions, yet AI providers are encouraged to adhere to voluntary codes of conduct.
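
To make the tiering concrete, here is a minimal, purely illustrative Python sketch of how the proposal’s four risk categories map to their headline obligations. The category names come from the Commission’s proposal; the enum and helper function are hypothetical, not part of the Act or any official tooling.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers proposed by the European Commission."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # pre-market requirements apply
    LIMITED = "limited"            # transparency obligations only
    LOW = "low"                    # allowed; voluntary codes encouraged

# Headline obligation per tier, paraphrasing the proposal.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Forbidden: may not be placed on the EU market.",
    RiskTier.HIGH: "Must meet a series of requirements before market entry.",
    RiskTier.LIMITED: "Must comply with certain transparency obligations.",
    RiskTier.LOW: "No restrictions; voluntary codes of conduct encouraged.",
}

def headline_obligation(tier: RiskTier) -> str:
    """Return the headline obligation attached to a risk tier."""
    return OBLIGATIONS[tier]

# Example: a CV-screening system would likely fall in the high-risk tier.
print(headline_obligation(RiskTier.HIGH))
```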

The Commission’s draft didn’t include any mention of generative AI tools. This void was filled by the European Parliament, which presented its compromise text of the EU AI Act in May 2023 and formally adopted it in June 2023. As we explained in a previous article, most changes in the Parliament’s position address new concerns raised by the massive adoption of generative AI systems, incorporating a separate risk category for so-called foundation models, that is, the underlying technology of tools like ChatGPT.

Pyramid of risks under the EU AI Act. 

The final version of the EU AI Act will be negotiated by the European institutions in the upcoming trilogue, that is, informal consultations between the Commission, the Council of the EU, and the Parliament. The aim is to reach an agreement by the end of 2023.

United States

The dilemmas of how to regulate AI seem more acute in the US, where most Big Tech companies are based and where self-governance and voluntary rules have historically played a more important role than in Europe.

According to the US government, AI regulation is still in its ‘early days’. So far, the White House has only advanced a blueprint for an AI Bill of Rights, setting out five principles that should guide the design, use and deployment of AI systems. However, the document has no legal force and is therefore not enforceable.

Concerns over the potential risks of AI have also sparked intense debate in both chambers of the US Congress, with lawmakers proposing several legislative and regulatory frameworks in areas such as AI testing, transparency, liability and AI-based misinformation.

Among the most ambitious initiatives is the Algorithmic Accountability Act, which aims to increase AI transparency and prevent potential bias. Under the act, which is still at the draft stage, AI providers would have to undergo a risk assessment before deploying AI systems in the market, following a process that echoes the one laid down in the EU AI Act.

Meanwhile, at the state level, several states are already advancing proposals to regulate AI. The most significant initiative comes from California, a historical hub for Big Tech and AI innovation, whose government recently signed an executive order to advance AI regulation in the state, as it did earlier in the field of data protection with the adoption of the California Consumer Privacy Act.

US state-by-state AI legislation. Source: BCLP Law

While Washington and the states keep weighing the best way to regulate AI, top AI companies, including Meta, OpenAI, and Google, have recently announced the implementation of voluntary safeguards.

China

The slow progress in the EU and the US contrasts with the pace of developments in China, which is rapidly building a solid and comprehensive framework for regulating AI.


The country has already adopted rules on AI recommendation algorithms and on AI systems designed to synthetically generate images and video. In July 2023, the Cyberspace Administration of China (CAC) published groundbreaking rules to tame the new wave of ChatGPT-like tools: the Interim Measures for Generative Artificial Intelligence Service Management.

The regulation lays down rules for providers offering AI services to the public in China. In many ways, the technical requirements for AI providers (e.g. risk assessment, record keeping, mitigation and compliance obligations) resemble the ones proposed by the EU.

However, other aspects of the framework are singular, reflecting the political reality of China. In particular, the obligations for providers to adhere to socialist values, respect social morality and ethics, and not generate content inciting against the government are diametrically opposed to Western values, and could thus create regulatory headaches for AI providers from abroad.

Although its legislative and regulatory initiatives are often overlooked or dismissed by its Western counterparts, China is rapidly becoming a world-class AI player, and its regulation is likely to place the country in a position of geopolitical influence.

Canada

Canada is currently debating its own AI act, the Artificial Intelligence and Data Act (AIDA).

The Act offers a balanced approach to regulating AI without hampering innovation and market opportunities for Canadian businesses. It seems very much inspired by the EU AI Act, following a similar risk-based approach to classifying AI systems.

Most of the requirements address the risks of so-called “high-impact” AI systems. Providers of these systems will have to comply with strict identification, assessment, record-keeping, mitigation and compliance obligations, facing severe monetary penalties if they don’t follow the rules.


The Act is still in the early stages of the legislative process and is not expected to come into force before 2025. In the coming months, it will undergo new revisions, as there are still many gaps and uncertainties that haven’t been addressed in either the draft text or its companion paper. For example, it’s not clear whether generative AI tools will be covered by the Act.

United Kingdom

The UK’s current plans for AI regulation follow a substantially different approach from that of its European neighbours. While the EU AI Act sets out a horizontal, risk-based approach to regulating AI, the UK proposes a contextual, sector-based regulatory framework built on the existing network of regulators and laws.


The UK approach is detailed in the white paper AI regulation: a pro-innovation approach, published in March 2023. According to the document, the central government will be responsible for developing general guidelines and AI principles on issues such as transparency, fairness, safety, accountability, security and privacy. It will be up to existing regulators to implement them according to the specifics of each sector.

In parallel, the upcoming Data Protection and Digital Information Bill, currently debated in the UK Parliament, is also likely to have a significant impact on the governance of AI in the country.

Conclusion: The Role of Giskard in Future AI Regulation

As we have seen in this article, AI regulation is rapidly gaining momentum. Major regulators across the world are steadily advancing laws and frameworks to protect citizens from the potential dangers of AI. We have only analysed the developments in five jurisdictions, but more countries are following suit, including India, Japan and Brazil.

While there are notable differences between the regulatory frameworks discussed above, all share the aspiration of ensuring AI safety, trust and accountability. Achieving these goals will be a challenging undertaking, given the complexity of next-generation AI systems.

To ensure regulatory compliance in such a highly technical field, powerful tools specifically designed to monitor and evaluate AI systems will be required. Despite the differences across jurisdictions, most of the AI regulations analysed in this article call for a comprehensive quality management system covering a wide array of tasks, including risk management, quality control, test and validation protocols, data management, record keeping, and accountability. An illustrative example of the features and capabilities that these systems will require can be found in Article 17.1 of the proposed EU AI Act:

Article 17.1 of the proposed EU AI Act.

Here is where AI quality tools like Giskard enter the scene. Designed as a developer-oriented solution for quality testing, Giskard aims to help AI providers become fully compliant with upcoming AI regulations. Giskard allows you to evaluate AI models collaboratively, test your systems with exhaustive, state-of-the-art test suites and protect AI systems against the risk of bias. Have a look at our product and get ready for the age of AI regulation.
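
To give a flavour of what this looks like in practice, below is a minimal sketch using Giskard’s open-source Python library to scan a model for vulnerabilities (such as performance bias or robustness issues) and turn the findings into a re-runnable test suite, the kind of repeatable evidence that record-keeping and validation obligations call for. The model and data here are toy placeholders, and exact API names and parameters may vary between Giskard versions.

```python
import pandas as pd
import giskard

# Toy loan-approval dataset and dummy model: swap in your own.
df = pd.DataFrame({
    "age": [25, 40, 60, 33],
    "income": [30_000, 55_000, 80_000, 42_000],
    "approved": [0, 1, 1, 1],
})

def predict(data: pd.DataFrame):
    # Returns [P(rejected), P(approved)] for each row.
    proba = (data["income"] > 40_000).astype(float)
    return pd.concat([1 - proba, proba], axis=1).to_numpy()

# Wrap the model and dataset in Giskard objects.
model = giskard.Model(
    model=predict,
    model_type="classification",
    classification_labels=[0, 1],
    feature_names=["age", "income"],
)
dataset = giskard.Dataset(df, target="approved")

# Scan for vulnerabilities such as bias, robustness or data leakage.
results = giskard.scan(model, dataset)

# Turn the findings into a test suite that can be re-run on every
# model update, producing a paper trail for compliance reviews.
suite = results.generate_test_suite("Compliance test suite")
suite.run()
```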


