A Comparative Analysis of the UK’s and EU’s Approaches to Artificial Intelligence Regulation (Part 1)

In the last four months, the EU and UK have both released guidance to artificial intelligence providers, indicating which of the three goals discussed below they wish to prioritise. In this two-part series, we will begin by outlining the three major goals of legislation for emerging industries, then take a deep dive into what the UK’s white paper on artificial intelligence means for the country’s approach going forward.

When crafting regulations for an emerging and growing industry such as artificial intelligence, policymakers must walk a tightrope – balancing the promotion of innovation with safeguarding and ethical considerations. Regulations for such industries must achieve three major goals:

To encourage public confidence in artificial intelligence systems

With new industries, particularly those which utilise complex and unfamiliar technologies, the key to growth and widespread adoption is public confidence. Governments must bear this in mind when creating regulations for AI system developers: the higher the legal standards that must be met, the stronger the public’s confidence in the resulting systems.

To foster a nurturing and enticing environment for AI development

With the massive promise of this industry, all the major world players will be looking to establish themselves as the global hub of artificial intelligence. As such, when creating regulations, governments must consider whether the rules are attractive to developers, businesses, and startups. Regulations must be lax enough to give innovators room to breathe and be creative within their industry, not only to attract the necessary talent but also to allow for advancements in the technology itself.

To future proof legislation

In an industry evolving as rapidly as artificial intelligence, regulators and government bodies must ensure that any legislation passed – whether pro-safety, pro-business, or pro-environment – will remain relevant in years to come. This is key to protecting both businesses and the public and, particularly importantly for the UK, to avoiding passing the buck to judges once the legislation becomes outdated.

Additionally, when creating legislation, governments may also have to consider its international compliance and coherence. For example, if definitions of key terms do not align, or even conflict, multinational artificial intelligence companies will be unable to operate in both jurisdictions without incurring heavy costs. This could have devastating effects for a nation, as it could mean losing the operations of industry titans within its borders.

United Kingdom’s March 2023 White Paper

The UK outlined its vision for the future of AI regulation in a recent white paper. The paper established five major principles which the government expects artificial intelligence developers to adhere to, and relevant regulatory bodies to enforce:

  1. Safety, security and robustness – regulators must introduce rules to ensure AI systems are technically secure.
  2. Appropriate transparency – regulators must encourage consistent transparency from AI startups at every stage of the life cycle, as this builds public trust.
  3. Fairness – the rights of individuals must be upheld (e.g. protection from discrimination).
  4. Accountability and governance – regulators must issue clear expectations for compliance and good practice.
  5. Contestability and redress – where AI-produced content is harmful, regulators must be clear on the rights of impacted third parties.

Interestingly, the focus of the paper was on enhancing the powers and responsibilities of existing regulatory bodies such as the FCA, the CMA, and the ICO in the AI space. Rather than announcing plans for statutory guidance, the UK government made it clear that it plans to rely solely on its regulatory bodies to achieve the above principles in the near future.

The paper states that the government expects all relevant regulatory bodies to take the lead on artificial intelligence governance in the United Kingdom, with the expectation that regulators will publish guidance and tools for AI development compliance within the next 6–12 months. Additionally, instead of establishing an overarching authority on artificial intelligence in the country, the government wishes regulators to work together to ensure their rules align where their jurisdictions overlap, as otherwise the regulatory landscape for artificial intelligence businesses will become far too complex.

With this white paper, the United Kingdom keeps its guidance on artificial intelligence to broad strokes only. For example, the document did not establish a clear definition of what it considers the technology to be, using only the descriptors “adaptive” and “autonomous”. Further, the paper did not break the artificial intelligence industry down into subsectors, as the European Union later did. Instead, it treated the industry as one beast, with limited mention of areas the EU demonstrated deserve special attention – such as generative AI. This further signals that the government plans to take a more hands-off approach to the industry for now.

In the next article and part two of this series, we will discuss the European Union’s AI Act, and compare the two approaches to regulating the industry.