A Comparative Analysis of the UK and EU’s Approaches to Artificial Intelligence Regulation (Part 2)

European Union’s June 2023 Artificial Intelligence Act

In June 2023, the European Parliament approved the Artificial Intelligence Act, the world’s first comprehensive set of laws on artificial intelligence. The goals of the EU in terms of AI development are made clear in this Act: though the EU wishes to be a hub of artificial intelligence innovation, it has made it clear it will not pursue this at the expense of the safety and protection of the citizens living in its jurisdiction.

Key examples of this approach include:

  • Heavy restrictions on risky AI areas and uses,
  • The establishment of the EU AI Office to monitor AI development compliance,
  • A requirement that AI systems be overseen by people rather than operate fully autonomously.

By breaking down the market into levels of risk, the Act targets each level of danger posed by AI with the appropriate strength of regulation. The five levels of risk and the associated regulation are as follows:

  • Minimal Risk – Example: any AI not covered by the other categories. Regulation: no specific compliance beyond standard statutory expectations.
  • Limited Risk – Example: deep fakes, AI chatbots. Regulation: must inform users they are interacting with AI, to allow for informed consent.
  • High Risk – Example: AI systems used in products covered by EU product safety legislation, and AI systems in industries such as education, law enforcement, legal interpretation assistance, and biometric identification. Regulation: these companies must be registered with the EU and pass a pre-market assessment covering a risk management system, human oversight, and data governance. This is the most far-reaching regulatory requirement seen in any country to date.
  • Unacceptable Risk – Example: social scoring systems, cognitive behavioural manipulation, and real-time biometric identification. Regulation: prohibited from operating within the EU.
  • Generative AI – Example: machine learning systems used as chatbots and for video, image, and software-code creation. Regulation: disclosure of AI-generated content, prevention of the generation of illegal content, and publication of summaries of copyrighted training data.

Fines for non-compliance with these rules can be up to €30 million or 6% of global annual turnover – a significant sum which suggests the EU is taking this regulation seriously.

Key Differences in Approach to Artificial Intelligence

EUROPEAN UNION
The proposed legislation focuses primarily on strengthening rules around data quality, transparency, human oversight and accountability. It also aims to address ethical questions and implementation challenges in various sectors ranging from healthcare and education to finance and energy.

UNITED KINGDOM
The UK has no plans for statutory implementation. Whilst it does wish to ensure there is protection against AI safety issues, it largely leaves this to regulators, with only a broad-strokes idea of how they should proceed – it is clear that statutory regulation is a lesser priority for the UK.

As mentioned in part 1, one key consideration of legislation is global synergy of definitions and regulation. Even with just the limited regulation we have today, we are already seeing divergence and conflicting rules.
Differing definitions of key aspects of an AI company are already emerging from the EU and UK, including technical robustness, diversity, security, governance, and contestability. It is obvious to any multinational business that these unclear terms will present challenges for AI entities operating across both jurisdictions in years to come.

Conclusion

In the intricate world of AI regulation, balancing innovation and ethics is key. As the EU and UK step forward with distinct and differing approaches, challenges and opportunities will arise. Crafting regulations for emerging industries necessitates building public confidence, nurturing innovation, and future-proofing – the level of significance each jurisdiction places on these individual goals is beginning to emerge. Regardless, it is clear that harmonizing international compliance remains a daunting task.
The UK’s white paper takes a broad-strokes approach, empowering regulatory bodies to champion AI governance. In contrast, the EU’s comprehensive AI Act segments risk, offering targeted regulation based on potential threats. The EU’s emphasis on safety aligns with its vision for AI innovation that doesn’t compromise citizens’ protection.
The divergent paths suggest different AI development futures. The UK’s reliance on regulatory bodies could enable AI innovation to thrive in the country, though it may come at the cost of safety standards (this remains to be seen). The EU’s stringent approach prioritizes safeguarding and responsible AI deployment, yet may involve increased compliance burdens.
In this evolving narrative, the EU and UK begin to unfold as contrasting protagonists, pursuing unique aims in the quest for AI regulation. The outcomes could shape not only the future of AI within their borders but also their global positions as AI hubs.