The UK government has unveiled a pro-innovation regulatory framework for AI intended to drive innovation while maintaining public trust. “AI has the potential to make Britain a smarter, healthier, and happier place to live and work,” said Michelle Donelan, Science, Innovation, and Technology Secretary. “Artificial intelligence is no longer the stuff of science fiction, and the pace of AI development is staggering, so we need to have rules to make sure it is developed safely.”
The new approach, detailed in the AI regulation white paper, is based on five principles:
- Safety – Ensuring AI applications function securely, safely, and robustly.
- Transparency and explainability – Organizations deploying AI should communicate when and how it’s used and be able to explain a system’s decision-making process.
- Fairness – Ensuring compatibility with the UK’s existing laws, including the Equality Act 2010 and UK GDPR.
- Accountability and governance – Introducing measures to guarantee appropriate oversight of AI.
- Contestability and redress – Providing clear routes for people to dispute outcomes or decisions generated by AI.
Instead of creating a single new regulator, these principles will be applied by existing regulators in their respective sectors. The government has allocated £2m ($2.7m) to fund an AI sandbox, where businesses can test AI products and services.
Over the next year, regulators will issue guidance and other resources to help organizations implement these principles. Legislation might also be introduced to ensure consistent consideration of the principles.
A consultation has also been launched by the government on new processes to improve coordination between regulators and evaluate the framework’s effectiveness.
Emma Wright, Head of Technology, Data, and Digital at law firm Harbottle & Lewis, commented, “I do welcome industry-specific regulation rather than primary legislation covering AI (such as the EU is proposing). However, I am concerned that this is essentially another consultation paper calling for regulators to produce more guidance when entrepreneurs and investors are looking for greater regulatory certainty.”
The UK’s AI industry is booming, currently employing over 50,000 people and contributing £3.7bn to the economy in 2022. Britain is home to twice as many companies offering AI services and products as any other European country, with hundreds of new firms created each year.
There are, however, concerns that AI could pose risks to privacy, human rights, and safety, as well as questions about the fairness of using AI tools to make decisions that affect people’s lives, such as assessing loan or mortgage applications.
The proposals in the white paper aim to address these concerns and have been warmly welcomed by businesses, which previously called for more coordination between regulators to ensure effective implementation across the economy.
Lila Ibrahim, COO at DeepMind, said, “AI has the potential to advance science and benefit humanity in numerous ways, from combating climate change to better understanding and treating diseases. This transformative technology can only reach its full potential if it is trusted, which requires public and private partnership in the spirit of pioneering responsibly.”
Grazia Vittadini, CTO at Rolls-Royce, added, “Both our business and our customers will benefit from agile, context-driven AI regulation. It will enable us to continue to lead the technical and quality assurance innovations for safety-critical industrial AI applications while remaining compliant with the standards of integrity, responsibility, and trust that society demands from AI developers.”
The new framework aims to provide protections for the public without stifling the use of AI in developing the economy, better jobs, and new discoveries.
The full text of the UK’s AI regulation white paper is available on the UK government website.