UK Government’s AI White Paper sparks mixed reactions
Written by Stuart Crowley Wed 28 Jun 2023
The UK government recently introduced its pro-innovation White Paper on AI Regulation. This White Paper, spearheaded by the Secretary of State for Science, Innovation and Technology, Michelle Donelan MP, has sparked mixed reactions from various stakeholders.
The White Paper outlines five key principles for AI: Safety, security, robustness; Appropriate transparency and explainability; Fairness; Accountability and governance; and Contestability and redress.
These principles aim to guide the UK’s approach to AI, with a focus on creating a conducive environment for AI development and addressing potential risks. These risks can range from physical harm and national security threats to mental health issues.
Embracing the UK’s pro-innovation approach
The Competition and Markets Authority (CMA) has expressed support for the Government’s non-statutory approach, which aims to leverage and build on existing regulatory regimes.
In line with the principles of the White Paper, the CMA has begun considering how it might provide guidance on interpreting these principles in relation to its remit, with an emphasis on the need for clarity and consistency. The CMA also acknowledged the potential long-term, structural, and indirect economic effects of AI, as well as the potential infringements on consumer protection law.
In its response, the CMA also considered the potential challenges related to accountability for certain AI systems, especially those that may lead to collusive outcomes without explicit human coordination. The importance of transparency was highlighted, with the CMA recommending that regulators be equipped with the necessary resources and expertise to monitor potential harms and act where necessary.
The British Computer Society (BCS) also spoke out in favour of the AI White Paper. The Chartered Institute for IT welcomed the Government’s commitment to helping UK companies become global leaders in AI while developing within responsible principles.
Rashik Parmar MBE, Chief Executive of BCS, praised the cross-sectoral and flexible approach to AI regulation in the UK, noting the need for shared ethical values among AI professionals.
“We need to remember this future will be delivered by AI professionals – people – who believe in shared ethical values,” said Parmar.
He also saw great potential in the proposed multi-regulator sandbox for breaking down barriers and removing obstacles by creating a safe testing environment.
“It is right that the risk of use, not the technology itself, is regulated. Managing the risk of AI and building public trust is most effective when the people creating it work in an accountable and professional culture, rooted in world-leading standards and qualifications,” added Parmar.
Tackling the ethical risks of AI
However, the AI White Paper has also faced criticism. The Equality and Human Rights Commission (EHRC) has argued that the safeguards included in the paper are ‘inadequate’ and ‘fall short of what is needed to tackle the risks to human rights’.
“People want the benefits of new technology but also need safety nets to protect them from the risks posed by unchecked AI advancement. If any new technology is to bring innovation while keeping us safe, it needs careful oversight. This includes oversight to ensure that AI does not worsen existing biases in society or lead to new discrimination,” said Baroness Kishwer Falkner, Chairwoman of the EHRC.
Whilst seeing the AI White Paper as a ‘step in the right direction’, the EHRC has called for a greater focus on how AI will impact equality and recommends increased funding for regulators to manage the rapidly advancing technology.
“To rise to this challenge, we need to boost our capability and scale up our operation as a regulator of equality and human rights. We cannot do that without government funding,” added Baroness Falkner.
On a more international scale, the UK’s approach to AI regulation has been compared to the EU’s AI Act. While there are similarities between the two, the UK’s approach is described as being less detailed and more high-level, focusing more on the outcomes of AI applications rather than the characteristics of the AI systems themselves.
As the UK continues to navigate the rapidly advancing field of AI, ongoing discussions and adaptations will be crucial to achieve a robust and effective regulatory framework.