UK thinktank urges AI incident reporting system for Government

Wed 26 Jun 2024

A UK think tank has emphasised the need for an incident reporting system to log artificial intelligence (AI) misuse or malfunctions. Without it, the Department for Science, Innovation, and Technology (DSIT) may miss crucial insights into AI failures.

The Centre for Long-Term Resilience (CLTR) said incident reports are collected and investigated in other safety-critical industries, such as medicine and aviation, yet there is a ‘concerning gap’ in the UK’s regulatory plans for AI.

The CLTR said its mission is to ‘transform global resilience to extreme risks’ by working with governments and other institutions to improve relevant governance, processes, and decision making.

“AI has a history of failing in unanticipated ways, with over 10,000 safety incidents in deployed AI systems recorded by news outlets since 2014. With greater integration of AI into society, incidents are likely to increase in number and scale of impact,” said the CLTR in the report.

Critical Gap Highlighted by CLTR

The CLTR warned that without a robust incident reporting framework, DSIT will miss harmful incidents involving highly capable foundation models, such as bias, discrimination, or misaligned agents.

The CLTR also said DSIT will lack visibility into incidents arising from the Government’s own use of AI in public services, which could directly harm the public, for example by improperly revoking access to benefits or creating miscarriages of justice.

Without incident reporting, the CLTR added, DSIT will be less able to detect the use of AI in disinformation campaigns or biological weapon development, which may require an urgent response to protect UK citizens.

Finally, the Government could also miss incidents of harm from AI companions, tutors, and therapists, where deep levels of trust combined with extensive personal data could lead to abuse, manipulation, radicalisation, or dangerous advice.

“DSIT lacks a central, up-to-date picture of these types of incidents as they emerge. Though some regulators will collect some incident reports, we find that this is not likely to capture the novel harms posed by frontier AI,” said the CLTR.

The Benefits of AI Incident Reporting

The CLTR stated that incident reporting is a proven safety mechanism that supports the UK Government’s context-based approach to AI regulation.

“DSIT should prioritise ensuring that the UK Government finds out about such novel harms not through the news, but through proven processes of incident reporting,” said the CLTR.

Incident reporting will allow monitoring of AI-related safety risks in real-world contexts, providing a feedback loop for regulatory adjustments. It will also enable coordinated responses to major incidents, followed by root-cause investigations that yield cross-sector lessons.

It can also help identify early warnings of potential large-scale harms for use in risk assessments by the AI Safety Institute and the Central AI Risk Function.

The CLTR Recommends Next Steps

The CLTR outlined three key recommendations for immediate action by the UK Government.

Firstly, it suggested establishing a streamlined system for reporting AI incidents within public services. This could be achieved by expanding the Algorithmic Transparency Recording Standard (ATRS) to include a framework specifically designed for reporting incidents involving AI systems used in public sector decision-making.

The ATRS aims to facilitate transparency by enabling public sector bodies to openly disclose details about the algorithmic tools they employ.

According to the CLTR, these incident reports should be directed to a Government entity and potentially made accessible to the public to enhance transparency and accountability.
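To make the first recommendation concrete, below is a minimal sketch of what a structured incident record for public-sector AI systems might look like. The ATRS does not currently define an incident schema, so every field name here is an illustrative assumption rather than part of the standard or of the CLTR’s proposal.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical incident record for AI used in public sector decision-making.
# The ATRS defines no such schema; all field names here are assumptions.
@dataclass
class AIIncidentReport:
    system_name: str            # the AI tool involved, as recorded in the ATRS
    public_body: str            # the department or agency deploying the system
    incident_date: date         # when the incident occurred or was detected
    harm_category: str          # e.g. "bias" or "improper benefit revocation"
    description: str            # free-text account of what went wrong
    publicly_disclosable: bool  # whether the record could be published

# Example report of the kind of harm the CLTR highlights.
report = AIIncidentReport(
    system_name="benefits-eligibility-model",
    public_body="Example Department",
    incident_date=date(2024, 6, 1),
    harm_category="improper benefit revocation",
    description="Claimants were incorrectly flagged after a model update.",
    publicly_disclosable=True,
)
print(report)
```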

Secondly, the CLTR advised the Government to collaborate with UK regulators and experts to identify critical gaps in AI oversight. This step is crucial for ensuring comprehensive coverage of priority incidents and understanding the necessary stakeholders and incentives essential for establishing an effective regulatory framework.

Lastly, the CLTR proposed enhancing the capacity of DSIT to monitor, investigate, and respond to AI incidents. This may involve setting up a pilot AI incident database under DSIT’s central function, aimed at developing the necessary policy and technical infrastructure for collecting and addressing AI incident reports. 

Initially focusing on the most urgent gaps identified by stakeholders, this database could eventually encompass reports from all UK regulators.
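As an illustration of what such a pilot might involve, the sketch below sets up a tiny incident database and runs a cross-sector query of the kind that would support trend monitoring. The table layout and field names are assumptions made for illustration, not a DSIT specification.

```python
import sqlite3

# A minimal pilot incident database; in-memory here, persistent in practice.
# The schema is an illustrative assumption, not a proposed DSIT design.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE incidents (
        id INTEGER PRIMARY KEY,
        reported_at TEXT NOT NULL,    -- ISO 8601 timestamp of the report
        source TEXT NOT NULL,         -- e.g. a regulator or public body
        system_name TEXT NOT NULL,    -- the AI system involved
        harm_category TEXT NOT NULL,  -- e.g. bias, disinformation
        severity TEXT NOT NULL,       -- e.g. low, medium, high
        description TEXT NOT NULL     -- free-text account of the incident
    )
""")
conn.execute(
    "INSERT INTO incidents (reported_at, source, system_name, harm_category,"
    " severity, description) VALUES (?, ?, ?, ?, ?, ?)",
    ("2024-06-01T09:00:00Z", "sector regulator", "chatbot-tutor",
     "dangerous advice", "high", "Tutor bot gave unsafe instructions."),
)
# Aggregate across sources to spot emerging harm categories.
for harm, count in conn.execute(
    "SELECT harm_category, COUNT(*) FROM incidents GROUP BY harm_category"
):
    print(harm, count)
```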

UK Government Intervention in AI Safety

In May, the UK Government announced that the UK AI Safety Institute’s evaluation platform had been made available to the global AI community to ‘pave the way’ for the safe innovation of AI models.

By making the Inspect evaluations platform available to the global community, the Institute intends to accelerate the work on AI safety evaluations being carried out globally, leading to better safety testing and the development of more secure models.
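For readers curious what an evaluation on the platform looks like, here is a minimal sketch using the open-source inspect_ai package; the toy task, sample, and model name are assumptions made for illustration, and the package API may have evolved since release.

```python
# A toy evaluation with the open-source inspect_ai package
# (pip install inspect-ai); task content is purely illustrative.
from inspect_ai import Task, task, eval
from inspect_ai.dataset import Sample
from inspect_ai.scorer import match
from inspect_ai.solver import generate

@task
def capital_cities():
    return Task(
        dataset=[Sample(input="What is the capital of France?", target="Paris")],
        solver=generate(),  # ask the model for a direct answer
        scorer=match(),     # check the answer against the target
    )

# Running the evaluation requires model access credentials, e.g.:
# eval(capital_cities(), model="openai/gpt-4o")
```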

In February, the UK Government announced that the UK would offer grants to researchers studying how to protect society from AI risks.

The funding is also intended to be used to harness the benefits of AI, such as increased productivity. The most promising proposals will be developed into longer-term projects and receive further funding.

The news arrived the same week that Britain and South Korea reached a landmark agreement on the creation of a global network of AI Safety Institutes. At an AI summit in South Korea, ten countries and the European Union committed to collaborating on a network to improve the science of AI safety.

