
UK appoints tech heavyweights to Frontier AI Taskforce

Thu 7 Sep 2023

Image: Turing Award laureate Yoshua Bengio and GCHQ Director Anne Keast-Butler, new members of the Frontier AI Taskforce's External Advisory Board

The UK has appointed a team of technology leaders to its new Frontier AI Taskforce, with the aim of advising Government on the risks and opportunities of AI.

Oxford academic Yarin Gal was announced as the first Taskforce Research Director, while Cambridge academic David Krueger will take a consultative role. They will build a team to investigate frontier AI risks such as cyberattacks.

The Frontier AI Taskforce, formerly known as the Foundation Model Taskforce, will also identify new uses for AI in the public sector.

“These new appointments are a huge vote of confidence in our status as a flagbearer for AI safety as we take advantage of the enormous wealth of knowledge we have both at home and abroad,” said Michelle Donelan, Technology Secretary.

In just 11 weeks, the Taskforce hired a team of AI researchers to address the risks posed by AI and ensure the UK Government is at the cutting edge of AI safety.

“We’re working to ensure the safe and reliable development of foundation models but our efforts will also strengthen our leading AI sector,” said Ian Hogarth, who was recently appointed Frontier AI Taskforce Chair.

Anthropic, DeepMind, and OpenAI are set to grant access to their AI models to support the researchers in their work.

“Our efforts will also strengthen our leading AI sector, and demonstrate the huge benefits AI can bring to the whole country to deliver better outcomes for everyone across society,” added Hogarth.

Turing Award laureate Yoshua Bengio and GCHQ Director Anne Keast-Butler have also been selected to join the Taskforce’s new External Advisory Board.

“The safe and responsible development of AI is an issue which concerns all of us.

“We have seen massive investment into improving AI capabilities, but not nearly enough investment into protecting the public, whether in terms of AI safety research or in terms of governance to make sure that AI is developed for the benefit of all,” said Bengio.

The External Advisory Board will also include:

  • Matt Clifford, Prime Minister’s Representative for the AI Safety Summit, appointed as Vice-Chair
  • Matt Collins, Deputy National Security Adviser
  • Alex Van Someren, Chief Scientific Adviser for National Security
  • Dame Helen Stokes-Lampard, Chair of the Academy of Medical Royal Colleges
  • Paul Christiano, Head of the Alignment Research Centre

All board members will share evidence-based advice within their areas of expertise.

“Expert guidance around AI will play a critical role in gaining trust from both business and the wider public.

“The combination of academia, industry experts, Government and business within this UK initiative is a great step to identifying risks and opportunities and to provide education as this technology develops and matures further at a rapid pace.

“Collaboration between these four groups can make a huge impact, resulting in further trust in AI. It could help drive successful and safe deployments to deliver potentially significant economic benefits,” said Sridhar Iyengar, Managing Director for Zoho Europe.

What is the Frontier AI Taskforce?

The Frontier AI Taskforce is a group of leading experts backed by £100 million in funding with the goal of ensuring the safe and reliable development of frontier AI models. It was announced by Prime Minister Rishi Sunak in April.

“Artificial Intelligence can act as a useful business tool, adding huge value when used correctly by offering tools to increase efficiencies such as business forecasting, fraud detection, and sentiment analysis. However, there are still concerns around safety and how the ethical use of AI is promoted and governed,” added Iyengar.

With international collaboration described as the ‘backbone’ of the Government’s approach to AI safety, the Taskforce has enlisted industry expertise through long-term partnerships with US-based companies Trail of Bits and ARC Evals.

The Government expects these partnerships to unlock expert advice on the cybersecurity and national security implications of foundation models.

This is particularly important after AI was flagged as a ‘chronic risk’ in the UK National Risk Register for the first time. These dangers include the potential for increased misinformation and a decline in economic competitiveness.

These initiatives follow concerns that the UK could be left behind in the advancement of AI due to the lack of accelerated regulation. MPs have warned that the Prime Minister’s global AI ambitions could be at risk unless new AI laws are introduced by November.


