Google DeepMind, Anthropic, OpenAI, Microsoft, Amazon, and Meta share AI safety policies

Written by Mon 30 Oct 2023

Google DeepMind, Anthropic, OpenAI, Microsoft, Amazon, and Meta have published artificial intelligence (AI) safety policy updates following pressure from the UK Government. The technology companies hope to boost transparency and encourage the sharing of best practices within the AI community.

The updates outlined best practices for AI companies, including establishing responsible capability scaling, a new framework for managing frontier AI risks. Several companies have already implemented frameworks to tackle these risks.

Frontier AI refers to highly capable AI models that can perform a wide range of tasks, matching or surpassing today’s most advanced models.

“AI companies providing increased transparency of their safety policies is a first step towards providing assurance that these systems are being developed and deployed responsibly,” said Ian Hogarth, Chair of the Frontier AI Taskforce.

The UK Government requested that AI companies outline their policies across nine areas of AI safety. These areas are:

  • Responsible Capability Scaling offers a risk management framework for organisations as they scale up the capabilities of their frontier AI systems, allowing companies to address risks before they materialise.
  • Model Evaluations and Red Teaming assist in assessing AI model risks, aiding informed decisions on training, securing, and deployment.
  • Model Reporting and Information Sharing enhances Government oversight of advanced AI development and deployment while empowering users to make informed decisions about AI system usage.
  • Security Controls Including Securing Model Weights are key for reinforcing the safety of an AI system. If not deployed securely, AI models risk being stolen or leaking sensitive data.
  • Reporting Structure for Vulnerabilities facilitates the identification of safety and security issues in AI systems by external parties.
  • AI-generated Material Identifiers offer context regarding the origin or alteration of content. This aids in the prevention of deceptive AI-generated content creation and distribution.
  • Prioritising Research on Risks Posed by AI helps identify and rectify frontier AI risks.
  • Preventing and Monitoring Model Misuse is essential for companies to address, as AI systems can cause harm when misused.
  • Data Input Controls and Audits aid in detecting and eliminating training data that might enhance the dangerous capabilities of frontier AI systems.

UK Government Publishes Safety Processes for AI Companies 

The AI safety policies, requested by Technology Secretary Michelle Donelan, coincided with the UK Government’s release of a document titled ‘Emerging processes for frontier AI safety’.

This paper complemented the organisations’ policies by outlining safety processes for AI companies and offering guidance on maintaining the safety of their models.

The Government acknowledged the potential benefits of AI but recognised the risks of its unchecked development; the paper was published in response to this concern.

The paper advised firms to proactively define monitored risks, specify who should be alerted when such risks are identified, and determine the thresholds for when developers should slow down or pause their work until improved safety measures are established.

“This is the start of the conversation and as the technology develops, these processes and practices will continue to evolve, because in order to seize AI’s huge opportunities we need to grip the risks,” said Michelle Donelan, Secretary of State for Science, Innovation, and Technology.

The paper recommended that AI developers employ third parties to try to hack their systems in an attempt to identify sources of risk and potential harmful impacts. Developers were also advised to provide additional information on whether content has been AI-generated or modified. 

The paper discussed processes and practices for AI safety that some organisations have already implemented and that remain under discussion within academia and broader civil society.

Some processes and practices apply to AI organisations more broadly, while others, such as responsible capability scaling, are designed specifically for frontier AI systems.

The paper addressed the technical challenges and the absence of established best practices in advanced AI development and related decision-making. It also highlighted the risk of AI models becoming too complex for humans to comprehend and control as they rapidly advance.

“It is challenging to talk about how to manage safety when we are dealing in some cases with systems that are too advanced for us to have yet built – but it’s important to have the vision and courage to anticipate the risks,” said Adam Leon Smith of the British Computer Society, The Chartered Institute for IT, and Chair of its Fellows Technical Advisory Group.

The paper is intended to be an early contribution to the discussion and will require regular updates to evolve with the ever-changing nature of AI technology.

UK to Establish AI Safety Institute

The policies published by frontier AI companies have initiated a discussion about safety policies. As announced by Prime Minister Rishi Sunak, the newly established AI Safety Institute is expected to advance this conversation and further develop the firms’ safety policies through its programme of research, evaluation, and information sharing, working closely with the Government’s AI Policy team.

The AI Safety Institute will also look to share information with international partners, policymakers, private companies, academia, and civil society as part of efforts to collaborate on AI safety research.

New findings indicated international support for a Government-backed AI Safety Institute, whose primary role will be to assess the safety of advanced AI systems. A total of 62% of Britons surveyed backed the concept.

Across all countries surveyed, most respondents supported the idea, with agreement ranging from 59% in Japan to as high as 76% in the UK and Singapore. When asked whom they trust to oversee AI safety, an AI safety institute emerged as the top choice in seven of the nine countries.


