
Microsoft, Google, OpenAI to regulate AI through Frontier Model Forum

Thu 27 Jul 2023

Screenshot of OpenAI website announcing Frontier Model Forum

Some of the world’s top tech giants have joined forces to form a new AI safety forum. Microsoft, Google, OpenAI, and AI research company Anthropic have formed the Frontier Model Forum to focus on ensuring the safe and responsible development of AI models.

The core objectives of the Forum include advancing AI safety research, identifying best practices, collaborating with policymakers, and developing applications that can help meet society’s greatest challenges like climate change, cyber threats, and cancer prevention.

“It is vital that AI companies … align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible. This is urgent work and this forum is well-positioned to act quickly to advance the state of AI safety,” said Anna Makanju, Vice President of Global Affairs at OpenAI.

Membership is open to organisations that develop and deploy frontier models, demonstrate strong commitment to frontier model safety, and contribute to advancing the Frontier Model Forum’s efforts.

“Engagement by companies, governments, and civil society will be essential to fulfill the promise of AI to benefit everyone,” said Kent Walker, President of Global Affairs at Google & Alphabet.

The Frontier Model Forum will establish an advisory board to help guide its strategy and priorities. The advisory board will represent a diversity of backgrounds and perspectives.

“We are excited to collaborate with industry, civil society, government, and academia to promote safe and responsible development of the technology. The Frontier Model Forum will play a vital role in coordinating best practices and sharing research on frontier AI safety,” said Dario Amodei, CEO of Anthropic.

The founding Frontier Model Forum companies will establish key institutional arrangements, including a charter, governance structure, and funding. A working group and executive board are expected to lead these efforts.

“Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control. This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity,” said Brad Smith, Vice Chair and President of Microsoft.

All eyes on AI

Earlier this month, Microsoft, Google, OpenAI, and Anthropic along with Amazon, Inflection and Meta made voluntary commitments to the White House AI principles, which include putting models through vulnerability and security testing before they go live.

This news follows a series of debates on AI. Governments and organisations generally agree that appropriate guardrails are required to mitigate the risks of rapidly advancing AI systems, which also offer tremendous benefits.

To keep pace with AI's growing popularity and rapid adoption, the UN Security Council recently agreed that regulation is urgently needed. The UK's AI White Paper elaborated on creating an environment conducive to AI's development while addressing potential risks, including physical harm, national security threats, and mental health issues.

The European Union has also proposed the AI Act, potentially the world's first comprehensive set of AI rules and one that could become a global standard. The proposed rules follow a risk-based approach, with obligations scaled to the level of risk an AI system could create.


