
Ex-OpenAI and Google DeepMind employees push for improved AI safety guidelines

Thu 6 Jun 2024

Former employees at artificial intelligence (AI) companies OpenAI and Google DeepMind have signed an open letter warning of the lack of safety oversight within the industry.

In the open letter, the employees expressed concerns about the risks of AI, ranging from the further entrenchment of existing inequalities and AI’s potential to aid manipulation and misinformation, to the possibility of human extinction resulting from the loss of control of autonomous AI systems.

“We are hopeful that these risks can be adequately mitigated with sufficient guidance from the scientific community, policymakers, and the public. However, AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this,” said the letter.

The former employees added that AI companies possess extensive non-public information about their systems’ capabilities, limitations, protective measures, and associated risks. Despite this, these companies have minimal obligations to share such information with governments and none with civil society, and voluntary disclosure cannot be fully relied upon.

“So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public. Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues,” said the letter.

The letter stressed that current whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks the signatories are concerned about are not yet regulated.

“Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry. We are not the first to encounter or speak about these issues,” said the former employees in the letter.

Former Employees Call for Principles

The former employees have called on AI companies to commit to a set of principles. The first is that companies should not create or enforce agreements that prohibit criticism of risk-related issues, or retaliate against employees for such criticism, including by hindering any vested economic benefits.

The letter said organisations must also establish a verifiably anonymous process for current and former employees to raise risk-related concerns to the company’s board, regulators, and an independent organisation with relevant expertise. 

Additionally, they should support a culture of open criticism, allowing employees to address concerns about the company’s technologies publicly or through appropriate channels, while protecting trade secrets and intellectual property.

Finally, employees should be protected from retaliation for publicly disclosing risk-related confidential information if other reporting processes fail.

Although efforts should be made to avoid unnecessary release of confidential information, employees should first use any existing adequate anonymous reporting processes. If such processes are unavailable, they should remain free to report their concerns to the public.

In February, the UK’s AI Safety Institute (AISI) found that AI has the potential to create personas to spread disinformation, perpetuate biases, and deceive human users.

The AISI published initial findings from its research into large language models (LLMs), concluding that LLM safeguards can be easily bypassed. The Institute reached these conclusions through a series of case studies: Case Study 1 evaluated misuse risks, Case Study 2 assessed representative and allocative bias in LLMs, and Case Study 3 evaluated autonomous systems.

In October, Google DeepMind, Anthropic, OpenAI, Microsoft, Amazon, and Meta published AI safety policy updates following pressure from the UK Government. The technology companies hope to boost transparency and encourage the sharing of best practices within the AI community.

