News Hub

OpenAI head of trust and safety resigns, chooses family over career

Written by Fri 21 Jul 2023

The Head of Trust and Safety at OpenAI has stepped down, citing the pressures the job placed on his family life. In a farewell LinkedIn post, David Willner said he will be ‘transitioning into an advisory role’.

“Anyone with young children and a super intense job can relate to that tension, I think, and these past few months have really crystallised for me that I was going to have to prioritise one or the other,” said Willner.

The LinkedIn post also noted that OpenAI is going through a ‘high-intensity phase in its development’, which comes as no surprise as ChatGPT continues to grow in popularity. The company’s large language model has reached more than 100 million users.

“While my job there was one of the coolest and most interesting jobs it’s possible to have today, it had also grown dramatically in its scope and scale since I first joined,” added Willner.

Willner began his role as Head of Trust and Safety at OpenAI in February 2022. He previously worked as Head of Community Policy at Airbnb and Head of Content Policy at Facebook.

OpenAI did not immediately comment on Willner’s departure.

The importance of trust and safety in AI

Trust and safety departments have taken on an integral role in technology companies, and as AI adoption rises, the demands of the role are increasing in tandem.

The Microsoft-backed company said it relies on its trust and safety team to build the ‘processes and capabilities to prevent the misuse and abuse of AI technologies’. This involves minimising the spread of misinformation, hate speech, and other damaging content on its platforms.

Fears surrounding AI have risen globally, and numerous measures have been implemented to address those concerns.

The White House announced that multiple leading AI companies have agreed to be held accountable for managing the risks associated with the evolving technology. Amazon, Google, Meta, Microsoft and OpenAI among others have pledged their commitment to safety, security, and trust. The companies will look to build secure systems and earn the public’s trust in the development and deployment of AI products.

“As the pace of innovation continues to accelerate, the Biden-Harris Administration will continue to remind these companies of their responsibilities and take decisive action to keep Americans safe,” the White House stressed.

At the inaugural UN Security Council meeting on AI, China’s UN Ambassador Zhang Jun said development should put people first and be regulated to stop the technology becoming a ‘runaway horse’. Accountability for AI technology, access, and responsible human control were cited as fundamentals of its development.

The UK has published its pro-innovation White Paper on AI Regulation, and UK Research and Innovation has announced a £50 million package to develop trustworthy and secure AI. The UK also launched an expert Foundation Model Taskforce last month, headed by Ian Hogarth, to propel the safe development of AI models.

With the increasing focus on trust and safety, the industry is recognising the importance of striking a balance between innovation and ethical considerations to ensure a safer AI future.

