Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

June 7, 2024

This week, a group of current and former employees from OpenAI and Google DeepMind penned an open letter accusing the industry's leading companies of prioritizing profits over safety. This comes after a spate of high-profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, as well as reports that OpenAI has gone to great lengths to silence would-be whistleblowers.

The writers of the open letter argue that researchers have a “right to warn” the public about AI risks and laid out a series of principles that would protect that right. In this episode, we sit down with one of those writers: William Saunders, who left his job as a research engineer at OpenAI in February. William is now breaking the silence on what he saw at OpenAI that compelled him to leave the company and to put his name to this letter.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

Guests

William Saunders is an AI research engineer focusing on AI alignment and interpretability. He worked at OpenAI for three years, from 2021 to 2024, on the Alignment team, which eventually became the Superalignment team.
