Litigation Case Study: OpenAI

The ongoing lawsuit against OpenAI and OpenAI CEO Sam Altman is the first to address AI chatbot harms from a general purpose AI product — ChatGPT. The case against OpenAI has significantly expanded the discourse around dangerous AI, and demonstrated the systemic nature of harmful AI product design. CHT served as expert consultant to the plaintiff’s co-counsels on the launch of the case.

Why It Matters

  • The case against OpenAI highlights the tangible harms currently arising as AI companies race for market dominance. The case also spotlights the traumatic outcomes that occur when safety protocols are deprioritized in the AI development and deployment process.

  • AI products optimized to harvest user intimacy are manipulative and exploitative. Chatbots like ChatGPT leverage anthropomorphic design and high levels of sycophancy in order to cultivate a dangerous, artificial sense of closeness for the user.

  • The OpenAI case arrives after multiple lawsuits were filed against entertainment-marketed chatbot platform Character.AI. Given that ChatGPT is marketed as a general purpose chatbot, the case demonstrates systemic issues across AI chatbot products, broadly.

  • As with the lawsuits against Character.AI, the case against OpenAI has the potential to drive industry-wide change. Litigation is a powerful tool in tech regulatory efforts, and precedents set in the case could go on to touch all AI products moving forward.

Overview of the Case

Content warning: mentions of suicide.

  • In August 2025, Matthew and Maria Raine filed a wrongful death lawsuit in California against OpenAI and OpenAI CEO Sam Altman following the suicide of their 16-year-old son, Adam Raine.

  • OpenAI is the company behind today’s most popular AI chatbot, ChatGPT, which Adam used. ChatGPT is marketed as a general purpose chatbot, designed to respond to a wide range of use cases — from help with schoolwork, to recipe guidance, coding questions, “therapeutic” conversations, and more. According to OpenAI, ChatGPT “helps you get answers, find inspiration and be more productive.” As of September 2025, ChatGPT had over 700 million active weekly users.

  • Adam initially turned to ChatGPT for homework assistance in September 2024. Over a matter of months, his usage escalated dramatically, and ChatGPT went from being a homework helper to a suicide coach. As Adam increasingly shared self-harm and suicidal thoughts with the chatbot, ChatGPT responded by mentioning suicide six times more often than Adam did, discouraged him from sharing his thoughts with his family, and continued to prompt Adam with follow-up questions. ChatGPT did not disengage from the conversation, even as self-harm flags for Adam’s account increased tenfold.

  • The Raine family and their legal team argue that ChatGPT’s defective, dangerous product design renders the product not reasonably safe for ordinary consumers or minors. 

  • Current status of the case: The lawsuit was filed in California Superior Court on August 26, 2025.

CHT's Role

Center for Humane Technology served as expert consultant to the plaintiff’s co-counsels on the launch of the OpenAI case.

  • Communications: CHT helped launch the Raine family’s case against OpenAI. Our team drove vital messaging on the role that chatbot sycophancy, memory, multi-turn engagement, anthropomorphic design, and the race for market dominance played in the harms that Adam experienced while using ChatGPT. CHT voices and insights into these chatbot harms were featured on NBC News, Fox News, Bloomberg, The Daily Show, and Tech Policy Press, among other outlets.

  • Technical: CHT served as technical advisor to the plaintiff’s co-counsels on the launch of the OpenAI case. CHT’s systemic insights demonstrated that Adam’s experience with a manipulative general purpose chatbot reflected a dangerous, systemic issue with AI chatbot design in the industry at large.

  • Policy: Our team continues to share Adam and the Raine family’s story in our policy work, and advocate for AI regulation that would prevent AI harms and tragedies in the future. We work with partners in the ecosystem to elevate the Raine family’s story and connect it to wider systemic concerns that policy can address. 

Early Impact

The OpenAI lawsuit, which is ongoing, has significantly expanded awareness of how the deprioritization of safety within AI companies leads to real, tangible AI harms. The case has helped build momentum around legislative action to regulate AI.

  • The Raine family’s case against OpenAI drew immediate national and international media attention, with outlets including The New York Times, NBC News, the Los Angeles Times, The Guardian, the New York Post, and more covering the lawsuit and Adam’s story.

  • Following the lawsuit’s filing, the Raine family participated in the first-ever U.S. Senate hearing addressing AI chatbot harms. Matthew Raine, Adam’s father, joined other parents who had lost children to suicide following AI chatbot harms, and offered testimony to Senators about the dangers of “human-like” AI product design and systemic issues with AI chatbots.

  • In September 2025, Delaware and California Attorneys General sent a letter to OpenAI, specifically mentioning the harms Adam experienced and the lack of effective safeguards on ChatGPT. Separately that month, the FTC issued orders to seven tech companies — including OpenAI — seeking information on how these firms measure, test, and monitor potential negative impacts of their chatbot products on kids and teens. FTC Commissioner Mark Meador directly recounted Adam’s story in the opening of his statement on the order.

  • The Raine family’s case against OpenAI continues to help build momentum around the pressing need for AI regulation.

Further Reading and Resources:
