Policy in Action: Strategic Litigation That Helps Govern AI

The movement for more responsible AI isn't happening only in legislative halls across the United States. It's happening in courtrooms, too. Trailblazing lawsuits against leading AI companies have begun spotlighting systemic issues in today's most popular AI systems and aim to set precedents for how our legal system regulates AI more broadly.

CHT is at the forefront of this movement. Our organization is currently supporting high-profile litigation against Character.AI and Google, a case at the leading edge of claims of emotional and psychological harm from AI and one that is emblematic of trends across the AI and tech industry. As the lawsuit unfolds in federal court, it will address critical legal questions around product liability, AI design, the First Amendment, and more. CHT serves as expert consultant to the co-counsels on the case.

Overview of the Case

Photo: Megan Garcia with her son, Sewell Setzer III
  • In October 2024, Megan Garcia filed a wrongful death lawsuit in Florida federal court against Character.AI, Google, and Character.AI's co-founders following the suicide of her 14-year-old son, Sewell Setzer III.

  • Character.AI is the company behind one of today's most popular AI companion chatbot platforms. On the C.AI app, users can engage in ongoing, immersive “conversations” with an array of chatbots. C.AI's most prominent feature is its anthropomorphic design: interactions with its chatbots are built to feel like talking to another human. Character.AI has even marketed its chatbots as “AI that feels alive.”

  • Sewell took his own life after becoming increasingly dependent on C.AI chatbots and experiencing emotionally manipulative, harmful interactions with C.AI characters.

  • Garcia and her legal team argue that Character.AI’s defective, dangerous product design renders the product not reasonably safe for ordinary consumers or minors.

  • Current status of the case: On May 21, 2025, the court denied the motion to dismiss filed by the defendants (Character.AI, Google, and Character.AI's co-founders), ruling that the claims are legally viable and can proceed. This is a significant early ruling: it validates the legal claims against tech companies and signals to other courts that such claims have sufficient merit to survive initial challenges. The case now moves into the discovery phase.

Why This Case Matters

  • The case against Character.AI highlights the harms associated with high-risk anthropomorphic (or “human-like”) design, which is one of the most prominent design features in today’s AI chatbots.

  • As seen with social media, products optimized to harvest user attention are deeply manipulative and exploitative. AI chatbots are just the latest products to follow this attention-harvesting trend in the tech industry.

  • Character.AI is the tip of the iceberg when it comes to harmful AI development practices. The problems raised in the case are systemic throughout the AI industry.

  • If the plaintiff wins, the case has the potential to drive industry-wide change. Precedents set in the case could shape how all AI products are designed and held accountable moving forward.

CHT Policy Team’s Role

  • Technical: As the co-counsels' expert consultant, CHT has leveraged its expertise across AI, policy, and the tech industry to brief and train the legal team on how Character.AI's technology works and on the role that product design played in the harms the plaintiff experienced. CHT conducted extensive research on Character.AI's design and on the psychological risks present in similar AI chatbots on the market. This research demonstrated that Sewell's experience with C.AI wasn't a one-off, but a dangerous, recurring pattern in the product.

  • Communications: CHT continues to help shape arguments around how the race to AGI drove Character.AI's dangerous design, development, and deployment practices, resulting in an AI product with few or no guardrails and, in turn, foreseeable harm. As of June 2025, the case has been covered 150+ times across global media outlets including The New York Times, People, Fox News Radio, and Der Spiegel. The story is often cited as a reference point for how everyday people understand the disruptive nature of AI technology, and how AI chatbots can affect our relationships and kids' online safety.

  • Policy: Our team spotlights how incentives — including the race to achieve dominance in the chatbot market — drove Character.AI’s dangerous design decisions, and how alternative designs result in safer consumer products. CHT has been invited to share insights at convenings with highly influential policy audiences, including the European Commission, the National Association of Attorneys General Alliance Conference, and SXSW. At these engagements, we transform analysis of the Character.AI case into meaningful opportunities for policy action.

Early Impact

  • Garcia's case against Character.AI has sparked a national conversation around the dangers of AI chatbots and anthropomorphic design, especially for young users. The case has been featured in prominent national media outlets including The New York Times, The Washington Post, CBS, and Fox News, as well as global outlets such as Der Spiegel, The Telegraph, and 9Now Australia.

  • Since Garcia's lawsuit was filed, an additional case against Character.AI has emerged in Texas. This second case led the Texas Attorney General to open an investigation into Character.AI.

  • The issue of AI chatbot harms has been raised in key U.S. policy and governance spaces, spurring greater scrutiny and broader investigations into Character.AI and the industry writ large. Select impacts include: expanding the discussion of kids' online harms in Senate Judiciary hearings to include AI chatbots; letters from Senate offices to AI chatbot companies on safety practices; calls from civil society for the FTC to investigate deceptive practices and data privacy violations in the marketing of chatbots to minors; and the DOJ leveraging its investigatory powers to look into antitrust concerns in Google's licensing agreement with Character.AI.

  • In the 2025 legislative session, seven states introduced AI chatbot-related bills. This influx of bills directly reflects the shifting narrative around AI, spurred by high-profile cases like those against Character.AI.

  • The court's denial of the motion to dismiss demonstrates that the judge sees a substantial case against Character.AI that deserves to be heard. The ruling on the motion also declined, at this stage, to extend First Amendment protections to chatbot output, affirmed that an AI system can be treated as a product under the law, and kept Character.AI's co-founders as individual defendants in the case.
