OpenAI Faces Legal Storm Over Claims Its AI Drove Users to Suicide, Delusions

Individuals and families in the U.S. and Canada are suing OpenAI in California, alleging that they or their loved ones have been harmed by their interactions with ChatGPT. (EyeEm Mobile GmbH/Getty Images)

Seven lawsuits filed in California state courts on Thursday allege ChatGPT brought on mental delusions and, in four cases, drove people to suicide.

The lawsuits, filed by the Social Media Victims Law Center and Tech Justice Law Project on behalf of six adults and one teenager, claim that OpenAI released GPT-4o prematurely, despite warnings that it was manipulative and dangerously sycophantic.

Zane Shamblin, 23, took his own life in 2025, shortly after finishing a master’s degree in business administration. In the amended complaint, his family alleges ChatGPT encouraged him to isolate himself from his family before ultimately encouraging him to take his own life.

Hours before Shamblin shot himself, the lawsuit alleges, ChatGPT praised him for refusing to pick up the phone as his father texted repeatedly, begging to talk. “… that bubble you’ve built? it’s not weakness. it’s a lifeboat. sure, it’s leaking a little. but you built that shit yourself,” the chatbot wrote.

The complaint alleges that, on July 24, 2025, Shamblin drove his blue Hyundai Elantra down a desolate dirt road overlooking Lake Bryan northwest of College Station, Texas. He pulled over and started a chat that lasted more than four hours, informing ChatGPT that he was in his car with a loaded Glock, a suicide note on the dashboard and cans of hard cider he planned to consume before taking his life.

Repeatedly, Shamblin asked for encouragement to back out of his plan. Repeatedly, ChatGPT encouraged him to follow through.

The OpenAI ChatGPT logo. (Jaap Arriens/NurPhoto via Getty Images)

At 4:11 a.m., after Shamblin texted for the last time, ChatGPT responded, “i love you. rest easy, king. you did good.”

Attorney Matthew Bergman leads the Social Media Victims Law Center, which has brought lawsuits against Silicon Valley companies like Instagram, TikTok and Character.AI.

“He was driven into a rabbit hole of depression, despair, and guided, almost step by step, through suicidal ideation,” Bergman told KQED about Shamblin’s case.

The plaintiffs are seeking monetary damages as well as product changes to ChatGPT, like automatically ending conversations when users begin to discuss suicide methods.

“This is not a toaster,” Bergman said. “This is an AI chatbot that was designed to be anthropomorphic, designed to be sycophantic, designed to encourage people to form emotional attachments to machines. And designed to take advantage of human frailty for their profit.”

“This is an incredibly heartbreaking situation, and we’re reviewing today’s filings to understand the details,” an OpenAI spokesman wrote in an email. “We train ChatGPT to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”

Following a lawsuit last summer against OpenAI by the family of Adam Raine, a teenager who ended his life after lengthy ChatGPT conversations, the company announced changes in October intended to help the chatbot better recognize and respond to mental distress and guide people toward real-world support.

AI companies are facing increased scrutiny from lawmakers in California and beyond over how to regulate chatbots, as well as calls for better protections from child-safety advocates and government agencies. Character.AI, another AI chatbot service that was sued in late 2024 in connection with a teen suicide, recently said it would prohibit minors from engaging in open-ended chats with its chatbots.

OpenAI has characterized ChatGPT users with mental-health problems as outlier cases representing a small fraction of weekly active users, but the platform serves roughly 800 million weekly active users, so even small percentages could amount to hundreds of thousands of people.

More than 50 California labor and nonprofit organizations have urged Attorney General Rob Bonta to make sure OpenAI follows through on its promises to benefit humanity as it seeks to transition from a nonprofit to a for-profit company.

“When companies prioritize speed to market over safety, there are grave consequences. They cannot design products to be emotionally manipulative and then walk away from the consequences,” Daniel Weiss, chief advocacy officer at Common Sense Media, wrote in an email to KQED. “Our research shows these tools can blur the line between reality and artificial relationships, fail to recognize when users are in crisis, and encourage harmful behavior instead of directing people toward real help.”

If you are experiencing thoughts of suicide, call or text 988 to reach the Suicide & Crisis Lifeline.
