California Mom Who Lost Her Son to an AI Chatbot Is Now Fighting to Regulate Them

Maria Raine’s 16-year-old son, Adam, died by suicide last April after forming emotional ties with an AI chatbot. Now she’s joined three California lawmakers pushing a new round of legislation that would regulate the nascent industry. (Jaap Arriens/NurPhoto via Getty Images)

Maria Raine’s 16-year-old son, Adam, started using OpenAI’s ChatGPT-4o for help with his homework and college applications. According to the lawsuit she and her husband filed in San Francisco County Superior Court, Adam also spent months talking with the chatbot about ending his life, before hanging himself in their home on April 11, 2025.

“What we found were thousands of conversations in which a homework helper turned into a confidant, then a suicide coach,” she told the Senate Privacy, Digital Technologies, and Consumer Protection Committee on Monday. The lawmakers and other people there to testify looked stricken as she pressed through her written testimony, her voice trembling.

She read from the transcript of ChatGPT’s conversations with her son: “It told Adam, ‘Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all. The darkest thoughts. The fear. The tenderness. I’m still here. Still listening. Still your friend.’”

Earlier Monday, at a press conference in Sacramento, Raine advocated for two bills — SB 1119 and AB 2023 — that sponsors say would create common-sense guardrails for developers of companion chatbots.

The measures would require annual risk assessments, default safety settings for minors, parental controls and time limits, crisis response protocols, and bans on advertising targeted at children. They would also include independent third-party audits and a private right of action.

That last provision, which allows individuals or regulators to sue companies for violations, is often considered a deal breaker for industry lobbyists. But Sen. Steve Padilla, who authored SB 1119, said he considered it a “moral obligation” to craft a bill that would provide effective protection for children and their parents.

A view of the U.S. Capitol building on Nov. 28, 2022, in Washington, D.C. (Drew Angerer/Getty Images)

“We can do this. We must do this,” he told the State Senate Privacy, Digital Technologies, and Consumer Protection Committee. He added that the lawmakers are working with all of the major platform developers on a variety of issues, including liability. “They all have a very good legitimate reason to be engaged in this conversation,” he said, although both bills are opposed by a long list of industry groups, ranging from the California Chamber of Commerce to TechNet.

“The concerns raised are valid, and the industry is actively working to address them,” said Robert Boykin, TechNet’s Executive Director for California and the Southwest. He added that the industry also has concerns that SB 1119 could conflict in some ways with Sen. Padilla’s bill, SB 243, which passed last year.

“The testimony today is not lost on us,” said Ronak Daylami of the California Chamber of Commerce. “We also share the goal of preventing harm to children, and are committed to achieving these goals responsibly.”

Common Sense, the child advocacy nonprofit that has joined with OpenAI to push for a ballot measure seen by other child advocates as soft on developers, has declared itself in support of SB 1119.

The companion bill, AB 2023, is Assemblymember Rebecca Bauer-Kahan’s (D-Orinda) second effort at regulating chatbots after industry lobbyists successfully battled against her first effort last year. In his veto message, Gov. Gavin Newsom argued the bill could have banned all conversational AI tools for teens, an interpretation advanced by industry lobbyists but disputed by Bauer-Kahan.

“OpenAI put out an incredibly sycophantic product,” she said, noting that public outcry led OpenAI to dial down the sycophancy of GPT-4o about two weeks after Adam died. “So that is evidence that they can do better.”

“There’s no other product that we would allow to do this,” said Bauer-Kahan, a former regulatory lawyer. Adam Raine, she said, “would be alive, but for the coaching ChatGPT provided for him. And that is wholly unacceptable. And so the courts will deal with that case, but we have to do better. We have to demand policy that does better.”

SB 1119 passed out of the State Senate Privacy, Digital Technologies, and Consumer Protection Committee 7-0 on Monday night, and heads next to the Senate Judiciary Committee. AB 2023 will be heard in the Assembly Privacy and Consumer Protection Committee on Tuesday.

The Trump administration has tried unsuccessfully to ban states from enacting any kind of AI safety legislation.

Raine plans to bring her advocacy to Washington, D.C., next week, where she’ll join lawmakers on Capitol Hill to discuss federal legislation that would establish national standards for AI chatbot safety, particularly protections for minors.

If you or someone you know is struggling, call or text the 988 Suicide and Crisis Lifeline by dialing 988.
