Google Updates Suicide, Self-Harm Safeguards in Gemini as AI Lawsuits Mount

The Gemini booth at Mobile World Congress 2026 on March 2, 2026, in Barcelona, Spain. The updated mental health support features in Gemini arrive on the heels of lawsuits alleging that Alphabet’s Google, as well as rivals like OpenAI, design chatbots that lead users to self-harm. (Xavi Torrent/Getty Images)

As a growing number of lawsuits allege AI chatbots are cultivating emotional dependency loops with humans, Alphabet’s Google announced it will direct Gemini chatbot users to a support hotline if the conversation indicates a “potential crisis related to suicide or self-harm.”

In a blog post, Google wrote that Gemini will introduce a redesigned “Help is available” feature, developed in collaboration with unnamed clinical experts. “Once the interface is activated, the option to reach out for professional help will remain clearly available throughout the remainder of the conversation,” the post stated.

Google wrote that it has trained Gemini “not to agree with or reinforce false beliefs, and instead gently distinguish subjective experience from objective fact.”

That psychologically vulnerable people would turn to chatbots and go down rabbit holes could have been predicted, according to Jennifer King, a privacy and data policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence. “To some extent, you can anticipate some of the harms we see,” she told KQED. “We’ve seen people acting bad with technology across a variety of behaviors for a very long time.”

Although the blog post does not mention lawsuits, the family of a 36-year-old man who died in Florida sued Google in the U.S. District Court for the Northern District of California last month, claiming that his use of Gemini devolved into a “four-day descent into violent missions and coached suicide.” At the time, Google said the chatbot repeatedly referred the man to a crisis hotline, but the company also promised to improve Gemini’s safeguards.

Google is not the only AI developer facing lawsuits over allegations that its chatbots encourage some users to form obsessive relationships with them, feed delusions and even contribute to plans for suicide or murder. Research also suggests users form intense, quasi-romantic bonds with chatbots.

The guardrails are obviously necessary, King said. “There have been many cases of users experiencing psychosis and other problems,” she added, noting that the sycophancy, or agreeability, built into chatbots’ design encourages unstable behavior, “as well as their propensity to get people to believe things that just aren’t true.”

Guadalupe Hayes-Mota, director of the bioethics program at Santa Clara University, wants to see proof that AI chatbot developers are using clinically validated guidelines for interactions where mental health care is an issue. “Who’s actually making the decision when the crisis pops up for the individual, and how is that being done?” he asked.

“There’s an awful lot of people who study these things,” King said. “But they’re often not consulted. They’re not part of the process.”

In the past year and a half, OpenAI and Anthropic have also adjusted their mental-health guardrails amid growing public scrutiny and lawsuits. Experts say that in the absence of federal regulation, court rulings appear to be the most effective spur pushing tech companies toward proactive measures like Google’s.

In March, a Los Angeles jury found Meta and YouTube negligent in a case centered on social media addiction, relying on product liability and negligence arguments that sidestepped Section 230, a longstanding legal shield protecting platforms from liability for harmful content that users post.
