AI Safety Expert Warns Parents to Watch Kids in Wake of Chatbot Ban

A 17-year-old demonstrates the possibilities of artificial intelligence by creating an AI companion on July 15, 2025, in Russellville, Arkansas. A UC Berkeley bioethics professor warns that kids may be susceptible to self-harm or suicide when Character.AI bans youth under 18 from using its chatbots this month.  (Katie Adkins/AP Photo)

A leading artificial intelligence researcher is warning that Character.AI’s plan to ban chatbots for kids by late November may leave them susceptible to self-harm or suicide if they detach from an AI companion too quickly.

Jodi Halpern, a UC Berkeley bioethics professor, welcomed the ban overall but wants parents to watch for emotional changes or signs of distress in the weeks after children are separated from their chatbots.

“Parents do not realize that their kids love these bots and that they might feel like their best friend just died or their boyfriend just died,” Halpern said. “Seeing how deep these attachments are and aware that at least some suicidal behavior has been associated with the abrupt loss, I want parents to know that it could be a vulnerable time.”

Character.AI announced its decision to disable chatbots for kids in late October, in response to political pressure and news reports of teens who had become suicidal after prolonged use.

One of those teens, a 14-year-old boy from Florida, fell in love with his chatbot and spent days on end confiding in it and exchanging sexual fantasies. When his mother took away his phone as punishment for misbehaving at school, the boy became despondent, a state his mother interpreted after his death as a blend of withdrawal and grief.

Kids may be susceptible to self-harm or suicide when Character.AI bans youth under 18 from using its chatbots this month, according to a UC Berkeley bioethics professor who asked the state to issue a public service announcement. (EyeEm Mobile GmbH/Getty Images)

Character.AI is rolling out the ban gradually, according to company spokesperson Cassie Lawrence. The company consulted experts in teen online safety, has limited the hours per day kids can spend chatting ahead of the cutoff, and has offered them lists of alternative teen forums and mental health resources.

“We have widely announced the forthcoming changes to our users, in a variety of channels, including through our app/website, on our blog, in our help center, and in user forums on Reddit and Discord, so that affected users would have time to adjust to this new paradigm,” Lawrence said in a statement.

Still, Halpern is concerned enough about the risks teens might face once the ban takes full effect on Nov. 25 that she asked the California Department of Public Health to issue a public service announcement warning parents to watch their kids for mental health needs in the weeks that follow.

The department did not respond to requests for comment or say whether it would issue a warning.

Other youth advocates see a role for schools and educators in starting discussions about chatbots, since many parents are unaware their children have been using them at all, said Robbie Torney, a senior director at Common Sense Media, a nonprofit that conducts AI research, risk assessments and education.

The nonprofit’s polling shows nearly three out of four teens have used an AI chatbot, about half use one regularly, and a third say they prefer talking to a chatbot over a human being.

Torney argued that Character.AI should be doing much more to prepare young people and their parents for the upcoming phaseout. While the time limits are better than cutting kids off cold turkey, he said, a more gradual weaning process would be safer.

The company should be taking more proactive steps to connect kids in distress to real-life mental health clinicians or telehealth appointments, he added, and should at least provide educational resources for parents on how to recognize if their child is developing a chatbot dependence and how to talk to them about it.

“Character.AI built this problem and now they’re pulling the plug without taking responsibility for the harm they’ve caused or providing support for the withdrawal they’ve created,” Torney said.

*An Associated Press photo caption in an earlier version of this story incorrectly identified Character.AI as the company generating an AI companion. 
