California Warns Families to Watch Out for Teens as Character.AI Shuts Off Chatbot Access

A teenager uses a chatbot on her phone while studying at her desk. California health officials are warning young people and their families to take care as Character.AI, a Bay Area–based company, begins enforcing its ban on children using its chatbots Tuesday. (Thai Liang Lim/Getty Images)

California health officials are warning young people and their families to take care, as Bay Area artificial intelligence company Character.AI bans the use of its chatbots by children as of Tuesday.

The California Department of Public Health issued the public advisory on the eve of the ban taking full effect and at the request of prominent online safety experts who had raised alarms earlier this month that detaching from an AI companion too quickly could leave teens vulnerable to emotional changes, even self-harm.

“While data and science on the topic are still evolving, ongoing reports on youth dependency on this technology are of concern and warrant further research,” Dr. Rita Nguyen, assistant state health officer, said in a statement. “We encourage families to talk and to take advantage of the numerous resources available to support mental health.”

Character.AI announced its decision to disable chatbots for users younger than 18 in late October and began limiting how much time they could interact with them in November. The move came in response to political pressure and news reports of teens who had become suicidal after prolonged use, including a 14-year-old boy who died by suicide after his mom took away his phone and he abruptly stopped communicating with his AI companion.

“Parents do not realize that their kids love these bots and that they might feel like their best friend just died or their boyfriend just died,” UC Berkeley bioethics professor Jodi Halpern told KQED earlier this month. “Seeing how deep these attachments are and aware that at least some suicidal behavior has been associated with the abrupt loss, I want parents to know that it could be a vulnerable time.”

The health department’s alert was more muted, advising parents that some youth may experience “disruption or uncertainty” when chatbots become unavailable, while other experts have labeled the feelings that could arise as “grief” or “withdrawal.” Still, the state stepping in to promote mental health support for kids weaning off of chatbots is novel, noteworthy, and perhaps even unprecedented.

Kids may be susceptible to self-harm or suicide when Character.AI bans youth under 18 from using its chatbots, according to a UC Berkeley bioethics professor who asked the state to issue a public service announcement. (EyeEm Mobile GmbH/Getty Images)

“This is the first that I’ve heard of states taking action like this,” said Robbie Torney, senior director of AI programs at Common Sense Media, which conducts risk assessments of chatbots. “CDPH is treating this like a public health issue because it is one. While the relationships aren’t real, the attachment that teens have to the companions is real for those teens, and that’s a major thing for them to be navigating.”

Earlier this year, California became one of the first states to regulate AI chatbots through legislation. Gov. Gavin Newsom signed SB 243 into law, requiring chatbots to clearly notify users that they are powered by AI and not human. It also requires companies to establish protocols for referring minors to real-life crisis services when they discuss suicidal ideation with a chatbot, and to report data on those protocols and referrals to CDPH.

“This information will allow the Department to better understand the scope and nuances of suicide-related issues on companion chatbot platforms,” said Matt Conens, an agency spokesperson.

Newsom vetoed another bill, AB 1064, that would have prohibited companion chatbots for anyone under 18 if they were foreseeably capable of causing harm, for example, by encouraging children toward self-harm, drug or alcohol use, or disordered eating.

For families who may need immediate support through the transition off of companion chatbots, state health officials recommended accessing free youth behavioral health platforms like BrightLife Kids and Soluna, or the web and print resources on youth suicide prevention from Never a Bother. Families can also call or text the 988 crisis lifeline.

Character.AI has also expanded its resources for teens and parents in recent weeks, according to Deniz Demir, the company’s head of safety engineering. That includes a partnership with the nonprofit Koko to provide free emotional support tools directly on its platform, and another with the company ThroughLine to help off-board young users and redirect those in distress to its network of teen resources for “real help, in real time.”

“We recognize that this may be a significant change for some of our teen users, and therefore, we want to be as cautious as possible in this transition,” Demir said in a statement.

Character.AI represents just a fraction of the market for AI companions, and while its self-regulating actions are laudable, Torney said, there are still other platforms that kids can turn to and probably already have.

“This isn’t just about one company,” he said. “We need all other platforms that offer AI companionship or AI mental health advice or AI emotional support to follow Character.AI’s lead immediately.”
