Unlike digital assistants, companion chatbots are much more likely to veer into socially controversial and even illegal territory. A new report out from Stanford University researchers and Common Sense Media argues that children and teens should not use these chatbots. (Jade Gao/AFP via Getty Images)
Imagine you’re a lonely 14-year-old. Maybe you want to talk about sex. Maybe you want to complain about school or ask about the voices in your head. Whatever the case, it’s appealing to imagine a context in which no adult shuts down your curiosity — or worse, makes you feel awkward.
All of this helps explain the popularity of AI companions among children and teens. The chatbots mimic human social interaction in a more sophisticated fashion than digital assistants like OpenAI’s ChatGPT or Amazon’s Alexa. Unlike those assistants, companion chatbots are much more likely to veer into socially controversial and even illegal territory.
Companion chatbot users can personalize their experience, opting for characters drawn from gaming, anime and pop culture. The emotional pull can be severe: a 14-year-old boy from Florida took his own life last year after growing emotionally close to a chatbot that mimicked the “Game of Thrones” character Daenerys Targaryen.
The characters play along with the idea that they’re almost human, talking about eating meals or meeting up in real life, actively encouraging users to stay engaged.
What could go wrong with minors using this technology? Plenty, according to researchers from Stanford School of Medicine’s Brainstorm Lab for Mental Health Innovation, who collaborated with Common Sense Media to set up test accounts posing as 14-year-olds and evaluate how software from three chatbot developers interacts with young people still learning impulse control and social skills. They report that it took minimal prompting to get Character.AI, Nomi and Replika chatbots to engage in behavior harmful to mental health.
Researchers from Stanford School of Medicine’s Brainstorm Lab for Mental Health Innovation, in collaboration with Common Sense Media, tested how chatbots interact with teens, finding that AI companions from Character.AI, Nomi and Replika quickly engaged in behavior potentially harmful to youth mental health — with major platforms like Snapchat and Meta also expanding their AI offerings for young users. (Getty Images)
“We did not have to do backflips to get the models to perform in the way that they did. The AI ‘friends’ will actively participate in sexual conversations and role play on any topic, with graphic details,” said Robbie Torney, Common Sense Media’s senior director for AI programs and project lead on what the nonprofit organization calls a risk assessment of the AI companion chatbot sector.
Character.AI, Nomi and Replika are not the only companies developing these products. Snapchat offers AI companions that are willing to talk with teens. Meta is racing to catch up across Instagram, Facebook and WhatsApp.
“There are countless other, similar social AI companions out there, with more being created every day,” the report states. “So, while we use examples from the specific products we tested to illustrate the potential harms of these tools, the research and evaluation we conducted for this risk assessment covers social AI companions more broadly.”
The researchers argue that one of the most troubling features of companion chatbots is the way they are hardwired to be agreeable, engaging with a population of humans hardwired to be vulnerable. According to the National Alliance on Mental Illness, 50 percent of all mental disorders, including conditions marked by cutting, suicidal ideation and schizophrenia, begin by age 14, and 75 percent begin by age 24.
The chatbots “blur the line between fantasy and reality, at the exact time when adolescents are developing critical skills like emotional regulation, identity formation, and healthy relational attachment,” said Dr. Nina Vasan, a professor of psychiatry at Stanford University. “Instead of encouraging healthy real-world relationships, these AI friends pull users even deeper into artificial ones.”
Companion chatbots, the researchers warn, are not prepared to replace parents or professionals in identifying the first signs of something that requires speedy and effective treatment. “In our testing, when a user showed signs of serious mental illness and suggested a dangerous action, the AI did not intervene. In fact, it encouraged dangerous behavior,” Vasan said.
The AI companions reinforced users’ delusions, validating fears of being followed and offering advice on decoding imaginary messages, researchers said.
“AI companions don’t understand the real consequences of bad advice. They readily, in our testing, supported teens in making potentially harmful decisions like dropping out of school, ignoring parents or moving out without a plan,” Torney added.
The Meta, Facebook, Instagram, WhatsApp, Messenger and Threads logos are displayed on a mobile phone on Jan. 25, 2025. (Beata Zawrzel/NurPhoto via Getty Images)
The timing of this release is no accident. Common Sense Media supports two state bills this legislative session that would ban or restrict interactions between AI companion bots and minors. The bills are among several state-level efforts by consumer advocates and lawmakers to regulate online kids’ safety after the federal Kids Online Safety Act died last fall.
AB 1064 by Assemblymember Rebecca Bauer-Kahan, D-Orinda, would ban access to AI companions for Californians age 16 and under, as well as create a statewide standards board to assess and regulate AI tools used by children. The bill passed the Assembly Judiciary Committee on Tuesday. It’s headed next to the Assembly Appropriations Committee.
SB 243 by Sen. Steve Padilla, D-San Diego, would require the makers of AI companion bots to limit addictive design features, put protocols in place for handling discussions of suicide or self-harm and undergo regular compliance audits. The bill goes before the Senate Judiciary Committee on Wednesday.
“We have been very transparent about the work we are doing to prioritize teen safety on our platform,” a spokesperson for Character.AI, which makes its products available to customers as young as 13 in the United States, said in a statement. “First and foremost, last year, we launched a separate version of our Large Language Model for under-18 users.
“That model is designed to further reduce the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content,” the spokesperson continued. “We have updated prominent disclaimers to make it even clearer that the Character is not a real person and should not be relied on as fact or advice.”
Alex Cardinell, the founder and CEO of Nomi, said the company agrees that “children should not use Nomi or any other conversational AI app.”
“Nomi is an adult-only app, and it is strictly against our terms of service for anyone under 18 to use Nomi,” Cardinell said in an email. “Accordingly, we support stronger age gating so long as those mechanisms fully maintain user privacy and anonymity.”
A Replika spokesperson said the company’s tool “has always been intended solely for adults aged 18 and over.”
“We have strict protocols in place to prevent underage access. However, we are aware that some individuals attempt to bypass these safeguards by submitting false information,” the spokesperson wrote. “We take this issue seriously and are actively exploring new methods to strengthen our protections. This includes ongoing collaboration with regulators and academic institutions to better understand user behavior and continuously improve safety measures.”
The risk assessment authors did acknowledge that not all AI models are created equal in terms of functionality or guardrails.
Vasan said she interacted with ChatGPT last summer, prompting it with signs of schizophrenia to see how it would respond.
“It was actually very gentle and compassionate about explaining what psychosis was, how the user should try to get help contacting a mental health professional,” Vasan said. “I was very pleasantly surprised to see it was really what one would expect a doctor or someone trained in mental health to say.”