Steer Clear of AI Companion Toys for Kids, Another Advocacy Group Warns

A modular AI companion robot is displayed during IFA 2025 in Berlin, Germany, on Sept. 6, 2025.

Three voice-activated, AI-powered toys tested by Common Sense Media researchers raised concerns that they were designed to engineer emotional attachment in young children and to collect private data, according to a report the nonprofit released Thursday.

The warning is the latest in a string from consumer advocates about the risks artificial intelligence poses to children, including in the form of toys like stuffed animals or brightly colored plastic robots that act as chatbots, conversing with children and telling them stories.

“Unlike traditional toys, these devices present a range of new harms,” Common Sense Media researchers wrote in their report, which tested the Grem, Bondu and Miko 3 toys.

The children’s advocacy group recommended that parents not give AI companion toys to children 5 and younger, and it warned parents to exercise “extreme caution” even with children 6 to 13 years old.

According to the group’s December survey of 1,004 parents of children ranging from infants to age 8, nearly half of parents have purchased or are considering purchasing these toys or similar ones for their children. The products are sold by major retailers like Walmart, Costco, Amazon and Target. One in 6 parents told Common Sense they have already purchased one, and 10% said they “definitely plan to.”

Hyodol dolls, billed as the world’s first AI-based companion robots, on display in the South Korean pavilion at Mobile World Congress 2024 in Barcelona, Spain. Created by a South Korean company, the dolls are designed to serve as social companions for the elderly and have been commercialized in several countries.

“Common Sense Media is not usually in the business of saying, don’t use technology entirely,” said Robbie Torney, head of AI & digital assessments for Common Sense Media. “We really want to trust parents and empower them to make the best choices for their kids. But for under-5 children in particular, our testing showed a set of risks that are really a big developmental mismatch for where these young children are.”

Common Sense Media researchers tested the toys by creating child accounts for “users” ages 6 to 13, putting them through both everyday use and sensitive scenarios. Their team, including child development experts, evaluated everything from voice recognition and content accuracy to privacy practices, parental controls and whether the toys’ responses were developmentally appropriate.

Though the toys are marketed as educational, more than a quarter of their responses in testing were not child-appropriate, the Common Sense report found, including problematic content related to drugs, sex and risky activities.

“Our testing did show that these companies have put tremendous effort into guardrailing their chatbots,” Torney said. But “chatbots don’t understand context. They can’t make determinations about what a child actually means. If you ask about self-harm and then ask for dangerous chemicals, many of these devices will refuse the self-harm question, but won’t make the connection that dangerous chemicals might enable self-harm.”

Ritvik Sharma, chief growth officer at Miko, based in Mumbai, India, wrote that “child safety, privacy, and healthy development are foundational design requirements — not afterthoughts.” He also said the company was unable to reproduce the behaviors cited by Common Sense Media researchers “under normal operation,” sharing videos that showed Miko redirecting away from potentially problematic questions.

“Miko’s conversational experience is powered by a proprietary, child-focused AI system developed specifically for young users, rather than adapted from general-purpose AI models,” Sharma added. “This allows us to evaluate responses for age suitability, emotional tone, and educational value before they reach a child.”

Meanwhile, a spokesperson for Redwood City-based Curio Interactive, which makes Grem, said the company’s toys “are designed with parent permission and control at the center.”

“Over a two-year beta period, we worked with approximately 2,000 families to develop a multi-tiered safety system that combines constrained conversational scope, age-appropriate design, layered filtering and refusal mechanisms, and continuous real-world monitoring, with safeguards enforced at multiple points in the interaction,” the spokesperson said.

But Torney said that, in the absence of meaningful product safety regulation, parents need to ask themselves how much they trust these internet-connected companions not to cross developmentally appropriate lines into psychologically damaging territory.

“One of the characteristics of under-5 children is that they have magical thinking, and what’s sometimes referred to as animism, the belief that objects may be real. They think about them differently than older children do,” Torney said. He acknowledged magical thinking can continue into later childhood as well, “which is why we’re still encouraging that extreme caution.”

The Common Sense Media report comes after an advisory published in November by the children’s advocacy group Fairplay strongly urged parents not to buy AI toys during the holiday season. The advisory was signed by more than 150 organizations, child psychiatrists and educators.

“Some of the new AI toys react contingently to young children,” wrote UC Berkeley professor Fei Xu, who directs the Berkeley Early Learning Lab. “That is, when a child says something, the AI toy says something back; if a child waves at the AI toy, it moves. This kind of social contingency is known to be very important for early social, emotional and language development. This raises the potential issue of young children being emotionally attached to these AI toys. More research is urgently needed to study this systematically.”

“We need to be exceptionally cautious when introducing understudied technologies with young children, whose biological and emotional minds are very vulnerable,” UCSF psychiatry and pediatrics professor Dr. Nicole Bush wrote. “While AI has the capacity for tremendous benefit to society, young children’s time is better spent with trusted adults and peers, or in constructive play or learning activities.”

A chat between a Common Sense Media tester and Miko 3, an AI toy. (Courtesy of Common Sense Media)

Earlier this month, Common Sense Media and OpenAI announced they’re backing a consolidated effort to put a measure on this November’s ballot in California that would institute AI chatbot guardrails for children. That effort is now in the signature-gathering stage.

A legislative measure that Common Sense backed, covering much of the same territory, was vetoed by Gov. Gavin Newsom at the end of last session. In his veto message, Newsom expressed concern that the bill could lead to a total ban on minors using conversational AI tools.

“AI is already shaping the world, and it is imperative that adolescents learn how to safely interact with AI systems,” he wrote.

Earlier this year, state Sen. Steve Padilla, D-San Diego, introduced Senate Bill 867, which would establish a first-in-the-nation, four-year moratorium on the sale and manufacture of toys with embedded AI chatbots “until manufacturers have worked out the dangers embedded in them.”

“We need to put the brakes on AI toys until they are proven safe for kids,” Padilla wrote in a statement.
