OpenAI-Backed Ballot Measure Draws Scrutiny From Child Safety Advocates

The OpenAI logo on March 21, 2023. A group of child safety advocates says the proposed Parents & Kids Safe AI Act provides insufficient protections. (Michael Dwyer/Associated Press)

Online safety groups have criticized a ballot measure jointly proposed by OpenAI and the child advocacy group Common Sense Media to create chatbot guardrails for kids, saying it would shield tech companies from accountability.

The California Initiative for Technology and Democracy, or CITED, and Tech Oversight California — two groups that have sponsored anti-deepfake and AI laws — circulated a letter to lawmakers on Wednesday addressing the Parents and Kids Safe AI Act, which co-sponsors OpenAI and Common Sense Media announced in January.

“Though seemingly well-intended, the measure would exempt AI companies from the robust framework of laws already established in California to give consumers meaningful protections,” the letter states.


The letter warned that the proposed measure could undermine age and privacy protections, in part by narrowly limiting child protections to “severe harms,” effectively shielding AI companies from liability related to children’s mental health.

“This definition fails to account for mental or emotional distress caused by companion chatbots or exposure to age-inappropriate content that may contribute to psychological harm,” the letter reads.

John Bennett, initiative director of CITED, told KQED that the definitions “raised a lot of alarm bells in our heads, because we didn’t think it was sufficiently protective of children.”

Unlike digital assistants, companion chatbots are much more likely to veer into socially controversial and even illegal territory. A new report from Stanford University researchers and Common Sense Media argues that children and teens should not use these chatbots. (Jade Gao/AFP via Getty Images)

The first alarm bell, Bennett said, was that Common Sense and its CEO, Jim Steyer, negotiated alone with OpenAI, leaving out the coalition of child and consumer advocates that had previously lobbied for strong laws alongside lawmakers like Assemblymember Rebecca Bauer-Kahan (D-Orinda). Bauer-Kahan chairs the Assembly Privacy and Consumer Protection Committee and authored a closely watched AI child safety bill that Gov. Gavin Newsom ultimately vetoed last legislative session.

In a statement sent to KQED on Thursday, Common Sense Media did not directly address the concerns outlined in the letter, but wrote the measure “will be the strongest, most comprehensive youth AI safety law in the country, whether it’s passed by the voters or the legislature.”

That said, in his remarks introducing the joint effort on Jan. 9, 2026, Steyer presented his approach as primarily strategic, saying he would use any political tool available to get most of what he wants on behalf of children and their parents.

“I cannot begin to know where Mr. Steyer’s mind actually is at,” Bennett said, adding that he was perplexed by this initiative nonetheless. “Usually, you try and introduce something that’s extremely strong — some might think overly strong. Then you use that as a negotiating arm within the legislature.”

In the absence of comprehensive, effective child protection legislation from Washington, California has helped lead the way on kids’ and teens’ tech privacy laws, as well as general consumer-focused tech safety laws. As a result, child advocates pay a lot of attention, early and often, to the rough and tumble of California AI-focused politics.

Outside the U.S., Australia and Spain have rolled out aggressive restrictions on youth smartphone use, including banning social media use for children under 16. Some advocates speculate the fear of a similar ban in California prompted OpenAI, which did not respond with a comment in time for this story, to reach out to Common Sense Media and negotiate a compromise.

Bennett has another theory. As with other ballot measures, if voters approve it, any changes will require a two-thirds vote of the legislature, making stronger, more effective regulation later difficult, if not impossible. “We can’t just come back and change this in a year or two if we see that there are new dangers and new harms that are coming about because technology’s evolving so quickly,” he said.

The Parents & Kids Safe AI Act is still in the signature-gathering phase and has not yet qualified for the November 2026 ballot. Supporters have said they expect to start collecting the requisite 546,651 valid signatures from registered California voters this month.
