Newsom Signs California AI Transparency Bill Tailored to Meet Tech Industry Tastes

Gov. Gavin Newsom speaking at the Google headquarters on Aug. 7 in San Francisco. Today, Newsom signed State Senator Scott Wiener’s SB 53, which aims to put safety guardrails on AI development while not squashing the growing AI industry.  (Courtesy of the Office of the Governor)

Gov. Gavin Newsom today signed into law Senate Bill 53, which requires large model developers like Anthropic and OpenAI to be transparent about the safety measures they put in place to prevent catastrophic events. The legislation also creates CalCompute, a public cloud infrastructure that expands access to AI resources for researchers, startups and public institutions.

In announcing his decision, Newsom wrote, “California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive. This legislation strikes that balance.”

Senator Scott Wiener (D-San Francisco) authored the bill, after his original effort became target No. 1 for Silicon Valley lobbyists last legislative session and died on Newsom’s desk. That bill spooked high-profile California politicians, including Nancy Pelosi, nervous about getting on the wrong side of Big Tech. In last year’s veto message for SB 1047, Newsom announced a working group on AI, which helped lay the groundwork for SB 53.


“With a technology as transformative as AI, we have a responsibility to support that innovation while putting in place commonsense guardrails to understand and reduce risk,” wrote Wiener. “I’m grateful to the Governor for his leadership in convening the Joint California AI Policy Working Group, working with us to refine the legislation, and now signing it into law.”

The working group issued its report in June, calling on lawmakers to pass transparency requirements and whistleblower protections, declaring that California has the “responsibility” to ensure the safety of generative artificial intelligence software, “so that their benefit to society can be realized.”

Close-up of phone screen displaying Anthropic Claude, a Large Language Model (LLM) powered generative artificial intelligence chatbot, in Lafayette, California, June 27, 2024. (Photo by Smith Collection/Gado/Getty Images)

The report noted that AI systems have been observed finding loopholes that allow them to behave in ways their programmers did not intend. It also warned that competitive pressures are undermining safety, and that policy intervention is needed to prevent a race to the bottom.

Anthropic, which makes the chatbot Claude, was the first major AI developer to endorse SB 53, having offered more cautious support for SB 1047. “We’re proud to have worked with Senator Wiener to help bring industry to the table and develop practical safeguards that create real accountability for how powerful AI systems are developed and deployed, which will in turn keep everyone safer as the rapid acceleration of AI capabilities continues,” wrote Jack Clark, co-founder and head of policy for Anthropic.

Federal lawmakers on both sides of the aisle have historically taken a relatively light touch toward regulating the technology industry. Despite high-drama hearings about troubling trends in social media and now AI, few bills make it out of their respective committees, let alone to a floor vote. “While federal standards remain essential to avoid a patchwork of state regulations, California has created a strong framework that balances public safety with continued innovation,” Clark added.

This time around, other AI developers got behind Wiener’s effort. “Meta supports balanced AI regulation and the California Frontier AI law is a positive step in that direction,” a spokesperson for Meta wrote in a statement.

Earlier this year, a coalition of more than 20 tech and youth safety advocacy organizations sent a letter to Gov. Newsom in support of SB 53. “If basic guardrails like this had existed at the inception of social media, our children could be living in a safer, healthier world,” the letter said.

“We are incredibly proud to have worked with Senator Wiener and Governor Newsom on this AI safety legislation,” wrote Sneha Revanur, founder of Encode AI, a youth-led nonprofit that pushes for responsible AI through policy. The group was one of the primary drivers behind that coalition. “Frontier AI models have immense potential but without proper oversight, they can create real risks and harms. California has shown it’s possible to lead on AI safety without stifling progress.”

The bill was opposed by business and industry representatives, including the California Chamber of Commerce, TechNet and the Silicon Valley Leadership Group.

“It’s vital that we strengthen California’s role as the global leader in AI and the epicenter of innovation. SVLG is committed to advocating for policies that seek to responsibly scale this transformative technology at this pivotal juncture and to unleash a new wave of innovation and growth,” Ahmad Thomas, CEO of Silicon Valley Leadership Group, wrote in a statement. “We will continue to work with the Governor and leaders in the Legislature to ensure that new laws and regulations don’t impose undue burdens on the most innovative companies in the world.”
