California Bill Would Require Police to Disclose Use of AI in Writing Reports

East Palo Alto Police Officer Wendy Venegas reviews body camera footage and uses Axon's Draft One AI-based system to draft reports based on the audio from the camera at police headquarters in East Palo Alto on Sept. 23, 2024. A proposed California law would require police to disclose when generative AI is used to write police reports. It comes in the wake of KQED’s reporting on how police departments are adopting these tools.  (Martin do Nascimento/KQED)

California lawmakers are advancing a new bill that would require police officers to disclose when they use generative AI to write reports. The measure, which has passed the Senate and is awaiting a vote in the Assembly, is among the first in the country to address law enforcement’s use of AI to produce incident reports. KQED first reported on local departments adopting these tools last October.

Proponents of the bill say it’s critical to understand how police reports are created, given their key role in the criminal justice system.

“From the get, this police report is used to determine whether a criminal case will be started,” said Kate Chatfield, executive director of the California Public Defenders Association, which sponsored the bill. “Then a judge would be reviewing this police report, for example, to determine the circumstances of offense and to determine whether or not to hold somebody in jail.”

Chatfield said she doesn’t know if AI-generated police reports have caused any miscarriages of justice — and that’s part of the concern.

“We don’t know what we don’t know,” she said.

East Palo Alto Police Officer Wendy Venegas’ Axon body camera in East Palo Alto on Sept. 23, 2024. (Martin do Nascimento/KQED)

Police technology companies like Axon offer report-writing tools that process and summarize audio from body-worn camera footage. These tools include some safeguards to ensure officers review the AI-generated reports and sign off on them. But Chatfield said officers could also be using commercial products like ChatGPT, which lack such protections.

The bill, introduced by state Sen. Jesse Arreguín, D-7, covers all uses of generative AI for report writing. It would require a disclosure at the bottom of each page of an AI-generated police report, along with preservation of the original draft and an “audit trail” that identifies the bodycam footage or audio from which the report was generated.

Chatfield said it’s imperative that officers take full responsibility for their reports and review anything written by AI. Officers — not the technology — are the ones who must face cross-examination.

“Everybody deserves the right to know how that police report was generated,” she said.

In its analysis of the bill, the Assembly Committee on Privacy and Consumer Protection raised concerns that third-party tech companies could access and profit from sensitive police materials, potentially compromising privacy. KQED’s reporting was also cited in the analysis.

“There is potential for a race to the bottom, where sensitive body-worn camera data could be repurposed to train other technologies, including facial recognition systems or other surveillance tools,” the report said.

The committee added language to ensure that AI vendors cannot sell or misuse any personal information contained in the bodycam footage or report.

Opponents of the bill include the California Police Chiefs Association and the Police Officers Research Association of California, a police union advocacy and lobbying group. Neither responded to KQED’s request for comment, but PORAC submitted a statement to the legislature.

“The bill raises serious concerns about unintended consequences that would undermine officer integrity, impose significant administrative burdens, and introduce unnecessary legal vulnerabilities,” the statement said.

It said that the mandatory page disclosures “imply to the public, courts, or defense attorneys that such reports are inherently less reliable or credible” and that defense attorneys “might argue that AI introduced errors or biases, casting doubt on the officer’s account, regardless of the officer’s oversight or edits.”

East Palo Alto Police Officers Wendy Venegas (left) and Spencer Lawrence take a statement from a subject while responding to a call in East Palo Alto on Sept. 23, 2024. (Martin do Nascimento/KQED)

While the tools promise to save officers time so they can spend more of it on the street, PORAC said the disclosure mandates would undermine those potential benefits and increase administrative work.

The bill is currently with the Assembly Appropriations Committee. If approved there, it will go to a floor vote later this month.

The bill is narrow by design, Chatfield said. It doesn’t prohibit the use of AI or dictate which programs can be used.

“All we’re saying is you have to be transparent about it,” she said. “That’s it.”
