OpenAI Back in Court Over Canada School Shooter’s Use of ChatGPT


A young boy brings flowers to a memorial in honor of the victims of one of Canada's deadliest shootings in Tumbler Ridge, British Columbia, Canada, on Feb. 13, 2026. A lawsuit alleges negligence and wrongful death over the shooter's interactions with the chatbot in the weeks and months leading up to the fatal attack. (Paige Taylor White/AFP via Getty Images)

The families of victims of a school shooting in a British Columbia town sued artificial intelligence company OpenAI in a San Francisco court this week, alleging that the company behind ChatGPT failed to alert police of the shooter’s alarming interactions with the chatbot.

One of the lawsuits was filed on behalf of Shannda Aviugana-Durand, an education assistant who was shot and killed in a library at Tumbler Ridge Secondary School. The suit alleges negligence, aiding and abetting a mass shooting, wrongful death and liability, among other claims. According to the lawsuit, Aviugana-Durand’s daughter was present at the time of the attack.

The education assistant was one of six people killed by an 18-year-old in February. The teen — who later shot herself — also killed her mother and her 11-year-old half-brother at home beforehand. Twenty-five people were also injured in the attack, Canada’s deadliest mass shooting in years.

Another lawsuit was filed Wednesday on behalf of 12-year-old Maya Gebala, who was critically injured in the February shooting. The plaintiffs’ attorney, Jay Edelson, said in an interview with the Associated Press that decisions made by OpenAI and its CEO Sam Altman “have destroyed the town. The people are really resilient, but what happened is unimaginable.”

Altman sent a letter last week formally apologizing to the community for his company’s failure to notify law enforcement about the shooter’s online behavior in the weeks leading up to the attack.

The case highlights concerns about the harms posed by overly agreeable AI chatbots and what obligations the tech industry has to control them or notify authorities about planned violence by chatbot users. This month, prosecutors investigating the deaths of two University of South Florida doctoral students said that the suspect asked ChatGPT about body disposal in the lead-up to the students’ disappearance.

OpenAI CEO Sam Altman speaks during the BlackRock Infrastructure Summit on March 11, 2026, in Washington, D.C. (Anna Moneymaker/Getty Images)

“It’s not the first lawsuit of its kind,” said Robin Feldman, law professor at UC Law San Francisco and director of its AI Law and Innovation Institute. “This is part of an early wave of lawsuits in which citizens are asking to hold LLMs responsible for harms that happen down the line, whether they are crimes, mental health problems, suicide.”

“ChatGPT was first on the scene. And it is the most widely known of the LLMs,” Feldman said. “That puts it in the hot seat as the law tries to understand how to wrangle this unusual beast.”

In response to the lawsuit, OpenAI said in a written statement that the “events in Tumbler Ridge are a tragedy. We have a zero-tolerance policy for using our tools to assist in committing violence.”

“As we shared with Canadian officials, we have already strengthened our safeguards, including improving how ChatGPT responds to signs of distress, connecting people with local support and mental health resources, strengthening how we assess and escalate potential threats of violence, and improving detection of repeat policy violators,” the company said.

Edelson, a Chicago-based lawyer known for taking on the tech industry, is already juggling a number of high-profile cases against OpenAI, including one brought by the family of a California teenager who killed himself after conversations with ChatGPT and another by the heirs of an 83-year-old Connecticut woman killed by her son after ChatGPT allegedly amplified the man’s “paranoid delusions.”

“This is not a passive technology,” Edelson said, comparing the chatbot interactions with a more conventional online search for information. “What we’ve seen in the past is that (for) people who are mentally ill, the chatbot will validate what they’re saying and then amplify what they’re saying.”

Last week, Edelson visited the small town of Tumbler Ridge and met with dozens of people in the basement of a visitor center. He also visited Gebala at a children’s hospital in Vancouver, where she remains hospitalized and seemed alert but unable to speak.

“It was so heartbreaking,” he said.

Candles, flowers, photographs, plush toys and other items at a makeshift memorial for the victims four days after a deadly mass shooting took place at a school, in the town of Tumbler Ridge, British Columbia, Canada, on Feb. 13, 2026. (Paige Taylor White/AFP via Getty Images)

The lawsuits filed Wednesday also represent the families of the five slain children targeted in the school shooting: Zoey Benoit, Abel Mwansa Jr., Ticaria “Tiki” Lampert and Kylie Smith, all 12, and Ezekiel Schofield, 13.

After the shootings, OpenAI disclosed that last June it had flagged the shooter’s account as having been used to discuss violence against other people.

The company said it considered whether to refer the account to the Royal Canadian Mounted Police, but determined at the time that the account activity didn’t meet a threshold for referral to law enforcement. OpenAI banned the account in June for violating its usage policy.

The lawsuits filed Wednesday allege “the victims didn’t learn this because OpenAI was forthcoming, but because its own employees leaked it to The Wall Street Journal after they could no longer stomach the company’s silence.”

In his letter, Altman said he was “deeply sorry that we did not alert law enforcement to the account that was banned in June.”

“While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered,” Altman wrote.

British Columbia Premier David Eby, in a social media post, called the apology “necessary, and yet grossly insufficient for the devastation done to the families of Tumbler Ridge.”

The Gebala lawsuit accuses OpenAI of negligence involving a failure to warn law enforcement and “aiding and abetting a mass shooting.”

Along with damages, the Gebala lawsuit seeks a court order that would require OpenAI to ban users from ChatGPT if their accounts were deactivated for violent misuse, and to require the company to alert law enforcement when its systems identify someone who poses a “real-world risk of violence.”

An earlier case was filed in a court in British Columbia, but a team of lawyers in both countries is seeking to bring the affiliated cases to San Francisco, where OpenAI is headquartered.

‘Untried territory’

Feldman said reports that the company flagged the risk but failed to act effectively were “deeply troubling.”

“As with so much about AI, the lawsuit will take us into untried territory,” she said. “The old doctrines are being applied to new circumstances.”

She said if the families were to win, the company would have to pay damages and assume responsibility for altering its platform to identify and respond to risks.

The major issues the lawsuit will tackle, she said, are whether OpenAI and ChatGPT are protected by the First Amendment and whether OpenAI had “a duty to act.”

Community members attend a vigil to honor the victims of one of Canada’s deadliest mass shootings in Tumbler Ridge, British Columbia, Canada, on Feb. 13, 2026. (Paige Taylor White/AFP via Getty Images)

She said that there are parts of U.S. law that shield tech companies from liability for content posted by their users. Essentially, this means platforms are more like “bulletin boards” and “are not responsible for the content.”

But this case would raise the question, she said, “Are LLMs like a bulletin board or publisher? Or are they like a facilitator who helped with the crime?”

Some companies struggle with the burden of responsibility when reviewing potential threats to public safety, Feldman said: “If they try to help out, they can be viewed as accepting the mantle of responsibility.”

According to Feldman, families are also likely to argue that the LLM “is a defective product without appropriate safeguards.

“In that case, the question is the following: ‘Is the LLM a defective product, or merely a product that was used improperly? And is it analogous to a product at all?’”

“All of these are tough questions as we enter the age of AI, and the courts are just beginning to explore them,” Feldman said.

The Associated Press’ Jim Morris contributed to this story.
