Mourners visit a makeshift memorial near the Inland Regional Center on Dec. 4, 2015, in San Bernardino, California. The FBI officially labeled the attack carried out by Syed Farook and his wife, Tashfeen Malik, an act of terrorism. (Justin Sullivan/Getty Images)
When Gregory Clayborn heard that his daughter, Sierra, had been one of 14 people killed in a mass shooting in San Bernardino on Dec. 2, 2015, he says he was filled with rage.
"But I decided to channel that rage into something positive to try and bring about a change so that this won't have to happen again," Clayborn says.
For Clayborn, that change is getting digital platforms like Facebook, Twitter and Google to prevent terrorist organizations like ISIS from using their websites.
In a lawsuit filed along with the families of two other victims, Clayborn claims the technology companies know ISIS is using their platforms to recruit followers and plan attacks, and that they aren't doing enough to prevent it.
"For years, Defendants have knowingly and recklessly provided the terrorist group ISIS with accounts to use its social networks as a tool for spreading extremist propaganda, raising funds, and attracting new recruits," alleges the lawsuit, which was filed in federal district court in Los Angeles on May 3.
Tashfeen Malik, who opened fire at the Inland Regional Center along with her husband, Syed Rizwan Farook, on Dec. 2, 2015, pledged allegiance to ISIS on her Facebook page the day of the attack and had exchanged messages on the platform about her desire to carry out an attack.
The lawsuit details numerous other examples of terrorist organizations' active presence on social media sites and their use of those sites to recruit and radicalize individuals who go on to commit acts of terror.
The suit is the latest chapter in what has become an ongoing legal battle to force companies to take a more active role in preventing terrorists from using their platforms. A similar suit was filed against the same three companies in connection with the June 12, 2016, massacre at the Pulse nightclub in Orlando, Florida.
Current law, Section 230 of the Communications Decency Act, offers digital platforms broad protection from liability for content posted by users, a shield that would ordinarily cover companies like Facebook, Twitter and Google against these types of suits.
Clayborn's attorneys argue the law doesn't protect the companies because they profit by placing advertisements on ISIS posts and pages. That ad placement, they say, creates new content for which the companies should be held responsible.
"Their bottom line is making money," Clayborn says about Facebook, Google and Twitter. "They don't really care about anything else. If they did, they'd have algorithms to stop this and monitor this type of activity. They know how to do it, but they won't do it because of the money involved."
Clayborn wants the sites to shut down any accounts connected with terrorists or terrorist organizations, but some worry that could lead to regular users having their content censored as well.
"Generally, we find the more you do to prevent criminals and terrorists from using the internet, the more you're going to be preventing legitimate internet users from doing the same things," says Jeremy Malcolm, a senior global policy analyst with the Electronic Frontier Foundation.
Malcolm says the voices pushing for more content regulation have gained momentum amid concerns over the proliferation of "fake news" during the 2016 presidential campaign and terror attacks in the United States and abroad. But he says expecting terrorists not to use social media is no more reasonable than expecting them not to use roads or the post office.
Instead, he advocates empowering users to call out and report dangerous content, much as they would on a subway or at an airport.
"I think that's the best approach rather than requiring platforms to proactively seek out and remove information, because they're going to make mistakes if that happens," he says. "That's just going to lead to unnecessary censorship."
Clayborn says that argument doesn't work when you've lost a child to an act of terror.
"Where would you be if you were the parent of that child? Would you still be talking about limiting the censoring of the internet? Or would you be pursuing a way so that this wouldn't happen to anyone else?" he asks.
A Facebook spokesperson said in an email, "There is no place on Facebook for groups that engage in terrorist activity or for content that expresses support for such activity, and we take swift action to remove this content when it’s reported to us."
A Twitter spokesperson declined to comment, and Google did not respond to KQED's request for comment.