Keller points to an incident last year when Facebook took down a post of an iconic Vietnam War photo of a naked girl running from a napalm attack. The removal upset users.
Keller says Facebook isn't actually under legal obligation to keep anything up or to take down a video of a crime. The company wants to respond to keep users happy. "They want to take things like this down, and they're working really hard to have a good way to do that," she says.
Keller thinks part of Facebook's dilemma is that society isn't sure yet whether the company should be like the phone company, which isn't responsible for what people say, or if it should be like a traditional broadcaster, subject to strict regulations on what can be put on air.
"And I think Facebook isn't really exactly like either of those two things," says Keller, "and that makes it hard as a society to figure out what it is we do want them to do."
Nearly 2 billion people use Facebook each month, and millions of them are uploading videos every day. Facebook also pays media outlets, including NPR, to upload videos. That volume of content makes Facebook's job a lot harder.
The company has three ways of monitoring content: There are the users — like the ones who flagged the murder videos from Cleveland. Facebook also has human editors who evaluate flagged content. And there's artificial intelligence, which can monitor enormous amounts of content.
But even AI has its limits, says Nick Feamster, a professor of computer science at Princeton University. Take that iconic photo of the naked girl from Vietnam, he says. "Can we detect a nude image? That's something that an algorithm is pretty good at," he says. "Does the algorithm know context and history? That's a much more difficult problem."
Feamster says it's not a problem that's likely to be solved anytime soon. However, he says AI might be able to detect signs of a troublesome account. It's sort of like the way a bank assesses credit ratings.
"Over time you might learn a so-called prior probability that suggests that maybe this user is more likely to be bad or more likely to be posting inappropriate or unwanted content," Feamster says.
So, Facebook would keep a closer eye on that account.
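The account-scoring idea Feamster describes can be sketched with a simple Bayesian model. This is purely illustrative — Facebook's actual systems are not public, and the function name, prior values, and numbers below are hypothetical assumptions, not anything from the article:

```python
def risk_score(flagged_posts: int, total_posts: int,
               prior_alpha: float = 1.0, prior_beta: float = 9.0) -> float:
    """Posterior mean of a Beta-Bernoulli model: the estimated probability
    that this user's next post gets flagged. The Beta(1, 9) prior encodes
    a (hypothetical) baseline belief that ~10% of posts from a brand-new
    account turn out to be inappropriate."""
    return (prior_alpha + flagged_posts) / (prior_alpha + prior_beta + total_posts)

# A new account with no history sits at the 10% baseline prior...
print(round(risk_score(0, 0), 3))    # 0.1
# ...and the estimate rises as flagged posts accumulate,
# which is when the platform might start watching more closely.
print(round(risk_score(8, 20), 3))   # 0.3
```

In this toy version, the "prior probability" is just a starting belief that gets updated by each user's track record, much the way a credit score blends population-level defaults with an individual's payment history.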
Between artificial intelligence and more human monitoring, it might be possible to stop the posting of criminal videos and hate speech.
But Stanford's Keller wonders if that's really what we want.
"Do we want one of our key platforms for communication with each other to have built-in surveillance and monitoring for illegal activity and somebody deciding when what we said is inappropriate and cutting it off?" she asks. "That's kind of a dystopian policy direction as far as I'm concerned."