Facebook logos are pictured on the screens of a smartphone and a laptop computer in central London on Nov. 21, 2016. (Justin Tallis/Getty Images)
How does Facebook decide to remove a post for hate speech or bullying? What about a photo that shows a woman’s breasts? Now, for the first time, we know.
Facebook has published its internal guidelines for flagging and removing content. This comes a little more than a week before F8, the company’s annual developers conference. The tone at the event could be markedly different this year given the recent scandal over how the company collects, handles and monetizes user data.
Facebook has been criticized for years over how it censors content and for a lack of transparency into the process. Critics point out that there are hate groups whose pages haven't been shut down, while something like a post of artwork showing naked breasts might get pulled. Take, for example, a famous nude painting of Bea Arthur of "The Golden Girls": it made the news when it was sold, but posts about it on Facebook were flagged and taken down.
Jillian York said she was one of those who posted about the painting and then had the post removed. The irony is that York is director of International Freedom of Expression at the Electronic Frontier Foundation.
The EFF has battled online censorship since the early days of user-run internet forums. York said back then the idea was that users could self-police and self-regulate their content, and that these small forums should be places for uncensored expression. But the internet has changed a lot since then.
Facebook is not a small forum run by its users. It's one of a few corporations that have increasingly monopolized the internet, York said. The website is filled with all kinds of content posted by all kinds of people, many of whom are antagonizing each other, she added. As at any large media company, she said, it makes sense for some dangerous content to be moderated or regulated.
But it’s tricky, York said. Take the word “dyke.” It’s a slur, yet it has also been reclaimed and used in a positive way by some of those it is used against. The advocacy group Listening 2 Lesbians criticized Facebook for censoring the word and taking down posts from people using it positively. York said Facebook maintains it’s trying to address the nuances of situations like this, but posts using the term in a positive way still get taken down.
Facebook has now made public how it makes decisions in cases like this. In a 27-page post laying out its “Community Standards,” the company spells out what you can and cannot put on the site. Facebook will also now let users appeal when a post is taken down. The guidelines cover everything from animal cruelty and nudity to false information, which the company admits is a “challenging issue.”
Transparency advocates applaud Facebook’s move, but some, like Benjamin Decker, a research fellow with the Shorenstein Center on Media, Politics and Public Policy, worry that disclosing the guidelines will help people find ways around the rules.
“It literally gives a playbook for the purveyors of disinformation and fake news on how to survive on the platform,” Decker said.
Here is the rationale Facebook gave for making its internal guidelines public.
“We decided to publish these internal guidelines for two reasons. First, the guidelines will help people understand where we draw the line on nuanced issues. Second, providing these details makes it easier for everyone, including experts in different fields, to give us feedback so that we can improve the guidelines – and the decisions we make – over time.”
The rules laid out in the 27 pages are detailed, but Santa Clara University law professor Eric Goldman said they show how the decision of what to censor is, in the end, subjective and therefore arguable.
“Every single element of Facebook’s content moderation guidelines had some place where we as lawyers could start litigating over what that meant and how that would be applied to specific facts,” Goldman said.
Is it bullying or a joke between friends? Art or porn? Facebook explains that artificial intelligence and computer analysis assist in identifying potentially problematic posts. But at the end of the day, it is humans who decide.
Facebook has hired a whole team of “content moderators” to review and censor what is posted on the site. Goldman said the guidelines and human curation show how Facebook, like Twitter, is not a neutral platform featuring third-party content, but a media company that makes conscious editorial decisions about what content can be on the site, how it is packaged and what gets prioritized.
Goldman said reading Facebook’s guidelines feels like reading the editorial guidelines at a media organization.
“At some point I don’t think there’s any positioning Facebook can take other than it is a media company with editorial standards that it tries to apply consistent with its editorial discretion,” Goldman said.
But unlike a media company, Facebook is not liable for what users post, no matter how much it curates the content. That's thanks to Section 230 of the Communications Decency Act, a 1996 law that absolves websites of liability for content posted by third-party users.
Regardless, Facebook is now facing enough pressure from its users that it has hired 7,500 people to review content. Goldman said that number is definitely going to grow as Facebook tries to police the posts uploaded by its more than 2 billion users.