Facebook Says It Will Scrub Posts That Incite Violence in Derek Chauvin Verdict

Leading up to the verdict in the trial of Derek Chauvin, the former Minneapolis police officer convicted of murdering George Floyd, Facebook announced its efforts to prevent online content from leading to offline harm.

In a blog post, Facebook Vice President of Content Policy Monika Bickert wrote that teams are removing calls to violence in Minneapolis, but not in other locations, a notable limitation given that George Floyd’s death last year prompted protests nationwide.
That geographic limitation, however, could change. “We will continue to monitor events on the ground to determine if additional locations will be deemed as temporary, high-risk locations,” Bickert wrote.
She added, “We want to strike the right balance between allowing people to speak about the trial and what the verdict means, while still doing our part to protect everyone’s safety.”
Facebook and Instagram posters will be allowed to discuss the trial without seeing their posts erased, since the social media giant considers Derek Chauvin a public figure. Facebook considers Floyd an involuntary public figure, so praise, celebration or mockery of his death will be removed. In addition, content that Facebook’s screeners consider graphic will be marked as disturbing or sensitive.
This is not the first time in recent months that Facebook has openly declared war on a topical subject that attracts misinformation (typically defined as unwittingly inaccurate posts or shares) and disinformation (typically defined as intentional falsehoods, often coordinated by political actors). Consider the company’s efforts to protect the integrity of the U.S. presidential election last November, and to protect public health during the coronavirus pandemic.
“There are some who believe that we have a financial interest in turning a blind eye to misinformation,” Facebook VP of Integrity Guy Rosen wrote in late March. “The opposite is true. We have every motivation to keep misinformation off of our apps and we’ve taken many steps to do so at the expense of user growth and engagement.”
Facebook, directly and indirectly, employs tens of thousands of content screeners, not to mention artificial intelligence.
But outside research has found that the company’s efforts are often undercut at the highest levels of management, and that they are inconsistent and slow. A study out this week from the nonprofit advocacy group Avaaz found that COVID-19-related misinformation in English in the U.S., which is to say the arena where Facebook’s content moderation is at its best, took the company the better part of a month to address.
This post was updated at 3:20 p.m. to reflect the conviction of Derek Chauvin.