Meta set up one of the most extensive partnerships with fact checkers after the 2016 presidential election, in which Russia spread false claims on Facebook and other online platforms. The company created what has become a standard for how tech platforms limit the spread of falsehoods and misleading information.
However, the 2020 election and the COVID pandemic accelerated a backlash among conservatives who cast content moderation as a form of censorship. Facebook, along with Twitter and YouTube, banned Trump from their platforms after the Jan. 6, 2021, attack on the Capitol but eventually allowed him to return ahead of his second run for office. In recent years, fact checkers, researchers of false narratives, and social media content moderation programs have become targets of Republican-led congressional probes and legal challenges.
Zuckerberg said his views on content moderation have changed. Meta has made “too many mistakes” in how it applied its content policies, he said, and pointed to Trump’s election to a second term as “a cultural tipping point towards once again prioritizing speech.”
“So we are going to get back to our roots, focus on reducing mistakes, simplifying our policies, and restoring free expression on our platforms,” he said.
Meta said instead of working with third-party fact checkers, it would shift to a “community notes” program where users write and rate notes that appear next to specific posts. That’s similar to the approach Elon Musk has championed on X, the platform formerly known as Twitter.
Meta also said it would change how it enforces its policies, scaling back its use of automated systems except for “illegal and high-severity violations,” including terrorism, child sexual exploitation, and fraud. The company’s U.S. content moderation team will move from California to Texas. The move should “help us build trust to do this work in places where there is less concern about the bias of our teams,” Zuckerberg said.