Image by Natalie Adams

Before Shutdown, Meta’s Fact-Checking Program Only Labeled 14 Percent of Russian, Chinese, and Iranian Disinformation Posts

Flaws in process could spill over into coming community notes system — with worse results

A full version of this report is available through NewsGuard’s Reality Check.


By Dimitris Dimitriadis, Eva Maitland, and McKenzie Sadeghi | Published on Jan. 22, 2025

Only 14 percent of posts by global users on Meta's platforms Facebook, Instagram, and Threads advancing a sampling of 30 Russian, Chinese, and Iranian disinformation narratives identified by NewsGuard analysts were tagged as false under the fact-checking program that Meta is soon ending in the U.S., NewsGuard found.

Mark Zuckerberg announced on Jan. 7, 2025, that Meta’s fact-checking program — launched in December 2016 following criticism that the company did not do enough to curb foreign influence in the U.S. presidential election — was being dropped in the U.S. The program contracts third-party fact-checkers at major news outlets, including USA Today, Reuters, and The Associated Press, and overlays their fact-check articles onto false content on Meta platforms.

In its place, Meta says it will adopt “Community Notes,” a crowdsourced approach similar to the one used by Elon Musk’s X.

However, as explained below, if Meta applies the same technology and rules to community notes that it has used for fact checker-generated labels, the results are likely to be no more promising. Indeed, the results could be even weaker in terms of speed and coverage, because a community note can be applied only after a community of users is first shown to reflect what Facebook has said must be “a range of perspectives.”