Facebook Doesn’t Remove “False: Verified by Animal Político,” or How to Fight Fact-Checking Biases
Fact-checking outlets would determine that a piece of content was false, and Facebook’s tool would then not remove the original post but add a label such as “False: Verified by Animal Político,” crediting whichever of the network’s organizations had done the check. Meta said that a post’s reach was reduced once it received the label, and an official Meta post states that the program was working and that users found value in the labels applied after a fact-checker’s rating. Rather than remove the original posts, the company chose to show its confidence in the organizations doing this work.
The company said that the move was to counter fact-checkers’ political bias and censorship. “Experts, like everyone else, have their own biases and perspectives. This showed up in the choices some made about what to fact check and how,” Meta’s chief global affairs officer Joel Kaplan wrote on 7 January.
Nature spoke to communication and misinformation researchers about the value of fact-checking, where perceived biases come from and what Meta’s decision could mean.
“Ideally, we’d want people to not form misperceptions in the first place,” adds van der Linden. But if people have already been exposed, he says, then reducing those misperceptions is almost as good as it gets.
According to Jay Van Bavel, a psychologist at New York University in New York City, fact-checking is less effective when an issue is politically polarizing. “If you’re fact-checking something around Brexit in the UK or the election in the United States, that’s where fact-checks don’t work very well,” he says. Partisans don’t want to believe things that make their party look bad.
On Facebook, articles and posts deemed false by fact-checkers are currently flagged with a warning, and the platform shows them to fewer users. Users are also more likely to ignore flagged content than to read and share it.
Flagging posts as problematic could also have knock-on effects on other users that are not captured by studies of the effectiveness of fact-checks, says Kate Starbird, a computer scientist at the University of Washington in Seattle. Measuring the direct effect of labels on individual users’ beliefs and actions, she says, is not the same as measuring the broader effects that fact-checks have across the platform.
Van Bavel says that the apparent bias arises because conservative misinformation spreads more widely: when one party spreads most of the misinformation, fact-checking will look biased against it, because its members get called out more often.
Much of the “news” being debunked consisted of photos or videos taken out of context, such as one that falsely claimed that a group of migrants had hijacked a truck in Chiapas, along with viral tales about child abductions in Latin America. Then came the Covid-19 pandemic, and the independent fact-checkers played a lead role in debunking claims such as that drinking bleach eliminates the virus or that 5G networks caused the pandemic.
When Meta asked us to join the project, I was an editorial supervisor at Animal Político. To be part of it, you needed to be certified by the International Fact-Checking Network, run by the Poynter Institute, which sets the editorial rules for verification with a highly rigorous and transparent code of principles. Meta has its own requirements, but it also trusted this network to enforce them. Political discourse, or any other type of content classified as opinion, couldn’t be refuted. So the misinformation about the first migrant caravan that crossed Mexico during the first year of AMLO’s six-year term, which strengthened a racist anti-immigrant discourse, could not be questioned.
On January 7, Mark Zuckerberg announced that the program would be ending in the United States. It seems only a matter of time before the initiative disappears in Latin America and the rest of the world, undermining independent news organizations that depend, to a greater or lesser extent, on that funding. Animal Político in Mexico, Agência Lupa and Aos Fatos in Brazil, and Maldita.es in Spain are some of the outlets that will be affected. Why get rid of something that was working, according to the company itself?
First implemented in the United States and then in the rest of the world, the project seemed to be working: by Meta’s own count, more than 100 organizations worldwide were actively participating in it. Last year, in the context of the European Union elections, Meta touted the effectiveness of its labeling system, reporting that 95 percent of users did not click through to content once it was labeled false or misleading.
As part of its counteroffensive, the company created the Third-Party Fact-Checking program, also known as the independent fact-checkers program, to address misinformation on its platforms.
It was 2016, and the problem of fake news kept Mark Zuckerberg, CEO of Meta (then called Facebook), up at night. The young founder was under intense pressure and faced repeated questioning from US authorities. Donald Trump’s victory in the presidential race, and accusations of Russian interference in the election, raised serious concerns. For the first time, the platform’s influence on the political landscape was being seriously examined; US lawmakers demanded that the company “protect democracy.” Zuckerberg would eventually appear before the Senate in 2018.