Fresh reports highlight content moderation issues at Facebook

1 Apr 2022


A now-fixed bug potentially led to increased views of harmful content on the platform, while a study found that many conspiracy theory articles were not labelled as misinformation.

A bug that led to a “massive ranking failure” left Facebook users exposed to harmful content.

As first reported by The Verge, a group of Facebook engineers identified a bug that left as much as half of all News Feed views exposed to potential “integrity risks”. Engineers spotted the issue last October when a surge of misinformation started flowing through the News Feed.

The bug meant that posts from repeat misinformation offenders were being distributed instead of suppressed, boosting views by as much as 30pc worldwide. According to an internal report on the incident seen by The Verge, the ranking issue was fixed on 11 March.

A spokesperson from Facebook’s parent company Meta confirmed the incident in a statement to The Verge and said the bug “has not had any meaningful, long-term impact on our metrics”.

In a statement, a Meta spokesperson said The Verge “vastly overstated what this bug was”, adding that only a very small number of views were ever impacted because most posts are not eligible to be down-ranked in the first place.

“After detecting inconsistencies, we found the root cause and quickly applied fixes. Even without the fixes, the multitude of other mechanisms we have to keep people from seeing harmful content – including other demotions, fact-checking labels and violating content removals – remained in place.”

A lack of warning labels

While this incident stemmed from a bug, another report has pointed to issues with how harmful or misleading content is labelled on Facebook.

According to The Guardian, a new study released by the Center for Countering Digital Hate (CCDH) said the social media platform failed to label 80pc of articles promoting a conspiracy theory that the US is funding bioweapons for Ukraine as misinformation.

Imran Ahmed, chief executive of the non-profit disinformation research group, said “conspiracy theories are given a free pass” in the majority of cases.

“If our researchers can identify false information about Ukraine openly circulating on its platform, it is within Meta’s capability to do the same,” he said.

In response, a Meta spokesperson said the report’s methodology is “flawed” and misrepresents “the scale and scope” of the company’s efforts.

“We have the most robust system for fact-checking false claims of any platform and our fact-checking partners have debunked dozens of claims about the Ukrainian bioweapons hoax in several languages including Ukrainian, Russian and English,” they said.

This is not the first time the CCDH has identified shortcomings in the company’s labelling of false information.

A report from the non-profit in February found that many posts containing Russian propaganda about Ukraine did not carry any warning labels highlighting misinformation.

CCDH researchers analysed a sample of 3,593 articles posted by outlets that have been identified as part of “Russia’s disinformation and propaganda ecosystem”, including Sputnik News.

The researchers then used Meta’s own CrowdTangle tool to identify more than 1,300 posts featuring the 100 most popular articles from the sample. Of these 1,304 posts, the researchers found that 91pc did not carry any warning labels.

Another CCDH report found that the social media platform had also failed to add misinformation warning labels to posts promoting articles from the world’s leading publishers of climate denial.

‘Err on the side of adult’

A third report, published this week in The New York Times, has identified another issue when it comes to content moderation at Facebook.

While the company reports millions of photos and videos suspected of child abuse each year, The New York Times reports that when ages are unclear, a training document instructs content moderators to “err on the side of adult” – a policy that could lead to images and videos of child abuse going unreported.

Antigone Davis, head of safety for Meta, confirmed the policy to The New York Times and said it stemmed from privacy concerns for those who post sexual imagery of adults.

A Meta spokesperson said: “We report more material to the National Center for Missing and Exploited Children than literally all other tech platforms combined, because we are the most aggressive. What’s needed is for lawmakers to establish a clear and consistent standard for all platforms to follow.”

Content moderation on the social media platform has been under scrutiny in recent years.

In 2018, a Channel 4 investigation revealed systemic failures regarding the removal of content flagged as inappropriate or recommended to be removed by users. Following this, content moderators themselves came into the spotlight, with many reports highlighting the impossible task they face.

With millions of new Facebook posts going live every day, those working in content moderation must make rapid judgement calls on vast volumes of content while being routinely exposed to distressing images – a combination that can take a serious toll on their mental health.

In December 2019, a group of moderators sought damages for personal injuries caused by exposure to graphic content while working on behalf of Facebook in Dublin. In May 2020, Facebook agreed to pay a settlement of $52m to current and former content moderators who said they developed mental health issues from their work.

Updated, 11.21am, 1 April 2022: This article was updated to add comments from a Meta spokesperson.


Jenny Darmody is the editor of Silicon Republic