Removal of child abuse content on Facebook was impacted by Covid-19

12 Aug 2020

Image: © wachiwit/Stock.adobe.com

As Covid-19 forced Facebook to rely more on AI to moderate its content, fewer child abuse and self-harm posts were removed from its platforms.

Facebook’s latest Community Standards Enforcement Report – covering content moderation between April and June 2020 – has revealed how the social network and its sister platform Instagram fared during the Covid-19 pandemic.

Facebook announced in March that content reviewers would be sent home, with many now working remotely. With fewer reviewers available, Facebook said AI would take up the reins, leading to an increase in automated content moderation. This resulted in an increase in the removal of some categories of content, but a reduction in others.

In particular, hate speech posts saw a significant increase in removals from 9.6m at the start of the year to 22.5m pieces of content between April and June. This, Facebook said, was due to the expansion of its automation technology in Spanish, Arabic and Indonesian as well as improvements to its English detection technology. The system was able to spot 95pc of hate speech cases before they were reported by users.

On Instagram, the number of hate speech posts removed increased from 809,000 at the beginning of the year to 3.3m between April and June. Content promoting terrorism also saw an increase in removal, from 6.3m posts in Q1 to 8.7m in Q2.

However, with fewer human moderators, the company removed fewer than half as many child abuse posts from Instagram as it did in the previous quarter, while removals of suicide and self-harm posts on the platform fell from 1.3m in Q1 to 275,000 in Q2.

Bad timing

Speaking with Protocol, a Facebook spokesperson said that while its AI is the frontline responder for finding child abuse imagery or posts, human reviewers are essential for “banking” them. This means cataloguing these images to train the AI to be able to spot similar posts in the future.

“Without humans banking this content then our machines can’t find it at scale,” the spokesperson said. “And this compounds after a while, so our content-actioned numbers decreased.”
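Facebook has not published the details of its detection pipeline, but the “banking” process described above can be sketched in broad strokes: human reviewers add fingerprints of confirmed violating images to a bank, and automated systems then compare every new upload against that bank. The Python sketch below is purely illustrative, with hypothetical names (ImageBank, bank, matches), and uses a plain SHA-256 hash for simplicity; real systems rely on robust perceptual hashes such as PhotoDNA so that re-encoded or slightly altered copies still match.

```python
# Illustrative sketch only: a simplified "bank" of human-confirmed images.
# Real moderation systems use perceptual hashing, not exact SHA-256 matching.
import hashlib


class ImageBank:
    """Stores fingerprints of images confirmed as violating by human reviewers."""

    def __init__(self):
        self._fingerprints = set()

    def bank(self, image_bytes: bytes) -> None:
        # Step done by humans: a reviewer confirms the image and banks its fingerprint.
        self._fingerprints.add(hashlib.sha256(image_bytes).hexdigest())

    def matches(self, image_bytes: bytes) -> bool:
        # Step done automatically at scale: check each upload against the bank.
        return hashlib.sha256(image_bytes).hexdigest() in self._fingerprints


bank = ImageBank()
bank.bank(b"...confirmed violating image bytes...")       # human review feeds the bank
print(bank.matches(b"...newly uploaded image bytes..."))  # automated check on upload
```

With fewer reviewers banking new material, the set of fingerprints the automated systems can match against grows more slowly, which is consistent with the spokesperson’s point that the shortfall “compounds after a while”.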

It was reported earlier this year that a surge in online child abuse and sexual exploitation cases emerged during the Covid-19 pandemic, with the US National Center for Missing and Exploited Children seeing a 106pc increase in suspected cases in March versus the same period in 2019.

In response, Facebook and other major companies in the Technology Coalition launched a new project aimed at eliminating as much online child sexual exploitation and abuse as possible.

Writing in an update yesterday (11 August) about the company’s Community Standards Enforcement Report, Facebook’s vice-president for integrity, Guy Rosen, said: “With fewer content reviewers, we took action on fewer pieces of content on both Facebook and Instagram for suicide and self-injury, and child nudity and sexual exploitation on Instagram.

“Despite these decreases, we prioritised and took action on the most harmful content within these categories. Our focus remains on finding and removing this content while increasing reviewer capacity as quickly and as safely as possible.”

Colm Gorey was a senior journalist with Silicon Republic

editorial@siliconrepublic.com