Facebook reports increased removal of hate speech across its platforms

20 Aug 2021


In its tenth quarterly Community Standards Enforcement report, Facebook said it had removed 31.5m posts containing hate speech from Facebook and 9.8m from Instagram.

Facebook has published its Community Standards Enforcement report for the second quarter of 2021, detailing the measures it has taken to prevent hate speech and other harmful content on its platforms.

The social media giant began publishing these reports only in recent years in a bid to improve transparency, and has released 10 to date. Hate speech content removal has increased more than 15-fold on Facebook and Instagram since the reporting began.

The reports, which provide detailed data on both the company’s Facebook and Instagram platforms, come from Facebook’s Transparency Center, which was launched earlier this year.

According to the report, Facebook removed 31.5m pieces of hate speech content from Facebook (up from 25.2m in Q1) and 9.8m from Instagram (up from 6.3m in Q1).

In a blog post released to coincide with the publication of the report, Facebook’s VP for integrity, Guy Rosen, said: “We’re committed to sharing meaningful data so we can be held accountable for our progress, even if the data shows areas where we need to do better.”

About 7.9m pieces of content were removed from Facebook on bullying and harassment grounds, although some of those removals were later reversed as claims in this category can be subjective and people are allowed to appeal.

About 2.3m posts were actioned due to child nudity and sexual abuse, and more than 97pc of these were identified and removed by Facebook itself, with the remainder flagged by users.

The company has also put significant effort into removing terrorist content and content posted by organised hate groups. It removed 7.1m terrorism-related posts in the second quarter, down from 9m in the first quarter, and deleted 6.2m organised hate posts, down from almost 10m in the first quarter.

Facebook said it has also clamped down on fake accounts, removing approximately 1.7bn of these in Q2. The Q1 report claimed that roughly 5pc of its monthly users during that period were fake accounts.

Less data was available for Instagram, as its measurement metrics are less developed than Facebook's. However, the Community Standards Enforcement report did say that Instagram actioned 9.4m posts containing child nudity and sexual abuse, 95.8pc of which the company identified itself without user input.

Facebook ascribed the progress it has made in identifying and removing instances of hate speech to its investment in AI technologies, which it says has enabled it to enforce its policies “across billions of users and multiple languages”.

According to Rosen: “In Q2 2021, we improved our proactive detection technology on videos and expanded our media-matching technology on Facebook, allowing us to remove more old, violating content. Both enabled us to take action on more violating content.”

Blathnaid O’Dea was a Careers reporter at Silicon Republic until 2024.
