Facebook is testing a new way to combat extremist content

2 Jul 2021


The social media giant is testing prompts on some US users who may have been exposed to extremist content.

Facebook is testing prompts that ask users in the US if they are worried about someone they know becoming an extremist.

According to CNN Business, the prompts are part of the social media giant’s Redirect Initiative, which aims to combat violent extremism. It does this by redirecting hate and violence-related search terms towards resources, education and outreach groups.

As part of the tests, some users are also being alerted that they may have been exposed to extremist content.

The alerts include: “Are you concerned that someone you know is becoming an extremist?” and “Violent groups try to manipulate your anger and disappointment”.

The prompts then direct users to a variety of resources such as Life After Hate, a US non-profit organisation that helps people leave far-right groups.

Speaking to CNN Business, a Facebook spokesperson said the test is “part of our larger work to assess ways to provide resources and support to people on Facebook who may have engaged with or were exposed to extremist content or may know someone who is at risk”.

The new prompts are the latest in a series of steps Facebook has taken to tackle harmful extremist content, misinformation and abuse on its platforms.

In May 2020, the first 20 members of Facebook’s oversight board were chosen. This board was established as an independent entity that would review controversial content moderation or suspension decisions that the company makes.

In October last year, it said it would strengthen its measures to tackle the radical, far-right conspiracy movement QAnon on its platforms following a number of incidents during the year.

Facebook was also one of several social media companies to ban former US president Donald Trump following the violent attack on the US Capitol at the beginning of the year.

On the abuse front, the company introduced a feature earlier this year that lets users control who can comment on a Facebook post.

This week, the platform launched a Women’s Safety Hub, which aims to centralise resources for women leaders, journalists and those who have received abuse online.

The resources range from training for politicians using Instagram for civic engagement to keyword blocking and comment controls for those in the public eye.

As the parent company of Instagram, Facebook also launched a feature to filter abusive and unwanted messages within the photo-sharing platform.

Jenny Darmody is the editor of Silicon Republic

editorial@siliconrepublic.com