Facebook plans to increase automated content moderation

12 May 2020


Facebook is relying more on AI and automation to help moderate its platform during the Covid-19 pandemic and beyond.

Today (12 May), Facebook released the latest edition of its Community Standards Enforcement Report (CESR), providing metrics on how it enforced policies on its platforms from October 2019 to March 2020.

The company outlined how it repurposed the tools and technologies it had developed to prevent the spread of misinformation relating to elections, in order to fight the waves of coronavirus misinformation that have been shared online.

In a separate blog post, the company said that it put warning labels on around 50m pieces of content related to Covid-19 during April, and removed more than 2.5m pieces of content advertising the sale of masks, hand sanitisers, surface disinfecting wipes and Covid-19 test kits.

“AI is a crucial tool to address these challenges and prevent the spread of misinformation, because it allows us to leverage and scale the work of the independent fact-checkers who review content on our services,” the company said.

Automated moderation

Since Facebook has had to temporarily send home many content reviewers who can only do their jobs on site, the company has increased its reliance on automated systems and has prioritised high-severity content for its moderation teams in order to keep its apps safer for users.

“Over the last six months, we’ve started to use technology more to prioritise content for our teams to review based on factors like virality and severity, among others,” the company wrote.

“Going forward, we plan to leverage technology to also take action on content, including removing more posts automatically. This will enable our content reviewers to focus their time on other types of content where more nuance and context are needed to make a decision.”
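Facebook has not published the details of how that prioritisation works, but conceptually it amounts to scoring each reported post and sending the highest-scoring items to reviewers first. The sketch below illustrates that idea only; the policy categories, severity weights, signals and field names are illustrative assumptions, not the company's actual system.

```python
# Hypothetical sketch of severity- and virality-based review prioritisation.
# The categories, weights and fields below are assumptions for illustration,
# not Facebook's actual ranking model.
from dataclasses import dataclass
import heapq

# Assumed severity weights per policy area (higher = reviewed sooner)
SEVERITY = {
    "child_safety": 10.0,
    "terrorism": 9.0,
    "suicide_self_injury": 8.0,
    "hate_speech": 6.0,
    "bullying_harassment": 5.0,
    "spam": 1.0,
}

@dataclass
class Report:
    post_id: str
    policy_area: str
    views_last_hour: int      # crude virality signal (assumed)
    classifier_score: float   # model confidence that the post violates policy

def priority(report: Report) -> float:
    """Combine severity, virality and model confidence into a single score."""
    severity = SEVERITY.get(report.policy_area, 1.0)
    virality = report.views_last_hour ** 0.5  # dampen very large view counts
    return severity * virality * report.classifier_score

def build_review_queue(reports: list[Report]) -> list[Report]:
    """Order reports so the highest-priority items are reviewed first."""
    return heapq.nlargest(len(reports), reports, key=priority)

if __name__ == "__main__":
    queue = build_review_queue([
        Report("a", "spam", views_last_hour=50_000, classifier_score=0.9),
        Report("b", "hate_speech", views_last_hour=2_000, classifier_score=0.8),
        Report("c", "suicide_self_injury", views_last_hour=300, classifier_score=0.95),
    ])
    for r in queue:
        print(r.post_id, round(priority(r), 1))
```

In such a scheme, a widely shared post in a high-severity category jumps the queue even if fewer users have reported it, which matches the company's stated aim of acting fastest on the most harmful content.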

Earlier this year, Monika Bickert, Facebook’s VP for global policy management, told Siliconrepublic.com that machine learning is starting to make more moderation decisions, but humans remain a vital part of the process, especially for more nuanced content such as bullying, harassment or hate speech. This comes as concerns have been raised about the role and responsibilities of content moderators working on behalf of the company.

Improved technology

The company said that it has improved the technology it uses to proactively detect content that violates its terms of use, helping it to remove more violating material before many people see it.

The company said that AI now proactively detects 88.8pc of the hate speech content it removes, up from 80.2pc in the previous quarter. It took action on 9.6m pieces of content for violating hate speech policies in the first quarter of the year.

“We are able to find more content and can now detect almost 90pc of the content we remove before anyone reports it to us,” Facebook said. “In addition, thanks to other improvements we made to our detection technology, we doubled the amount of drug content we removed in Q4 2019, removing 8.8m pieces of content.”

On Instagram, improvements to text and image matching technology have helped the company find more suicide and self-injury content, increasing the amount of content it took action on in this category by 40pc.

Facebook also said that it has improved its technology for finding and removing content similar to existing violations in its databases, which has helped to take down more child nudity and sexually exploitative content on both Facebook and Instagram.
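The report does not spell out how this matching works, but the general technique is to store compact fingerprints (perceptual hashes) of previously removed images and videos and flag new uploads whose fingerprints are very close to a known violation. The sketch below shows that idea in miniature; the hash values, distance threshold and function names are made up for illustration and are far simpler than production systems such as Facebook's open-source PDQ photo hash.

```python
# Minimal sketch of matching new uploads against a database of hashes of
# previously removed content. Threshold, hash size and sample values are
# illustrative assumptions, not Facebook's actual implementation.

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit perceptual hashes."""
    return bin(a ^ b).count("1")

def matches_known_violation(upload_hash: int,
                            known_hashes: set[int],
                            max_distance: int = 8) -> bool:
    """Flag an upload if its hash is within max_distance bits of any known violation."""
    return any(hamming_distance(upload_hash, h) <= max_distance for h in known_hashes)

if __name__ == "__main__":
    # Made-up 64-bit values standing in for real perceptual hashes.
    known = {0xF0F0F0F0F0F0F0F0, 0x123456789ABCDEF0}
    slightly_altered_copy = 0xF0F0F0F0F0F0F0F1  # 1 bit away from a known hash
    unrelated_image = 0x0F0F0F0F0F0F0F0F

    print(matches_known_violation(slightly_altered_copy, known))  # True
    print(matches_known_violation(unrelated_image, known))        # False
```

Because near-duplicates of banned material can be detected this way without a human ever seeing them again, improvements to matching technology translate directly into more proactive removals of the kind the report describes.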

Kelly Earley was a journalist with Silicon Republic
