Facebook by numbers: Inappropriate content removal figures revealed

15 May 2018

Entrance to Facebook’s offices in Menlo Park. Image: Sundry Photography/Shutterstock 

Facebook shares figures relating to the removal of hate speech and other content for the first time.

Facebook has had community standards and a dedicated safety team for years, but until now it has been reluctant to share exactly how those guidelines are enforced and how much content is actually removed.

Today (15 May), the company released figures on content removal for the first time, in an unprecedented show of transparency.

Increased accountability

Guy Rosen, vice-president of product management, said: “We believe that increased transparency tends to lead to increased accountability and responsibility over time, and publishing this information will push us to improve more quickly too.

“This is the same data we use to measure our progress internally – and you can now see it to judge our progress for yourselves.”

Millions of fake accounts disabled by Facebook

837m pieces of spam were removed in the first quarter of 2018 alone, nearly 100pc of which Facebook says were flagged before any users reported them. 583m fake accounts were disabled in Q1 of this year, in addition to the millions of registration attempts the company prevented. Even so, Facebook estimates that 3 to 4pc of accounts on the site during this period were still fake.

21m pieces of content featuring adult nudity and sexual activity were pulled in Q1, 96pc of which the company said were flagged by technology before being reported. Facebook estimates that out of every 10,000 pieces of content viewed, seven to nine violated its nudity and pornography rules.

Rosen noted that the technology to spot hate speech is still not up to scratch, so human content review teams check for violations. 2.5m pieces of hate speech were removed in Q1 of 2018, 38pc of which were flagged by technology.

Rise in graphic violence

The 86-page report also detailed a rise in posts containing graphic violence in the first quarter of the year, with 22 to 27 pieces out of every 10,000 viewed containing violence, up from an estimated 16 to 19 in late 2017. In all, 3.4m pieces of content were flagged with a warning screen or removed in this period, nearly triple the amount from Q4 of 2017.

Continued conflict in Syria may be one factor, said Alex Schultz, vice-president of data analytics: “Whenever a war starts, there’s a big spike in graphic violence.”

AI is still in its infancy

Rosen added that AI is still “years away from being effective for most bad content because context is so important”. He also said the team faces an ongoing battle against bad actors and malicious organisations.

“In addition, in many areas – whether it’s spam, porn or fake accounts – we’re up against sophisticated adversaries who continually change tactics to circumvent our controls, which means we must continuously build and adapt our efforts.”

AI did, however, flag 99.5pc of terrorist content on Facebook and 95.8pc of posts containing nudity.

“This is the start of the journey and not the end of the journey and we’re trying to be as open as we can,” said Richard Allan, Facebook’s vice-president of public policy for Europe, the Middle East and Africa.


Ellen Tannam was a journalist with Silicon Republic, covering all manner of business and tech subjects.
