Content moderation is a disturbing job, but someone has to do it

9 Dec 2019


In the battleground of content moderation, it’s the frontline workers who will suffer most, writes Elaine Burke.

The challenge of online content moderation has been gaining widespread attention in recent years. From exposés on the trauma these workers can experience and clean-up commitments made by the largest platforms, to legislation proposed by policymakers and legal challenges against employers, the public at large is becoming more aware of the ‘silent heroes of the internet’.

The greatest hurdle to overcome in content moderation is one of immense scale. Take Facebook, the world’s largest social network, for example: the figures are overwhelming. For starters, there are an estimated 3.3m posts to Facebook per minute. Giving each of these just a one-second glimpse would take a single person more than a month.

According to Facebook’s 2019 transparency reports, an estimated 20 to 25 posts per 10,000 contain violent or graphic content. That’s as many as 8,250 pieces of content per minute requiring moderation under just one parameter.

True to its word, Facebook doubled its content moderation staff in 2018, from about 7,500 that summer to 15,000 by year end. Even if every one of these 15,000 workers (not all of whom are direct employees of Facebook) were presented with the basic task of human moderation for these posts, each would have only around two minutes per post to evaluate for violent and graphic content alone. (Of course, this is just some rough mathematics based on the figures available, and the system of moderation includes automated assistance, but it does give a sense of the scale of content requiring moderation and the limited capacity of humans to keep up.)
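For anyone who wants to check the maths, here is a rough back-of-envelope sketch in Python. The figures are the estimates cited above; the one-second glimpse and the even split of flagged posts across all 15,000 moderators are simplifying assumptions for illustration, not a description of how Facebook actually assigns work.

```python
# Back-of-envelope sketch of the moderation arithmetic discussed above.
# All figures are the article's cited estimates; the even workload split
# is an assumption for illustration only.

POSTS_PER_MINUTE = 3_300_000     # estimated Facebook posts per minute
FLAGGED_PER_10K = (20, 25)       # violent/graphic posts per 10,000 (2019 transparency reports)
MODERATORS = 15_000              # Facebook's stated moderation workforce, end of 2018

# A one-second glimpse at every post in a single minute's worth of content,
# done serially by one person.
seconds_for_all = POSTS_PER_MINUTE * 1
print(f"~{seconds_for_all / 86_400:.0f} days for one person to view a single "
      f"minute's worth of posts")

# Flagged volume per minute and the time each moderator would have per post,
# assuming the load is spread evenly across the whole workforce.
for rate in FLAGGED_PER_10K:
    flagged_per_minute = POSTS_PER_MINUTE * rate / 10_000
    posts_per_moderator_per_minute = flagged_per_minute / MODERATORS
    print(f"{rate}/10,000 flagged -> {flagged_per_minute:,.0f} posts/min, "
          f"~{1 / posts_per_moderator_per_minute:.1f} minutes per post per moderator")
```

Run as written, this gives roughly 38 days of viewing for one minute of posts, and around two minutes per flagged post per moderator for violent and graphic content alone.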

Then, the next layer of this challenge arises. On top of the sheer volume of content there is the scope and complexity of content moderation. Content flagged for moderation can range from offensive words, political advertising and nudity to suicidal ideation, child abuse imagery and acts of graphic violence. Decisions have to be sensitive, localised and based on a suitable interpretation of sentiment and intent. There’s nudity and there’s art. There’s hate speech and there’s satire. There are graphic images and there are historic documents.

As you can see, there’s no way to muster the people power needed to enact this level of content moderation, and yet there’s no way to do it without people involved.

‘If the job of content moderation is as injurious to mental health as workers have stated, we need to account for that in these workers’ rights’

The demand for content moderation often focuses on user impacts. Coverage of the proposed Online Safety Act – for which the heads of bill are still awaited – has centred on the danger posed to children coming across harmful online content, or the risk of radicalisation through extremist content. Yet if this proposal to enforce moderation goes ahead, it will require more people to face the worst of the internet. And we need to be concerned for their wellbeing, too.

We have strict – sometimes annoying and overwrought – health and safety rules because of physical injuries and illness incurred in workplaces. If the job of content moderation is as injurious to mental health as workers have stated, we need to account for that in these workers’ rights. Especially if this role is going to become a necessity under legislation.

The Online Safety Act could herald ‘a GDPR moment’ for content moderation: if brought to bear, we could see moderation officers become as business-critical as data protection officers did in the wake of that EU-wide legislation (which is now being matched by the California Consumer Privacy Act).

‘The internet holds a mirror up to society and we have to also face what we don’t want to see’

Unfortunately, we must acknowledge the inevitability of extreme, graphic and upsetting online content. As I said when hashing out this topic with Marian Finucane last weekend, the internet is simply humanity online – that’s all the good parts and the bad parts too. It holds a mirror up to society and we have to also face what we don’t want to see.

We can’t simply eliminate horrible online content any more than we can simply eliminate horrible people. Dealing with the worst of us isn’t just a ‘hard question’ for Facebook leaders to blog about, but one of the greatest challenges society faces.

Even if Facebook were to crack down tomorrow with more restrictive guidelines, bad content would find a home elsewhere. We’ve seen this with 4chan splintering into 8chan and NeinChan because apparently the ‘toilet of the internet’ was too clean for some users.

In actuality, the fact that some bad actors surface on Facebook or other mainstream platforms can be useful to authorities trying to uncover the darkest corners of the internet, where the content shared is not just disturbing but illegal. The insights that frontline content moderators glean can have a huge impact on these investigations.

Content moderators are indeed the silent heroes of the internet. But silent doesn’t have to mean doomed to diligent martyr-like drudgery. They should not be voiceless. We need to hear them out and help in every way we can.


Elaine Burke is the host of For Tech’s Sake, a co-production from Silicon Republic and The HeadStuff Podcast Network. She was previously the editor of Silicon Republic.
