White men a protected group under Facebook rules, but not black children

29 Jun 2017

A Facebook page. Image: Brilliantist Studio/Shutterstock

The latest leaked documents from Facebook reveal that its algorithms have confusing priorities for who is and isn’t protected against hate speech.

As Facebook surpasses the milestone of 2bn monthly active users, its role in policing hateful and inaccurate content has never been under such scrutiny.

Despite Mark Zuckerberg’s original claim that the social media platform was not capable of influencing millions of people’s opinions, he and Facebook have since completely reversed their position.

In the past few days, Facebook joined other Silicon Valley giants in an online counterterrorism group, boasting that algorithms will be its biggest weapon in the fight against extremism.

However, these same algorithms – trained to find and remove hate speech online – have been thrown into the spotlight for all the wrong reasons, after the website ProPublica obtained leaked slides from Facebook that detail what is and isn’t deleted by its system.

By far the most confusing aspect of the slides is that, in any given situation, white men are treated as a group protected from hate speech, while black children are not.

Explaining the distinction, Facebook said that its algorithms are trained to recognise that, in this instance, the words ‘white’ and ‘men’ both fall into categories protected from hate speech.

However, because Facebook’s algorithms do not treat a person’s age as a protected category, adding the word ‘children’ strips that protection away: under these rules, a group is only protected if every term describing it belongs to a protected category, so black children end up in an unprotected group.
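Facebook has not published this logic, but the rule described in the slides can be sketched roughly as follows, where a group is protected only if every term describing it maps to a protected category. The category mapping and function below are illustrative assumptions, not Facebook’s actual code.

```python
# Rough sketch of the rule the leaked slides describe. The category
# mapping and this function are illustrative assumptions only.

PROTECTED_CATEGORIES = {
    "white": "race",
    "black": "race",
    "men": "sex",
    "women": "sex",
}
# Anything not in the mapping (age terms such as 'children', occupations,
# and other modifiers) is treated as unprotected.

def is_protected_group(terms):
    """A group is protected only if every term describing it maps to a
    protected category; a single unprotected modifier strips protection."""
    return all(term.lower() in PROTECTED_CATEGORIES for term in terms)

print(is_protected_group(["white", "men"]))       # True  -> attacks are removed
print(is_protected_group(["black", "children"]))  # False -> 'children' is an age term
```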

Migrant protection

An example the slides give to differentiate between protected and unprotected groups is that a slur made against an Irish woman qualifies as hate speech, but a slur against an Irish teen does not.

Another interesting inclusion is that, following the ongoing Syrian refugee crisis, migrants were added to the algorithm as a quasi-protected category.

Under Facebook’s guidelines, migrants are protected against dehumanising speech and calls for violence, but not against calls for them to be excluded from society.

This leads to troubling distinctions: describing migrants as ‘filthy’ will be let through by Facebook’s algorithms, but calling them ‘filth’ – the same word minus the letter ‘y’ – will be blocked as dehumanising.
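In code terms, the quasi-protected tier described above might look something like the sketch below, which blocks dehumanising statements and calls for violence while letting calls for exclusion and degrading generalisations stand. The attack-type labels and the function are assumptions for illustration, not Facebook’s actual rules.

```python
# Hypothetical sketch of the 'quasi-protected' tier described for migrants.
# The attack-type labels and examples are assumptions for illustration only.

BLOCKED_ATTACK_TYPES = {"dehumanising", "call_for_violence"}                # removed
ALLOWED_ATTACK_TYPES = {"call_for_exclusion", "degrading_generalisation"}   # left up

def moderate_migrant_post(attack_type):
    """Return the moderation action for a post attacking migrants."""
    if attack_type in BLOCKED_ATTACK_TYPES:
        return "remove"
    return "allow"

# 'Migrants are filthy' reads as a degrading generalisation -> allowed.
print(moderate_migrant_post("degrading_generalisation"))  # allow
# 'Migrants are filth' reads as dehumanising -> removed.
print(moderate_migrant_post("dehumanising"))              # remove
```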

According to ProPublica, Facebook responded to the documents by saying that the exact wording of the slides has likely been changed in subsequent updates.

No ‘perfect outcomes’

“The policies do not always lead to perfect outcomes,” said Monika Bickert, head of global policy management at Facebook.

“That is the reality of having policies that apply to a global community where people around the world are going to have very different ideas about what is OK to share.”

One such imperfect outcome occurred earlier this year, following US president Donald Trump’s call for a temporary ban on travel from seven countries – a post that Facebook’s algorithms would class as a call for exclusion against a protected group, and would therefore normally remove.

However, Zuckerberg stepped in to allow the president’s comments to stand on the site.

In the meantime, fully aware of its algorithms’ limitations, Facebook recently revealed that it is to hire thousands of human moderators to police the site, after the live stream of a murder was broadcast to thousands of people around the world.


Colm Gorey was a senior journalist with Silicon Republic

editorial@siliconrepublic.com