Facebook’s VP of global policy on the future of content moderation

19 Feb 2020

Monika Bickert, Facebook’s VP for global policy management and counterterrorism. Image: Lorcan Mullally/IIEA

Facebook’s VP for global policy management and counterterrorism was in Dublin to discuss the contentious future of regulation and content moderation.

Facebook – and especially its founder Mark Zuckerberg – has been on a charm offensive of late, with the social media company now saying that it is ready to embrace regulation in some shape or form. It marks a significant reversal for Zuckerberg, who, following the election of Donald Trump in 2016, described the notion that Facebook had any role in the spread of disinformation as a “crazy idea”.

On Monday (17 February), the company released a white paper putting forward suggestions for online content regulations that could be implemented by governments around the world. However, its top figures have been quick to add that this white paper is not a list of demands, but a starting point for a conversation.

One of those figures is Monika Bickert, Facebook’s VP for global policy management and counterterrorism, who spoke at an event at the Institute for International and European Affairs (IIEA) in Dublin yesterday (18 February).

“One of the questions I often hear is, ‘What gives [Facebook] the right to set rules?’,” Bickert said during her keynote.

“We don’t think we should be making all decisions by ourselves. We meet with hundreds of organisations, but that’s not the same as making sure we have a consultative approach with government.”

‘Productive’ dialogue

This stance was criticised by some EU officials and others, who have viewed Facebook’s white paper and recent statements around regulation as an effort to dictate what that regulation should be.

Billionaire philanthropist George Soros even went as far as to say that Zuckerberg was “obfuscating the facts by piously arguing for government regulation”, and that he and COO Sheryl Sandberg should be “removed from control”.

Bickert, however, said that the conversations Facebook has had with policymakers since the release of the white paper have “been productive”.

“Our intent with it is not a proposal,” she said, “but [by] putting forward the ideas and principles we’ve heard and our experiences, we can inform the conversation.”

‘We don’t think we should be making all decisions by ourselves’
– MONIKA BICKERT

One issue that will certainly fall under potential regulation is the area of content moderation, where thousands of humans – and trained machine learning algorithms – sift through the deluge of posts that appear on Facebook in a given day. According to Bickert, more than 1m fake accounts are closed each day.
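Neither Bickert nor the white paper went into the mechanics of that sifting, but the broad pattern she described – software acting automatically on clear-cut cases and routing uncertain ones to human reviewers – can be sketched in a few lines. The thresholds and the scoring interface below are illustrative assumptions, not details of Facebook’s actual system.

```python
# Minimal sketch of classifier-plus-human triage. The thresholds and the
# classifier interface are illustrative assumptions, not Facebook's system.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str

def triage(post: Post, score: float,
           remove_threshold: float = 0.98,
           review_threshold: float = 0.60) -> str:
    """Route a post based on a model's confidence that it violates policy."""
    if score >= remove_threshold:
        return "auto_remove"         # machine acts with no human involved
    if score >= review_threshold:
        return "human_review_queue"  # uncertain cases go to a moderator
    return "leave_up"

# Example: a score of 0.7 is not certain enough to act on automatically.
print(triage(Post("p1", "example post"), score=0.7))  # human_review_queue
```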

The question is: who are the people in front of those screens sifting through some of the worst things on the internet, and how can this be managed in the long term? Last October, one of Facebook’s biggest contractors for content moderation, Cognizant, said it was gradually pulling out of its deal with the social network because the work was “not in line” with its strategic vision.

Review in moderation

Bickert confirmed that Facebook isn’t planning to bring its moderators entirely in-house. “It’s a mix [of in-house and contractors] right now and it will continue to be a mix,” she said, adding that technology is continuing “to make some of these decisions without people having to even be involved”.

“If you think about reviewing child exploitation imagery, the machines can do a lot of that themselves now,” she said.

However, as has been reported in a number of first-hand accounts over the past few years, machine learning is still a long way from taking humans completely out of the process of moderating child abuse imagery, as well as extreme violence and terrorist acts.
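The kind of automation Bickert alluded to for known child exploitation imagery generally means perceptual hash matching against databases of previously identified material – Microsoft’s PhotoDNA and Facebook’s own open-source PDQ algorithm are the best-known examples. A minimal sketch of the mechanism, using the open-source imagehash Python library in place of those dedicated hashes, with illustrative file names and threshold:

```python
# Sketch of perceptual hash matching against known abusive imagery.
# Real systems use dedicated hashes (PhotoDNA, PDQ); imagehash's phash
# stands in here only to show the mechanism. File names and the distance
# threshold are illustrative assumptions.

import imagehash
from PIL import Image

# Hashes of previously identified material (in practice, a large shared
# industry database rather than a local set).
known_hashes = {imagehash.phash(Image.open("known_banned.png"))}

def matches_known_material(path: str, max_distance: int = 8) -> bool:
    """True if the image's hash is near any known hash."""
    candidate = imagehash.phash(Image.open(path))
    # Subtracting two imagehash values gives their Hamming distance, so
    # near-duplicates (re-encoded or resized copies) still match.
    return any(candidate - known <= max_distance for known in known_hashes)

if matches_known_material("upload.jpg"):
    print("blocked automatically, without a human having to view it")
```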

Bickert admitted that humans are still a vital part of the process, especially for more nuanced categories of flagged content, such as bullying, harassment or hate speech.

“Some of that is so contextual and manual you actually need people to look at it more deeply than before,” she said. “Not only are [humans] making the decision, but now they’re labelling what they’re seeing that can be fed back into our machine learning classifiers.

“In that case, you might think we need to expand the number of people working on this. So these are the sort of factors that determine how many reviewers we need.”
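That feedback loop – moderator decisions becoming labelled training data for the classifiers – is a standard human-in-the-loop pattern. A minimal sketch of the idea, using scikit-learn with made-up example labels (Facebook’s actual models and features are not public):

```python
# Human-in-the-loop sketch: moderator decisions become labelled training
# data for a text classifier. The model choice, features and example
# labels are illustrative assumptions.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each reviewed post becomes a (text, decision) pair: 1 = violates policy.
reviewed = [
    ("you are worthless and everyone hates you", 1),
    ("great match last night, see you at training", 0),
    ("nobody would miss you if you left", 1),
    ("happy birthday, hope you have a great day", 0),
]
texts, labels = zip(*reviewed)

# Retrain the classifier on the accumulated human labels.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The updated model scores new posts; low-confidence ones are routed back
# to the human review queue, closing the loop.
print(model.predict_proba(["hope you enjoy the weekend"])[:, 1])
```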

Colm Gorey was a senior journalist with Silicon Republic

editorial@siliconrepublic.com