How will Facebook’s oversight board affect content moderation?


13 May 2020


With the announcement of Facebook’s new oversight board members, the team at William Fry examines what this will mean for content moderation.

Some 18 months after Facebook announced an independent external review board, the first 20 members of its new oversight board were revealed on 6 May 2020. The board, embodying an idea once heralded by Mark Zuckerberg as being “almost like a supreme court”, will hear appeals against some content removal decisions made by Facebook’s in-house moderation teams.

The board will also make recommendations regarding Facebook’s content policies. The board members include a Nobel Peace Prize laureate, a former prime minister, lawyers, journalists and free speech advocates. The full list of initial members has been published online.

What will the oversight board do?  

The board’s charter states its purpose is “to protect free expression by making principled, independent decisions about important pieces of content and by issuing policy advisory opinions on Facebook’s content policies”.

In practice, it will decide whether Facebook’s content moderators correctly applied Facebook’s existing content policies in choosing to remove certain content from its platforms. Facebook has committed to treating the board’s appeal decisions as final and binding.

In reaching decisions, the board will view its own past decisions as highly persuasive and will “pay particular attention to the impact of removing content in light of human rights norms protecting free expression”. The board may also issue advisory opinions recommending that Facebook amend its content policies.

What content and decisions are in scope?

At the outset, the board will review only decisions to remove individual pieces of content (such as posts, photos, videos and comments) from Facebook or Instagram.

Everything else is out of scope for now. The board will not initially review decisions by Facebook to leave content up on its platform following receipt of a take-down request. It is intended that the board’s remit will expand in the future to include leave-up decisions and decisions to remove groups, pages and user accounts.

The board will not review content on other Facebook products. Unsurprisingly, the board will not have jurisdiction over content Facebook removes due to a belief that it has a legal obligation to do so.

Importantly for businesses, creators and owners of intellectual property rights, the board will not review reports involving copyright, trademarks or counterfeits offered for sale on, for example, Facebook Marketplace.

At least for now, the hundreds of thousands of intellectual property infringement reports received by Facebook every year will continue to be decided internally at Facebook. Any lingering disputes will continue to be resolved offline by the parties involved.

How did we get here?

Social media platforms typically avail of internet intermediary immunity laws in many jurisdictions. These include the hosting defence found in the EU’s E-Commerce Directive, which shields a host from liability provided it acts expeditiously to remove or disable access to unlawful content once it becomes aware of it.

Such frameworks led to platforms relying on notice-and-takedown systems for unlawful content, while allowing them space to govern other user-generated content on their platforms as they saw fit.

‘Techlash’ and regulation

Since coming to prominence as stewards of public space, leading platforms have faced criticism as vectors of problematic content including bullying, hate speech, and misinformation.

Fake news and the virality of content such as footage of the Christchurch mosque shootings in 2019 have led to unprecedented scrutiny. At the same time, platforms have come under fire for over-censoring content. Criticism has often been accompanied by calls for greater platform accountability and transparency.

Some governments have reacted to those calls with more stringent regulation. Germany led the way in Europe with its Network Enforcement Act, which requires removal of manifestly unlawful content within 24 hours and imposes transparency obligations on platforms. France’s Avia Law proposes a similar regime, and the European Commission has signalled plans for a Digital Services Act in a similar spirit.

The UK is consulting on online harms proposals that would impose a new duty of care on platforms. In Ireland, the outgoing Government approved the outline of an Online Safety and Media Regulation Bill, which would include regulation of harmful online content.

Facebook’s response

Facebook has responded by doubling its content moderation workforce, allowing internal appeals against content decisions, and publishing more detailed versions of the internal guidelines its teams use to enforce its content policies.

Facebook has also publicised its use of artificial intelligence to proactively identify and remove harmful content and to down-rank sensational and provocative borderline content. The final piece of this jigsaw is the oversight board, first promised in November 2018 and now, with founding documents published and members revealed, close to coming into operation.

What will happen next?

How much the board pushes back on Facebook decisions, and its published reasons for doing so, will be closely watched. The board’s initially limited scope means that any appeal it upholds may be seen as a victory for free speech advocates. That view may evolve if or when the board’s role expands to reviewing leave-up decisions – the board will then have an opportunity to tell Facebook to remove or down-rank content it would otherwise have left up.

Reaching decisions that harmonise Facebook’s content policies with human rights norms while meeting the expectations of diverse political communities will undoubtedly be a challenge. If the speed, quality and impact of the board’s decisions go some way towards achieving that, it is possible to imagine the board’s role expanding and, if other platforms adopt a similar approach, the appetite for greater regulation of online content tapering off.

Success is not guaranteed, and implementing the board’s decisions consistently at scale will remain a significant challenge. Missteps, controversies or delays in the first months and years may see the board’s role being overtaken by the drive for stricter regulation in Europe and elsewhere.

By Leo Moore and Laura Scott, with a contribution from John Sugrue

Leo Moore is a partner in the William Fry technology group. Laura Scott is a partner in William Fry’s litigation and dispute resolution department. John Sugrue is an associate in William Fry’s litigation and dispute resolution department.

A version of this article originally appeared on the William Fry blog.