Two months after Christchurch, Facebook reveals tougher live-streaming rules

15 May 2019


Social network will block anyone who violates its “most serious policies” from using its live-streaming feature.

Facebook has introduced tougher rules for its live-streaming feature, two months after the mass shooting in Christchurch, New Zealand, in which 51 people were killed.

From today (15 May), people who break Facebook’s “most serious policies”, including its Dangerous Individuals and Organisations policy, will be immediately blocked from using Facebook Live for a set period of time, such as 30 days.

‘One of the challenges we faced in the days after the Christchurch attack was a proliferation of many different variants of the video of the attack’
– GUY ROSEN

“Tackling these threats also requires technical innovation to stay ahead of the type of adversarial media manipulation we saw after Christchurch when some people modified the video to avoid detection in order to repost it after it had been taken down,” said Guy Rosen, vice-president of integrity at Facebook.

“This will require research driven across industry and academia. To that end, we’re also investing $7.5m in new research partnerships with leading academics from three universities, designed to improve image and video analysis technology.”

One-strike policy

Rosen said that a new “one-strike” policy will now apply to Facebook Live, covering a broader range of offences.

Until now, if a user violated Facebook’s community standards, the company took down the offending post and only blocked the user if they continued to post violating content.

“From now on, anyone who violates our most serious policies will be restricted from using Live for set periods of time – for example, 30 days – starting on their first offence. For instance, someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time.

“We plan on extending these restrictions to other areas over the coming weeks, beginning with preventing those same people from creating ads on Facebook.

“We recognise the tension between people who would prefer unfettered access to our services and the restrictions needed to keep people safe on Facebook. Our goal is to minimise risk of abuse on Live while enabling people to use Live in a positive way every day,” Rosen said.

The horrific attack in Christchurch in March prompted a reckoning for Facebook and the broader tech industry, as people grappled not only with the horror of the atrocity, but also with the fact that the perpetrator was able to livestream it and spread the footage faster than the industry could react.

In the first 24 hours, Facebook removed 1.5m videos of the attack globally, of which more than 1.2m were blocked at upload. However, in the hours that followed, footage still remained on Facebook as well as on Instagram, WhatsApp and Google’s YouTube. It was also available on file-sharing sites such as New Zealand-based Mega.nz.

“One of the challenges we faced in the days after the Christchurch attack was a proliferation of many different variants of the video of the attack. People – not always intentionally – shared edited versions of the video, which made it hard for our systems to detect.

“Although we deployed a number of techniques to eventually find these variants, including video- and audio-matching technology, we realised that this is an area where we need to invest in further research.”
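Rosen does not detail the matching technology Facebook deployed, but a widely known approach to this kind of near-duplicate detection is perceptual hashing, where visually similar frames produce similar hashes even after re-encoding or minor edits. The Python sketch below of a “difference hash” (dHash) on synthetic grayscale frames is purely illustrative, not Facebook’s system; the frame data, function names and frame size are invented for the example.

```python
# Illustrative sketch only: a "difference hash" (dHash), one simple,
# well-known form of perceptual hashing. This is not Facebook's system;
# the frames below are synthetic stand-ins for video frames.
# Exact checksums change completely when a clip is re-encoded or
# brightened, but a perceptual hash of sampled frames stays close,
# so edited variants of the same footage can still be matched.

def dhash(frame):
    """Hash a grayscale frame (rows of pixel values): each bit records
    whether a pixel is brighter than its right-hand neighbour."""
    bits = 0
    for row in frame:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Count the bits on which two hashes differ."""
    return bin(a ^ b).count("1")

# An 8x9 synthetic "frame", a uniformly brightened copy (standing in
# for a mildly edited variant) and an unrelated frame.
original = [[(5 * x * x + 17 * y) % 200 for x in range(9)] for y in range(8)]
variant = [[p + 10 for p in row] for row in original]  # mild edit
unrelated = [[(23 * x + 3 * y * y) % 200 for x in range(9)] for y in range(8)]

h_orig, h_var, h_other = dhash(original), dhash(variant), dhash(unrelated)

# dHash compares neighbouring pixels, so a uniform brightness change
# leaves every comparison, and therefore the hash, unchanged.
print(hamming(h_orig, h_var))    # 0: the variant still matches
print(hamming(h_orig, h_other))  # many bits differ: no match
```

Because dHash encodes only the relative brightness of neighbouring pixels, uniform edits such as brightening or light re-compression tend to leave the hash almost unchanged, which is what makes simple variants detectable. Heavier manipulation of the kind Rosen describes can defeat such hashes, which is why Facebook says further research is needed.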

Rosen said that Facebook is partnering with the University of Maryland, Cornell University and the University of California, Berkeley to research new techniques to detect manipulated media across images, video and audio, and to distinguish between unwitting posters and adversaries who intentionally manipulate videos and photographs.

“Dealing with the rise of manipulated media will require deep research and collaboration between industry and academia – we need everyone working together to tackle this challenge,” he explained.

“These partnerships are only one piece of our efforts to partner with academics and our colleagues across industry. In the months to come, we will partner more so we can all move as quickly as possible to innovate in the face of this threat.

“This work will be critical for our broader efforts against manipulated media, including deepfakes (videos intentionally manipulated to depict events that never occurred). We hope it will also help us to more effectively fight organised bad actors who try to outwit our systems as we saw happen after the Christchurch attack.

“These are complex issues and our adversaries continue to change tactics. We know that it is only by remaining vigilant and working with experts, other companies, governments and civil society around the world that we will be able to keep people safe,” said Rosen.

John Kennedy is a journalist who served as editor of Silicon Republic for 17 years

editorial@siliconrepublic.com