How your YouTube is expected to change in reaction to terrorism

19 Jun 2017


Memorial for Westminster Bridge attack. Image: Ioannis Liasidis/Shutterstock


In the wake of further terrorist attacks across the world, Google has revealed how YouTube will change to help stem the flow of extremist content on the platform.

In the modern world of terrorism, online platforms such as YouTube have become important tools for groups attempting to spread their messages and, in some cases, recruit new followers.

This was the case following the attack in Westminster, London, last March, with reports that ISIS used the incident to encourage more people to join its cause.

Now, Google has revealed its latest attempt to stop this practice, with visible changes on the user side.

In a blog post, Google’s general counsel, Kent Walker, revealed a four-point plan that will involve improving YouTube’s detection algorithms, but also recruiting more humans to identify terrorist content before it can spread online.

From a technology perspective, Google said that it has so far run half of its previously removed terrorist videos through its analytical tools to help find the subtle differences between a propaganda video of a terrorist attack and a news report on the same event.

However, Walker added that the technology will not be a “silver bullet” for limiting the spread of content on YouTube, so there is a need to “greatly increase” the number of human watchdogs through its Trusted Flagger programme.

“Machines can help identify problematic videos, but human experts still play a role in nuanced decisions about the line between violent propaganda and religious or newsworthy speech,” Google said.

Suppressing terrorist videos

The most obvious change from a user perspective will be the introduction of warning messages before videos that blur the line between banned and acceptable content – for example, those containing inflammatory religious or supremacist material.

Google added that, in these cases, the video creator will not be allowed to monetise the video, and comments will be disabled in an effort to make it harder to find.

Finally, under what Google calls the Redirect Method, those watching recruitment videos for organisations such as ISIS will now be targeted with specialised advertising that attempts to direct them towards anti-terrorism videos instead.

“In previous deployments of [the Redirect Method], potential recruits have clicked through on the ads at an unusually high rate, and watched over half a million minutes of video content that debunks terrorist recruiting messages,” the company said.

Google’s announcement comes just days after Facebook revealed its own plans to use artificial intelligence in the fight against extremism.

“There’s no place on Facebook for terrorism,” said Monika Bickert, director of global policy management at Facebook.

“We remove terrorists and posts that support terrorism whenever we become aware of them. When we receive reports of potential terrorism posts, we review those reports urgently and with scrutiny. And, in the rare cases when we uncover evidence of imminent harm, we promptly inform authorities.”



Colm Gorey is a journalist with Siliconrepublic.com

editorial@siliconrepublic.com