European Commission puts pressure on tech firms to tackle illegal content

29 Sep 2017

The European Commission is cracking down on illegal content. Image: Ugis Riba/Shutterstock

The European Commission has some strong words for the world’s most powerful tech companies.

Yesterday (28 September), the European Commission (EC) presented a series of new guidelines and principles for online platforms to help them detect – and swiftly get rid of – inflammatory online content that could incite hatred, terrorism or acts of violence.

‘We cannot accept a digital Wild West’

Věra Jourová, commissioner for justice, consumers and gender equality, said: “The rule of law applies online just as much as offline. We cannot accept a digital Wild West, and we must act.

“The code of conduct I agreed with Facebook, Twitter, Google and Microsoft shows that a self-regulatory approach can serve as a good example and can lead to results.”

According to Mariya Gabriel, commissioner for the digital economy and society, in more than 28pc of reported cases, it takes more than a week for platforms to remove illegal content. It’s the view of the EC that these large companies need to show “corporate social responsibility for the digital age”.

What are the new guidelines?

Proactive and effective weeding out of illegal content

The EC would like to see common tools used by large online platforms to efficiently remove illegal content.

Detection and notification

There should be special points of contact appointed within large tech firms whose role is to aid cooperation with national authorities. “To speed up detection, online platforms are encouraged to work closely with trusted flaggers, ie specialised entities with expert knowledge on what constitutes illegal content.”

The EC also recommended that platforms establish mechanisms for users to flag problematic content, and that they increase investment in automatic-detection technologies.
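None of this is specified at the level of implementation, but a minimal sketch can make the notice-and-action idea concrete. The Python below is purely illustrative: the names (`FlagQueue`, `submit_flag` and so on) are hypothetical, and the EC guidelines do not prescribe any particular design. It shows one plausible way a platform might let reports from trusted flaggers jump ahead of ordinary user reports in a review queue.

```python
import heapq
import itertools
from dataclasses import dataclass, field

# Hypothetical priority levels: lower numbers are reviewed sooner.
TRUSTED_FLAGGER = 0  # specialised entities with expert knowledge
ORDINARY_USER = 1

@dataclass(order=True)
class Flag:
    priority: int
    seq: int  # tie-breaker: equal-priority flags stay first-come, first-served
    content_id: str = field(compare=False)
    reason: str = field(compare=False)

class FlagQueue:
    """Toy notice-and-action queue: trusted-flagger reports are handled first."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()

    def submit_flag(self, content_id, reason, trusted=False):
        priority = TRUSTED_FLAGGER if trusted else ORDINARY_USER
        heapq.heappush(self._heap,
                       Flag(priority, next(self._counter), content_id, reason))

    def next_for_review(self):
        return heapq.heappop(self._heap) if self._heap else None

queue = FlagQueue()
queue.submit_flag("post-123", "spam", trusted=False)
queue.submit_flag("post-456", "incitement to terrorism", trusted=True)
print(queue.next_for_review().content_id)  # post-456: trusted flag reviewed first
```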

Effective removal

The concept of specific timeframes for the removal of content is being floated, particularly in cases related to incitement to terrorism.

Platforms will need to be more transparent with users when it comes to their content policies, by issuing reports detailing the types of content they have removed in a given time. They should also introduce safeguards to prevent the over-removal of legitimate content.

Prevention of reappearance

Platforms need to take a tougher stance in order to deter users from repeatedly uploading illegal or dangerous content.
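The guidelines do not name a technique, but the standard industry approach to stopping removed material from resurfacing is hash matching: the platform keeps a fingerprint of everything it has taken down and screens new uploads against that blocklist. The Python sketch below (all names hypothetical) uses a plain SHA-256 digest to show the idea; deployed systems generally prefer perceptual hashes, because an exact digest is defeated by changing a single byte of a re-encoded copy.

```python
import hashlib

# Hypothetical blocklist of fingerprints of previously removed content.
removed_hashes = set()

def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint; real systems use perceptual hashing so that
    re-encoded copies of an image or video still match."""
    return hashlib.sha256(data).hexdigest()

def record_removal(data: bytes) -> None:
    """Called when moderators remove an item, so later re-uploads can be caught."""
    removed_hashes.add(fingerprint(data))

def is_reupload(data: bytes) -> bool:
    """Check a new upload against the blocklist before publishing it."""
    return fingerprint(data) in removed_hashes

record_removal(b"bytes of a previously removed video")
print(is_reupload(b"bytes of a previously removed video"))  # True: blocked
print(is_reupload(b"bytes of an unrelated holiday photo"))  # False: allowed
```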

What next?

Yesterday’s statement was something of a warning to Twitter, Facebook et al that they need to be more vigilant in policing the content on their platforms. The EC also said it would carefully monitor progress, with additional measures (possibly large fines) to be taken if platforms don’t step up. New regulations will need to be in place by May 2018.

Criticism of the EC

Although internet companies have broadly welcomed the move to put more of the onus on platforms themselves to deal with illegal content, some have reservations about the vague nature of the proposal. The guidance appears to apply across a large swathe of illegal online content, from hate speech and propaganda to copyrighted material.

This is causing concern among digital rights groups such as EDRi (European Digital Rights), which said: “The document puts virtually all its focus on internet companies monitoring online communications, in order to remove content that they decide might be illegal.

“It presents few safeguards for free speech and little concern for dealing with content that is actually criminal.”

There are also broader worries about just how effective automated algorithmic tools can be at distinguishing the nuances of human communication on these platforms.
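A toy example makes the concern concrete: a naive keyword filter cannot tell a news report about terrorism from a post inciting it. In the Python sketch below (the word list and posts are hypothetical), both messages trip the filter, which is exactly the kind of false positive critics fear.

```python
# Hypothetical list of trigger words a naive filter might scan for.
TRIGGER_WORDS = {"attack", "bomb"}

def naive_flag(post: str) -> bool:
    """Flags any post containing a trigger word, with no sense of context."""
    words = {word.strip(".,!?").lower() for word in post.split()}
    return bool(words & TRIGGER_WORDS)

reporting = "Police confirmed the bomb was defused before the planned attack."
incitement = "Join us and attack them tomorrow."

print(naive_flag(reporting))   # True: a false positive on legitimate journalism
print(naive_flag(incitement))  # True: the genuinely dangerous post
```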

Ellen Tannam was a journalist with Silicon Republic, covering all manner of business and tech subjects.

editorial@siliconrepublic.com