EU plans hefty fines for tech firms that are too slow to remove extremist content

20 Aug 2018

EU commissioner Julian King. Image: Alexandros Michailidis/Shutterstock

The European Commission is planning to roll out stricter regulations on social media platforms such as Facebook with regard to the removal of terrorist propaganda.

Discussions about technology's role in terrorist and extremist radicalisation have been ongoing for some time now.

In March of this year, the EU said it would be giving companies just a few months to make notable improvements in their content flagging and removal methods.

Prior to this, European leaders Theresa May and Emmanuel Macron criticised tech platforms such as Facebook and Twitter in a meeting in September 2017.

EU taking a tougher stance

Now, according to a report in the Financial Times, the European Commission has decided to scrap a voluntary approach to ensure firms remove content. This will apparently be replaced with a much starker draft regulation, which will be published in September.

EU commissioner for security, Julian King, said that lawmakers had not seen adequate progress on the removal of terrorist material. He added that Brussels would now “take stronger action in order to better protect our citizens”.

King continued, warning that EU leaders “cannot afford to relax or become complacent in the face of such a shadowy and destructive phenomenon”.

The draft legislation’s details are still to be finalised, but a time limit of one hour is likely to be imposed on companies to delete material flagged as terrorist content.

Not enough transparency

The legislation needs the approval of the European Parliament and a majority of EU member states to go ahead.

King added that the rules would apply to all websites regardless of size.

He also criticised the lack of transparency from some companies in how they deal with such content.

The firms in question have put numerous plans in place to stem the tide of terrorist content. In December 2016, Facebook, Twitter and YouTube teamed up to create an image-sharing database in an effort to help flag content. Individually, the companies are developing AI flagging systems and hiring more content review staff.

Despite efforts from the likes of these companies to remove content swiftly, there is still plenty slipping through the cracks. According to a study by the Counter Extremism Project, ISIS members and supporters uploaded 1,348 YouTube videos between March and June of this year. 24pc of these videos were left on the platform for more than two hours, allowing the content to be copied and distributed across other online platforms.


Ellen Tannam was a journalist with Silicon Republic, covering all manner of business and tech subjects

editorial@siliconrepublic.com