Firms such as Facebook, Google and Twitter will have to abide by strict new regulations or face fines.
The government in Germany is taking a firm stance against online hate speech and illegal content as its new network enforcement law (NetzDG) comes into effect.
Authorities had given internet platforms a three-month grace period to set up new complaint management systems, but this ended on 1 January.
According to Deutsche Welle, any platform with more than 2m users will be subject to NetzDG, meaning that Google, Twitter, Facebook and Instagram will have to answer to authorities if their responses to complaints about content are deemed unsatisfactory.
The BBC reported that firms that fail to act quickly to remove hate speech, fake news and material that can be described as “obviously illegal” could face hefty fines of up to €50m.
Concerned citizens can now use forms provided by the German justice ministry to report content that violates the new law, as well as material that has remained online beyond the 24-hour window in which networks must act.
The new law was introduced after several cases arose in Germany involving the spread of false stories and xenophobic material via major social media platforms.
In complicated cases, platforms will be given a week to determine the most suitable course of action, as opposed to just 24 hours for more straightforward incidents.
Firms are taking action to comply
Twitter has already added an option to its report function that specifies violation of NetzDG as the reason for flagging content, and Google has also created an online form for the reporting of materials under the purview of the new law.
Facebook’s system for NetzDG compliance sits outside its regular reporting frameworks: users screenshot the offending post and choose whichever of 20 options most closely matches the problem with the content.
The law has drawn criticism from Germany’s far right, which deems it an exercise in censorship. Some internet activists have also raised concerns: because the government leaves the task of deleting content to the platforms themselves, it could be difficult to discern why certain posts have been removed.
Platforms can delete posts without informing authorities, unless a threat of violence is issued or child pornography is posted.
Monitoring content is no easy task: a recent ProPublica investigation found that Facebook content reviewers had made mistakes in handling 22 of 49 posts examined.
Facebook vice-president of global operations and media partnerships, Justin Osofsky, said: “We’re sorry for the mistakes we have made – they do not reflect the community we want to help build.”