It has been more than a month since the spread of fake news on Facebook was blamed for the election of Donald Trump, and now the social network has revealed its plan of action.
With more and more people turning to social media to learn what is going on in the world – or at least just reading the headlines – the need to tackle the spread of fake news has recently been identified as a priority by Facebook.
In the days that followed Donald Trump’s election to the US presidency, reports from inside Facebook revealed that a number of employees were angry at founder Mark Zuckerberg for claiming it was a “pretty crazy idea” that fake news had played a part.
Tackling the ‘worst of the worst’
In the weeks that followed, Zuckerberg eventually admitted that Facebook had not done enough to stem the spread of fake news on its platform, and said it would do something about it.
That plan has now been revealed in a blog post by the company, with a new notification and reporting system designed to tackle the “worst of the worst” websites.
The first change will allow users to report stories they believe to be fake news and explain why they think they should be flagged by the social network.
To help determine whether someone flagging a post might be too hasty – or perhaps doing so simply because they disagree with the sentiment of the post – Facebook has said it will work with external fact-checking organisations to vet the websites.
Reducing financial incentives
If the fact-checking organisations identify a story as fake, it will be flagged as disputed, with a link to a corresponding article explaining why.
Such stories will not be deleted entirely from Facebook, but will be pushed lower down a user’s feed, with an additional warning message highlighting it as potentially fake content.
Facebook has also addressed the reality that promoting fake news using clickbait headlines is a profitable business in the wrong hands, but has been vague as to how it will “reduce the financial incentives” for adverts on the platform.
It will also prevent groups from spoofing domains in order to appear as if they are a credible news organisation.
“We believe in giving people a voice and that we cannot become arbiters of truth ourselves, so we’re approaching this problem carefully,” said Facebook’s Adam Mosseri.
“We’ve focused our efforts on the worst of the worst, on the clear hoaxes spread by spammers for their own gain, and on engaging both our community and third-party organisations.”
Meanwhile, a Pew Research Center survey of fake news in the US has found that, by a slim margin, Americans believe the country’s government has the greatest responsibility to tackle the issue, closely followed by the tech industry.