New Facebook tech to fight ‘fake news’ inspired by age-old Google ranking

11 Apr 2019

Person using Facebook on a tablet. Image: Mactrunk/Depositphotos

Facebook has revealed its latest playbook in the fight against misinformation, with its new tech inspired by how Google ranks pages.

The Facebook news feed is about to undergo yet another change as the company continues to swing – sometimes missing – at the flood of misinformation being spread on the platform.

In a blog post, Facebook said that a technology called Click-Gap will be one of its key weapons in this fight. Similar to the PageRank technology that has underpinned Google’s search engine for two decades, Facebook’s algorithms will scour the web and index how sites link to one another.

In doing so, they can identify when a particular piece of content receives almost all of its traffic from Facebook. If it isn’t being shared on other platforms, such as Reddit or Google, that gap is a major red flag that the content is questionable.

As the company put it: “This can be a sign that the domain is succeeding on News Feed in a way that doesn’t reflect the authority they’ve built outside it, and is producing low-quality content.”
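Facebook has not published the inner workings of Click-Gap, but the general idea – comparing how much attention a domain gets on the platform with the authority it has built elsewhere on the web – can be illustrated with a short sketch. The data structure, function names and threshold below are all hypothetical.

```python
# Illustrative sketch only: Facebook has not disclosed Click-Gap's internals.
# All names, counters and thresholds here are hypothetical, meant to show the
# idea of comparing on-platform clicks with off-platform authority.

from dataclasses import dataclass


@dataclass
class DomainSignals:
    """Hypothetical per-domain counters gathered from click logs and a web crawl."""
    facebook_clicks: int      # outbound clicks to the domain from the news feed
    inbound_web_links: int    # links to the domain found elsewhere on the web


def click_gap_score(signals: DomainSignals) -> float:
    """Share of a domain's apparent popularity that originates on Facebook.

    A value near 1.0 means almost all attention comes from Facebook, with
    little corroborating authority on the wider web -- the red flag the
    article describes.
    """
    total = signals.facebook_clicks + signals.inbound_web_links
    if total == 0:
        return 0.0
    return signals.facebook_clicks / total


def is_suspect(signals: DomainSignals, threshold: float = 0.95) -> bool:
    """Flag domains whose Facebook-driven share exceeds a (made-up) threshold."""
    return click_gap_score(signals) > threshold


if __name__ == "__main__":
    viral_only_on_facebook = DomainSignals(facebook_clicks=90_000, inbound_web_links=120)
    established_news_site = DomainSignals(facebook_clicks=90_000, inbound_web_links=250_000)

    print(is_suspect(viral_only_on_facebook))   # True  -> candidate for demotion
    print(is_suspect(established_news_site))    # False -> authority exists off-platform
```

In practice, any such score would be one signal feeding into news feed ranking rather than a standalone filter.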

Facebook also revealed that it will be expanding its collaboration with academics, fact-checking experts, journalists, survey researchers and civil society organisations, admitting that it faces a serious “challenge of scale” in hiring enough people to sift through the 1bn or so posts made each day.

“There simply aren’t enough professional fact-checkers worldwide and, like all good journalism, fact-checking takes time,” it said.

Clamping down on toxic groups

Another area where the social network is clamping down is the power of groups, specifically those it deems to be propagators of misinformation. This means that if a group repeatedly shares stories that have been flagged by its third-party fact-checkers – such as the Associated Press programme it has also just expanded – the group’s visibility in the news feed will be reduced.

Not only that, but the administrators of groups Facebook deems to be toxic will also be held more accountable under the company’s community standards.

“Starting in the coming weeks, when reviewing a group to decide whether or not to take it down, we will look at admin and moderator content violations in that group, including member posts they have approved, as a stronger signal that the group violates our standards,” it said.
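Facebook did not spell out how repeated fact-check strikes translate into reduced reach, but the demotion described above can be pictured as a simple multiplier on a group’s distribution. The decay rate and function below are invented purely for illustration.

```python
# Toy sketch only: the "repeat offender" demotion is described in the article,
# but the decay factor and counter here are hypothetical.

def group_distribution_multiplier(flagged_shares: int, decay: float = 0.5) -> float:
    """Hypothetically halve a group's news feed reach for each story its members
    share that third-party fact-checkers have rated false."""
    return decay ** flagged_shares


print(group_distribution_multiplier(0))  # 1.0   -> full reach
print(group_distribution_multiplier(3))  # 0.125 -> heavily demoted repeat offender
```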

The announcement comes not long after Facebook appeared before the US Congress to answer questions on whether it was partly responsible for the spread of the livestreamed video of the mass shooting in New Zealand last month. When asked why the social network’s algorithms didn’t flag the footage for removal, Facebook’s policy director for counterterrorism, Brian Fishman, said it was not “particularly gruesome”.


Colm Gorey was a senior journalist with Silicon Republic

editorial@siliconrepublic.com