Facebook denies giving its users ‘trustworthiness’ scores

22 Aug 2018

Facebook app on mobile. Image: East Pop/Shutterstock

Facebook says its trust ranking score is about monitoring misinformation rather than the users themselves.

According to The Washington Post, Facebook has been ranking its users on their trustworthiness when it comes to flagging false information in the News Feed.

The newspaper said that Facebook has been developing the previously unreported ratings system over the past year. Facebook executive Tessa Lyons said the company noticed some users falsely reporting items as untrue. She said it is “not uncommon” for users to tell the company that something is false simply because they are trying to target a publisher or disagree with the content.

Unclear criteria

Lyons explained: “One of the signals we use is how people interact with articles. For example, if someone previously gave us feedback that an article was false and the article was confirmed false by a fact-checker, then we might weight that person’s future false-news feedback more than someone who indiscriminately provides false-news feedback on lots of articles, including ones that end up being rated as true.”
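Facebook has not disclosed how this weighting is actually computed. Purely as an illustration of the kind of mechanism Lyons describes, the sketch below scores a reporter by how often fact-checkers have agreed with their past flags; every name, and the smoothing choice, is an assumption for the example, not Facebook's implementation.

```python
# Hypothetical illustration only: Facebook has not published its method.
# A minimal reporter-reliability sketch in the spirit of Lyons' description:
# flags later confirmed false by fact-checkers raise a user's weight,
# while flags on articles rated true lower it.

from dataclasses import dataclass


@dataclass
class ReporterStats:
    confirmed: int = 0   # flags a fact-checker later rated false (reporter was right)
    overturned: int = 0  # flags on articles a fact-checker rated true (reporter was wrong)

    def record(self, fact_checker_agreed: bool) -> None:
        if fact_checker_agreed:
            self.confirmed += 1
        else:
            self.overturned += 1

    def weight(self) -> float:
        """Decimal score in [0, 1]; 0.5 for users with no track record.

        Laplace smoothing keeps a single report from dominating
        a new user's score.
        """
        checked = self.confirmed + self.overturned
        return (self.confirmed + 1) / (checked + 2)


# Usage: weight a new false-news flag by the reporter's track record.
alice = ReporterStats()
alice.record(fact_checker_agreed=True)   # earlier flag confirmed false
alice.record(fact_checker_agreed=True)
print(f"Alice's flag weight: {alice.weight():.2f}")  # 0.75

bob = ReporterStats()
bob.record(fact_checker_agreed=False)    # flagged an article rated true
print(f"Bob's flag weight: {bob.weight():.2f}")      # 0.33
```

In a scheme like this, Alice's future flags would count for more than Bob's, which matches the behaviour Lyons describes without requiring a single global reputation for every user.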

The criteria that Facebook uses to determine this trust score are unclear. In general, systems such as this one are fairly impenetrable, and companies are often wary of discussing how the tools work, since publicising their inner workings could invite further gaming. Set against increasing calls for transparency, this leaves tech firms with a problem.

Facebook responds

In a statement provided to Gizmodo, Facebook pushed back against the criticism the trust score has been receiving from some privacy and civil rights groups.

It said it does not maintain a “centralised reputation score”. According to Facebook, the system is part of “a process to protect against people indiscriminately flagging news as fake and attempting to game the system” so that it can ensure its efforts to counter misinformation are effective.

The trust ranking, a decimal score between zero and one, applies only to users who have opted to flag content as false news. The company emphasised that the score is one of many signals it uses to monitor misinformation.

While the trust score may not be as nefarious as some reports would have you believe, privacy campaigners are still calling for more transparency from the firm following a whirlwind few months. In an April report from the Columbia Journalism Review, even the company's own fact-checkers said they were frustrated by its lack of openness.

Morten Brøgger, CEO of secure content collaboration firm Wire, told Siliconrepublic.com: “Gaining user trust is make or break for any organisation. Companies need to ensure that the applications they use are fully open sourced and independently audited, so their software can be held to account, if they are to instil that trust.”


Ellen Tannam was a journalist with Silicon Republic, covering all manner of business and tech subjects

editorial@siliconrepublic.com