Facebook might be eager to get its hands on this latest ‘fake news’ AI

4 Oct 2018


Image: ©voyata/Stock.adobe.com


MIT has debuted a new AI it said can detect ‘fake news’ at the source, no doubt intriguing social media companies eager to find a solution.

In one of Facebook’s latest PR gaffes, the company admitted that its algorithms incorrectly flagged a story highlighting the company’s recent data breach as spam because it was being shared so much.

While it is reasonable to assume Facebook was not actively trying to suppress negative news about itself, the incident highlighted the limits of artificial intelligence (AI) when it comes to correctly identifying what is and isn’t so-called ‘fake news’.

This is one reason why Facebook said it plans to have 20,000 human moderators in place by the end of the year to sort through reports, as its technology isn’t yet up to scratch.

Bad outlets will likely offend again

However, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Qatar Computing Research Institute (QCRI) believe they have found an alternative way to sniff out dubious articles.

Rather than focusing on the factuality of individual stories, the new AI analyses the news sources themselves. This different approach, the team claimed, allows the system to more accurately gauge how trustworthy an article is likely to be.

“If a website has published fake news before, there’s a good chance they’ll do it again,” said Ramy Baly, lead author on a new paper about the system. “By automatically scraping data about these sites, the hope is that our system can help figure out which ones are likely to do it in the first place.”

Baly said that the AI only needed approximately 150 articles to determine whether a news source can be trusted, meaning it would be able to ‘stamp out’ problematic outlets before the stories can spread.


MIT example highlighting trigger words that AI interprets as being from an untrustworthy source. Image: MIT CSAIL

What is its potential?

The data was compiled from Media Bias/Fact Check, a website of human fact-checkers who analysed the accuracy and biases of more than 2,000 news sites. This data was then used to train the algorithm, a support vector machine (SVM) classifier.

When given a news outlet, the algorithm was found to be 65pc accurate at detecting whether it has a high, low or medium level of factuality; and was roughly 70pc accurate at detecting if it is left-leaning, right-leaning or moderate.

The system also found correlations with an outlet’s Wikipedia page, going on the assumption that the longer it was, the more legitimate it was, as well as finding identifying factors in the structure of a source’s URL. If it had lots of special characters and complicated subdirectories, it was considered less reliable.
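To make the approach above concrete, here is a minimal sketch of how an SVM classifier could be trained on outlet-level signals such as those mentioned: loaded language, Wikipedia page length and URL special characters. The feature names, numbers and labels here are entirely hypothetical; the real MIT/QCRI system draws on far richer features extracted from roughly 150 articles per outlet, with labels from Media Bias/Fact Check.

```python
# Hypothetical illustration of outlet-level factuality classification
# with a support vector machine. Data and features are invented for
# demonstration and do not come from the MIT/QCRI paper.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Each row describes one outlet:
# [rate of sensational 'trigger' words, Wikipedia page length (chars),
#  special characters in the URL]
X = [
    [0.02, 45000, 0],  # established outlet
    [0.15,   800, 6],  # dubious outlet
    [0.03, 30000, 1],
    [0.18,     0, 9],
    [0.04, 52000, 0],
    [0.20,   200, 7],
]
# Factuality labels assigned by human fact-checkers: 2 = high, 0 = low
y = [2, 0, 2, 0, 2, 0]

# Standardise the features (they are on very different scales),
# then fit a linear-kernel SVM
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X, y)

# Score an unseen outlet: few special characters, long Wikipedia page
print(clf.predict([[0.03, 40000, 1]])[0])
```

In practice the published system predicts three levels of factuality (high, medium, low) and three of political bias, so the real label sets are larger than this binary toy example.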

QCRI senior scientist and co-author of the paper, Preslav Nakov, said of its potential: “If outlets report differently on a particular topic, a site like PolitiFact could instantly look at our ‘fake news’ scores for those outlets to determine how much validity to give to different perspectives.”

Colm Gorey is a journalist with Siliconrepublic.com

editorial@siliconrepublic.com