Study shows humans are to blame for the bulk of false news on Twitter

9 Mar 2018


MIT building in Boston. Image: Paper Cat/Shutterstock


Blaming bots for the spread of misinformation online may not be an accurate assumption.

For the past several years, propaganda and misinformation have been festering on platforms such as Twitter, as numerous outlets and individuals prey on preconceived notions and inflammatory content to glean clicks from users.

The US election in 2016 was a prime example of this phenomenon, and the resulting fallout has seen companies such as Twitter and Facebook implement strategies to improve transparency across the board.

A new study published in the journal Science today (9 March) and carried out by MIT researchers shows that, even when bots are out of the equation, Twitter users generally share a lot more false stories than they do legitimate facts.

A co-author of the research paper, Prof Sinan Aral, said: “We found that falsehood diffuses significantly farther, faster, deeper and more broadly than the truth, in all categories of information and, in many cases, by an order of magnitude.”

The study was born out of examination of the aftermath of the 2013 Boston Marathon bombing. Another co-author of the paper, Soroush Vosoughi, found that a lot of what he read online after the atrocity was simply rumour.

The researchers examined what are known as Twitter ‘cascades’ – unbroken retweet chains – and charted how they spread. They had support from Twitter itself, and access to its archives meant the team could look at approximately 126,000 news story cascades.
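A cascade can be pictured as a tree of retweets rooted at the original tweet. A minimal sketch (illustrative names and data only, not the study's actual code or Twitter's data format) of measuring a cascade's size and depth:

```python
# Each retweet records which tweet it was retweeted from.
# parent[x] is the tweet x was retweeted from; the origin tweet maps to None.
# The tweet IDs below are made up for illustration.
parent = {"t0": None, "t1": "t0", "t2": "t0", "t3": "t1", "t4": "t3"}

def cascade_size(parent):
    """Number of tweets in the cascade (the origin plus all retweets)."""
    return len(parent)

def cascade_depth(parent):
    """Length of the longest unbroken retweet chain back to the origin."""
    def depth(node):
        # Walk up the chain of retweets until the origin tweet is reached.
        d = 0
        while parent[node] is not None:
            node = parent[node]
            d += 1
        return d
    return max(depth(n) for n in parent)

print(cascade_size(parent))   # 5 tweets in total
print(cascade_depth(parent))  # longest chain: t4 -> t3 -> t1 -> t0
```

Measures like these (along with breadth and speed) are what allow statements such as “falsehood diffuses farther, faster, deeper and more broadly” to be quantified.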

The largest number of cascades consisted of political news, followed by urban legends, business, terrorism, science, entertainment and natural disasters. Within the paper, the researchers chose to refer to unfounded stories as ‘false news’ as opposed to ‘fake news’.

The team explained their reasoning behind this decision: “As politicians have implemented a political strategy of labelling news sources that do not support their positions as unreliable or fake news, whereas sources that support their positions are labelled reliable or not fake, the term has lost all connection to the actual veracity of the information presented, rendering it meaningless for use in academic classification.”

Untrue stories more likely to be retweeted

According to the researchers, false news stories are a massive 70pc more likely to be retweeted than true stories are.

The team also found that factual news travels more slowly: “It also takes true stories about six times as long to reach 1,500 people as it does for false stories to reach the same number of people.”

Researchers assessed the veracity of the cascades against six fact-checking organisations, including Factcheck.org and Snopes.com, and found a 95pc overlap in judgement across all six bodies.
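That 95pc overlap figure can be understood as how often the fact-checkers' verdicts on the same stories matched. A small sketch (with invented organisation names and verdicts, purely to illustrate the calculation) of a pairwise agreement rate:

```python
from itertools import combinations

# Hypothetical verdicts: each organisation labels the same five stories
# True (judged accurate) or False. The data below is made up.
verdicts = {
    "org_a": [True, False, False, True, False],
    "org_b": [True, False, False, True, False],
    "org_c": [True, False, True, True, False],
}

def pairwise_agreement(verdicts):
    """Mean fraction of stories on which each pair of organisations agrees."""
    rates = []
    for a, b in combinations(verdicts, 2):
        matches = sum(x == y for x, y in zip(verdicts[a], verdicts[b]))
        rates.append(matches / len(verdicts[a]))
    return sum(rates) / len(rates)

print(round(pairwise_agreement(verdicts), 3))
```

A high agreement rate like this is what justifies treating the combined fact-checker verdicts as ground truth for labelling cascades true or false.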

Surprising results

Director of MIT Media Lab’s Laboratory for Social Machines, Deb Roy, said: “These findings shed new light on fundamental aspects of our online communication ecosystem,” adding that the researchers were “somewhere between surprised and stunned” by the results.

Prof Aral said: “False news is more novel, and people are more likely to share novel information.” As he explained, there is a speed factor attached to the spread of these stories: “People who share novel information are seen as being in the know.”

In terms of emotional responses, false news content elicited more surprise and disgust, whereas true stories elicited more sadness, anticipation and trust.

Behavioural solutions

So, how can this research be used to stop the spread of propaganda and untrue stories? Prof Aral says more behavioural interventions need to be developed, particularly since bots play a far less prominent role in spreading such stories than human interactions do.

The problem is twofold, Vosoughi hypothesised: some people intentionally spread false news, while others then disseminate it without fact-checking.

The team members concluded: “Understanding how false news spreads is the first step toward containing it. We hope our work inspires more large-scale research into the causes and consequences of the spread of false news as well as its potential cures.”


Ellen Tannam is a writer covering all manner of business and tech subjects

editorial@siliconrepublic.com