New findings from Pew Research show just how much of a scourge bots have become.
Bots have become something of a shorthand for all kinds of online interference, from political tweets to malicious fake activism designed to stoke racial tensions in the US.
A Pew Research Center report published on Monday (9 April) dove deep into the proliferation of bots on Twitter and found that the majority of tweeted links to popular websites (66pc) are estimated to come from bots as opposed to human users.
The research zeroed in on the 2,315 most popular websites, analysing more than 1m tweets sent between 27 July and 11 September 2017.
The 500 most active accounts that Pew pegged as bots represented 22pc of links tweeted to popular sites; the 500 most active human accounts were responsible for just 6pc of the links tweeted to the same sites. Pew did not find evidence of political bias among the accounts it suspected were bots.
It said: “Suspected bots share roughly 41pc of links to political sites shared primarily by Conservatives and 44pc of links to political sites shared primarily by Liberals – a difference that is not statistically significant. By contrast, suspected bots share 57pc to 66pc of links from news and current events sites shared primarily by an ideologically mixed or centrist human audience.”
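The claim that a 41pc v 44pc gap is "not statistically significant" comes down to a standard two-proportion comparison. Pew's actual link counts per category are not given in the article, so the sample sizes below are hypothetical, but the sketch shows how such a check works: if the z statistic stays inside ±1.96, the difference is not significant at the 5pc level.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-proportion z-test statistic for H0: the underlying rates are equal."""
    # Recover counts, then pool the proportion under the null hypothesis
    x1, x2 = p1 * n1, p2 * n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical sample sizes of 500 links per side (not Pew's real counts)
z = two_proportion_z(0.44, 500, 0.41, 500)
print(round(z, 2))  # 0.96 -- well inside +/-1.96, so not significant at 5pc
```

With these assumed sample sizes the observed 3-point gap is comfortably within sampling noise, which is consistent with Pew's statement; with much larger samples, the same gap could become significant.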
What websites did bots link to the most?
According to the findings, certain categories of websites were more popular with bots than others. Approximately 90pc of links to adult websites in Pew’s sample were likely tweeted by bot accounts, while 76pc of links to sports sites came from suspected bots. This says nothing about how many impressions those tweets actually made on human users, but it is a large volume regardless.
Pew used Botometer, a tool that detects automated posts developed by researchers at the University of Southern California and the Center for Complex Networks and Systems Research at Indiana University.
A spokesperson for Twitter said it was not possible to accurately distinguish between a human-run account and a bot. Pew acknowledged this margin of error in its research, referring to the accounts analysed as “suspected bots”.
Clarifying the study aims
The researchers issued some clarifying statements along with their work, citing “certain caveats in interpreting the findings of this analysis”. They noted that the study only examines major media outlets as measured by the number of shares they receive on Twitter.
The study also does not examine the “truthfulness (or lack thereof) of the content shared by humans and the content shared by bots”. Researchers focused on overall sharing rates, adding that the research “does not account for the subsequent shares or engagement of human users”.
Stefan Wojcik, lead author of the study, explained that the research was not aiming to fact-check tweets or unearth political interference. “We can’t say from this study whether the content shared by automated accounts is truthful information or not, or the extent to which users interact with content shared by suspected bots.”
Director of research at Pew, Aaron Smith, issued a response to Nieman Lab following some criticism of the study methodology on Twitter: “I’m a survey research guy by training and an analogy I sometimes use to talk about this is: if you pick any individual person out of the population, you may find someone who has views that are wildly divergent with the bulk of public opinion. But, if you collect this bulk of responses using known and tested methodologies, you find something that largely conforms with observed reality, even if you have outliers and extreme cases.”
The bot problem is inescapable
In addition to using Botometer, Pew ran a number of separate tests to cross-check its bot classifications, including manually classifying a sample of accounts. It also examined the findings with tweets from verified accounts removed, and reported no significant change in the results.
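Pew's exact validation procedure is not detailed in the article, but the basic idea of cross-checking one classifier against another can be sketched as a simple agreement rate between two sets of labels. The labels below are hypothetical, purely for illustration:

```python
def agreement_rate(labels_a, labels_b):
    """Fraction of accounts on which two classifiers give the same label."""
    if len(labels_a) != len(labels_b):
        raise ValueError("label lists must cover the same accounts")
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

# Hypothetical labels: True = suspected bot, False = human
automated_labels = [True, True, False, False, True, False]
manual_labels    = [True, False, False, False, True, False]
print(round(agreement_rate(automated_labels, manual_labels), 2))  # 0.83
```

A high agreement rate between manual coding and the automated tool gives some confidence that the classifications are not an artefact of one method, which is the role this kind of check plays in the study.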
Twitter CEO Jack Dorsey addressed the bot problem in a March tweet:
We have witnessed abuse, harassment, troll armies, manipulation through bots and human-coordination, misinformation campaigns, and increasingly divisive echo chambers. We aren’t proud of how people have taken advantage of our service, or our inability to address it fast enough.
— jack (@jack) March 1, 2018
The company has also recently rolled out a series of strict restrictions to try to curb this type of activity on its platform.