Researchers have analysed hundreds of thousands of accounts on social media, showing that it is becoming harder to tell which are AI and which are human.
In the US, Silicon Valley’s social media giants are gearing up for a battle against a wave of fake and automated accounts designed to disrupt the upcoming 2020 presidential election. However, researchers from the University of Southern California have warned that these bots have evolved to the point where it is much harder to distinguish them from humans.
In the journal First Monday, the researchers compared online behaviour during the 2018 US midterm elections with that seen during the 2016 presidential election. Analysing almost 250,000 active social media users, they detected more than 30,000 bots.
Looking further into their behaviour, the researchers found that in 2016 these bots relied primarily on retweets and high volumes of tweets to spread the same message. By 2018, however, this had all changed.
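As an illustration only, and not the study's actual method, the 2016-era pattern the researchers describe (high tweet volume plus heavy retweeting) could be turned into a crude detection heuristic. The thresholds and weights below are assumptions chosen for the sketch:

```python
# Hypothetical sketch of a naive bot score based on the 2016-era signals
# described in the article: heavy retweeting and high tweet volume.
# Thresholds and weights here are illustrative assumptions, not the
# study's methodology.

def naive_bot_score(tweets_per_day: float, retweet_fraction: float) -> float:
    """Score an account between 0 and 1; higher suggests bot-like behaviour.

    tweets_per_day   -- average number of tweets posted per day
    retweet_fraction -- share of the account's tweets that are retweets (0-1)
    """
    # Cap the volume signal: roughly 72 tweets a day (one every 20 minutes)
    # or more is treated as maximally suspicious. The cutoff is arbitrary.
    volume_signal = min(tweets_per_day / 72.0, 1.0)
    # Weight the two signals equally, another arbitrary choice.
    return 0.5 * volume_signal + 0.5 * retweet_fraction

# A human-like account versus an amplification-style account:
print(naive_bot_score(8, 0.2))     # low score
print(naive_bot_score(200, 0.95))  # high score
```

Heuristics like this worked against the crude amplification bots of 2016; as the article goes on to explain, they fail against 2018-era bots that deliberately mimic human posting patterns.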
Instead, the research showed that bad actors were more likely to deploy multiple bots in concert to mimic authentic human engagement around an idea. Bots tried to establish a voice, add to the dialogue and engage users through polls, imitating reputable news agencies and pollsters in a bid to lend their posts an air of authenticity.
Highlighting one example, the researchers identified a bot that posted a Twitter poll asking if federal elections should require voters to show ID at the polls. It then asked Twitter users to vote and retweet.
“Our study further corroborates this idea that there is an arms race between bots and detection algorithms. As social media companies put more efforts to mitigate abuse and stifle automated accounts, bots evolve to mimic human strategies,” said the study’s lead author, Emilio Ferrara.
“Advancements in AI enable bots producing more human-like content. We need to devote more efforts to understand how bots evolve and how more sophisticated ones can be detected.”