Researchers working to improve AI-based bot detectors on social media claim their latest discovery can identify a clear 'human signature'.
With the management of human content reviewers increasingly under scrutiny and more posts to review than ever before, social media companies are turning to AI to sniff out potentially harmful bots. These bots engage in activities ranging from trolling to election manipulation.
However, AI often struggles to determine which accounts are bots and which are human. Now, researchers publishing in Frontiers in Physics claim to have discovered short-term behavioural trends in humans – absent in bots – that make it much easier for AI to detect a clear 'human signature'.
"Remarkably, bots continuously improve to mimic more and more of the behaviour humans typically exhibit on social media," said the study's co-author, Emilio Ferrara of the University of Southern California's Information Sciences Institute.
"Every time we identify a characteristic we think is prerogative of human behaviour, such as sentiment or topics of interest, we soon discover that newly-developed open-source bots can now capture those aspects."
The researchers studied how the behaviour of humans and bots changed over the course of time, using a large Twitter dataset associated with recent political events. They tracked several indicators of user behaviour over time, including the propensity to engage in social interactions and the amount of content produced, and then compared how these evolved for bots and for humans.
‘Bots are constantly evolving’
To study the behaviour of bots and humans, the researchers focused on the number of retweets, replies, mentions and the length of the tweet published. The machine learning classification system developed from this research was able to find something uniquely human in the posts.
Humans showed an increase in social interaction over time, reflected in a growing fraction of retweets and replies and a growing number of mentions per tweet.
Humans also produced less content over time, shown by a decreasing trend in average tweet length. This, the researchers said, is likely because human users grow tired as a conversation progresses and become less likely to compose original content. In tests comparing two classifiers, one trained on these trend features and one without them, the former significantly outperformed the latter.
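The study's own code is not reproduced here, but the approach described above can be sketched: for each user, measure how the fraction of retweets and replies, the number of mentions, and the tweet length trend over a session, then feed those trend features to a standard classifier. Everything below – the feature names, the synthetic data, and the choice of logistic regression – is an illustrative assumption, not the authors' implementation.

```python
# Hypothetical sketch of trend-based bot detection (not the study's code).
# For each user session we fit a linear trend to four per-tweet signals
# and use the slopes as features: humans drift (more interaction, shorter
# tweets) while bots stay flat.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def session_features(tweets):
    """tweets: time-ordered list of dicts with 'is_retweet', 'is_reply',
    'n_mentions' and 'length' keys. Returns the slope of each signal."""
    t = np.arange(len(tweets))
    feats = []
    for key in ("is_retweet", "is_reply", "n_mentions", "length"):
        y = np.array([tw[key] for tw in tweets], dtype=float)
        feats.append(np.polyfit(t, y, 1)[0])  # linear trend over session
    return feats

def synth_session(human, n=30):
    """Toy data mirroring the reported findings: humans drift toward
    more interaction and shorter tweets; bots do not."""
    drift = np.linspace(0, 1, n) if human else np.zeros(n)
    return [{
        "is_retweet": rng.random() < 0.2 + 0.3 * d,
        "is_reply": rng.random() < 0.1 + 0.2 * d,
        "n_mentions": rng.poisson(0.5 + d),
        "length": rng.normal(120 - 40 * d, 10),
    } for d in drift]

# Build a small labelled set (1 = human, 0 = bot) and fit a classifier.
X = [session_features(synth_session(h)) for h in [True, False] * 100]
y = [1, 0] * 100
clf = LogisticRegression().fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```

On this toy data the trend features separate the two classes almost perfectly; the point of the sketch is only that session-level slopes, rather than static per-tweet statistics, are what carry the 'human signature' the researchers describe.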
"Bots are constantly evolving," Ferrara said. "With fast-paced advancements in AI, it's possible to create increasingly realistic bots that can mimic more and more how we talk and interact on online platforms.
“We are continuously trying to identify dimensions that are particular to the behaviour of humans on social media that can in turn be used to develop more sophisticated toolkits to detect bots.”