Bot versus bot: An online AI battle will soon rage over fake news

13 Jun 2017


Fake news? Image: MichaelJayBerlin/Shutterstock


When it comes to tackling the spread of fake news on social media, a new artificially intelligent bot might be the truth’s secret weapon.

What is and isn’t fake news online can be open to interpretation depending on who you speak to. However, new technologies being implemented by major social networks such as Facebook and Twitter are attempting to use artificial intelligence (AI) to find truth among the lies.

One of the latest efforts is under way at the University of Texas at Arlington, where a team of engineers has developed a complex algorithm that they believe could be our best shot at tackling fake news bots online.

Titled ‘Bot versus bot: Automated detection of fake news bots’, the team’s project examines fake news in the context of Twitter, where bots relentlessly publish and forward content to thousands of people.

These bots even leave comments and follow other people in order to trick unsuspecting users into thinking they are talking with a human, rather than an algorithm.

By understanding the complexity and design of these bots, the research team’s own AI could be implemented to counteract any fake news actions – but this is a lot more difficult than it seems.

“Even if a bot uses high-end AI and massive processing power, an extremely simple detection technique may be enough if the bot always posts at the same time of day or has some other trait that makes it easy to distinguish the bot from humans,” said co-principal investigator on the project, Christoph Csallner.
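The timing tell Csallner describes can be sketched in a few lines. This is a hypothetical illustration, not the team’s actual method: the function name, the 0.5-hour spread threshold and the 10-post minimum are all assumptions made for the example.

```python
from statistics import pstdev

def looks_like_scheduled_bot(post_hours, max_stdev=0.5, min_posts=10):
    """Flag an account whose posts cluster at the same hour of day.

    post_hours: hour-of-day (0-23) of each post by the account.
    A near-zero spread in posting hour suggests automated, scheduled
    posting; human activity tends to vary far more. (Thresholds here
    are illustrative assumptions, not tuned values.)
    """
    if len(post_hours) < min_posts:  # too few posts to judge
        return False
    return pstdev(post_hours) <= max_stdev

# A bot that posts at 09:00 every day vs. a human posting irregularly:
bot_hours = [9] * 20
human_hours = [7, 9, 12, 13, 18, 21, 23, 8, 15, 19, 22, 11]
```

As the quote suggests, a trait this simple defeats even a sophisticated bot; conversely, a bot that randomises its schedule would sail past this check, which is why such heuristics are only a first line of defence.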

Challenge of state-sponsored bots

However, rather than creating an AI to prevent other AI from propagating false political news, this project will focus on what the engineers deem to be the most dangerous kind: state-sponsored bots created to destabilise national security.

This means, in many cases, that the bots are incredibly complex and difficult to crack, with investigator Mark Tremayne giving this example: “You might find that a bot takes a piece of real and true information, then adds an element that isn’t true. So, in the end, you have different levels of fake news.”

While only at a seed funding stage, the team hopes that its bot will evolve into a force to be reckoned with in the years to come.

“We will conduct experiments to better understand the interaction between bots and news-consumption behaviours and effects,” said investigator Zhiqiang Lin.

“By putting together a team of computer scientists and social science scholars, this project seeks to advance our understanding of fake-news bots and our capability of countering them.”


Colm Gorey is a journalist with Siliconrepublic.com

editorial@siliconrepublic.com