In collaboration with the FBI, the social media platforms have identified accounts associated with Russia’s Internet Research Agency, which was linked to interference in the 2016 US presidential election.
Social media platforms Facebook and Twitter have taken action on accounts believed to be attempting to influence users in the US ahead of the country’s next presidential election.
Working with the FBI’s Foreign Influence Task Force, the companies found that accounts associated with the website Peace Data were part of an early-stage campaign linked to the Internet Research Agency, the Russian organisation known to have used both platforms to influence US users ahead of the 2016 presidential election.
Peace Data publishes content in English and Arabic with a left-wing slant and claims to be a non-profit news source. A statement on its website denies an association with the Internet Research Agency.
‘This activity focused primarily on the US, UK, Algeria and Egypt’
Facebook removed 13 accounts and two pages linked to individuals associated with past activity by the Russian Internet Research Agency. “This activity focused primarily on the US, UK, Algeria and Egypt, in addition to other English-speaking countries and countries in the Middle East and North Africa,” said the company in its latest report on coordinated inauthentic behaviour.
“We began this investigation based on information about this network’s off-platform activity from the FBI. Our internal investigation revealed the full scope of this network on Facebook,” the company added.
According to BBC News, around 14,000 accounts followed one or more of the removed Facebook pages; the English-language page had around 200 followers.
Twitter suspended five accounts linked to Peace Data and Russian state actors under its policy on platform manipulation. It will also block further Peace Data links from being shared on its platform, while existing links will be “de-amplified”.
Twitter emphasised that the campaign had been stopped at its early stages, and involved “low-quality and spammy” tweets that had received few, if any, likes or retweets.
“The accounts achieved little impact on Twitter and were identified and removed quickly,” the company tweeted.
‘This is the first time we have observed known [Internet Research Agency-linked] accounts use AI-generated avatars’
Twitter also acknowledged that some of the content published by Peace Data was created by real freelancers recruited to write for the website. However, the accounts associated with it were found to use fake names and profile pictures.
Online campaigns aiming to manipulate users and spread disinformation have often used stock photos or images of other people to build fake profiles. However, social-media monitoring company Graphika, which carried out independent analysis alongside Facebook’s investigation, said that the Peace Data network used AI-generated profile pictures across Facebook, Twitter and LinkedIn in an attempt to appear more convincing.
“This is the first time we have observed known [Internet Research Agency-linked] accounts use AI-generated avatars,” wrote the Graphika team.
As deepfake technology, which uses artificial intelligence to create convincing fake content such as this, makes online disinformation more difficult to detect, Microsoft has announced new tools to combat it.
Microsoft Video Authenticator will be able to analyse a photo or video and give it a confidence score, evaluating the likelihood that it was artificially manipulated.
“In the case of a video, it can provide this percentage in real time on each frame as the video plays. It works by detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye,” Microsoft explained.
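Microsoft has not published the internals of Video Authenticator, but the per-frame scoring it describes can be loosely illustrated with a toy heuristic. In this sketch, everything is invented for illustration: a “frame” is a 2D list of greyscale pixel values, and sharp jumps between neighbouring pixels stand in for the blending-boundary artefacts the real tool is said to detect.

```python
# Illustrative sketch only: this is NOT Microsoft's model, just a toy
# stand-in for the idea of scoring each video frame for manipulation.

def frame_confidence(frame):
    """Return a toy manipulation-confidence score in [0, 1] for one frame.

    Heuristic: count hard jumps between horizontally adjacent pixels,
    a crude stand-in for 'blending boundary' artefacts.
    """
    jumps, total = 0, 0
    for row in frame:
        for a, b in zip(row, row[1:]):
            total += 1
            if abs(a - b) > 64:  # arbitrary threshold, purely illustrative
                jumps += 1
    return jumps / total if total else 0.0

def video_confidence(frames):
    """Score every frame, mimicking the per-frame percentage described."""
    return [round(frame_confidence(f) * 100, 1) for f in frames]

smooth = [[100, 101, 102, 103]] * 2   # gentle gradient: low score
spliced = [[0, 255, 0, 255]] * 2      # hard edges: high score
print(video_confidence([smooth, spliced]))  # → [0.0, 100.0]
```

A real detector would of course learn its features from data rather than use a fixed threshold; the sketch only shows how a per-frame confidence percentage could be produced as a video plays.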