The rising risk of AI scams and disinformation

26 Sep 2023


Wasim Khaled of Blackbird.AI discusses how AI threatens to destroy trust online, flood discussions with fake content and spread disinformation to influence politics.

One of the concerns that has been raised around AI is its potential to create dangerous or misleading content.

Over the past year in particular, AI has shown itself to be capable of creating convincing text, images and audio that can appear realistic. A study published in July showed that these advanced systems can trick people into believing false information better than humans can.

Companies and governments have taken some measures to tackle this risk associated with AI. In May, Google revealed new tools to help users spot misleading or AI-generated images, in a bid to tackle misinformation and give users “the full story”. Meanwhile, Ireland set up a working group earlier this year to create a national anti-disinformation strategy.

Wasim Khaled is the co-founder and CEO of Blackbird.AI, a company that provides intelligence services to protect enterprises from attacks based on disinformation.

He said concerns around deepfakes and misleading AI-generated content were “overblown” in the past, but that generative AI has made those fears a reality. This is because generative AI can make content that is “low cost, easy access, easy to use and just highly realistic and believable”.

“All of those things combined with the existing misinformation [and] disinformation landscape, the gas that [generative AI] threw on the fire, has created a media ecosystem that is there right now, the worst realities are here,” Khaled said. “It’s something we all kind of have to contend with in different ways, with varying levels of effectiveness currently.”

Khaled said his company was founded on the belief that in the future, “AI-driven computational propaganda” would make the information and media ecosystems “almost untenable” for any kind of real understanding.

In March, a report compiled by Europol experts claimed that AI chatbots such as ChatGPT could be used to exacerbate problems of disinformation, fraud and cybercrime.

Denial-of-trust attacks

AI has a wide range of applications and, unfortunately, this appears to be true for malicious groups too. AI-generated images have been used to spread fake stories of explosions at the US Pentagon, and large language models have been used to develop malware.

In April, Julian Hayes of Veneto Privacy Services said his client had experienced a “very sophisticated, targeted phishing attack” that utilised AI and deepfake technology.

Khaled believes one of the biggest threats around AI comes from the ability of some of these systems to create false “text-based narratives” for the purpose of “narrative attacks”.

“When we talk about narrative attacks, we are talking about any kind of activity in the information ecosystem that creates and asserts a shift in perception about a person, place or thing,” Khaled said.

With generative AI systems, Khaled said groups are able to quickly generate various “conflicting or polarising viewpoints” on a certain narrative, which can be used to spread disinformation.

Various mainstream large language models – like ChatGPT – have certain guardrails to prevent them from being used maliciously, though Khaled claimed there are ways to get past these guardrails. There are also certain AI models that advertise the fact they have no guardrails, such as WormGPT and FraudGPT.

“Now you have very powerful tools that can help generate much more content to test more efficiently at a low cost, very much like a marketing company would do,” Khaled said. “Which harmful narrative sticks the most, then you double down on that and you have large language models to help.”

Blackbird.AI claims this type of technology is being used to help push “distributed denial-of-trust” attacks, which are designed to drown out “any notion of trust” associated with a particular entity or narrative by “pulling a lot of other things into it”, such as conspiracy theories.

Solving this type of attack appears to be difficult, but Khaled suggested that building awareness and understanding of these types of attacks is useful.

“Blackbird has a narrative and risk intelligence platform that looks at a varying number of signals about the content and how it essentially shifts perceptions in the public space,” Khaled said.

“Being able to understand the nature of these narrative attacks, the motives and the way that they propagate and drive harm is really what we’re providing.”

Social media disinformation

Blackbird.AI has also raised concerns about how narrative attacks can be used to influence politics, such as for the 2024 election in the US. While various social media platforms have issues with the spread of disinformation, Khaled said TikTok in particular could be used to influence opinions, as a lot of younger people use it as a form of search engine.

Blackbird.AI also references TikTok’s links to Chinese-headquartered company ByteDance as a potential risk, claiming that the country’s government could potentially use its influence to “promote narratives that align with their interests and suppress dissenting voices on the platform”.

To date, multiple countries have raised concerns about TikTok’s connections with the Chinese government and whether it could access user data. The US state of Montana plans to ban the app entirely, though TikTok has taken this issue to court.

TikTok has denied claims that it manipulates content in a way that benefits the Chinese government, or that the government can access TikTok user data. But certain reports in recent years claim TikTok has engaged in censorship on certain political topics and accessed the data of journalists.

“All the social media platforms have responsibilities here, and they’re all suffering from the same issues and those issues are challenging,” Khaled said. “There’s a lot of contention around what moderation versus censorship is. So, I don’t envy that decision, that last decision of like, what do we do even if we know it’s happening.”

Earlier this month, a report by Microsoft claimed state-sponsored hackers in China are using generative AI as a way to spread false information and influence US voters.

Meanwhile, Google recently revealed plans to tackle AI-generated ads being used in political campaigns. The company aims to force advertisers to clearly disclose if their content is AI generated.


Leigh Mc Gowran is a journalist with Silicon Republic
