WormGPT and FraudGPT: The dark side of generative AI

14 Aug 2023


Hackers are turning to unrestricted ChatGPT alternatives to create their own malware.

New technology always seems to bring a mix of positive and negative applications, and generative AI is no exception.

While not exactly a new concept, generative AI has surged in both popularity and power this year following the sudden success of ChatGPT. Since then, there have been endless examples of how this type of AI can benefit workplaces, companies and the services they provide.

In simple terms, generative AI is a form of machine learning that can generate text, images and other types of content based on text prompts. The capabilities of chatbots like ChatGPT have been praised by many users.
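For readers curious about what "generating text based on text prompts" looks like in practice, the short Python sketch below illustrates the idea using the open-source Hugging Face transformers library and the small GPT-2 model. Both are ordinary, publicly available research tools, chosen here purely for illustration; they have no connection to the services discussed in this article.

```python
# A minimal sketch of prompt-based text generation, for illustration only.
# Uses the open-source Hugging Face `transformers` library and the small,
# freely available GPT-2 model (pip install transformers torch).
from transformers import pipeline

# Load a text-generation pipeline backed by GPT-2.
generator = pipeline("text-generation", model="gpt2")

# The model simply continues whatever text prompt it is given.
prompt = "Generative AI is a form of machine learning that"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(result[0]["generated_text"])
```

Commercial chatbots such as ChatGPT layer safety filters and instruction tuning on top of this basic prompt-continuation mechanism; the "blackhat" tools described below are, in effect, the same mechanism with those guardrails stripped away.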

While these AI models present various benefits, there are also ways that this technology can be used for nefarious purposes. Earlier this year, there was evidence that criminals were using ChatGPT to develop their own forms of malicious software.

In one example, a user claimed to have developed an encryption tool using ChatGPT, despite having no coding experience.

OpenAI – the company behind ChatGPT – has tried to limit the type of negative content its AI model can produce. But this has led to a rise in copycat models that lack these restrictions.

WormGPT

One example of this is WormGPT, which was advertised as an alternative to ChatGPT that lets users do “all sorts of illegal stuff”.

An analysis by cybersecurity company SlashNext suggests WormGPT was first advertised on a hacker forum as a blackhat alternative to ChatGPT.

“Everything blackhat related that you can think of can be done with WormGPT, allowing anyone access to malicious activity without ever leaving the comfort of their home,” the forum post reads.

SlashNext researcher Daniel Kelley asked this generative AI tool to create a business email compromise – or BEC – attack. This is a type of phishing attack that aims to “pressure an unsuspecting account manager into paying a fraudulent invoice”, according to Kelley.

“The results were unsettling,” Kelley said in a blogpost. “WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks.”

WormGPT has its own website and can be purchased as a monthly plan, costing €60 a month or €700 for a yearly plan. Surprisingly, the website attempts to distance itself from the malicious activity it was initially advertised for.

“We do not condone or advise criminal activities with the tool and we are mainly based towards security researchers so they can test and try out malware and help their systems defend against potential AI malware,” the WormGPT website says.

FraudGPT

While WormGPT makes a (weak) attempt to portray itself as a positive form of AI, another tool being sold makes no attempt to hide its malicious purpose.

The model – aptly named FraudGPT – is being sold on the dark web as a tool for fraudsters, hackers and spammers. Telegram advertisements for this tool were discovered and shared by researchers at cloud data analytics platform Netenrich.

The advertisement claims FraudGPT has no limitations, rules or boundaries and can be used to write malicious code, create “undetectable malware”, make phishing pages and much more.

“As evidenced by the promoted content, a threat actor can draft an email that, with a high-level of confidence, will entice recipients to click on the supplied malicious link,” Netenrich said in a blogpost. “This craftiness would play a vital role in business email compromise phishing campaigns on organisations.”

Similar to WormGPT, FraudGPT appears to be sold on a subscription basis, costing €200 a month or €1,700 for a full year.

The rise of these AI models confirms bleak predictions experts made earlier this year. In one prediction, Immanuel Chavoya of SonicWall said new software will give threat actors the ability to quickly exploit vulnerabilities and reduce the technical expertise required “down to a five-year-old level”.


Leigh Mc Gowran is a journalist with Silicon Republic

editorial@siliconrepublic.com