New worm can propagate through generative AI, researchers warn

4 Mar 2024

Image: © asamask92/Stock.adobe.com

The researchers claim their malware is able to target generative AI ‘ecosystems’ and spread using self-replicating prompts to steal data and send spam emails.

Researchers claim to have created a computer worm that can target generative AI-powered applications, in a bid to raise awareness on this potential threat.

The researchers tested this worm against popular AI models Gemini Pro, ChatGPT and LLaVA to demonstrate its potential malicious uses.

The team said this worm – called Morris II – was able to spam users with emails and steal data without the user clicking any malicious link – this is known as zero-click malware. The worm was named after the first worm that was launched on the internet in 1988.

Worms are a type of malware that operate independently and spread across computer networks, “often without requiring any user interaction” according to the researchers behind this study.

The team claims their worm can target generative AI “ecosystems” through the use of self-replicating prompts. With jailbreaking and machine learning techniques, the researchers claim this worm can exploit the connectivity between generative AI systems to spread malware or spam emails.

The study suggests that attackers can use this type of worm to prompt AI models to replicate malicious inputs and then engage in malicious activities. In the study, the AI worm attacked  generative AI email assistants to steal email data and send spam.

The researchers said the worm is able to use a text sent in an email to “poison” the database of certain email application clients, which can then jailbreak models such as ChatGPT and Gemini to “replicate itself and exfiltrate sensitive user data extracted from the context”.

“This research is intended to serve as a whistleblower to the possibility of creating GenAI worms in order to prevent their appearance,” the researchers said. “Due to the automatic inference performed by the GenAI service (which automatically triggers the worm), the user does not have to click on anything to trigger the malicious activity of the worm or to cause it to propagate.”

The researchers said they contacted OpenAI and Google about the worm but added that this is not their responsibility, as “the worm exploits bad architecture design for the GenAI ecosystem and is not a vulnerability in the GenAI service”.

Various experts have spoken about the dangers of generative AI in the context of cybersecurity and some have warned that the adoption of AI technology will lead to a rise in advanced social engineering attacks.

Find out how emerging tech trends are transforming tomorrow with our new podcast, Future Human: The Series. Listen now on Spotify, on Apple or wherever you get your podcasts.

Leigh Mc Gowran is a journalist with Silicon Republic

editorial@siliconrepublic.com