Europol warns ChatGPT in the wrong hands can worsen crime

28 Mar 2023

Image: © Tobias Arhelger/Stock.adobe.com

A new report compiled by Europol experts has found that AI chatbots such as ChatGPT can exacerbate problems of disinformation, fraud and cybercrime.

ChatGPT may be all the rage right now, but law enforcement experts in Europe are worried that AI chatbots like it are susceptible to misuse by tech-savvy criminals.

In a report published yesterday (27 March), Europol issued a stark warning about large language models – the technology behind text-based generative AI such as ChatGPT – and their potential to help criminals get better at defrauding people and spreading disinformation.

“As the capabilities of large language models such as ChatGPT are actively being improved, the potential exploitation of these types of AI systems by criminals provides a grim outlook,” Europol wrote in a statement on its website.

“As technology progresses, and new models become available, it will become increasingly important for law enforcement to stay at the forefront of these developments to anticipate and prevent abuse.”

Europol experts identified three key areas in which criminals can exploit chatbots such as ChatGPT to further their illicit activities: fraud and social engineering, disinformation, and cybercrime.

The EU law enforcement agency said that because ChatGPT can draft highly realistic text and impersonate the style of speech of specific individuals or groups, it has strong potential to be misused for phishing at scale.

This ability of large language models, including ChatGPT rivals such as Google’s Bard and Anthropic’s Claude, to produce authentic-sounding text at speed and scale can also be used to spread disinformation.

“This makes the model ideal for propaganda and disinformation purposes, as it allows users to generate and spread messages reflecting a specific narrative with relatively little effort,” the statement reads.

What’s more, ChatGPT’s ability to write code can help even non-technical criminals produce malicious software, further entrenching cybercrime globally.

Rachel Jones, CEO of SnapDragon Monitoring, thinks that technologies such as ChatGPT, when in the wrong hands, can become “a cyber weapon of severe destruction”.

“When it comes to protecting internet users, businesses must do more to communicate with their customers on the threat posed by ChatGPT,” Jones said.

“Warn them about email scams and phishing and take steps to proactively monitor for fake versions of websites being published online. AI tools can help spot these fake domains and then work to have them removed before they cause harm.”
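The kind of fake-domain monitoring Jones describes can be approximated even without specialist AI tools. As a minimal illustrative sketch (not from the Europol report or SnapDragon Monitoring), the snippet below flags registered domains that sit within a small edit distance of a brand name – a common sign of typosquatting. The brand name, candidate list and threshold are all hypothetical examples.

```python
# Illustrative sketch: flag lookalike domains that may impersonate a brand,
# using plain Levenshtein edit distance. All names here are hypothetical.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

def flag_lookalikes(brand: str, domains: list[str], max_dist: int = 2) -> list[str]:
    """Return domains whose name is a near-miss (1..max_dist edits) of the brand."""
    return [d for d in domains
            if 0 < edit_distance(brand, d.split(".")[0]) <= max_dist]

candidates = ["examp1e.com", "example.com", "exannple.net", "unrelated.org"]
print(flag_lookalikes("example", candidates))  # → ['examp1e.com', 'exannple.net']
```

In practice, commercial brand-protection services combine this sort of string matching with newly registered domain feeds, homoglyph checks and content analysis before initiating takedowns.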

At a personal level, Jones recommends treating all emails requesting personal and financial information with scepticism.

“Avoid clicking on links in emails and instead visit the site directly. If you do receive an email urgently requesting information, call the organisation instead. No security-conscious business will see this as a nuisance, and it could end up saving you from significant financial losses.”


Vish Gain is a journalist with Silicon Republic

editorial@siliconrepublic.com