ChatGPT-4 can potentially make hackers’ lives easier, research finds

20 Mar 2023


According to Check Point Research, ChatGPT-4 can even be exploited by non-technical bad actors to engage in cybercrime.

Despite safety improvements in ChatGPT-4 over previous iterations of the AI chatbot, security experts have found that it can still help hackers streamline their cybercrime activities.

Cybersecurity intelligence provider Check Point Research (CPR) has published an initial security analysis of ChatGPT-4 that highlights five scenarios in which the OpenAI chatbot can be used by cybercriminals to improve on existing methods.

The scenarios include creating C++ malware that harvests PDF files from infected machines and exfiltrates them, impersonating a bank in phishing campaigns, sending malicious emails to employees, and even building a PHP reverse shell that gives an attacker remote access to a victim's computer.

Research conducted by CPR also found that non-technical actors can use ChatGPT-4 to create harmful tools “as if the process of coding, constructing and packaging is a simple recipe”.

CPR found that the restrictions OpenAI has put in place can be easily circumvented, allowing hackers to conduct their crimes more efficiently.

“While the new platform clearly improved on many levels, we can, however, report that there are potential scenarios where bad actors can accelerate cybercrime in ChatGPT-4,” said Oded Vanunu, head of products vulnerabilities research at Check Point Software.

“ChatGPT-4 can empower bad actors, even non-technical ones, with the tools to speed up and validate their activity. Bad actors can also use ChatGPT-4’s quick responses to overcome technical challenges in developing malware.

“As AI plays a significant and growing role in cyberattacks and defence, we expect this platform to be used by hackers as well, and we will spend the following days [trying] to better understand how.”

OpenAI revealed GPT-4 last week. The Microsoft-backed company claims its latest large language model is its most reliable AI system to date: it can understand both text and image inputs and is able to “solve difficult problems with greater accuracy”.

The model is available to ChatGPT Plus subscribers, while a waitlist has been set up for developers who want to work with the model.

OpenAI said GPT-4 had been in development behind the scenes for months, with the assistance of feedback from users of ChatGPT, which runs on GPT-3.5.


Vish Gain is a journalist with Silicon Republic
