Genie out of the bottle: ChatGPT has shaken up the AI sector

14 Feb 2023


From fuelling an AI arms race to helping hackers create malware, ChatGPT has had a wide-ranging impact in a very short timeframe.


ChatGPT has rapidly grown to become one of the most popular topics in the AI sector, drawing the attention of the public, tech giants and various industries.

First launched last November, the advanced chatbot quickly grew into one of the most popular pieces of software ever released. One study estimated that ChatGPT reached 100m monthly active users last month, Reuters reports.

The success has also been significant for the company behind the software, OpenAI, which received 672m website visits last month, according to analysis by Digital Adoption.

The chatbot’s capabilities have been praised by many users, causing a shake-up in the AI sector as tech giants race to put the technology to use in their own systems.

But the rapid rise of ChatGPT has also raised concerns about how the software can be misused, with examples emerging of misinformation and malware created with the help of the AI model.

Rebecca Wettemann, CEO of analyst firm Valoir, believes those who are hyping up the benefits of ChatGPT and those who are concerned about negative consequences are “equally right”.

“The technology will amplify the speed and automation of responses that AI delivers over humans in both directions,” Wettemann said.

A boost for malware

Users of ChatGPT have shared endless examples of the software providing answers on a variety of topics, from history and general knowledge to coding and scientific theories.

Unfortunately, some criminals have also demonstrated the AI system’s ability to assist in developing malicious software.

Researchers at cybersecurity company Check Point claim to have found multiple examples of criminals sharing malware created with the help of ChatGPT on hacker forums.

In one example, a thread titled “ChatGPT – benefits of malware” appeared on a hacking forum. Its creator shared code for an info stealer they had made using ChatGPT, which copies certain file types into a zip folder to send across the web.

Another example posted on a hacker forum was an encryption tool, which its creator claimed was the first script they had ever created.

Check Point threat intelligence group manager Sergey Shykevich said ChatGPT has the potential to speed up the process for hackers by “giving them a good starting point”.

“Just as ChatGPT can be used for good to assist developers in writing code, it can also be used for malicious purposes,” Shykevich said. “Although the tools that we analyze in this report are pretty basic, it’s only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools.”

In recent AI predictions for 2023, experts such as Immanuel Chavoya of SonicWall said new software will give threat actors the ability to quickly exploit vulnerabilities and reduce the technical expertise required “down to a five-year-old level”.

Risks of misinformation and bias

Concerns have also been raised about the potential for these AI systems to rapidly spread misinformation that is hard to detect.

Despite the high praise the AI model has received since its launch, examples have been shared online of the chatbot giving wrong answers or providing racist content.

Other examples show ChatGPT having certain biases, such as having a particular viewpoint when asked questions about political topics, though some of these examples are disputed by other users.

Wettemann said this potential bias raises concerns around the ethical use of data and whether these types of AI models can be trained for “nefarious purposes”.

“ChatGPT on its own can’t identify right from wrong or truth from fiction – it only reflects the training it’s been given,” Wettemann said. “The ready availability and usability of ChatGPT highlights the importance of understanding the bias in AI models and their training.”

Last December – at around the time ChatGPT crossed 1m users – a Q&A site for programmers issued a temporary ban on answers created by the chatbot.

The site, Stack Overflow, said the number of correct answers created by ChatGPT was “too low” and that posting answers made by the AI could be “substantially harmful” to the site and its users.

A similar issue occurred when Meta released a demo of Galactica, its science-focused AI tool that could generate scientific papers or Wikipedia-style articles.

The demo was pulled after three days following extensive criticism from users, who said the model generated wrong or biased content that “sounded right and authoritative”.

Henrik Roth is a co-founder of Neuroflash, a content generation platform that uses OpenAI technology along with its own software.

Roth told SiliconRepublic.com that the creative abilities of systems like ChatGPT are likely to be a limitation as well as a strength.

“That’s a plus point,” Roth said. “But the negative point is that sometimes it messes up with two different [pieces of] information, and then the whole context is wrong.”

Roth said Neuroflash is currently working on a fact-checking feature, as many users are requesting the sources behind answers provided by ChatGPT. He also believes that this type of software will grow as generative AI becomes more commonplace.

“So the possibility to generate any content will be for free basically,” Roth said. “But then the next phase will be about [creating] factual, right content with AI.”

OpenAI itself recently launched a tool that can detect if a piece of text has been written by a human or AI – including ChatGPT.

The spark for the AI arms race

The success of ChatGPT has sent shockwaves across the AI sector, as many companies have taken steps to implement its capabilities into their own offerings.

It could be argued that Microsoft began this new AI arms race when it revealed plans to give its search engine Bing a boost by integrating ChatGPT. This was done to give Bing an edge over Google, which holds a dominant share in the search engine market.

Google made a counter-play when it announced Bard, its own advanced chatbot that it plans to integrate into Google Search.

Beyond search engines, both tech giants have revealed plans to incorporate AI into their services, such as Microsoft adding OpenAI software into its Azure cloud services.

The rapid adoption of AI systems by these companies is a change from a more cautious approach last year, particularly from Microsoft. Last June, the company limited access to parts of its facial recognition technology and removed certain capabilities of the software.

The move was part of a broader push by Microsoft to tighten the usage of its AI products. The company also updated its Responsible AI Standard document, which sets out requirements for accountability in AI systems and their impact on society.

The AI arms race has extended beyond Microsoft and Google, however. There are reports that Chinese tech companies such as Alibaba are working on their own chatbot models in an attempt to challenge ChatGPT’s dominance.

The success of ChatGPT also had a knock-on effect for various smaller companies in the AI sector. Roth said Neuroflash and competitor companies all noted an uptick in users as ChatGPT grew in popularity.

Concerns in academia

Meanwhile, many educational institutions have raised concerns about ChatGPT, as students are using the chatbot to help write essays and complete quizzes.

A Study.com survey of 1,000 students found that nearly half of them had used ChatGPT to complete a test or quiz at home, while 53pc had used the software to help write an essay.

The prospect of students having entire essays written by AI has led some academics to call for a full ban on the use of ChatGPT.

Others have called for a more accepting approach, such as adjusting how student grades are assessed. Sam Illingworth, an associate professor at Edinburgh Napier University, recently said that ChatGPT has made him reconsider how to make assessments more authentic and tailored toward students.

Prof Alex Lawrence of Weber State University has spoken positively about the opportunities ChatGPT presents for academia. He described AI systems like ChatGPT as “world changing” and said he hasn’t been impacted by something so quickly since “the internet itself”.

“ChatGPT and its peers will revolutionise business in many ways,” Lawrence said. “My students have to know how to use this technology for their ultimate benefit…and not just as a better way to cheat the system.”


Leigh Mc Gowran is a journalist with Silicon Republic

editorial@siliconrepublic.com