Italy’s data protection authority is investigating whether OpenAI has breached GDPR, while Ireland’s DPC said it is looking into the concerns raised about the chatbot.
OpenAI’s hugely popular chatbot has hit a setback in its rapid growth, as an Italian watchdog has temporarily banned it from the country.
The decision by Italy’s independent privacy regulator caused the chatbot to go offline in Italy on Friday. But the move has been criticised by Italy’s prime minister, who described the watchdog’s decision as “disproportionate”, Reuters reports.
ChatGPT has grown rapidly since it launched last November, with estimates that it had reached more than 100m users by January.
The success of the chatbot has sparked an AI arms race, with tech giants racing to put its capabilities to use in their own systems.
But there have also been various concerns about how this software can be misused, with examples emerging of misinformation and malware. Now, attention is turning to how OpenAI handles the data it collects from its users.
Why did Italy ban ChatGPT?
The country’s privacy regulator issued a ban on ChatGPT due to alleged privacy violations. In a statement, the national data protection authority shared concerns over the amount of personal data being collected and stored by OpenAI.
Large language models like ChatGPT are trained on vast amounts of data, while the chatbot is constantly being improved and altered based on user feedback.
The Italian authority claims OpenAI processes personal data inaccurately and lacks a legal basis to justify its mass collection and storage of data. It also claims that there is no age verification system in place to protect children.
The watchdog also said OpenAI suffered a data breach on 20 March, which exposed the conversations and payment information of affected users.
The Italian organisation plans to investigate OpenAI to see if it has breached GDPR. The ChatGPT creator told the BBC that it complied with data laws.
Will other countries ban ChatGPT?
The results of Italy’s investigation could spur other data authorities into action if data risks or GDPR breaches are found.
The deputy commissioner of Ireland’s Data Protection Commission (DPC), Graham Doyle, told the Business Post that the agency had contacted Italy’s watchdog to learn more about why it banned ChatGPT.
Doyle also said the DPC will liaise with other European data protection authorities regarding the OpenAI chatbot.
Dan Shiebler, the head of machine learning at software company Abnormal Security, believes security concerns around these types of large language models are likely to recur in future. He also expects other regulators to follow Italy in banning ChatGPT.
“The EU in general has shown itself to be pretty quick to act on tech regulation (GDPR was a major innovation), so I’d expect to see more discussion of regulation from other member countries and potentially the EU itself,” Shiebler said.
Concerns around AI have been growing as more companies flock to utilise these systems. Last week, more than 1,100 people signed an open letter calling for a six-month pause on the training of AI models more powerful than GPT-4, OpenAI’s latest model.
These people included SpaceX and Twitter CEO Elon Musk, Skype co-founder Jaan Tallinn, Apple co-founder Steve Wozniak and MIT researchers.
However, the letter has faced a wave of criticism: some signatures were revealed to be fake, some signatories retracted their support, and AI experts shared concerns over how their research was cited in the letter, The Guardian reports. The organisation behind the open letter is primarily funded through the Musk Foundation.
Last month, OpenAI faced a wave of criticism online for not disclosing the training details behind GPT-4. The company claimed the lack of transparency was due to the “competitive landscape and safety implications” of large language models.