The study suggests the AI model GPT-3 is a ‘double-edged sword’, as it is more effective than humans at spreading both accurate information and misinformation.
As AI continues to evolve, a new study claims these systems can trick people into believing false information better than humans can.
The study, published in the journal Science Advances, looked at GPT-3, the large language model created by OpenAI and a precursor to the popular ChatGPT.
The researchers showed participants tweets that were written by either people or the AI model. The participants were then asked to determine if the information in the tweets was accurate or inaccurate. They were also asked to guess if the tweets were created by a human or by AI.
The study claims that GPT-3 is a “double-edged sword”, as it is able to produce easy-to-understand, accurate information, but can also produce “more compelling disinformation”.
The 697 participants were able to recognise “organic false tweets” – false information written by humans – better than false tweets made by AI. The results also suggest that people can detect true information more easily when it is created by an AI than by a human.
“This indicates that human respondents can recognise the accuracy of tweets containing accurate information more often when such tweets are generated by GPT-3, when compared with organic tweets retrieved from Twitter,” the study said.
“Similarly, this means that disinformation tweets generated with GPT-3 achieve their deceiving goal more often when compared with disinformation tweets generated organically.”
The results suggest that GPT-3 is more effective at both informing and disinforming people, though the study claimed the AI model also “disobeyed” some requests to produce misleading content.
Misinformation has been a growing concern online for years and the rise of AI technology has only added to these fears. In May, Google revealed new tools to help users spot misleading or AI-generated images, in a bid to tackle misinformation and give users “the full story”. Ireland’s Government also set up a working group earlier this year to create a national anti-disinformation strategy.
Accurate AI news
In a bid to share accurate AI-related news, CeADAR, Ireland’s National Centre for Applied AI, has upgraded its AI NewsHub. This hub was launched last October to collate news from the global AI community.
To meet growing demand, CeADAR has added dedicated channels for US and UK users, due to the high level of engagement reported in these countries. The NewsHub has reached users in 34 countries to date, the national centre claims.
The updated platform also offers a personalised news feed, letting users choose publisher categories and industry sectors, and find news on specific AI-related topics.
“Our goal is to make a smart and easy-to-use AI News platform that provides personalised content,” said Dr Arsalan Shahid, the head of CeADAR’s connect group. “Because our member companies fall into different industry verticals – such as healthcare, finance and pharma – we wanted to be able to serve them only the news that interests them.”