Why claiming AI will cause human extinction is a bit of a stretch

1 Jun 2023

Image: Barry O'Sullivan

Prof Barry O’Sullivan of UCC told SiliconRepublic.com that the recent ‘scaremongering’ around AI distracts us from the real issues at hand.

Earlier this week, the Center for AI Safety issued a bold statement.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the one-line statement by the San Francisco-based organisation read.

While the statement itself is weighty, the list of signatories carries even more weight. It includes the chief executives of some of the world’s leading AI companies: Google’s DeepMind, Anthropic and, of course, ChatGPT developer OpenAI.

With a global AI gold rush now in full swing, many – including those behind the technology – are expressing concern that it could cause widespread harm across all aspects of human life, from work and education to health.

Just weeks ago, OpenAI CEO Sam Altman stunned many in the US Congress when he urged lawmakers to regulate the burgeoning AI sector while it is still in its early stages in order to prevent any potential damage.

“We believe it is essential to develop regulations that incentivise AI safety while ensuring that people are able to access the technology’s many benefits,” Altman said at the time. “It is also essential that a technology as powerful as AI is developed with democratic values in mind.”

Some others are much more perturbed by the nascent technology.

Last month, Geoffrey Hinton, a pioneer in AI research, left his job at Google to speak about the dangers of artificial intelligence. Hinton shared concerns about AI being used for nefarious purposes, with a potential risk to the job market and the current arms race between tech giants.

‘Irresponsible scaremongering’

Despite all the global concern, a cohort of AI experts believes that concern should be warranted, measured and not blown out of proportion.

Barry O’Sullivan, a leading AI expert in Ireland and a professor at the School of Computer Science in University College Cork, thinks that this so-called existential threat narrative is “at best, irresponsible”.

“It distracts from the important and real issues around the deployment of AI technologies,” he told SiliconRepublic.com.

“These include how AI systems can amplify human bias to the detriment of individuals and society, as well as ensuring that AI-enabled decision-making is fair, transparent and accountable.”

O’Sullivan, who was elected a fellow of the Association for the Advancement of Artificial Intelligence a year ago, has made significant contributions to the field of constraint programming and is an outspoken academic within the international AI community.

He is a recipient of one of the top computer science prizes in the world, the Nerode Prize. O’Sullivan also helped launch a free online course called Elements of AI at Silicon Republic’s Future Human event in 2020.

And now, he thinks that the latest wave of AI worriers is overstating the technology’s potential impact by comparing it to pandemics and nuclear war. In a recent tweet, he called this narrative “irresponsible scaremongering”, since ‘superhuman AI’ does not yet exist.

“The humanity-level risks we need to be concerned about are clearly set out under the UN’s Sustainable Development Goals, especially climate change, global poverty, and the protection of human rights,” he told SiliconRepublic.com.

“In this respect, statements about the existential threats to humanity posed by AI are tone-deaf.”


Vish Gain is a journalist with Silicon Republic

editorial@siliconrepublic.com