Dr Abeba Birhane spoke to SiliconRepublic.com about the dangers of confusing predictive AI systems with actual human consciousness.

While advances in generative AI can give people, and have given them, an unnerving feeling that the technology really understands them, it's important to know that there is no consciousness there.

What these systems are actually doing is taking huge volumes of data and 'learning' patterns that let them predict and generate new sentences. The danger comes when their capabilities are greatly exaggerated, according to Dr Abeba Birhane.

“If something sounds too good to be true, it absolutely is too good to be true, so when we hear about these kinds of claims, we have to dig a little deeper to actually verify if the claim is true,” she said.

“Meta’s model Galactica was found to generate harmful and inaccurate stuff, so it was taken down just three days after its release. It would, for example, generate a scientific paper on the health benefits of eating crushed glass, it would give you the nutrition breakdown of this amount of crushed glass per day or whatever.”

She added that there is so much exaggeration about these so-called amazing new technologies that people can forget what’s actually happening under the hood.

“All they do is just predict the next word based on the patterns that they have seen,” she said. “There is no consciousness, there is no actual understanding so to speak.”
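The next-word prediction Birhane describes can be sketched with a toy bigram model. This is purely illustrative, not how any production system works: the corpus, variable names and function below are invented for the example, and real models use neural networks trained on vastly more data, but the principle of predicting the next word from observed patterns is the same.

```python
# A toy bigram "language model": it predicts the next word purely from
# patterns counted in its training text, with no understanding involved.
from collections import Counter, defaultdict

# Tiny made-up training corpus (illustrative only).
corpus = "the model predicts the next word and the model has no understanding".split()

# Count which word follows which in the training text.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed follower of `word`, or None."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "model" follows "the" most often in this corpus
```

The model emits whatever word most often followed the prompt word in its training data; it has no notion of whether the result is true, which is the gap between pattern-matching and understanding that Birhane points to.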


Words by Jenny Darmody