Glue on pizza? Google’s AI Overview shares wacky answers

24 May 2024

Image: © idelotama/Stock.adobe.com

Google’s AI boost for Search has had a rocky start, with users sharing many examples of the feature giving confusing and even dangerous answers to their queries.

Whether you love or hate generative AI, it can’t be denied that these models are prone to making mistakes – or ‘hallucinations’.

That is the polite industry term for when one of these powerful systems gives an incorrect response, with mistakes ranging from the mildly confusing to the dangerous and disturbing.

There have been plenty of examples of AI chatbots making mistakes like these over the years, but now Google is making its own mark with the chaotic launch of its AI Overviews feature.

Announced at Google I/O last week, this feature is designed to give AI-boosted answers for Search queries, with generated summaries, tips and links to referenced sites. Google launched the experimental feature in the US and has an ambitious goal of bringing it to more than 1bn people by the end of 2024.

But the launch of AI Overviews has not gone well, as many users are sharing examples of the AI-generated answers being strange, to say the least.

One user claimed Google’s AI Overviews answered that former US president Andrew Johnson earned 14 degrees, graduating multiple times between 1947 and 2012. Johnson died in 1875.

Another example being widely shared is AI Overviews claiming that non-toxic glue can be added to pizza sauce to “give it more tackiness”. This appears to be based on a post from a Reddit user 11 years ago.

One answer from AI Overviews claims that parrots are able to do “a variety of jobs”, including housekeeping, engineering and “prison inmate”.

Other examples being shared fall into the more dangerous and disturbing category. One user shared a post showing Google’s AI Overviews claiming that adding more oil to a cooking oil fire “can help put it out”.

Google spokesperson Meghann Farnsworth told The Verge that these mistakes came from “generally very uncommon queries, and aren’t representative of most people’s experiences”.

While many of the responses are funny and (hopefully) not being taken seriously by users, they highlight one issue surrounding generative AI systems – their ability to spread misinformation.

When Meta’s AI tool Galactica was being tested – shortly before it was shut down for making too many errors – testers claimed it produced biased content that looked real but was essentially “pseudo-science”.


Leigh Mc Gowran is a journalist with Silicon Republic

editorial@siliconrepublic.com