Can AI really plan your meals?

4 Oct 2023

Image: © ActionGP/Stock.adobe.com

A wave of AI meal planners is flooding the market, but some of these bots have given suggestions that range from merely flavourless to downright dangerous.

Consumers are being bombarded with endless potential applications for artificial intelligence (AI), but it's important to remember that this is still a developing technology.

Supporters of this year's AI wave point to the power of these systems, which can generate photorealistic imagery, answer various questions and enhance existing tools such as search engines.

But on the flip side, these machines are prone to glitches and errors. Even ChatGPT, the most mainstream and powerful of these tools, has been caught sharing factually incorrect and, in some cases, potentially defamatory information.

People already use technology to assist with their personal health, such as smartwatches that track their weekly exercise and monitor their heart rate. So it's perhaps unsurprising that the market is being flooded this year with AI-powered apps that plan meals.

A wide variety of AI-powered apps offer meal-planning services: some are advertised as making grocery shopping more efficient, while others promise personalised meal plans for improving fitness or losing weight.

Even larger tech companies are looking into this field, with Samsung releasing its own AI-powered food and recipe platform in 104 countries at the end of August. Meanwhile, ChatGPT creator OpenAI advertised the idea of meal-planning in its latest update, which gave the chatbot the ability to “see, hear, and speak”.

“Show ChatGPT one or more images,” the company said on X. “Troubleshoot why your grill won’t start, explore the contents of your fridge to plan a meal or analyse a complex graph for work-related data.”

Open the fridge doors, HAL

But while the idea of using AI to make planning meals seems appealing, these chatbots aren’t always reliable.

One New Zealand supermarket – Pak‘n’Save – has its own AI app called Savey Meal-bot that gained notoriety in August for creating strange and – in some cases – dangerous meal suggestions. The supermarket claims its app is powered by OpenAI technology.

The Guardian reported at the time that one of the bot's recipes for a "non-alcoholic beverage" contained ingredients that would produce chlorine gas. The bot's terms and conditions make it clear that the supermarket accepts no responsibility if its recipes cause harm.

“The output is generated by an AI tool and is not generated by or reviewed by a human being,” the app’s terms and conditions state. “Due to the nature of generative AI tools, the output may be inaccurate or include errors.”

The Guardian reported that the app had a warning notice that stated the company did not guarantee that any recipe would be complete, balanced, or “suitable for consumption”.

While not every example is as deadly as chlorine gas, some chatbots have been criticised for the quality of the meal plans they provide. An article in Health earlier this year claimed ChatGPT could provide a technically nutritious and healthy meal plan, but that it lacked variety and flavour.

In May, Chloe Gray wrote in Women’s Health about using ChatGPT to prepare a meal plan to support a varied exercise routine.

The resulting plan would have cut her calories by more than a third, despite being designed to support intense workouts and even though weight loss was "never listed" as a personal goal.


Leigh Mc Gowran is a journalist with Silicon Republic

editorial@siliconrepublic.com