How many calories am I eating? Google might know…

3 Jun 2015

Google’s AI team is working on an application that can count the calories of your food by looking at a picture, measuring it inaccurately and letting you fix it. Cheers?

“If it only works 30pc of the time, it’s enough that people will start using it, we’ll collect data, and it’ll get better over time.”

Those are the heart-warming words of Google research scientist Kevin Murphy, who recently unveiled his company’s latest AI project, Im2Calories, which lets you upload a snap of your beans on toast before giving you an estimate of the number of calories it contains.

Sure, it’s most likely wrong. Sure, it focuses on a measurement that is invariably inaccurate even in ‘listed’ results. Sure, you, the no-doubt nutritional expert, do all the work. But hey, it’s calories!

Google’s acquisition of DeepMind early last year set the ball rolling for projects such as this. In January 2014 the company beat Facebook to the deal, nabbing a company that created algorithms for simulations, e-commerce and games.

DeepMind’s expertise in analytical learning with big data, now a prerequisite for effective AI, allowed for a whole new world of ideas.

How many calories am I eating?

As reported in Popular Science, Murphy announced Im2Calories last week at an event in Boston.

“To me it’s obvious that people really want this and this is really useful,” said Murphy. “Ok fine, maybe we get the calories off by 20pc. It doesn’t matter. We’re going to average over a week or a month or a year.

“And now we can start to potentially join information from multiple people and start to do population-level statistics. I have colleagues in epidemiology and public health, and they really want this stuff.”
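Murphy’s error-averaging argument can be sketched with a toy simulation. Everything here is an illustrative assumption, not Google’s method: a hypothetical 600 kcal meal, and a uniform ±20pc error on each photo-based estimate. The point is simply that errors partly cancel when you average over many meals:

```python
import random

random.seed(42)

TRUE_CALORIES = 600   # hypothetical true value for each meal
ERROR = 0.20          # each single estimate assumed off by up to 20pc

def estimate(true_cal):
    """One Im2Calories-style guess: the true value plus random error."""
    return true_cal * (1 + random.uniform(-ERROR, ERROR))

# A single estimate can be off by up to 20pc...
single = estimate(TRUE_CALORIES)

# ...but averaging a week or a month of estimates pulls the result
# towards the true value, since over- and under-estimates cancel.
week = [estimate(TRUE_CALORIES) for _ in range(7)]
month = [estimate(TRUE_CALORIES) for _ in range(30)]

print(f"single estimate : {single:.0f} kcal")
print(f"weekly average  : {sum(week) / len(week):.0f} kcal")
print(f"monthly average : {sum(month) / len(month):.0f} kcal")
```

Run it a few times with different seeds and the weekly and monthly averages cluster much more tightly around 600 than any single guess does, which is the whole of Murphy’s point.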

It sounds a bit loose, to be honest, until Murphy gets to the crux of the idea, which is far more wide-ranging than food.

Advances in image-recognition software have been coming on stream quite regularly. Indeed, last November Google itself revealed its attempts at AI captioning of images.

Cars, so obvious!

“We’ve developed a machine-learning system that can automatically produce captions to accurately describe images the first time it sees them,” claimed Google in a blog post at the time.

“This kind of system could eventually help visually-impaired people understand pictures, provide alternate text for images in parts of the world where mobile connections are slow, and make it easier for everyone to search on Google for images.”

Now, with Im2Calories, Murphy points to AI-driven cars, an area of huge interest for Google (and Apple), as a fine example of where this could lead us.

“Suppose we did street-scene analysis. We don’t want to just say there are cars in this intersection. That’s boring,” he said.

“We want to do things like localise cars, count the cars, get attributes of the cars, which way are they facing. Then we can do things like traffic-scene analysis, predict where the most likely parking spot is.”

Now, a car that knows ahead of time which street is most likely to be clear to drive through, or where you can park: that would be a killer app.

Triple-decker sandwich image, via Shutterstock

Gordon Hunt was a journalist with Silicon Republic

editorial@siliconrepublic.com