Google opens its AI Test Kitchen to give users a taste of new tech

31 Aug 2022


The app lets users try out demos of emerging AI technology being developed by Google, such as the LaMDA chatbot.

Google is launching the AI Test Kitchen app, where users can try out experimental AI systems such as the LaMDA chatbot.

The tech giant first announced the app at its developer conference earlier this year, saying it would let users learn about emerging AI technology and give feedback on developments.

The app is being rolled out gradually, initially available to small groups of users in the US on Android, with an iOS version planned in the coming weeks. People can register their interest on the app website to join the waitlist.

The app will offer rotating demos on novel technologies being developed by Google, with the initial focus being on generative language models.

These models, such as LaMDA, can be trained on large volumes of text, paying attention to how the words relate to one another and then predicting what words will come next.
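To give a rough sense of what that next-word prediction looks like in practice, here is a minimal sketch. LaMDA itself is not publicly available, so the example uses the openly released GPT-2 model through the Hugging Face transformers library purely as an illustration of the same underlying idea.

```python
# Illustrative sketch only: this is not LaMDA or Google's code.
# It uses the open GPT-2 model via Hugging Face transformers to show
# how a generative language model continues a prompt one token at a time.
from transformers import pipeline

# A text-generation pipeline predicts likely next words based on
# patterns the model learned from its training text.
generator = pipeline("text-generation", model="gpt2")

prompt = "Imagine you are standing at the edge of a vast ocean"
results = generator(prompt, max_new_tokens=20, num_return_sequences=1)

print(results[0]["generated_text"])
```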

Google’s first three AI Test Kitchen demos will let users test LaMDA’s capabilities.

The first lets users name a place, with the AI offering paths to “explore your imagination”. The second lets users name a goal or topic, which LaMDA breaks down into multiple lists of subtasks. Finally, users can have a conversation with LaMDA about dogs to test the chatbot’s ability to stay on topic, even when the user tries to steer the conversation elsewhere.

“We’ve been testing LaMDA internally over the last year, which has produced significant quality improvements,” Google said in a recent blogpost. “More recently, we’ve run dedicated rounds of adversarial testing to find additional flaws in the model.”

Google said it has added multiple layers of protection to the AI Test Kitchen to minimise the risks that can crop up with a system like LaMDA, such as biases or toxic responses.

It has designed its systems to automatically detect and filter out words that violate its policies, preventing the generation of content that is sexually explicit, hateful, offensive, violent, dangerous or illegal.

Because AI language models learn from their training data, issues can surface in their responses. For example, Meta released its most advanced AI chatbot earlier this month, but it soon drew attention for inaccurate answers, such as claiming Donald Trump is still the US president.

LaMDA made headlines in June when a Google software engineer claimed the chatbot was sentient, a claim disputed by both Google and AI experts.


Leigh Mc Gowran is a journalist with Silicon Republic

editorial@siliconrepublic.com