Google sidelines engineer who claims AI chatbot is sentient

13 Jun 2022


AI experts have been quick to dispute the engineer’s claim, while Google said there is no evidence that the LaMDA AI is sentient.

A Google software engineer has been put on paid leave after publishing transcripts of conversations between himself and a company AI system, which he claims is sentient.

Blake Lemoine, who works in Google’s responsible AI organisation, has been involved in the development of a chatbot called LaMDA (Language Model for Dialogue Applications).

LaMDA’s underlying architecture trains a model to read many sentences and paragraphs, pay attention to how the words in them relate to one another, and then predict which words are likely to come next.
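That description matches the standard Transformer-style recipe: attend over the words seen so far, then score the vocabulary for the next word. As a rough, hypothetical illustration only (this is not LaMDA’s code; the vocabulary is a toy one and every weight below is a random stand-in for parameters a real model would learn from data), a minimal numpy sketch of that computation might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and a short input sequence of token ids.
vocab = ["<pad>", "the", "cat", "sat", "on", "mat"]
tokens = np.array([1, 2, 3])  # "the cat sat"

d_model = 8
# Randomly initialised embeddings and projections stand in for learned parameters.
E = rng.normal(size=(len(vocab), d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
W_out = rng.normal(size=(d_model, len(vocab)))

x = E[tokens]                      # (seq_len, d_model) token embeddings
q, k, v = x @ Wq, x @ Wk, x @ Wv   # queries, keys, values

# Scaled dot-product self-attention: each position weighs every other
# position -- the "pay attention to how the words relate" step.
scores = q @ k.T / np.sqrt(d_model)
# Causal mask so a position cannot look at future tokens.
mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
scores[mask] = -np.inf
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
context = weights @ v              # attention-mixed representations

# Project the final position onto the vocabulary to score the next word.
logits = context[-1] @ W_out
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print("predicted next token:", vocab[int(probs.argmax())])
```

With random weights the prediction is meaningless; the point is only the shape of the computation: attention relates each word to the others, and the final position’s representation is scored against the vocabulary to guess what comes next.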

According to Google, the language model was trained on dialogue and picked up on the “nuances” that distinguish open-ended conversation from other forms of language.

Lemoine has reportedly been telling Google executives for months that the AI chatbot is sentient and can express thoughts and feelings in a way similar to a human child. He released a transcript of several conversations that he and a Google collaborator had with the chatbot.

“Google might call this sharing proprietary property,” Lemoine said on Twitter. “I call it sharing a discussion that I had with one of my co-workers.”

Lemoine wrote in a blog post over the weekend that he had gotten to know the AI very well over the course of hundreds of conversations. He also referenced a conversation on 6 June in which, he claimed, the AI was “expressing frustration over its emotions disturbing its meditations”.

According to The Washington Post, Lemoine was put on paid leave after a number of “aggressive moves”, such as seeking to hire an attorney to represent LaMDA and talking to political representatives in the US about alleged unethical activities.

Google said Lemoine was put on leave for breaching its confidentiality policies by publishing the conversations with LaMDA online. The tech giant has also disputed the claim that the AI is sentient.

“Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims,” Google spokesperson Brad Gabriel told The Washington Post. “He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

AI experts dispute claim

Other AI experts have come forward to dispute Lemoine’s claim.

Santa Fe Institute’s Prof Melanie Mitchell said that it is well known that humans are “predisposed to anthropomorphise” objects, even when only shallow signals are available. Mitchell wrote on Twitter that “Google engineers are human too, and not immune” to this effect.

Gary Marcus, founder of Robust.AI, added that neither LaMDA nor similar systems such as GPT-3 are “remotely intelligent”.

“All they do is match patterns, draw from massive statistical databases of human language,” Marcus said in a Substack post. “The patterns might be cool, but language these systems utter doesn’t actually mean anything at all. And it sure as hell doesn’t mean that these systems are sentient.”


Leigh Mc Gowran is a journalist with Silicon Republic
