Research claims AI designed to read emotions fails miserably at spotting liars

5 Sep 2019


As AI is increasingly used to detect facial emotions and expressions, researchers have pointed to some algorithms’ failure to detect when someone is lying.

A team from the University of Southern California (USC) revealed new findings at an international conference in the UK, warning about the limits of AI’s ability to detect misrepresentation, otherwise known as lying.

In its research, the team said that algorithms fail basic tests as truth detectors. The finding comes from a pair of studies whose results undermine both popular psychology and common AI expression-understanding techniques, each of which assumes that a person’s inner thoughts can be read from their facial expressions.

“Both people and so-called ‘emotion reading’ algorithms rely on a folk wisdom that our emotions are written on our face,” said Jonathan Gratch, director for virtual human research at USC’s Institute for Creative Technologies.

“This is far from the truth. People smile when they are angry or upset, they mask their true feelings, and many expressions have nothing to do with inner feelings, but reflect conversational or cultural conventions.”

‘These techniques have simplistic assumptions built into them’

While the concept of a ‘poker face’ that masks a person’s true feelings is not new, algorithms aren’t so good at catching duplicity, even as machines are increasingly deployed to read human emotions and inform life-changing decisions.

As part of this latest research, the USC team – in conjunction with researchers from the University of Oxford – wanted to examine spontaneous facial expressions in social situations. In one study, the team created a game in which 700 people’s faces were tracked while they played for money. Afterwards, the participants were asked to review their behaviour and how they had used their facial expressions to gain an advantage.

The results showed that smiles were the only expressions consistently provoked, regardless of the reward or the fairness of outcomes. Additionally, participants were largely poor judges of facial emotion, suggesting that people smile for many reasons, not just happiness.

“These discoveries emphasise the limits of technology use to predict feelings and intentions,” Gratch said.

“When companies and governments claim these capabilities, the buyer should beware because often these techniques have simplistic assumptions built into them that have not been tested scientifically.”

Amazon recently announced an update to its Rekognition AI software, which is designed to detect facial expressions, enabling it to recognise ‘fear’. Civil rights groups responded strongly to the news, with one claiming the company was “going to get someone killed”.
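For readers curious what such an ‘emotion reading’ API call looks like in practice, below is a minimal sketch using the AWS SDK for Python (boto3). The image file name is hypothetical; the emotion labels, including FEAR, are among those Rekognition returns as confidence scores, which is exactly the kind of output the USC researchers caution against treating as a window into inner feelings.

```python
# Minimal sketch: querying Amazon Rekognition for emotion predictions.
# Assumes boto3 is installed and AWS credentials are configured;
# 'face.jpg' is a hypothetical local image.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("face.jpg", "rb") as f:
    image_bytes = f.read()

# Attributes=['ALL'] requests the full set of face attributes,
# including per-emotion confidence scores.
response = rekognition.detect_faces(
    Image={"Bytes": image_bytes},
    Attributes=["ALL"],
)

for face in response["FaceDetails"]:
    for emotion in face["Emotions"]:
        # Labels include HAPPY, SAD, ANGRY, CALM and, since the
        # 2019 update, FEAR. These are statistical guesses about
        # outward expression, not a reading of true inner emotion.
        print(f"{emotion['Type']}: {emotion['Confidence']:.1f}%")
```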

Colm Gorey was a senior journalist with Silicon Republic

editorial@siliconrepublic.com