Would you trust a deepfake face more than a real one?

23 Feb 2022

Image: © Axel Bueckert/Stock.adobe.com

Researchers tested whether study participants could tell a deepfake human face from a real one – and which one looked more trustworthy.

If you were shown images of two similar human faces, would you be able to tell which one was a deepfake? Two researchers from Lancaster University and the University of California, Berkeley have tried to answer this question.

Deepfakes use a form of artificial intelligence to combine and superimpose existing images and videos to make fake images of people or make it look like a person has said or done something they have not.

While there have been examples of deepfakes used as a source of humour, there have also been fears that this technology could be used to discredit individuals or as a tool to interfere in elections.

The researchers said: “Perhaps most pernicious is the consequence that, in a digital world in which any image or video can be faked, the authenticity of any inconvenient or unwelcome recording can be called into question.”

In the study, published in the scientific journal PNAS, the team selected 400 synthesised images of human faces, varied in gender, age and ethnicity. They matched each deepfake image with a similar real human face.

In one test, the researchers asked 315 participants to examine the images and decide which one was real and which was a deepfake. The participants had an average accuracy of 48.2pc, slightly below the 50pc expected by chance.
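For readers curious what "below chance" means in practice, the short Python sketch below runs a two-sided binomial test of an observed accuracy against the 50pc chance level. The number of judgements is an assumed, illustrative figure and this is not the authors' own statistical analysis.

    from scipy.stats import binomtest  # SciPy 1.7+

    # Illustrative only: treat every real-vs-deepfake judgement as an
    # independent guess and compare the observed accuracy with 50pc chance.
    # The trial count below is assumed for the example, not from the study.
    n_trials = 10_000
    n_correct = round(0.482 * n_trials)  # the 48.2pc average accuracy reported

    result = binomtest(n_correct, n_trials, p=0.5, alternative="two-sided")
    print(f"Observed accuracy: {n_correct / n_trials:.1%}")
    print(f"p-value against 50pc chance: {result.pvalue:.3g}")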

In a second test, 219 people who had received training and trial-by-trial feedback were also asked to compare the images. This group had an average accuracy of 59pc, though the researchers noted that there was no improvement over time in the study, despite participants receiving feedback after each trial.

“The lack of improvement over time suggests that the impact of feedback is limited, presumably because some synthetic faces simply do not contain perceptually detectable artefacts,” the researchers said.

In the final test, 223 participants were asked to rate the trustworthiness of a selection of faces taken from the 800 images, on a scale of one to seven, with a higher rating meaning more trustworthy.

The deepfake faces were rated 7.7pc more trustworthy on average than the real human faces, a difference the researchers described as “significant”.
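The 7.7pc figure is a relative difference in mean ratings on the one-to-seven scale. The sketch below shows how such a gap is calculated; the ratings it uses are simulated to roughly reproduce that difference and are not the study's data.

    import numpy as np

    # Simulated ratings on the 1-7 trustworthiness scale, chosen so the
    # synthetic faces come out roughly 7.7pc higher on average.
    # Illustrative values only, not the data from the PNAS study.
    rng = np.random.default_rng(seed=1)
    real_ratings = rng.normal(loc=4.48, scale=1.0, size=5000).clip(1, 7)
    fake_ratings = rng.normal(loc=4.82, scale=1.0, size=5000).clip(1, 7)

    relative_diff = (fake_ratings.mean() - real_ratings.mean()) / real_ratings.mean()
    print(f"Synthetic faces rated {relative_diff:.1%} more trustworthy on average")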

“Our evaluation of the photorealism of AI-synthesised faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable – and more trustworthy – than real faces.”

The researchers said groups that are creating deepfake technology should consider if the potential risks outweigh the benefits and discouraged the creation of technology “simply because it’s possible”.

“At this pivotal moment, and as other scientific and engineering fields have done, we encourage the graphics and vision community to develop guidelines for the creation and distribution of synthetic media technologies that incorporate ethical guidelines for researchers, publishers and media distributors,” the team said in the study.

Last year, Cork teen Greg Tarr was named the overall winner in 2021’s BT Young Scientist and Technology Exhibition for his work in developing an improved method to detect deepfakes.


Leigh Mc Gowran is a journalist with Silicon Republic

editorial@siliconrepublic.com