AI ‘mirror’ that ranks attractiveness reveals tech’s major flaw

24 Jul 2018


Image: bybamv/Shutterstock


A new AI ‘mirror’ designed to determine a person’s attractiveness and personality just by looking at a photo reveals some major flaws.

Recent attendees of Inspirefest 2018 would have heard all about the issues of algorithmic bias in artificial intelligence (AI) from Alexa Gorman of SAP.io Fund and Foundry, who explained how a machine can absorb unconscious bias from the social perceptions of its programmers.

Now, a new study conducted by a team of researchers from the University of Melbourne has looked at AI bias using a new system called Biometric Mirror.

The AI was designed to detect and display a person’s personality traits and physical attractiveness based solely on a photo of their face.

When a person stands in front of the AI’s gaze, the system detects their facial characteristics within seconds and compares them to thousands of facial photos evaluated by a group of crowdsourced responders.

A total of 14 characteristics are assessed, ranging from familiar ones such as age and gender to more unusual ones such as ‘weirdness’ and emotional stability.
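Biometric Mirror’s actual code is not public, but the approach the researchers describe, matching a detected face against crowd-rated photos and borrowing those ratings, can be sketched roughly as a nearest-neighbour lookup. Everything below (the feature vectors, the trait names, the scores) is invented purely for illustration:

```python
# Illustrative sketch only: Biometric Mirror's real implementation is not
# public. This mimics the described idea of comparing a face's features to
# a crowd-rated dataset and averaging the nearest neighbours' ratings.
# All data here is made up for demonstration.

import math

# Hypothetical dataset: facial feature vectors with crowdsourced trait scores
RATED_FACES = [
    {"features": [0.2, 0.8], "ratings": {"attractiveness": 6.1, "weirdness": 2.3}},
    {"features": [0.9, 0.1], "ratings": {"attractiveness": 4.0, "weirdness": 7.8}},
    {"features": [0.3, 0.7], "ratings": {"attractiveness": 5.5, "weirdness": 3.0}},
]

def estimate_traits(features, dataset, k=2):
    """Average the crowd ratings of the k most similar rated faces."""
    by_distance = sorted(
        dataset,
        key=lambda face: math.dist(features, face["features"]),
    )
    nearest = by_distance[:k]
    traits = {}
    for name in nearest[0]["ratings"]:
        traits[name] = sum(f["ratings"][name] for f in nearest) / k
    return traits

print(estimate_traits([0.25, 0.75], RATED_FACES))
# The output is only as good as the crowd's opinions: the system inherits
# whatever biases the raters held, which is precisely the project's point.
```

Note that nothing in this sketch measures personality at all; it only propagates the subjective judgements already baked into the rated dataset, which is the flaw the Melbourne team set out to expose.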

Because Biometric Mirror is limited to using a public perception of facial appearance, it is simply incapable of determining a person’s real personality.

And yet, it will still attempt to do so.

Biometric Mirror in action

Biometric Mirror uses an open dataset of thousands of facial images and crowdsourced evaluations. Image: Sarah Fisher/University of Melbourne

A slippery slope

If this sounds somewhat alarming, that is because its creators describe the AI’s claim to determine someone’s personality from their appearance as a worrying sign of things to come.

“With the rise of AI and big data, government and businesses will increasingly use CCTV cameras and interactive advertising to detect emotions, age, gender and demographics of people passing by,” said Dr Niels Wouters, who led the project.

“Our study aims to provoke challenging questions about the boundaries of AI. It shows users how easy it is to implement AI that discriminates in unethical or problematic ways, which could have societal consequences.”

He continued: “The use of AI is a slippery slope that extends beyond the realm of shopping and advertising.

“Imagine having no control over an algorithm that wrongfully considers you unfit for management positions, ineligible for university degrees, or shares your photo publicly without your consent.”

Colm Gorey is a journalist with Siliconrepublic.com

editorial@siliconrepublic.com