Microsoft says it has reduced racial bias in its facial recognition tools

27 Jun 2018

Facial recognition software can be controversial. Image: pixinoo/Shutterstock

Microsoft says it has improved its facial recognition technology, which was called out in a study for racial bias.

Facial recognition is an area that is evolving rapidly, but it seems as if elements of the technology have some catching up to do.

Earlier in 2018, researchers at MIT found that facial recognition tools from IBM, Microsoft and Chinese firm Megvii were far more accurate at identifying light-skinned men than darker-skinned women. The Azure-based Face API from Microsoft had an error rate as high as 20.8pc when attempting to identify the gender of people of colour, particularly women with darker skin.

The MIT study – entitled Gender Shades – examined 1,270 images to create a benchmark for gender classification performance. While overall accuracy rates were high, there were marked gaps in error rates between demographic groups. All three companies classified male faces more accurately than female faces, and all performed worst on darker-skinned female faces. Of the faces misgendered by Microsoft’s Face API, 93.6pc were those of darker-skinned subjects.
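To illustrate the kind of evaluation the study describes, here is a minimal Python sketch that computes gender classification error rates per demographic subgroup. The data structure, group labels and predict_gender function are assumptions for illustration only, not the Gender Shades code or dataset.

```python
# Hypothetical sketch: per-subgroup error rates for a gender classifier,
# in the spirit of the Gender Shades benchmark. All names here are
# illustrative assumptions, not the study's actual code or data.
from collections import defaultdict

def subgroup_error_rates(samples, predict_gender):
    """samples: iterable of dicts with 'image', 'gender' and 'group' keys."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for sample in samples:
        totals[sample["group"]] += 1
        if predict_gender(sample["image"]) != sample["gender"]:
            errors[sample["group"]] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Stand-in data and a dummy classifier, just to show how the disparity
# between subgroups would surface in such a report.
samples = [
    {"image": None, "gender": "male", "group": "lighter-skinned male"},
    {"image": None, "gender": "female", "group": "darker-skinned female"},
]
rates = subgroup_error_rates(samples, lambda img: "male")
print(rates)  # the darker-skinned female subgroup shows the higher error rate
```

A real benchmark would aggregate over many labelled images per subgroup; the point of the sketch is simply that accuracy has to be reported per group, not only overall, for disparities like those above to become visible.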

A major challenge for the industry

Microsoft addressed these issues in a blogpost published this week. It said that the error rates have been reduced by as much as 20 times for men with darker skin and by nine times for all women. The company said: “The higher error rates on females with darker skin highlights an industry-wide challenge: artificial intelligence technologies are only as good as the data used to train them. If a facial recognition system is to perform well across all people, the training dataset needs to represent a diversity of skin tones as well as factors such as hairstyle, jewellery and eyewear.”
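Microsoft’s point that a system is only as good as its training data can be made concrete with a short sketch that audits a training set’s demographic composition before training. The group labels, metadata format and threshold below are illustrative assumptions, not Microsoft’s actual pipeline.

```python
# Hypothetical sketch: checking whether a training set represents a
# diversity of demographic groups. Labels and threshold are assumptions
# for illustration, not Microsoft's tooling.
from collections import Counter

def audit_composition(metadata, min_share=0.10):
    """metadata: list of dicts with a 'group' key (e.g. skin tone + gender)."""
    counts = Counter(item["group"] for item in metadata)
    total = sum(counts.values())
    return {
        group: (n / total, "OK" if n / total >= min_share else "UNDER-REPRESENTED")
        for group, n in counts.items()
    }

# Stand-in metadata skewed towards lighter-skinned male faces.
training_metadata = (
    [{"group": "lighter-skinned male"}] * 70
    + [{"group": "lighter-skinned female"}] * 25
    + [{"group": "darker-skinned female"}] * 5
)
for group, (share, status) in audit_composition(training_metadata).items():
    print(f"{group}: {share:.0%} {status}")
```

In this toy example the darker-skinned female group falls below the threshold and is flagged, which is the kind of imbalance Microsoft says it corrected by broadening its training data across skin tones, hairstyles, jewellery and eyewear.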

At the time of the MIT study’s publication, the research authors said: “Automated systems are not inherently neutral. They reflect the priorities, preferences and prejudices – the coded gaze – of those who have the power to mould artificial intelligence.

“We risk losing the gains made with the civil rights movement and women’s movement under the false assumption of machine neutrality. We must demand increased transparency and accountability.”

Facial recognition risks and controversy

While Microsoft has reduced some of the evident bias in its facial recognition system, doubts remain about the technology and its uses, particularly given the stringent immigration enforcement currently carried out by ICE in the US.

Earlier in June, the American Civil Liberties Union delivered a petition signed by more than 150,000 people to Amazon’s Seattle headquarters, requesting that the company stop providing facial recognition technology to government authorities. It read: “Amazon’s product, Rekognition, has the power to identify people in real time, in photos of large groups of people, and in crowded events and public places.

“At a time when we’re joining public protests at unprecedented levels, and discriminatory policing continues to terrorise communities of colour, handing this surveillance technology over to the government threatens our civil rights and liberties.”

Ellen Tannam was a journalist with Silicon Republic, covering all manner of business and tech subjects
