Twitter investigating possible racial bias in photo previews

21 Sep 2020

Image: © wachiwit/Stock.adobe.com

Twitter has confirmed it is looking into the discovery of what could be racial bias in its photo preview algorithm.

A number of Twitter users claimed to have stumbled on what appeared to be racial bias in the social network’s photo preview algorithm, which seemed to favour cropping out people of colour. Colin Madland was among those to flag the issue after side-by-side images of him and a black colleague repeatedly saw the latter cropped out of the tweet’s preview image.

Now, Mashable has reported that Twitter has caught wind of the potential bias in its algorithms and is investigating.

“Our team did test for bias before shipping the model and did not find evidence of racial or gender bias in our testing,” a spokesperson said. “But it’s clear from these examples that we’ve got more analysis to do. We’re looking into this and will continue to share what we learn and what actions we take.”

Other Twitter employees have also commented on the platform about the potential issue, including its chief design officer Dantley Davis and chief technology officer Parag Agrawal.

Until Twitter fully tests whether its systems are indeed racially biased, the claims being made by users cannot be verified. Two years ago, Twitter published a blog post discussing how its photo preview algorithms decide what to crop out.

‘An interesting finding’

It revealed that the AI is designed to focus on “salient” regions of an image, such as people or objects with high contrast.

Davis commented in a tweet that the issue affecting Madland may have to do with his facial hair in the picture “because of the contrast with his skin”. Once the facial hair was removed from the image, Davis said, Madland’s colleague appeared in the preview.

“This is an interesting finding and we’ll dig into other problems with the model,” Davis said.
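For illustration only, the snippet below is a minimal Python sketch of the general idea of contrast-driven cropping described above: it scores an image by local contrast and keeps the horizontal band with the highest score. It is not Twitter’s actual model, and the file name and preview height are hypothetical.

```python
# Minimal sketch of contrast-driven ("saliency"-style) cropping.
# NOT Twitter's model: it simply approximates salient regions with
# local contrast (gradient magnitude) and crops the most salient band.
import numpy as np
from PIL import Image


def saliency_crop(path, preview_height=300):
    img = Image.open(path).convert("RGB")
    arr = np.asarray(img, dtype=np.float32)

    # Grayscale, then local contrast via gradient magnitude.
    gray = arr.mean(axis=2)
    gy, gx = np.gradient(gray)
    contrast = np.sqrt(gx ** 2 + gy ** 2)

    # Score each possible horizontal band and keep the highest-scoring one.
    row_scores = contrast.sum(axis=1)
    band = min(preview_height, arr.shape[0])
    band_scores = np.convolve(row_scores, np.ones(band), mode="valid")
    top = int(band_scores.argmax())

    return img.crop((0, top, img.width, top + band))


if __name__ == "__main__":
    # Hypothetical file names, for demonstration only.
    saliency_crop("group_photo.jpg").save("preview.jpg")
```

A heuristic like this makes Davis’s observation easy to picture: a high-contrast feature such as dark facial hair against lighter skin can dominate the contrast score and pull the crop towards one person in a side-by-side image.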

Issues surrounding biases in image datasets have been highlighted in recent months, most notably with the much-cited ‘80 Million Tiny Images’ dataset, which may have contaminated AI systems with racist and misogynistic terms as well as other slurs.

Abeba Birhane of University College Dublin and the SFI software research centre Lero said linking images to slurs and offensive language infuses prejudice and bias into AI and machine learning models. Following the discovery, MIT researchers apologised and withdrew the dataset from use.

Colm Gorey was a senior journalist with Silicon Republic

editorial@siliconrepublic.com