Object recognition AI from the world’s biggest tech companies struggles to identify goods from poorer countries compared with wealthier ones.
One of the biggest criticisms of artificial intelligence (AI) development right now concerns implicit biases built into visual recognition systems, whether intentionally or unintentionally. For example, MIT graduate student Joy Buolamwini previously showed how the AI she was working on couldn’t recognise her face because it had not been trained on a sufficiently diverse range of skin tones and facial structures.
Now, Facebook’s AI research lab has published findings that show such biases also creep into some of the world’s most popular visual object recognition systems. As reported in The Verge, the study involved algorithms from Microsoft Azure, Clarifai, Google Cloud Vision, Amazon Rekognition and IBM Watson, which were tasked with identifying images of common household items from a large global dataset.
The dataset covers 117 categories of household items and documents the monthly income of households in countries across the world, ranging from $27 in Burundi to $10,098 in China.
When the algorithms were shown the same type of product from different parts of the world, the researchers found they were roughly 10pc more likely to fail to identify items from a household earning less than $50 a month than from one earning more than $3,500 a month.
Severely undersampled poorer nations
In terms of absolute accuracy, the gap widened further: the algorithms were as much as 20pc more accurate at identifying objects from a wealthy country than from nations at the lower end of the earnings scale. In one example documented in the study, the algorithms mistook a collection of bars of soap on a dirty surface for food, while correctly identifying a plastic liquid-soap pump in a tiled bathroom.
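To make the two figures above concrete, here is a minimal sketch (not the study’s actual code) of how a relative failure-rate increase and an absolute accuracy gap could be computed from per-image recognition results. All of the outcome data below is hypothetical.

```python
# Illustrative sketch: computing the two accuracy-gap metrics reported
# in the study from per-image results. All numbers are hypothetical.

def accuracy(results):
    """Fraction of images correctly identified (1 = correct, 0 = miss)."""
    return sum(results) / len(results)

# Hypothetical outcomes per image for the two income brackets
low_income_results = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]   # households < $50/month
high_income_results = [1, 1, 1, 0, 1, 1, 1, 1, 1, 1]  # households > $3,500/month

acc_low = accuracy(low_income_results)    # 0.6
acc_high = accuracy(high_income_results)  # 0.9

# Absolute accuracy gap between wealthy and poor households
absolute_gap = acc_high - acc_low

# Relative increase in the chance of failure for the poorer bracket
relative_failure_increase = (1 - acc_low) / (1 - acc_high) - 1
```

With these made-up numbers the absolute gap is 30 percentage points and the failure rate for the poorer bracket is four times higher; the study’s real figures (roughly 10pc and 20pc) came from far larger samples.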
Another stark example of this bias was a 40pc accuracy gap between images of living rooms in wealthier and poorer homes. The researchers attributed this to the relative lack of consumer goods in homes in poorer nations compared with those in wealthy countries.
In trying to explain the errors, the researchers said one likely reason is that the training data for these popular algorithms is drawn largely from wealthier parts of the world and “severely undersample[s] visual scenes in a range of geographical regions with large populations, in particular in Africa, India, China and south-east Asia.”
When discussing potential remedies, Facebook AI scientist Laurens van der Maaten said: “The most important step in combating this sort of bias is being much more careful about setting up the collection process for the training data that is used to train the system.”
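The kind of data-collection check van der Maaten describes can be sketched very simply: compare each region’s share of a training set with its share of world population and flag regions that fall well short. This is an illustrative example only, not Facebook’s tooling; the region labels, counts and threshold are all hypothetical.

```python
# Illustrative sketch: auditing geographic coverage of a training set.
# Region labels, counts, population shares and the 50pc threshold are
# hypothetical, chosen only to demonstrate the idea.
from collections import Counter

def coverage_report(image_regions, population_share):
    """Compare each region's share of training images with its share of
    world population; flag regions whose data share falls below half
    of their population share as undersampled."""
    counts = Counter(image_regions)
    total = len(image_regions)
    report = {}
    for region, pop_share in population_share.items():
        data_share = counts.get(region, 0) / total
        report[region] = {
            "data_share": data_share,
            "population_share": pop_share,
            "undersampled": data_share < 0.5 * pop_share,
        }
    return report

# Hypothetical training set: 100 images, heavily skewed towards North America
images = (["north_america"] * 60 + ["europe"] * 25
          + ["africa"] * 5 + ["asia"] * 10)
pop = {"north_america": 0.05, "europe": 0.10, "africa": 0.17, "asia": 0.60}

report = coverage_report(images, pop)
```

On this made-up dataset the audit flags Africa and Asia as undersampled, mirroring the skew the researchers found in real training corpora.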