A study that claimed AI could determine a person’s sexual orientation by scanning their face has been widely condemned, with calls on Stanford University to clamp down on ‘dangerous and flawed research’.
The use of artificial intelligence (AI) to analyse facial features has taken research down troubling paths before. One example came when MIT graduate student Joy Buolamwini found that a facial recognition system couldn’t read her face because it hadn’t been trained to identify non-white people.
Now, the LGBTQ community are labelling a new study from Stanford University researchers, which claims that AI can identify a person’s sexual orientation based just on their facial features, as “dangerous and flawed research”.
The original study claimed that an algorithm, presented with more than 14,000 photos of white Americans obtained from various dating websites, was able to distinguish between straight and gay people.
It suggested that the algorithm could determine a man’s sexuality 81pc of the time, and a woman’s 71pc of the time, and that lesbians could be identified because they tend to have a “strong jaw”.
As news of the study began to spread online, outrage erupted, with the targets being both Stanford University for publishing the study, and a number of news outlets for not highlighting the limitations of the study.
People criticised the fact that it focused only on white people and failed to consider different races, non-binary sexual orientations and the elderly.
In a joint statement condemning the study, the Gay and Lesbian Alliance Against Defamation (GLAAD) and the Human Rights Campaign (HRC) called on the media to “expose dangerous and flawed research” when it is publicised.
GLAAD’s chief digital officer, Jim Halloran, said: “At a time where minority groups are being targeted, these reckless findings could serve as a weapon to harm both heterosexuals who are inaccurately outed, as well as gay and lesbian people who are in situations where coming out is dangerous.”
HRC’s director of public education and research, Ashland Johnson, added that Stanford University needs to make it clear that it does not stand behind the peer-reviewed study.
“Stanford should distance itself from such junk science rather than lending its name and credibility to research that is dangerously flawed and leaves the world – and, in this case, millions of people’s lives – worse and less safe than before.”
Authors issue strong response
The two researchers involved in the study – Prof Michal Kosinski and Yilun Wang – have issued a lengthy response, calling the reaction a “knee-jerk dismissal” and a “smear campaign” against both them and science.
“They dismissed our paper as ‘junk science’ based on the opinion of a lawyer and a marketer, who don’t have training in science,” it said.
“They spend their donors’ money on a PR firm that calls journalists who covered this story, to bully them into including untruthful allegations against the paper.”
In response to the criticism that it only focused on white people, the pair said: “Non-white individuals were not represented in sufficiently large numbers in our dataset. We hope that other studies will look at faces of people of other ethnicities in the future.”
They also said that they hope their study is wrong, and are now calling for further research to prove that they might have “sounded a false alarm”.