‘People of colour aren’t empowered to make changes they’re brought in to make’

2 Sep 2020

Inioluwa Deborah Raji was included in the Top Innovators Under 35 list by MIT Technology Review in 2020. Image: Inioluwa Deborah Raji

Inioluwa Deborah Raji of the AI Now Institute is one of a growing number of AI developers trying to bring awareness of racial bias in tech to the world.

The use of AI has accelerated during the Covid-19 pandemic. Cameras that can read your body temperature or track your location are no longer limited to countries such as China; all over the world, machines are being trusted to scan millions of people to limit and track the spread of the virus.

But as we’ve recently seen, serious underlying flaws have been found in the way many AI systems have been built, raising concerns about bias, racial profiling and a seemingly broken design premise in AI.

Abeba Birhane of University College Dublin and the SFI software research centre Lero, who previously featured on Siliconrepublic.com, recently helped uncover how the infamous ‘80 Million Tiny Images’ dataset may have contaminated AI systems with racist, misogynistic and other slurs.

While the issue has since been flagged and future use of the dataset discouraged, this is not the first time alarm bells have been rung over a problematic dataset, as Inioluwa Deborah Raji can attest.

An industry-wide problem

The 24-year-old University of Toronto graduate is now a tech fellow at the AI Now Institute and was recently named a visionary on MIT Technology Review's Innovators Under 35 list for her research into racial bias in facial recognition technology and her advocacy for change in big tech.

Speaking with Siliconrepublic.com, Raji described how her eyes were opened to the systemic problem while interning at Clarifai, a computer vision and AI company that works with clients on facial recognition technology. While building a model that was supposed to flag inappropriate images for a client, she noticed that the flagged photos featured far more people of colour than white people.

As it turned out, the AI being trained to flag 'not safe for work' images was being taught that stock images made up predominantly of white people were safe. Meanwhile, images from porn, where actors are more racially diverse, were flagged as unsafe, baking racial bias into the model.
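The failure mode Raji describes, where a model learns a demographic attribute as a proxy for the label because the two are correlated in skewed training data, is straightforward to reproduce. Below is a minimal, hypothetical sketch using synthetic data (nothing here reflects Clarifai's actual model or training set): a classifier trained where 'unsafe' examples disproportionately feature darker skin ends up flagging identical content at different rates depending on skin tone.

```python
# Minimal, hypothetical sketch of how skewed training data teaches a
# classifier to use a demographic attribute as a proxy for the label.
# Synthetic data only; not Clarifai's model or dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# True label: 1 = 'unsafe'. In the biased training set, unsafe examples
# (sourced from porn) are more racially diverse than safe examples
# (sourced from stock photos), so skin tone correlates with the label.
unsafe = rng.integers(0, 2, n)
p_dark = np.where(unsafe == 1, 0.5, 0.1)          # skewed sampling
skin_tone = (rng.random(n) < p_dark).astype(float)
content_signal = unsafe + rng.normal(0.0, 1.0, n)  # the legitimate feature

X = np.column_stack([content_signal, skin_tone])
model = LogisticRegression().fit(X, unsafe)

# Audit on identical content, varying only skin tone: any gap in the
# flagged rate is the bias the model absorbed from the training data.
content = rng.normal(0.0, 1.0, 5_000)
for tone in (0.0, 1.0):
    X_test = np.column_stack([content, np.full(5_000, tone)])
    flag_rate = model.predict(X_test).mean()
    print(f"skin_tone={tone:.0f}: flagged rate on identical content = {flag_rate:.1%}")
```

Note that simply dropping the sensitive column rarely fixes a model like this in practice, since other features tend to correlate with it; rebalancing the training data is the more direct remedy.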

“And [biased AI] wasn’t just at Clarifai, it was a very industry-wide thing,” Raji said. “I became a little bit alarmed, especially when I started training my own models and realising that there were racial disparities there.”

A sea of white guys

She recalled feeling out of place at her first conference appearance with Clarifai, at NeurIPS 2017, one of the biggest global AI and machine learning events.

“There were about 8,000 people at this conference and maybe less than 100 black people and not many women at all, so it was very overwhelming,” Raji said.

“I remember this particular scene of walking through the hall and seeing this sea of white guys as I’m trying to get to the women’s washroom and they’re walking in the opposite direction. I think they were also looking at me weirdly as I stood out so much, and I’ve had this experience at other conferences as well.”

Standing out, however, helped her get spotted by Timnit Gebru, a research scientist and technical co-lead of Google's Ethical Artificial Intelligence Team, who invited Raji to Black in AI, a group Gebru co-founded.

Raji said that deciding to change her flight and stay an extra day at the conference turned out to be an important decision in her working life.

“I told Timnit I’m not sure that I would have gone into the [AI ethics] field if I hadn’t gone to that event and met other black people in the space,” she said about the Black in AI group, as she previously felt she wouldn’t be welcome in an overwhelmingly white, male industry.

“I met every single person there because there were that many people there and they’ve all been incredibly supportive,” Raji said. “But it was also really inspiring because they had all somehow built these incredible careers.”

After leaving Clarifai in 2018, Raji teamed up with the Algorithmic Justice League at the MIT Media Lab to build on Gender Shades, a project led by computer scientist Joy Buolamwini that had found some facial recognition systems were misclassifying black women as men.

After developing a dataset that could quantify how racially biased a facial recognition system was, Raji went on to conduct a follow-up study examining what impact the Gender Shades findings had on facial recognition technology developed by major corporations such as Microsoft and Amazon.
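The core method behind this kind of audit is disaggregated evaluation: rather than reporting one overall accuracy figure, error rates are broken out per intersectional subgroup, such as darker-skinned women, so disparities cannot hide in an average. Here is a minimal sketch of the idea; the records and grouping labels are illustrative stand-ins, not the actual Gender Shades data.

```python
# Minimal sketch of a disaggregated audit in the spirit of Gender Shades:
# report error rates per intersectional subgroup rather than one overall
# number. The records below are made up for illustration.
from collections import defaultdict

# Each record: (predicted_gender, true_gender, skin_group).
# 'skin_group' stands in for the Fitzpatrick-scale grouping the
# Gender Shades study used ('lighter' vs 'darker').
predictions = [
    ("male", "male", "lighter"), ("female", "female", "lighter"),
    ("male", "female", "darker"), ("female", "female", "darker"),
    ("male", "male", "darker"), ("male", "female", "darker"),
    # a real audit would use thousands of labelled faces
]

totals, errors = defaultdict(int), defaultdict(int)
for pred, truth, skin in predictions:
    group = (truth, skin)            # intersectional subgroup
    totals[group] += 1
    errors[group] += int(pred != truth)

for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%} ({errors[group]}/{totals[group]})")
```

In the published studies, the subgroups were defined by perceived gender and Fitzpatrick skin type, and the systems under audit were commercial face-analysis APIs.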

The companies that had taken the findings of Gender Shades on board saw a dramatic improvement in weeding out bias compared with those that weren't originally audited. Some companies have since publicly distanced themselves from, or paused, the use of facial recognition technology in highly controversial areas such as policing.

Something to celebrate, you may say? Not exactly.

‘How is this possible?’

“No one is doing enough,” Raji said. “The companies have a very limited, if non-existent, obligation to affected communities and prioritise the wellbeing of their customers at best.

“It’s very easy for companies – because of the fact their customers are not the affected population – to only care to the extent that it doesn’t actively challenge their bottom line or challenge their responsibility to their customers who are not the affected population. For that reason, policy development and regulation is super, super important.”

To make matters more challenging, the lack of diversity at the companies developing these technologies continues to harm the very communities those technologies most affect.

“I am one of those people that looks at the diversity numbers every year for all the major tech companies. And I am just, like, how is this possible?” Raji said. “A lot of people of colour leave the [AI development] field and they’re not empowered to actually make the changes that they’re brought in to make.”

This is why she wants to widen the scope of the algorithmic auditing and AI ethics advocacy work that she’s been doing so far.

“I think a lot of the general population sometimes struggles to understand the reality of what it means for an algorithm to make a decision about you,” Raji said.

“There’s so many ways in which those decisions are disguised. I think revealing what those decisions are and how those decisions affect you or affect individuals can provide an arsenal for advocates and policy makers to better understand the issues.”


Colm Gorey was a senior journalist with Silicon Republic
