‘The biases of today must not be baked into the tech of tomorrow’

17 Mar 2021

Toju Duke. Image: Steve Langan/City Headshots Dublin

AI expert Toju Duke spoke to Siliconrepublic.com about the challenges of building ethical AI and how she gives back to the tech community.

Toju Duke works as a responsible AI programme manager at Google Research in Dublin, where she focuses on scaling its smart practices, tools and processes for Google’s EMEA product teams.

While she quickly fell in love with the world of AI and the social good this tech can do, Duke also found that the old saying ‘all that glitters is not gold’ holds true – especially when it comes to bias and ethical challenges.

“Learning that people have been wrongfully accused by a facial recognition algorithm because it was built on insufficient data that misrepresents society, or that recruitment software favours male CVs over female CVs, set alarm bells ringing in my head,” she said.

“It’s quite troubling that the systemic injustices and inequality we’ve been fighting against for so many years have made their way into technology in ways we may not be able to control.”

‘It’s crucially important that we develop these systems in ethical and responsible ways’
– TOJU DUKE

Duke said there are several challenges standing in the way of building and adopting fully ethical AI algorithms. The first is explainability.

“Depending on the method in which the algorithm was built, for example supervised versus unsupervised learning, the algorithm could come up with outputs that are unexplainable by the organisation that developed it or the world at large,” she said.

“Take mortgage applications, for instance. There’s evidence that some financial software was built using biased datasets that suggested that people from certain postcodes would not be able to afford mortgages, versus those from postcodes deemed to be in ‘wealthier’ areas. This leads to an increase in mortgage application rejections, or a reduction in the loan amounts offered, for people from lower socioeconomic backgrounds.”

The second major challenge is an incorrect ground truth – that is, the labelled data the algorithm learns from contains errors. “These datasets are labelled by humans and sometimes contain incorrect labels, and as humans have inherent biases, these biases more often than not creep their way into the technologies we build,” said Duke.

“For instance, Tiny Images, a dataset that was built by MIT and NYU, was recalled last year because it was found to contain a range of racist, sexist and other offensive labels.”

The third major challenge when it comes to ethical AI is building trust. “The European Commission is currently working to build a framework for AI innovation that will create trust in machine learning-based systems and guide the ethical development and use of this widely applicable technology,” she said.

“Google is supportive of these efforts to build trust in AI through responsible innovation and thoughtful regulation. It’s also why we have our own AI principles to guide our ethical development and use of AI, and it’s why we open-source many of our learnings for the benefit of the wider developer community.”

Duke calls AI “the oxygen of tech” because it’s present in virtually all the devices we interact with, from phones and audio speakers to cars and thermostats.

“It also has incredible potential to transform our lives in fields such as healthcare – in the future, AI could power new medical diagnostic techniques, which potentially allow skilled medical practitioners to offer more accurate diagnoses, earlier interventions, and better patient outcomes,” she added.

“It’s crucially important that we develop these systems in ethical and responsible ways, so that the biases of today are not baked into the technologies of tomorrow.”

Outside of her work at Google, Duke volunteered with Women in AI Ireland, where she currently manages several projects. She also leads a group of professionals working to solve problems in AI that affect people of colour, and it was this work that brought her to her current role.

“Building my knowledge base and these experiences within the field gave me the confidence to reach out to the responsible AI team within Google, requesting to join the team while working as an EMEA product lead for travel. After a couple of conversations, I started working on the team in a 20pc capacity and I’ve now transitioned full time on to the team, working on a six-month assignment.”

Duke is also the founder of Refyne, which was originally set up as a marketing coaching business targeted at start-ups. “At some stage, it’ll be pivoted to focus more on ethical AI, helping organisations build ethical AI frameworks, which should increase their profitability and reputation in the long run, while protecting humanity from further harm.”


Jenny Darmody is the editor of Silicon Republic
