Is it possible to take the toxicity out of AI?


5 Dec 2022


In the latest episode of For Tech’s Sake, Abeba Birhane explains why so much AI produces problematic results – and it’s all in the data.

The acceleration of AI in recent months has been nothing short of remarkable, and 2023 is set to be a milestone year for this technology.

But for all the fun of playing around with text-to-image generators, there’s the risk they pose to artists’ copyright and even to sexual consent. And for all the craic to be had in conversation with a highly advanced chatbot, there’s the conundrum of how even a software engineer can be so wowed by tech trickery that they start to believe the magic is real.

Claims of sentient AI have largely been debunked by experts, but there are still plenty of unsettling aspects of this rapidly advancing technology that raise ethical questions.

Often, developments in AI are as astounding as they are flawed. And if their application races ahead of regulation, and even of a fundamental understanding of the technology, we are going to find ourselves overwhelmed by decisions made by problematic, unexplainable and impenetrable systems. More often than not, those least represented in AI development will be hit hardest.

Back in 2018, this was laid bare by the Gender Shades audit from Joy Buolamwini and Timnit Gebru, which showed that commercial facial recognition systems performed far worse on the faces of women of colour than on those of white men, whom they identified almost flawlessly. Buolamwini and Gebru’s early work in AI ethics is seen as a key starting point for a movement towards AI that is responsible and inclusive, not just progress for progress’ sake.

In the latest episode of For Tech’s Sake, a co-production from Silicon Republic and The HeadStuff Podcast Network, hosts Elaine Burke and Jenny Darmody speak to Abeba Birhane, an expert in both AI and cognitive science.

Birhane is a PhD candidate at University College Dublin as well as a senior fellow in trustworthy AI at Mozilla, the organisation behind the Firefox browser. She has herself contributed to comprehensive and influential audits of the datasets underpinning this technology.

The podcast explores how bad AI can range from unhelpful and annoying at best to prejudiced, powerful and completely opaque at worst. These issues are particularly apparent when the building blocks of AI – its training datasets – contain toxic material.

“Datasets are the backbone of AI systems,” Birhane explained. “There are huge amounts of datasets available because of the internet. And any dataset you collect from the internet is always guaranteed to be problematic. It always has to be audited and assessed and has to be detoxified.”

Birhane said that every dataset she and her colleagues have audited has included “content that shouldn’t be there”, whether that’s illegal, offensive or problematic content that “encodes really negative and tired, clichéd stereotypes of cultures, individuals and groups of people”.

“This is what you find in datasets because it comes from the internet,” Birhane explained. “And any AI system that’s trained on these datasets, you can guarantee it bears the downstream effects.”

And so, as we face a future more and more likely to be governed by AI – where a bot selects you for a job, determines your visa status or is deployed by law enforcement – For Tech’s Sake asks the expert: Can we build AI that is actually detoxified?

Birhane is cautiously optimistic. “You will always have biases based on whose perspective you are trying to look at things from, but this doesn’t mean we should just give up on the idea of debiasing,” she said.

“It doesn’t make sense to think of one completely clean dataset that AI can be trained on. That’s just unrealistic. But we can try to build datasets that are as representative of multiple perspectives as possible. That’s possible.”
