As we pass the one-year anniversary of ChatGPT, Dr Abeba Birhane sat down with Jenny Darmody to discuss how the present dangers and limitations of generative AI could affect our future.
This time last year, ChatGPT was a term that had only just entered the public vernacular, and within weeks it had taken over many discussions both in and outside the world of tech.
While it was far from the first iteration of generative AI – the tech that uses algorithms to generate images, text and more – ChatGPT, created by OpenAI, was the match that set the trend alight in a whole new way.
As major tech players pumped billions of dollars into generative AI, new announcements seemed to come out on an almost weekly basis. Countless AI start-ups sprang up promising the world, and tool after tool was released with big plans to revolutionise virtually every industry.
There’s no denying that the tech is powerful – albeit the definition of power may differ depending on who you ask – and the tools can have their uses. But in the flurry of newness and excitement, combined with the accessibility that has come with platforms such as ChatGPT, we must question whether we are falling victim to the hype machine instead of asking the important, critical questions about what this technology truly means for our future society.
Dr Abeba Birhane is a cognitive scientist who wears many hats, making her an expert in the field of artificial intelligence. She is a senior adviser in AI accountability at the Mozilla Foundation, an adjunct assistant professor at the School of Computer Science and Statistics in Trinity College Dublin and most recently, she was appointed to the UN’s advisory body for AI governance. She has also been named by Time Magazine as one of the most influential people in AI.
In conversation with SiliconRepublic.com editor Jenny Darmody, Birhane said the media reporting around generative AI tends to be “extremely super-hyped” when it comes to the technology’s abilities.
“You hear very little about its problems, the limitations or the drawbacks,” she said. “If we are talking about the technology itself, it tends to be unreliable, it suffers from what’s known as hallucination…it fabricates text that seems factual, but it’s either non-existent or it’s just made up, so you have various issues like that.”
‘People that have darker skin tone tend to pay the high price for the negative consequences of these systems’
She added that when it comes to image generators like DALL-E or Midjourney, they can often boil any idea down to its very basic stereotype caricatures, which in turn will exacerbate those stereotypes in reality. “Because the data that these models are trained [on] comes from the internet…and because we are using that kind of data without proper detoxification, without proper care, then the models tend to really just encode what exists within the web and they exaggerate it.”
While it can be easy to talk about this technology in a very nebulous way, the real-life ramifications can be extremely serious and they’re already happening. When AI, large language models and algorithms like this are deployed in settings such as education, law enforcement and medicine, real people are impacted.
“There is a long history, a line of research, showing that face recognition systems perform differently depending on skin tone, and this research has been going on for about 10 years now,” said Birhane. “People that have darker skin tone tend to pay the high price for the negative consequences of these systems.” Just one example of this comes from law enforcement in the US, where six people that we know of have reported being falsely accused of a crime following a facial recognition match – all six people are black.
While there is a lot of movement on the technology side of the house, there is also a lot of movement on the regulatory side. The EU AI Act was passed earlier this year, which was followed by an executive order from US president Joe Biden designed to create AI safeguards.
But while governments are actively trying to regulate the evolving sector, Birhane said there’s a flaw in this area when it comes to who is actually at the table. “From Microsoft to OpenAI to Meta, you find that whether it’s the EU AI Act, or even in the UK AI Summit a couple of weeks ago, they really are immersed at various capacities,” she said.
“These bodies are shaping the kind of regulatory drafts that are being developed and this should be a conflict of interest. It’s like handing over cancer research to tobacco companies; it’s in their interest to hide the problems.”
But while the industry is marred by corporate greed, which could negatively impact the role AI plays in our future, Birhane did highlight the possibilities if the tech is handled – and reported on – correctly.
“In New Zealand, you have the Māori community where they are creating their own speech technology without any outside involvement. They collected their own voice data, they labelled their own data, they built their own system, they created their own benchmark to evaluate and test the model. And the very objective of their speech technology was to retrieve their language that has been dying due to British colonisation where their grandparents were forced to stop speaking the language,” she said.
“In order for AI to be used for good, first of all, we have to flip [the] objective. We have to aim for, not profit maximisation, not cost cutting, not for creating efficient machines, but the objective has to be serving people, helping people.”