We can’t let ethical AI fall by the wayside


26 Jul 2023

Image: Dr Patricia Scanlon

In the latest episode of For Tech’s Sake, Ireland’s first AI ambassador spoke about what needs to be done to build better AI and we were left wondering why it isn’t already the done thing.

The rise of interest in AI – and generative AI specifically – is far from slowing down. In fact, you need only glance at this week’s Big Tech earnings reports to see what is boosting revenues right now.

But as the dust from the initial rush settles and the final wording of the landmark EU AI Act is hammered out, some of the problems, challenges and concerns around this explosive technology have come to the fore.

Earlier this month, the UN Security Council held its first-ever session on AI, thousands of authors pushed back against AI companies for using copyrighted material, and just last week a deputy commissioner of the Irish Data Protection Commission (DPC) issued a stark warning to companies developing AI products trained on public data.

The overwhelming message now seems to be: please build AI responsibly. But that is a difficult thing to do once the genie is out of the bottle, so why wasn’t it done from the start?

To discuss this further, For Tech’s Sake hosts Elaine Burke and Jenny Darmody were joined by Dr Patricia Scanlon, Ireland’s first AI ambassador.

She sees it as her duty to start a national conversation on AI, especially ethical AI. Adding weight to her credentials is the fact that she founded SoapBox Labs, whose kid-focused speech recognition technology was designed with ethical AI in mind from the get-go.

In the conversation, Scanlon said the term ‘ethical AI’ can often garner an eye roll from people who believe that baking it in from the start will stifle innovation, a belief she said is simply not true.

“The thousands of people, tens of thousands of people working on these, it’s perfectly within their capabilities to do some kind of rein-in, checking for safety, making sure that people who are using it aren’t using it for bad uses.”

She added that while there’s a limit to what you can do in this area, there’s “a hell of a lot you can do when you control the foundation models”, that is, the large machine learning models trained on vast quantities of data, such as those underpinning ChatGPT.

“There are not that many companies that do control them and it’s about their willingness to be regulated now and not in 10 years’ time.”

As AI continues to grow, so too does the polarised debate over whether it is the best thing to happen to humankind or an existential threat.

In fact, Geoffrey Hinton, the ‘godfather of AI’, quit Google earlier this year to speak out about his concerns around the dangers of the technology. While others in the AI space have said these extreme warnings act as a scaremongering distraction, Scanlon said she wants a more balanced approach to the conversation.

“Nobody knows. So, the people who say, ‘Oh, don’t worry about it, it’s fine. It’ll only do good for humanity and why wouldn’t you want that?’ – they don’t know,” she said.

“Equally, the people who are saying, ‘It definitely is the end of the world, we should stop using it immediately and shut it down’ – they don’t know either, and I think it’d be much more helpful if everybody would just say: ‘We don’t know’.”

Check out the full episode with Dr Patricia Scanlon and subscribe for more.
