‘We need to be specific to address the issues of AI ethics’

31 Mar 2021

Image: Catherine Breslin

AI consultant Catherine Breslin discusses the ‘broad umbrella’ of AI ethics and some of the major trends occurring in the voice-tech industry.

Machine learning scientist Dr Catherine Breslin specialises in research and development in the area of voice and language technology.

She founded Kingfisher Labs, an AI consulting company based in Cambridge, and works with companies that are building voice and language technology, including areas such as speech-to-text, natural language processing and human-computer dialogue.

But one of the big topics on her radar right now is addressing ethical issues in AI.

‘There’s always the question of whether a particular technology should be built or not’

While plenty of stakeholders, from the EU to UNESCO to individual companies, have been examining how AI can be developed and deployed in an ethical way, the area of voice technology presents specific challenges. It is an area of assistive AI that is still developing, and one that has seen acceleration in the past year due to the pandemic.

“As voice technology has become more prevalent, it’s right that ethical issues are brought to the forefront. AI ethics is a broad umbrella though, and we need to be specific in order to address the issues,” Breslin told Siliconrepublic.com.

“One topic under this umbrella is data handling, and this is one area where regulation exists. Biometric information like voice is personal, and so it’s important to handle speech data with care.”

AI bias is another significant issue that has arisen in recent years, and a growing number of AI developers are trying to raise awareness of racial bias in tech. In the case of voice tech, Breslin said it can manifest as systems recognising some accents more accurately than others.

“Then, on top of these specific topics, there’s always the question of whether a particular technology should be built or not, and how it gets used in the world,” she added.

Breslin completed her engineering PhD at the University of Cambridge in 2008. Since then, her career has spanned both academic and commercial areas. Most notably, she worked on speech and language technology for Amazon Alexa, helping the assistive tech system to understand humans.

“My team and I worked on the challenges that came with scaling up Alexa. We built some of the underlying speech recognition and language understanding models, but also worked on enabling new features, new languages, and taking Alexa to new devices,” she explained.

“While it was interesting to work on a well-known product like Alexa, there are many more exciting applications of voice and language technology in the world.”

Breslin said one of the biggest trends she has noticed in the AI industry is the growing access to open-source tools, models and inexpensive APIs that can be used as the basis for development.

“This means that more and more people are able to build AI products using these starting points, rather than having to build everything themselves from scratch. It opens up the possibility of being able to quickly build different products without needing deep AI expertise,” she said.

“On the other hand, there’s a trend for underlying machine learning models to be larger, require expensive computational resources to build, and to be trained on more and more data. This makes it expensive for small companies to develop their own competitive underlying technology.”

‘Language is full of ambiguity and nuance, so there are always tricky problems to solve’

In voice tech, one of the biggest movements that consumers would be familiar with is the boom in conversational systems and voice assistants over the last decade, with the launch of Siri, Alexa and Google Assistant. “Since then, the technology has spread not just into the personal voice assistants, like Siri and Alexa, but also into more specific voice assistants that are tailored for different scenarios like finance, legal or customer service,” Breslin said.

And while much work has been done in this sector in recent years, there is still more to do.

“The challenges involved in understanding language are really interesting,” she added. “There are so many different ways of expressing yourself, and language is full of ambiguity and nuance, so there are always tricky problems to solve. If I could change anything, it’d be to broaden even further the places that voice technology is used and the people who build it.

“Technology that works in all languages as well as it does in English, for those who have non-standard speech, or that performs well in different verticals, will go a long way towards bringing its benefits to everyone.”

Having been in the industry for more than 20 years, Breslin said she’s happy to see the conversation around diversity and inclusion within tech become more public.

“Yet, if you look at the figures, meaningful structural change is very slow,” she said.

“I’d like to see more organisations leading from the top. Hiring and promoting women into senior positions and acknowledging that sexism is intertwined with racism and other forms of discrimination. Only then, I think, will we start to see the structural change that is needed.”


Jenny Darmody is the editor of Silicon Republic