The importance of accountability in AI ethics

1 Jul 2021

Joanna J Bryson. Image: Wouter van Vooren

AI ethics expert Joanna J Bryson spoke to Siliconrepublic.com about the challenges of regulating AI and why more work needs to be done.

As AI becomes a bigger part of society, the ethics surrounding the technology require more discussion, taking in everything from privacy and discrimination to human safety.

There have been several examples in recent years highlighting ethical problems with AI, including an MIT image library used to train AI that was found to contain racist and misogynistic terms, and China's controversial social credit system.

In recent years, the EU has taken conscious steps towards addressing some of these issues, laying the groundwork for proper regulation of the technology.

Its most recent proposals revealed plans to classify different AI applications according to their risks. Restrictions are set to be introduced on uses of the technology identified as high-risk, with violations carrying potential fines of up to 6pc of global turnover or €30m, whichever is higher.

But policing AI systems can be complicated.

Joanna J Bryson is professor of ethics and technology at the Hertie School of Governance in Berlin. Her research focuses on the impact of technology on human cooperation, as well as AI and ICT governance. She is also a speaker at EmTech Europe 2021, which is currently taking place in Belfast as well as online.

Bryson holds degrees in psychology and artificial intelligence from the University of Chicago, the University of Edinburgh and MIT. It was during her time at MIT in the 90s that she really started to pick up on the ethics around AI.

“I just noticed that people were being really weird around robots. They thought that if the robot was shaped like a person that they had a moral obligation to it. And this was at MIT and the robot didn’t even work and I was like, ‘What is going on?’”

This led Bryson to write her first publication on AI ethics in 1998, the first of many. As the years rolled on and people started to realise they needed to think about the ethics of AI, Bryson had already been writing about it, and she has now become a prominent figure in the AI ethics community.

In 2010, she co-authored the UK Research and Innovation Principles of Robotics, the country's first national-level AI ethics policy. Since July 2020, she has been one of nine experts nominated by Germany to the Global Partnership on Artificial Intelligence.

Having previously worked as a programmer, Bryson said she found herself with a seat at the table and was able to address specific questions because of her background in computer science.

“I’ve always been interdisciplinary, and I don’t think computer science is the most interesting discipline I do,” she laughed.

The challenges around AI regulation

Over the years, Bryson said, she has watched digital ethics evolve from concern simply about the AI itself to efforts to solve much wider societal problems.

“One of the big things that happened was the shift from even talking about ethics to talking about human rights.”

The Universal Declaration of Human Rights is more than 70 years old and, while Bryson acknowledged that some may see bringing digital ethics into the conversation as complicating matters, she said it's important to consider digital ethics when discussing human rights because "ethics is culturally specific."

She said that while self-governance and self-regulation are important parts of regulating AI, technology needs to be regulated on an independent level as well. “Those two things don’t contradict each other,” she said.

‘I want to be able to go in and say, who was it that put this code in?’
– JOANNA J BRYSON

“The most important shift has been the shift where we’re dealing with the lawyers and actually making legislation and I’m super excited about what’s happening in Europe right now.”

However, Bryson said there are several challenges ahead, including what exactly companies will be required to do to provide transparency and how the regulation will be policed, adding that it will be interesting to see "how well the EU does at putting this stuff together".

In particular, she spoke about AI bias. “What I’m terrified about in the AI regulation is that they’ve gone back to this idea that perfect data doesn’t introduce bias. No, the world is biased.”

She said that since the spotlight was fully turned on the biases that exist within AI in 2017, there have been countless discussions around unconscious bias. But, four years on, she said it's time to start treating such bias as negligent rather than unconscious.

“It could have been innocent at some point in the past, you can’t know everything. But at this point, we understand this stuff,” she said. “I want to be able to go in and say, who was it that put this code in?”

She said there needs to be documentation that makes it possible to trace where biased code or algorithms were introduced, so that the responsible party can be identified.

“And if we can’t figure that out, then the company that has failed to document their software needs to be held liable and that is not happening right now.”


Jenny Darmody is the editor of Silicon Republic

editorial@siliconrepublic.com