Can we ever create a truly ethical artificial intelligence?

23 Mar 2017

A sad robot. Image: Saundra Castaneda/Flickr (CC BY 2.0)

Artificial intelligence is increasingly being used as an unbiased judge, for matters ranging from insurance to economic efficiency. But can it ever truly be unbiased?

When René Descartes first wrote the phrase cogito ergo sum – ‘I think, therefore I am’ – in the 1600s, he could not have been aware of the philosophical questions that would erupt with the advent of artificial intelligence (AI) in the 20th and 21st centuries.

Every Google search, every video suggested on YouTube and every Siri recommendation is built on machine learning algorithms designed to learn everything about your online habits, in a bid to offer targeted content that you might like.

Even outside of consumer-level decisions, AI and algorithms are increasingly being used to root out hidden meaning in billions of lines of genetic code, in the hope of finding a cure for a disease or building machines that can talk for themselves.

Aside from allowing researchers and computer scientists to crunch numbers an awful lot faster, AI and the algorithms behind it are often held up as neutral adjudicators in decision-making, unbiased by any human.

But is this really possible?

‘Coded gaze’

Anecdotal examples from computer scientists and policymakers over the past few years have shown that this is far from the case.

Last year, MIT graduate student Joy Buolamwini was developing facial recognition software with a group of colleagues and came across one major issue: it couldn’t read her face.

As it turns out, it couldn’t do so because her fellow programmers had not accounted for a range of skin tones and facial structures.

A similar and dangerous warning sign was noted last year, when AI used to predict future crimes in the US was shown to be biased against black people. The algorithm’s developer vehemently denied that it was racist.

This phenomenon – which Buolamwini refers to as the ‘coded gaze’ – is not just a small issue, but one prevalent throughout computer science. It could even be detrimental to your career.

Accusing the creator

It has now reached the point where agencies and policymakers are attempting to stem any further bias before algorithms cause anyone else to lose their job.

In fact, this has already happened. As far back as 2011, teacher Sarah Wysocki lost her job because an algorithm said so.

A UK charity called Sense about Science is on the case. Its campaigns and policy officer, Dr Stephanie Mathisen, appeared before a House of Commons committee to warn that algorithms are replacing humans as decision-makers at an unprecedented rate.

“The lack of transparency around algorithms is a serious issue,” Mathisen wrote in a piece published to Public Technology.

“If we don’t know exactly how they arrive at decisions, how can we judge the quality of those decisions? Or challenge them when we disagree?”

“Furthermore, how ‘good’ an algorithm is depends entirely on its creator’s intended outcome and what success looks like to them, with which you, I or anyone else may not agree.

“Algorithms are also only as unbiased as the data they draw on.”

Transparency a good place to start

This was the opinion held by the EU when drawing up the new legislation behind the General Data Protection Regulation (GDPR), set to enter into law in 2018.

As we previously discussed with Prof Barry O’Sullivan, deputy president of the European Association for Artificial Intelligence (EAAI), organisations using algorithms to decide what content we see will legally have to explain how the AI came to that conclusion.

“The GDPR will give users the power to ask about the data and the reasoning that goes behind an algorithmic decision that impacts them. That’s a good thing,” O’Sullivan said at the time.

Turning to the UK government – now in the process of leaving the EU but still signed up to GDPR – a code of conduct proposed to the government offers one potential solution to the murky issue of accountability for algorithms.

As well as suggesting the possibility of an ombudsman for such disputes, it says developers should aim for five factors to achieve a ‘good algorithm’: responsibility, explainability, accuracy, auditability and fairness.


Future of the workplace? Image: YAKOBCHUK VIACHESLAV/Shutterstock

The problem with Tay

Looking at AI over the past few years, it is clear that developers have not always followed these guidelines.

Perhaps the most famous instance was Microsoft’s disastrous Tay chatbot last year, which quickly learned to be racist and misogynistic from conversations on social media – not just once, but twice.

So how do we ensure that chatbots like Tay, and AI in the future, are free from the flaws that we do not wish to pass on?

Civil rights for bots?

One solution has been to form organisations such as the Partnership on AI, which brings together ethics and legal experts to lay the foundations – and provide financial support – for making algorithms ethical, as well as fair and inclusive.

However, this simply lays the groundwork for AI developers to work from, and is arguably not as extensive as what MEP Mady Delvaux is attempting within the EU.

Last January, the Luxembourgish politician proposed a detailed and potentially groundbreaking piece of legislation that would effectively give AI a civil rights bill.

As part of the legislation Delvaux put forward, autonomous robots would be referred to as “electronic persons” that would be entitled to certain rights, but would also be held accountable if they were to act unethically, or in a manner conflicting with the designer’s intentions.

Fleshing out this idea, Delvaux said: “One could be to give robots a limited ‘e-personality’ [comparable to ‘corporate personality’, a legal status which enables firms to sue or be sued] at least where compensation is concerned.”

The next steps

However, she admitted that this is further down the line – perhaps as many as 15 years from now – and that, in the meantime, some responsibility for what AI decides has to lie with us, its creators.

“According to the principle of strict liability, it should be the manufacturer who is liable [when an AI behaves unethically],” Delvaux said, “because he is best placed to limit the damage and deal with providers.”

Aside from asking important questions about AI and its role in our workplaces and homes, this is one of the first attempts to set ethical boundaries and limitations in a legal framework.

Now that the proposal has received the necessary backing within the European Parliament, eyes will turn to the US, whose government has only begun to have a conversation about ethics in AI in the past few months.

Throughout one of the reports released by the White House, the phrase “more research is needed” stands out as a short answer to a long and complicated question.

Perhaps by the time we do come to a conclusion on this problem, AI might have decided for itself.


Colm Gorey was a senior journalist with Silicon Republic

editorial@siliconrepublic.com