Will current AI be illegal in the EU in 2018?

26 Oct 2016


Series of colourful algorithms on a screen. Image: McIek/Shutterstock


Silicon Valley and the EU have never really seen eye to eye, but a recent regulation – which will give citizens the right to demand an explanation of an algorithmic decision from 2018 – could drive the two to legal war.

You only have to look at how often the largest tech companies – Google in particular – find themselves in front of EU courts over their applications and how the average user interacts with them.

Google's search results wield enormous power and leverage online, not only in terms of what content appears on the first page, but also in its incredibly lucrative e-commerce business, where the company has previously been accused of favouring its own products.

What the vast majority of Google users – or users of any search engine, for that matter – do not realise is that their results are shaped by months or even years of harvested data. Every search and every click is analysed by an algorithm to decide what to show you next.
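To make the idea concrete, here is a deliberately simplified sketch – not any search engine's actual system – of how harvested click data can silently re-rank what a user sees. All names and data here are hypothetical:

```python
from collections import Counter

# Hypothetical click history harvested from one user: each entry is the
# topic of a result the user clicked on in the past.
click_history = ["shopping", "shopping", "news", "shopping", "sport"]

def personalise(results, history):
    """Re-rank results so topics the user clicked on most appear first."""
    clicks = Counter(history)
    # Sort by how often the user previously clicked that topic (descending).
    return sorted(results, key=lambda r: clicks[r["topic"]], reverse=True)

results = [
    {"title": "Election coverage", "topic": "news"},
    {"title": "Discount trainers", "topic": "shopping"},
    {"title": "Match report", "topic": "sport"},
]

ranked = personalise(results, click_history)
print([r["title"] for r in ranked])
# The shopping result rises to the top purely because of past clicks -
# the user is never told this re-ordering happened, or why.
```

Real ranking systems are vastly more complex, but the principle the GDPR targets is the same: a decision about what you see, made from your data, with no explanation offered.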

Protecting data of EU citizens

So if we are presented with manufactured search results that might not reflect reality, do we not have the right to be told what algorithms are deciding what we are seeing, and why they are deciding it?

This was the conclusion drawn by the European Parliament back in April of this year, as it adopted the General Data Protection Regulation (GDPR) to replace the older, less specific 1995 directive.

With an estimated 2.5 exabytes of data generated worldwide every day, the EU quite rightly recognised that it needed to define a citizen's right to data protection far more precisely.

The purpose of the new regulation is to ensure that companies in Silicon Valley – or any others based outside the EU – can be held accountable for data privacy issues regarding the information they collect from EU citizens.

Asking an algorithm to explain itself

One interesting requirement inserted into the GDPR certainly caught the attention of computer scientists working in AI.

A recent research paper outlining the context of the rule described how it would restrict companies using predictive algorithms that "significantly affect" users, giving EU citizens the right to understand why an algorithm has made a particular decision about them.

After all, aside from accusations of bias made by Google’s algorithms on search results, Facebook’s news feed is based on an algorithm that curates information it thinks we are likely to favour over other content.

Therefore, if AI algorithms and data management practices remain as they are, in two years' time Silicon Valley and other global tech hubs face the reality that much of today's AI could be 'illegal'.

Rather than being viewed as a negative, many computer scientists are revelling in the details of the GDPR as a major boon to their field.

Creates some good problems for computer scientists

One such computer scientist is Prof Barry O’Sullivan, deputy president of the European Association for Artificial Intelligence (EAAI), who spoke to Siliconrepublic.com.

While he admits that it will pose enormous challenges on many fronts – technical, ethical, legal and commercial – he believes the end result of receiving an algorithmic explanation will be better for the average citizen.

“It creates several problems, but as an academic and a researcher, the GDPR creates problems that are very good ones to have,” he said.

“I’ve done work on explanation in AI, on and off, for over a decade … and [this] regulation gets to the heart of the issues around data collection; profiling of people through the data they generate on social networks, communications systems, and their online presence. These are important issues.”

So what will an explained algorithm look like? Will it just become another ‘cookies’ button that users click and move on without even reading?

More than just a cookie

O’Sullivan sees it as going “far beyond” just a dialogue box to be clicked away on a website.

“[EU web users] are not told what the consequences of accepting a cookie might be, how that data will be used to make decisions about them, which adverts will they see in the future, and what profiling is being done about them.

“Just being told that the AI system has a complex mathematical model that ‘just says so’ won’t cut it!

“The GDPR will give users the power to ask about the data and the reasoning that goes behind an algorithmic decision that impacts them. That’s a good thing,” O’Sullivan said.


Prof Barry O’Sullivan, deputy president of the EAAI. Image: Tomas Tyner/UCC

But are we taking all parties into consideration here when we discuss having explainable AI? What about the AI itself?

The research paper cited earlier, written by Bryce Goodman and Seth Flaxman, raises a philosophical question: if an AI has to explain its decisions about collecting and curating data, is it really an AI at all?

Can it not be allowed to think for itself and draw its own conclusions?

According to O’Sullivan, AI should not be allowed to be a data-harvesting monster guiding your every whim online.

True hallmark of intelligence is ability to explain

“For me, the true hallmark of intelligence is the ability to explain,” he said. “There are lots of tasks that can be automated by a technology that can do the job very well, but it doesn’t really understand what it’s doing, or doesn’t understand why it did something in a particular setting.”

It is also crucial that we prevent bias and hold an AI’s methods to account, as an AI designed with bias will only reproduce that bias. That’s where the GDPR steps in.

This doesn’t necessarily mean that tech companies like Google and Facebook push biased algorithms intentionally, but the clock is ticking for them to have explainable algorithms in place by the May 2018 deadline.

To make matters more challenging, many of the latest AI advancements have come from deep learning, which, unlike earlier approaches, is not geared towards explaining its decisions.
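The gap can be illustrated with a toy contrast – not any company's real system, and all thresholds here are invented: a transparent rule-based decision can report exactly which rule fired, while a learned weighted score (a stand-in for a deep model) offers nothing beyond the number itself.

```python
def opaque_decision(features):
    # Stand-in for a deep model: a learned weighted sum whose individual
    # weights carry no human-readable meaning.
    weights = [0.37, -1.12, 2.05]
    score = sum(w * f for w, f in zip(weights, features))
    return score > 0  # no explanation available beyond "the score said so"

def transparent_decision(applicant):
    # Each rule doubles as its own explanation - the kind of answer
    # the GDPR's right to explanation envisages.
    if applicant["income"] < 20_000:
        return False, "refused: income below the 20,000 threshold"
    if applicant["missed_payments"] > 2:
        return False, "refused: more than 2 missed payments"
    return True, "approved: passed all rules"

decision, reason = transparent_decision({"income": 35_000, "missed_payments": 4})
print(decision, reason)
# False refused: more than 2 missed payments
```

A deep network's millions of weights behave like `opaque_decision` scaled up enormously, which is why retrofitting explanations onto such models is an open research problem rather than an engineering task.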

Research funding in the field of explainable algorithms is increasing, however, with DARPA in the US among those investigating how to build such systems – a “major challenge”, according to O’Sullivan.

With less than two years to go before GDPR becomes law in the 28 EU member states, the world’s tech superpowers will need to act fast to prevent a future ‘AI-ageddon’.

Colm Gorey is a journalist with Siliconrepublic.com

editorial@siliconrepublic.com