UK government report calls for no direct regulation of AI technology

17 Oct 2017


Image: ktsdesign/Shutterstock


The UK government’s new report, compiled by more than 100 experts, has advised the creation of an AI council rather than direct regulation.

The age of artificial intelligence (AI) poses as many questions as it does possibilities, and now the UK government has in its hands a new report advising it on how to navigate legal challenges in the future.

Led by professors Wendy Hall and Jérôme Pesenti, the report, entitled Growing the Artificial Intelligence Industry in the UK, rejects the idea of governing AI through direct regulation, recommending instead oversight from a new AI council.

The findings were compiled from the opinions of more than 100 experts within the field of AI. Along with the creation of a new council, there were 17 other recommendations, covering such topics as improved access to data and an increase in AI research.

To lead this new focus, the report said, the Alan Turing Institute – founded in 2015 to promote data science research – should be made the national institute for AI research.

Greater transparency for algorithms

The report explained that the AI council would operate as “a strategic oversight group, establishing an open and non-competitive forum for coordination and collaboration between industry, the public sector and academia”.

It would work closely with the Alan Turing Institute and would look to seek a “champion” in the UK government to put forward its objectives.

Given the legal questions algorithms will face when GDPR takes effect next year, the report also recommends creating a process that would enable developers to explain why their AI behaves the way it does.

This, the report said, should be jointly developed by the Information Commissioner’s Office and the Alan Turing Institute.

Another key component of the recommendations is to build data trusts that would make AI developers better informed on sharing data to prevent issues such as the one that occurred between the NHS and Google’s DeepMind earlier this year.

With greater trust and oversight from an AI council, research data could be made more readily available to algorithms to sift through and speed up the development process, while also supporting text and data mining as a standard and essential tool for research.

Could Ireland replicate it?

Time and again, the report refers to a “short supply” of AI researchers in the UK, which will require greater investment in both academia and industry.

This would include an industry-funded master’s programme in AI, as well as 200 additional PhD places, to attract the most diverse possible range of candidates.

“Diversity is particularly important for AI as the output quality of the algorithm depends on the assurance that the inherent bias of programmers does not transfer to code,” the report said. “A diverse group of programmers reduces the risk of bias embedding into the algorithm, and enables a fairer and higher quality output.”

Discussing the potential for Ireland to follow suit with its own battle plan for tackling an AI future, Prof Barry O’Sullivan, director of the Insight Centre for Data Analytics at University College Cork, said it would be wise for the Irish Government to take note.

“The UK AI report is excellent and sets a very clear strategy for them. Ireland has a great opportunity to do something similar and has many unique advantages that put us in a very strong position.

“The IDA, and particularly [chief technologist] Ken Finnegan, have been promoting Ireland as an ‘AI island’. There really is a great opportunity here for us to bring the various pieces together and have major impact.”

Colm Gorey is a journalist with Siliconrepublic.com

editorial@siliconrepublic.com