Google-owned DeepMind will examine AI’s potential impact on society

4 Oct 2017

The consequences of AI are already becoming obvious in today’s world. Image: oneinchpunch/Shutterstock

DeepMind aims to tackle ethical and societal quandaries stemming from artificial intelligence.

AI is already beginning to have an impact on our daily lives, and London-based AI research lab DeepMind has today (4 October) announced the launch of an ethics unit to ensure the outcomes of AI are beneficial to society at large.

According to The Guardian, the Ethics and Society Unit isn't the AI ethics board that Google promised to establish when it acquired DeepMind in 2014. That board was convened in early 2016, but its members, and the subjects they discuss, remain a mystery.

‘We want these systems in production to be our highest collective selves. We want them to be most respectful of human rights, we want them to be most respectful of all the equality and civil rights laws that have been so valiantly fought for over the last 60 years’
– MUSTAFA SULEYMAN

Grappling with the impact of AI

In a statement on its website, DeepMind explained its rationale behind the new venture: “This new unit will help us explore and understand the real-world impacts of AI. It has a dual aim: to help technologists put ethics into practice, and to help society anticipate and direct the impact of AI so that it works for the benefit of all.”

The ethical dilemmas dredged up by increased AI implementation are nothing new. Consider the study of racism in criminal justice algorithms, or any of the many other studies on AI's broader, structural consequences for society.

A series of independent advisers are involved in this new venture. Known as fellows, they will provide feedback and guidance to DeepMind's research team. They include Nick Bostrom, Oxford University professor and director of the Future of Humanity Institute; climate change expert Christiana Figueres; and James Manyika, chair of the McKinsey Global Institute.

What does DeepMind actually want to achieve?

The five core principles guiding the research are that it should be socially beneficial; rigorous and evidence-based; transparent and open; diverse and interdisciplinary; and, lastly, collaborative and inclusive.

DeepMind co-founder Mustafa Suleyman told Wired: “We want these systems in production to be our highest collective selves. We want them to be most respectful of human rights, we want them to be most respectful of all the equality and civil rights laws that have been so valiantly fought for over the last 60 years.”

DeepMind's move has drawn some criticism, however. Natasha Lomas wrote in TechCrunch that the company should release the names of those on its ethics board or, at the very least, explain why the names aren't being made public.

She also noted a contradiction: DeepMind is a commercial entity, so does funding research into AI ethics while seeking to profit from that same technology send a confusing message? Impartiality is key to any research of this kind, so how will DeepMind ensure it? Lomas also questioned who is funding the unit, among other valid queries. At this stage, it seems it may be difficult for the Google-owned firm to remain truly neutral.

Many have also criticised DeepMind for setting up this unit just months after it was revealed that London's Royal Free Hospital (a DeepMind partner) had provided the personal data of some 1.6m patients to the AI firm without their prior consent.

Ellen Tannam was a journalist with Silicon Republic, covering all manner of business and tech subjects.

editorial@siliconrepublic.com