UK NHS will anonymise private medical data for use in AI systems

2 Jul 2018


An NHS centre in Liverpool, UK. Image: Graeme Lamb/Shutterstock


The UK NHS is enhancing the way it anonymises data used in its digital health ventures.

NHS Digital has announced that it is to reinforce the anonymisation of data used in its digital healthcare strategy. In a statement, the UK health body said: “The new de-identification process (known as De-ID) will protect patient privacy by de-identifying a person’s records in a consistent way.

“This will mean that when the right legal basis, controls and safeguards are in place, data can be linked across different care settings and geographic boundaries.”

The NHS is working with software firm Privitar on De-ID. According to Engadget, NHS data director Tom Denwood said the body already anonymises data, but that De-ID provides a more standardised and consistent way of doing so at scale.
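NHS Digital has not published the internals of De-ID, but a common way to de-identify records "in a consistent way" so they can still be linked across care settings is keyed pseudonymisation: the same identifier always maps to the same opaque token. A minimal sketch, assuming a hypothetical HMAC-based scheme (the key name and function are illustrative, not NHS Digital's actual design):

```python
import hashlib
import hmac

# Hypothetical illustration only: NHS Digital has not published De-ID's
# internals. One standard consistent-pseudonymisation technique is a keyed
# hash (HMAC), so the same identifier always yields the same token, but the
# token cannot be reversed without the secret key.
SECRET_KEY = b"held-securely-by-the-de-identification-service"  # assumed

def pseudonymise(nhs_number: str) -> str:
    """Map an identifier to a stable, irreversible token."""
    return hmac.new(SECRET_KEY, nhs_number.encode(), hashlib.sha256).hexdigest()

# The same patient produces the same token in any dataset, so records can
# be linked across care settings without exposing the raw identifier.
token_gp = pseudonymise("943 476 5919")
token_hospital = pseudonymise("943 476 5919")
assert token_gp == token_hospital
assert token_gp != pseudonymise("943 476 5920")
```

Because linkage depends only on token equality, datasets from different care settings can be joined on the token column while the secret key stays with the de-identification service.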

Privacy worries

More than two years ago, the news of Google-owned AI firm DeepMind’s collaboration with the Royal Free NHS Trust caused consternation among privacy experts.

The Streams project – an app to help monitor patient records, analyse blood test results and detect kidney issues, among other things – was criticised by New Scientist. The publication suggested that the volume of data collected far outstripped the information required for the app’s purposes.

In 2017, the Information Commissioner’s Office (ICO) found that an NHS Trust broke the law by sharing sensitive patient data with DeepMind, in violation of the UK Data Protection Act 1998.

The ICO also separately ruled that the transfer of 1.6m patient records was not proportionate to what was required to test the Streams app.

Monopolising healthcare?

Separately, external reviewers appointed by DeepMind to report on its operations have flagged a series of risks and concerns, including the potential for DeepMind Health to “exert excessive monopoly power”.

According to the report, FHIR (Fast Healthcare Interoperability Resources), the standard DeepMind uses for the Streams app, works with an open API, but the contract between DeepMind and the NHS Trust in question funnels connections through the AI firm’s own servers, barring connections to other FHIR servers.
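The interoperability point is that FHIR standardises the resource paths, not the server: a conformant client can read the same resource from any FHIR server just by changing the base URL. A minimal sketch (the server URLs and function name below are hypothetical, used only to show the standard `[base]/[type]/[id]` read pattern):

```python
# Hypothetical sketch of a standard FHIR REST read URL. In an open FHIR
# API, only the server base URL differs between providers; the resource
# paths follow the spec's [base]/[type]/[id] pattern, so a client is not
# technically tied to any single vendor's servers.
def fhir_read_url(base_url: str, resource_type: str, resource_id: str) -> str:
    """Build a standard FHIR read URL: [base]/[type]/[id]."""
    return f"{base_url.rstrip('/')}/{resource_type}/{resource_id}"

# The same client logic works against any conformant server (example hosts):
url_a = fhir_read_url("https://fhir.example-trust.nhs.uk/", "Patient", "123")
url_b = fhir_read_url("https://fhir.another-vendor.example/", "Patient", "123")
```

A contract that restricts which servers may be contacted therefore locks clients in at the commercial layer even though the technical layer is interoperable, which is the panel’s concern.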

This lack of interoperability has caused worry among the independent review panel, which wrote: “We do not want to see DeepMind Health putting itself in a position where clients such as hospitals find themselves forced to stay with DeepMind Health even if it is no longer financially or clinically sensible to do so; we want DeepMind Health to compete on quality and price, not by entrenching legacy position.”

DeepMind’s unclear business model

Reviewers also flagged a lack of clarity around DeepMind’s business model, noting that the general public could suspect a hidden agenda or undisclosed profit motive, a perception that would be difficult to change once entrenched in people’s minds.

The audit also raised the opacity of DeepMind’s relationship with parent company Alphabet: “To what extent can DeepMind Health insulate itself against Alphabet instructing them in the future to do something which it has promised not to do today? Or, if DeepMind Health’s current management were to leave DeepMind Health, how much could a new CEO alter what has been agreed today?”

Many are calling on the UK government to implement a strong strategy to manage and regulate public sector data and deployment of AI in sensitive areas such as healthcare.


Ellen Tannam is a writer covering all manner of business and tech subjects

editorial@siliconrepublic.com