Opinion
NHS data is a goldmine. It must be saved from big tech
Health datasets play a vital role in medical research. If the US has its way, the UK could lose a valuable public resource
by James Meadway

As a society, we are finally acquiring a healthy scepticism about the use and abuse of our personal information. New polling conducted by YouGov for the Institute for Public Policy Research shows that 80% of the public want to see tighter rules applied to how the likes of Facebook and Amazon use their data. Over the weekend, it was revealed that US pharmaceutical companies have already been sold data relating to millions of NHS patients and that Amazon, incredibly, has been given free access to NHS data. Hidden away in the secret US-UK trade papers, leaked and revealed by Labour in November, is perhaps the biggest single threat to public data yet seen.
The potential threat to the NHS from a post-Brexit US trade deal is clear, and has become a major election talking point. But alongside the well-known dangers of accelerating privatisation and drug price hikes, there are risks to one of the UK’s most prized publicly owned resources. The NHS has one of the planet’s most valuable repositories of data: primary care records that cover sometimes decades of consistent, high-quality, trusted data on 55 million individuals, potentially covering their entire health histories. On top of that, an estimated 23 million care records document episodic treatments when patients receive secondary or specialist care. Accountants Ernst & Young estimate its value at £9.6bn annually.
For pharmaceutical companies, such comprehensive data is considerably more valuable than any sample. Large, clean, consistent and trusted datasets such as the NHS’s are a goldmine. Already, medical researchers are deriving useful results from machine-learning techniques – for instance, in providing rapid diagnoses of cataracts and other common eye diseases. And with progress in medical research increasingly driven by such techniques, the value of NHS data will only increase over time. It is a glistening prize for major health and pharmaceutical providers – or, indeed, big tech companies looking to move into the field.
That is why, as the leaked documents say, “obtaining commitments on the free flow of data is a top priority” for the US (you can find this on page 22). Free flows of data, including removing barriers to “data localisation”, imply that very sensitive health data could be taken and placed on servers outside of UK domestic law. And as Alan Winters, director of the Trade Policy Observatory at Sussex University, has explained, combined with the US insistence that copyright and patent law be strictly enforced under a future trade deal, this could mean a situation where the UK is unable to analyse its own health data without paying a royalty to Silicon Valley to use an algorithm.
Leaving the EU on the terms of the current withdrawal agreement – which imply a sharp break with the EU, including its various protections on data transfer – would leave the UK significantly weakened in any future trade deal with the US. We could be handing over an extraordinarily valuable public resource before we had really begun to appreciate its value.
Some of the risks are already understood. In a widely publicised case, the Royal Free Hospital was found in breach of data protection law in November 2017 by the Information Commissioner’s Office (ICO). Records of 1.6 million patients were handed over to Google’s DeepMind to help create an app, Streams, intended to alert clinicians rapidly to potential acute kidney injury. Following complaints, the ICO found that patients had not been fully informed about how their data was being used, and requested that the Royal Free tighten up its own data protection procedures.
But beyond privacy concerns, the value of the data should also concern us. We could be developing extraordinary new diagnostic tools or deriving new treatments, all from the NHS dataset. It holds a huge potential economic value. But if that value is simply captured by private concerns, it will be significantly lost to the rest of us.
The current approach – of allowing free access to data, and strictly enforcing intellectual property on insights and techniques derived from it – needs to be turned on its head. Localisation of data – and therefore democratic control and data sovereignty – must always be treated as an option in trade deals, while the patenting and copyrighting of algorithms and data-derived insights (particularly when taken from public data sources) should be weakened. There is no robust relationship between the enforcement of intellectual property and innovation in AI research, but patents on AI are rising globally. Instead of the encroaching privatisation of publicly held data – taken, in this case, under our noses – we should be looking to create a “digital commons”, putting the value to be derived from publicly held data into the hands of the public who created it.
Two immediate needs arise. The first is to bar attempts to undermine data localisation in any post-Brexit trade deals – the “free flow of data” should not be taken as an inherent good. The second is to move to a less restrictive patent and copyright regime in artificial intelligence, both in any future trade deals and when enforcing existing national laws. The public is already clear on the privacy and security implications of the data economy. We need now also to think about how best to maximise and protect its public value.
• James Meadway is an associate fellow of the Institute for Public Policy Research