NHS England disgracefully plans to share every patient’s medical history with third parties from 1 July.
Data will be collected by NHS Digital, which runs the health service’s IT system. And it will be available for academic and commercial bodies to access.
NHS Digital publishes a monthly list of who it shares its data with. But “opaque” NHS commercial relationships make it harder to track where data goes.
Cori Crider, co‑founder of campaign group Foxglove, said the NHS is “completely silent” on who would be given access to the new data.
“Is it pharma companies? The health arm of Google DeepMind? If you ask patients whether they want details of their fertility treatment or abortion, or results of their colonoscopy shared with those companies, they’re not going to want that,” she said.
The data of some 55 million people will include details of sexual health, as well as physical and mental health and criminal records.
Being able to opt out of having personal data shared will be a challenge for many.
Opting out means filling out a form and taking it to a local GP before 23 June.
Missing the deadline means you can only stop future data from being added to the system.
Although data is supposed to be anonymised, the NHS will have secret “codes” to unlock identities if there is a “valid legal reason”.
Foxglove has questioned the plan’s legality under data protection laws, and has criticised the limited time available to opt out of the proposals. And the MedConfidential group said, “If you do not act based on web pages on the NHS digital site and some YouTube videos and a few tweets, your entire GP history could have been scraped, never to be deleted.”
This is the second attempt at putting GP records on a central database. In 2013 the Care.data programme planned to scrape patient records.
Plans were abandoned in 2016 after confidentiality complaints.
The latest scandal highlights the intrusiveness of wide-scale data collection and handling. Valid research does need to look at health data. But first there is the issue of consent.
People should be willing participants in data collection and its use for medical research.
But that isn’t happening here.
As a result, people’s trust in the NHS could be affected.
This could cause serious damage to other programmes the NHS is rolling out.
The second factor is the secrecy of the data handling. A lot of the sharing and analysing of data is done behind closed doors.
If the systems storing the data aren’t fully secure, breaches are bound to happen.
In 2011 the data of over 4.5 million patients in the American health care programme Tricare was stolen due to an employee error.
And in the US last year, 155.8 million individuals were affected by data breaches.
Digital data collection can present new avenues for researchers to develop beneficial technologies.
But transparency is essential to maintain trust and safeguard people’s personal information.
The primary business objective of many leading tech companies is increasingly to harvest, analyse and sell their users’ personal data on a mass scale.
Free online services such as Google and Facebook come at a cost to the users’ privacy.
Google extracts data from web searches, social media likes, emails and more to monitor online behaviour and personalise adverts and content.
As more online platforms rushed to sell their users’ data, its collection expanded into more intrusive techniques.
Smart cars, household appliances, gas and electric meters and more collect, use and sell user information to third parties.
This could see, for instance, data on how someone drives sold to insurance companies to hike up prices.
Certain apps and platforms can also gather data, sometimes even when they’re not being actively used.
With access to real-time location, marketers can tailor ads to prompt you to visit nearby businesses.
Location information can also be stored to monitor where you’re likely to spend your money.
The state also uses data collection to its advantage.
Lawyers representing victims in the ongoing undercover policing inquiry say groups such as Black Lives Matter and Extinction Rebellion shouldn’t assume their data is secure.
Groups defined as “oppositional” have always been under surveillance by the authorities.
This is the latest method.
In 2013, 12 US National Security Agency employees were caught using government surveillance programmes to spy on the emails and calls of their former partners.
We cannot simply opt out of using sites such as Facebook as a way to end data collection.
The competitive nature of capitalism will see another firm take its place.
And social media is an important tool, especially during the coronavirus pandemic, when it kept people connected during mass movements.
But we should not shy away from fighting the infringement of people’s personal information. It’s right not to trust companies’ data collection systems.
Authoritarian governments can systematically oppress using data collection.
China uses an army of security personnel to compel ethnic minorities, such as the Uyghurs in Xinjiang province, to submit data.
A system developed by the China Electronics Technology Corporation (CETC) is used to monitor and predict people’s behaviour patterns, to suppress protests.
A CETC engineer said the programme’s intention was to “apply the ideas of military cyber systems to civilian public security”.
This system is part of the racist crackdown on Muslims, which has resulted in up to one million Uyghurs locked in internment camps.
Data surveillance is not exclusive to China.
In 2009 it was revealed that the British police had a huge database of protesters’ personal information including photographs, names and videos from demonstrations.
Many had never been arrested or charged.
Police claimed the database was used to catalogue criminal intelligence.
Police forces freely exchanged this data among themselves so they could monitor who was attending protests.