Google's DeepMind and Patient Data Usage: The Saga Continues

Controversy about Google’s DeepMind and its Streams mobile app for healthcare erupted again last Wednesday, when Bloomberg reported that DeepMind co-founder Mustafa Suleyman had been placed on leave.

Streams App and the Royal Free Hospital Trust

If you haven’t followed closely, it all started three years ago, when DeepMind and the Royal Free Hospital Trust announced a deal to develop and test Streams, an app designed to help doctors identify patients at risk of developing acute kidney injury. From the jump, the deal was roundly criticized by privacy advocates.

A March 2017 article titled “Google DeepMind and healthcare in an age of algorithms” by researcher Julia Powles and journalist Hal Hodson further fanned the flames by arguing that the collaboration “has suffered from a lack of clarity and openness, with issues of privacy and power emerging as potent challenges as the project has unfolded”.

In July 2017 the UK’s privacy regulator, the Information Commissioner’s Office (ICO), ruled that Streams’ partner, London’s Royal Free Hospital, had unlawfully given DeepMind access to over 1.6 million patient records.

The ICO’s Findings

As The Guardian reported, “The ICO ruled that testing the app with real patient data went beyond Royal Free’s authority, particularly given how broad the scope of the data transfer was.” More specifically, the ICO stated:

“A patient presenting at accident and emergency within the last five years to receive treatment, or a person who engages with radiology services and who has had little or no prior engagement with the Trust, would not reasonably expect their data to be accessible to a third party for the testing of a new mobile application, however positive the aims of that application may be.” (Emphasis is mine.)

Following the ruling, the Royal Free Trust was asked to commission a third-party audit of the trial. It was also required to complete a privacy impact assessment, set out how it would better comply with its duties in future trials, and establish a proper legal basis for the DeepMind project.

Google Announces Absorption of Its DeepMind Unit

In November 2018, controversy erupted again when Google announced its intention to absorb its DeepMind unit into Google Health. Seeking to allay fears about patient data usage, Dominic King, leader of DeepMind’s Streams team, told The Verge: “At this stage our contracts have not moved to Google and nothing has changed in terms of where the data we process is stored. Nothing changes until Trusts consent and undertake any necessary engagement, including with patients.”

Concerns about Google’s control of DeepMind resurfaced when the independent review panel overseeing its work in healthcare was disbanded. Panel members had reportedly been displeased by their lack of information access, authority and independence from Google.

Software as a Medical Device (SaMD)

The controversy goes beyond use of patient data, however. Although not widely reported, the intended use of Streams clearly brought it within the scope of the EU Medical Devices Directive. Yet DeepMind had not sought permission from the UK’s Medicines and Healthcare products Regulatory Agency (MHRA) to trial non-CE-marked Software as a Medical Device (SaMD).

Action in the Algorithmic Age

With the project dogged by controversy for over three years, it has long been clear that missteps and misunderstandings have occurred on all sides. The lessons are many for health systems, tech companies, regulators and individuals alike.

When it comes to patient data, vigilance, scrutiny and monitoring of proposed data-sharing partnerships are a must. As Powles and Hodson observed in their 2017 article, the transfer of population-derived data sets to “large private prospectors” raises critical questions for policymakers, industry and individuals.

The onus remains on all of us to respond at the “speed of relevance” in the algorithmic age.

Susan Ramonat