Artificial intelligence in healthcare: a catalyst for reform of the Personal Data (Privacy) Ordinance?

Introduction

Against the background of the emergence of artificial intelligence (AI) in the global healthcare industry, this article examines the challenges that the application of AI technologies in healthcare poses for personal data protection. We also consider the possible implications for the future law reform agenda for the Personal Data (Privacy) Ordinance (PDPO) in Hong Kong, with reference to the General Data Protection Regulation (GDPR) introduced by the European Union.

AI in Healthcare

AI is not yet a well-defined technology, and no universally agreed definition exists. It generally refers to the capacity of a computer program to perform tasks or reasoning processes that we usually associate with intelligence in a human being[1]. In healthcare, AI can support better care outcomes, patient experience and access to healthcare services, while improving the productivity and efficiency of care delivery[2]. AI has certainly played a significant role in the fight against the COVID-19 pandemic.

Whilst AI technology presents significant opportunities in healthcare, with diagnosis and screening being its most common uses[3], it requires massive volumes of personal and non-personal data, including health data extracted from medical records or research participants’ results, in order to draw inferences for health risk alerts and health outcome predictions. This immense volume of data is used to train machine learning models to recognize and learn patterns in the data, leading to more comprehensive and accurate diagnoses for individual patients.
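As a concrete illustration of this training step, the sketch below trains a simple risk-prediction model on tabular patient records. It is a minimal, hypothetical example: the dataset, feature names (e.g. creatinine level, systolic blood pressure) and choice of model are invented for illustration and are not drawn from any real clinical system.

```python
# Minimal, hypothetical sketch: training a risk-prediction model on
# invented tabular patient records. Not a real clinical system.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Each row is one (invented) patient record; columns are illustrative only.
records = pd.DataFrame({
    "age":            [34, 58, 71, 45, 63, 29],
    "creatinine":     [0.9, 1.4, 2.1, 1.0, 1.8, 0.8],   # mg/dL
    "systolic_bp":    [118, 142, 155, 130, 148, 115],
    "aki_within_48h": [0, 1, 1, 0, 1, 0],               # outcome label
})

X = records.drop(columns="aki_within_48h")
y = records["aki_within_48h"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0
)

# The model learns patterns linking patient features to outcomes;
# in practice this requires far larger volumes of patient data.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```

The point for data protection purposes is that every row is an individual patient’s personal data, and predictive accuracy generally improves with the volume of such records collected.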

Data protection

However, acquiring the necessary data may come at the cost of patient privacy, as can be seen from high-profile data breach incidents in recent years. In the UK, under a collaboration between Google Health and the Royal Free London NHS Foundation Trust (RFL) launched in November 2016, RFL transferred the personal data of 1.6 million patients to DeepMind, a Google subsidiary. The transfer, which aimed to support the creation of Streams, a healthcare app for the alert, diagnosis and detection of acute kidney injury, was found to be in breach of the Data Protection Act. Patients were not adequately informed that their data would be used for a purpose they would not have reasonably expected, namely testing of the app, rather than the direct care that formed the initial legal basis for the transfer. Following an investigation in 2017, the Information Commissioner’s Office obtained an undertaking from RFL to fulfil a number of requirements and implement the recommendations of a third-party audit.

The risk of high-profile data breaches, together with the black-box nature of AI[4], has made the public reluctant to grant access to their personal data. AI technology poses an unprecedented challenge to privacy: the machine learning that enables AI systems to make accurate predictions or decisions depends on collecting massive volumes of personal data for analysis. As David Kaye, UN Special Rapporteur, stated in his August 2018 report to the UN General Assembly, “Artificial intelligence challenges traditional notions of consent, purpose and use limitation, transparency and accountability – the pillars upon which international data protection standards rest”. An updated regulatory framework that sets the right direction for the development and application of AI technology is therefore much needed.

GDPR

When the GDPR took effect in May 2018, the European Union became the first jurisdiction to attempt to regulate AI through personal data protection legislation. The GDPR does not refer to AI specifically, but by regulating the processing of personal data regardless of the technology used, it captures any technology designed to process personal data, including AI. Health, genetic and biometric data are considered highly sensitive, and the GDPR assigns these types of personal data a more protective framework than that applicable to other types of personal data[5].

Under the GDPR, a data controller (e.g. a healthcare provider) must comply with the data protection principles when processing personal data. The controller must also have a legal basis for the processing, one of which is the freely given and explicit consent of the data subject (i.e. the patient), and additional specific conditions must be met before sensitive personal data may be processed[6]. Beyond these limitations on processing, the GDPR gives the data subject the right to restriction of processing[7], the rights of access, rectification and erasure[8], and the right to be informed of the logic involved where automated decision-making is used[9]. The regime also sets out the responsibilities and liabilities of both data controllers and data processors, encouraging them to take a preventative approach to personal data protection. By emphasizing accountability, fairness and transparency, it provides robust and comprehensive protection of personal data in the age of AI.

Personal Data (Privacy) Ordinance (PDPO)

A review of the PDPO in Hong Kong has been long awaited after many years of discussion. Unfortunately, the recent PDPO amendments, which came into effect on 8 October 2021, focus only on tackling the issue of doxing[10], without updates aimed at improving overall personal data protection in the age of AI. It is noted that in January 2020 the Hong Kong Government proposed amendments to the PDPO in its Review Paper to the Legislative Council, with the goal of bringing the ordinance in line with international standards and addressing the new challenges to personal data protection posed by the rapid development of information technology and advances in artificial intelligence[11]. The proposed amendments focus on widening the scope of protection of personal data by imposing more onerous obligations in respect of the data, directly regulating data processors and increasing the sanctioning powers of the Privacy Commissioner for Personal Data (PCPD). Among the key areas identified, the clarification of the definition of “personal data”[12] and the proposed direct regulation of data processors[13] are of particular significance.

“Personal data” under the PDPO is defined by reference to information that relates to an “identified” natural person. It is proposed that the definition be expanded to cover data relating to an “identifiable” natural person, i.e. data from which it is practicable to ascertain an individual’s identity, whether directly or indirectly. This proposed expansion is meant to strengthen data protection because “big data” analytics can involve processing large datasets that do not include the specific identities of any of the individuals concerned, yet such datasets may readily be combined with publicly available information to establish a data subject’s identity, raising personal data protection concerns.
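A minimal sketch of how such re-identification can occur is set out below; the data, column names and sources are entirely hypothetical. Joining a dataset stripped of names with a public register on a few shared quasi-identifiers is enough to link health information back to named individuals.

```python
# Hypothetical sketch of a re-identification ("linkage") attack.
# All data, names and columns are invented for illustration.
import pandas as pd

# A "de-identified" health dataset: no names, but quasi-identifiers remain.
health = pd.DataFrame({
    "postcode":   ["999077", "999078"],
    "birth_year": [1961, 1985],
    "sex":        ["F", "M"],
    "diagnosis":  ["acute kidney injury", "diabetes"],
})

# A publicly available register containing the same quasi-identifiers.
register = pd.DataFrame({
    "name":       ["A. Chan", "B. Lee"],
    "postcode":   ["999077", "999078"],
    "birth_year": [1961, 1985],
    "sex":        ["F", "M"],
})

# A simple join on the quasi-identifiers re-attaches names to diagnoses.
reidentified = health.merge(register, on=["postcode", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])
```

Because individuals can be singled out from even a few such attributes, data of this kind would fall within the proposed “identifiable” limb of the definition.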

Currently, under the PDPO, only data users, and not data processors, are held ultimately responsible to the PCPD and to data subjects for any breach of the PDPO. The high-profile data breaches of recent years show that breaches very often arise at the data processor level, especially during outsourced data processing. In practice, a data user such as a healthcare provider may have limited control over its outsourced data processors, and its protection derives mainly from the contractual terms agreed with the processor. To give greater protection to healthcare providers, and to ease public concerns about data security, the proposed direct regulation of data processors would make processors accountable for personal data retention and responsible for notification of data breach incidents.

Despite including proposals for some important amendments to the PDPO, we consider that the Review Paper should have gone further to address issues of great global concern in the age of AI, such as sensitive personal data, automated data processing and cross-border personal data transfers, all of which are now regulated under the GDPR. To avoid sacrificing personal data privacy as the application of AI increases, a future PDPO will need to provide for conditions on data processing, especially of sensitive personal data, and to mandate privacy impact assessments that push data users and processors to adopt appropriate technical safeguards in their data systems.

PCPD Guidance on the Ethical Development and Use of AI

Despite the lack of comprehensive legislative reform, the PCPD has kept pace with ongoing AI applications and developments by publishing its ‘Guidance on the Ethical Development and Use of Artificial Intelligence’ (Ethics Guidance). Based on seven principles, including accountability, fairness and data privacy, the Ethics Guidance recommends a risk-based approach to the development and use of AI systems, together with responsible practices to ensure that data for AI systems is collected, used and stored in accordance with the existing requirements of the PDPO[14]. It further provides that the data used to train AI models must be accurate and complete, free of unjust bias and unlawful discrimination. The proposed amendments in the PDPO Review Paper, together with the Ethics Guidance, would serve as a good starting point for further discussion of substantial data protection reform.

Comment

The opportunities that AI offers, particularly in the field of healthcare, have yet to be fully seized owing to significant data protection challenges. As the benefits we expect to gain from AI depend largely on large volumes of data, a robust and trustworthy regulatory framework is of paramount importance. In this respect, the protection of patients’ data privacy is as important as that of the interests of healthcare providers and their data processors. It remains to be seen whether further progress will be made on reform of the PDPO in light of the ongoing utilization of AI in different areas, notably the fight against the COVID-19 pandemic.


[1] A definition proposed in the European Commission’s Communication on AI is that “AI refers to systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals” (p. 1).

[2] EIT Health and McKinsey & Company, “Transforming healthcare with AI – The impact on the workforce and organisations”, March 2020.

[3] NHSX, “Artificial Intelligence: How to get it right”, October 2019, p. 19.

[4] The black-box nature of AI refers to the inability of humans to understand the reasoning process behind the predictions or decisions made by AI systems.

[5] Article 9, GDPR

[6] ibid.

[7] Article 18, GDPR

[8] Articles 15, 16 and 17, GDPR

[9] Article 13(2)(f), GDPR

[10] Doxing refers to the gathering of the personal data of target person(s) or related person(s) (such as family members, relatives or friends) through online search engines, social platforms and discussion forums, public registers, anonymous reports, etc., and disclosure of the personal data on the Internet, social media or other open platforms (such as public places).

[11] Review of the Personal Data (Privacy) Ordinance (LC Paper No. CB(2)512/19-20(03))

[12] ibid., p. 8

[13] ibid., p. 7

[14] PCPD, Guidance on the Ethical Development and Use of Artificial Intelligence, pp. 23-25
