Healthtech and its impact on medical malpractice claims in Australia and Hong Kong

Foreword

Over the last decade we have seen a transformation in the delivery of healthcare: telemedicine, virtual consultations, virtual wards, artificial intelligence, wearables and genomic medicine are all transforming the way healthcare practitioners make diagnoses and deliver care.

These technological advances will no doubt continue, bringing further opportunities and benefits. The challenge for healthcare providers globally is to ensure that the legal and regulatory framework is fit for purpose and that appropriate indemnity is in place should any claims arise as a consequence of the transformation of healthcare services.

A panel of our global healthcare lawyers from Australia and Hong Kong, Canada and Chile, England, France, Ireland and Spain recently explored healthtech in these jurisdictions through the lens of medical malpractice claims.

In this series of articles, we provide an overview of the key areas of discussion. We recently held our 2024 annual global healthcare conference programme, which has been recorded for you to watch.

Christopher Malla, Global Head of Healthcare

Australia


Virtual emergency departments (VEDs) – public health services which triage and treat patients with non-life-threatening conditions virtually through video consultation – are in use in several Australian states.

Potential impact on claims

The major benefits of VEDs include cost reductions, improved quality of care, avoidance of unnecessary transfers to the emergency department (ED) and reduced ED waiting times.

However, potential risks and impact on claims include:

  • Technical difficulties experienced by VED users.
  • Practitioners may not accurately assess a patient’s condition given the limitations of a video consultation compared with an in-person consultation and the physical examinations it allows. The same risk arises during the nursing triage stage of the virtual consultation.
  • The service also depends on patients accurately assessing their own condition as non-life-threatening.
  • The adequacy of follow-up after discharge from the VED, and the resulting potential exposure for practitioners.

Wearables

There are some notable advances in the area of wearable devices in Australia, but they are still very much at the research stage.

In terms of potential risks, who is responsible if the monitoring of a wearable fails or its data is not correctly interpreted? Causation issues may arise in the event of an adverse outcome for the patient: was the data misinterpreted by a clinician via telehealth, was the wearable not properly worn by the patient, or did the equipment fail?

Claims could be brought against healthcare practitioners for failing to properly interpret the data and to intervene quickly when it is irregular.

Artificial intelligence (AI)

AI in healthcare is also in its early stages in Australia.

In January 2024, the Australian government released its interim response to the Safe and Responsible AI consultation. The government is working with industry to develop mandatory guardrails for AI development and deployment, alongside a voluntary AI Safety Standard, voluntary labelling of AI-generated materials and the establishment of an expert advisory group.

Contact: Maya Parbhoo

Hong Kong


Whilst the use of technology in the Hong Kong healthcare market is not as mature as in the UK or other jurisdictions, it is becoming more established.

AI-enabled products and tools, such as those which assist with pulmonary evaluations, infection risk assessments, triage decisions and patient deterioration monitoring, are increasingly in use in Hong Kong.

New legislation specifically addressing the unique risks of using new technology in managing patients is pending.

In the meantime, the liability of healthcare providers and individual physicians is determined in accordance with the common law, existing legislation regulating medical devices and personal data, and the contracts entered into by the relevant parties. Physicians who recommend or advise patients to use new technology must not only discharge their common law duty but also observe the Code of Professional Conduct issued by the Medical Council, especially when providing treatment or virtual consultations to patients located in other jurisdictions.

Pending legislative control, the Hong Kong government has been exercising administrative control over the development and use of AI in various sectors. In August 2023, the government published the ‘Guidance on the Ethical Development and Use of Artificial Intelligence (2021) & Ethical Artificial Intelligence Framework (2023)’ to help establish a common approach and structure for governing the development and deployment of AI applications.

There are currently no reported claims involving the use of new technology in the delivery of healthcare in Hong Kong. However, key risks and claims specific to the use of wearables, AI technology and tele-robotic surgery would include:

  • Failure to keep the software/AI program updated, leading to malfunction of the AI system or medical device;
  • Failure to respond to or act upon abnormal data transmitted from wearable devices in a timely manner;
  • Failure to provide adequate training or conduct regular testing to ensure the accuracy of the medical device or system;
  • Failure to exercise independent judgement when using an AI system for diagnosis and treatment, for example where its output does not correlate with the patient’s clinical features; and
  • Failure to provide adequate information to patients, especially about any unique risks or limitations, when considering the use of new technology compared with traditional treatment.

Contact: Sandy Cho
