Artificial intelligence (AI) is rapidly transforming healthcare and medicine, and its evolution is exponential. Whilst its transformative impact may be overestimated in the short term, it is almost certainly underestimated in the long term. AI-driven diagnostic tools are already demonstrating the ability to improve accuracy, efficiency, and patient outcomes.
A shift in patients’ expectations of clinicians?
As AI becomes increasingly integrated into the delivery of healthcare, will we see a shift in patients’ expectations of clinicians regarding their professional responsibilities and standards of care? Are we moving towards a position where failing to use AI in diagnostics, for example, could be considered negligent?
The adage “doctor knows best” in respect of patient assessment and treatment was challenged in Montgomery v Lanarkshire Health Board [2015], which established patients’ autonomy to choose and their right to be informed of alternative treatments. It was subsequently held in McCulloch v Forth Valley Health Board [2023] that the alternative treatments to be counselled do not include those the treating doctor does not deem reasonable, provided that clinical judgement is supported by a responsible body of medical opinion.
But what of the rapid evolution of AI? Can it be said that doctors will continue to know best when, in certain use cases, there is growing evidence that the technology can outperform human judgement?
Should, in fact, a doctor have a duty to utilise AI when treating patients, much like they have a duty to have knowledge of current and accepted practices? Will a duty be imposed if AI technology is available in their setting? Will a two-tier system evolve in terms of AI availability?
Direction of travel
Whilst there is currently no duty to utilise AI in the treatment of patients, Lord Darzi’s comprehensive report, commissioned in 2024, highlighted a critical need for AI integration in healthcare and set out a clear direction of travel. Lord Darzi observed that “we are on the precipice of an artificial intelligence (AI) revolution that could transform care for patients”.
The revolution has, in fact, arrived. In July 2023, OpenEvidence announced that its AI large language model had become the first healthcare tool to score over 90% on the United States Medical Licensing Examination, and studies have reported that DeepMind’s breast cancer detection tool can be more sensitive than human interpretation.
The impact of AI, and the UK government’s desire to tap into its vast potential, cannot be overlooked. Among the benefits that the application of AI in the delivery of healthcare can bring are the following:
- Diagnostic accuracy: processing of voluminous medical literature, patient records, applying mathematical reasoning, and cross referencing patient data pools.
- Consistency: in contrast to human clinicians, the technology is not adversely affected by fatigue, pressure, stress, or variations in training.
- Efficiency: rapid analysis of even the most complex cases.
Comment
It is not a great leap to consider a scenario where, in retrospect, a patient’s diagnosis or treatment would have been different and the outcome improved with the adoption of AI. An erroneous diagnosis, a missed tumour, an overlooked fracture, a suboptimal drug regime. The list is endless.
It follows that it would not be a great leap for the duty of care upon a clinician - which currently requires only that they act in accordance with a responsible body of medical opinion capable of withstanding logical analysis - to broaden to include a duty to utilise, or even defer to, AI where it has been shown to be effective. At the very least, it seems likely that the clinician’s duty, in accordance with Montgomery principles, could in future be extended to an obligation to counsel a patient on AI options for diagnosis and treatment. Will the traditional “doctor knows best” adage one day become “AI knows best”?
The shift to AI-assisted healthcare is well underway and gathering pace. In discharging their duty to be abreast of medical developments in their field, clinicians are well advised to familiarise themselves with relevant AI advancements.
Whilst AI offers many opportunities for application in healthcare, the potential risks associated with its use will necessitate the implementation of certain safeguards. One significant safeguard is the role of the clinician and other healthcare practitioners as the ‘human in the loop’.