
The relentless healthcare AI rollout continues; whose fault is it anyway?

When AI medical diagnostic and treatment systems become mostly accurate, will medical practitioners grow complacent and fail to maintain their skill?

There are no solutions, there are only trade-offs; and you try to get the best trade-off you can get, that's all you can hope for – Thomas Sowell

In June 2023, the Department of Health and Social Care announced that NHS trusts can bid for funding to accelerate the deployment of AI tools to help manage winter demand, drawing on a ring-fenced public fund of £21m.

In July 2023, an All Party Parliamentary Group for Radiotherapy report recommended an immediate £4 million investment (£15-40 per patient) in AI software. The Microsoft/Addenbrooke's hospital AI program can calculate where to direct therapeutic radiation beams two and a half times faster than a clinician can.

Also in July 2023, East Kent Hospitals University NHS Foundation Trust announced that its clinicians are using AI software to help check chest x-rays for patients in its emergency departments.

The potential benefits of AI in the health sector are undeniable, hence the aggressive rollout. However, as Naik et al argue – and in recognition of the trade-off – AI systems can fail unexpectedly and drastically. AI can go from being extremely intelligent to extremely naive in an instant. The human decision-maker must be aware of the system's limitations, and the system must be designed so that it fits the demands of the human decision-maker. When a medical diagnostic and treatment system is mostly accurate, medical practitioners who use it may grow complacent and fail to maintain their skill.

Hedderich et al suggest that, to be on the safe side, physicians should always follow the standard of care and ensure that AI functions in clinical practice as a tool to confirm medical decisions, rather than one that seeks to improve care by challenging the standard of care.

A tipping point will come when AI accuracy clearly outperforms clinician decision-making and where a “black box” algorithm allows for no interpretability or transparency of the particular output/result. If something goes wrong, one of the many medico-legal arguments will be whether the erroneous output arose because the AI was improperly used or programmed by the clinician/hospital, or whether the defect lay in the product itself. There will be a tension between protecting clinicians, so that they are encouraged to adopt AI technology, and protecting manufacturers, who may feel stifled from innovating if they are to take on all the risk.

While there is at present no medico-legal determination on how this situation will be resolved, the rapid and increasing adoption of AI will no doubt result in cases reaching court soon. Initial case law is likely to suggest that liability rests with the clinical workforce, irrespective of an AI output, and that a doctor will have a duty of care to the patient to conduct their own inquiry into the reasonableness of the AI output. As time moves on and clinicians grow more comfortable relying on AI outputs, more radical solutions might be indicated, e.g. AI being granted a classification of “personhood” with an independent duty of care to patients. We are likely to be some time away from the latter.

Insurers will need to watch the AI healthcare space carefully to calibrate how risk is apportioned between care provider and AI manufacturer.

To learn more about how AI could affect the healthcare sector, or for advice regarding medical malpractice claims, contact our experienced specialist medical malpractice lawyers.
