Diagnosis of sepsis: AI to reduce risk?

Can artificial intelligence really be used to diagnose sepsis and, if so, what are the medico-legal considerations?

Global problem 

A Canadian study published in February 2023 reviewed 162 sepsis/infection medico-legal claims closed between 2011 and 2020. The review found that in “81% of cases, failing to consider sepsis or not reassessing the patient prior to discharge, contributed to injury/loss.” The paper concludes that “sepsis continues to be a challenging diagnosis for clinicians”.

This issue is not confined to Canada. In its 2015 paper, NHS England acknowledged “that sepsis is extremely difficult to identify for professionals…the National Confidential Enquiry into Patient Outcome and Death (NCEPOD) has recently found that in many cases, diagnosis of sepsis was delayed because clinicians did not record basic vital signs.” In March 2022, an NHS Resolution report into Emergency Department high value claims noted that 86% of diagnoses were incorrect and/or delayed, and that sepsis remains one of the most commonly missed diagnoses, despite the implementation of the National Early Warning Score (NEWS) system.

AI to help?

In winter 2022, Johns Hopkins University announced that it had developed a new artificial intelligence algorithm that diagnoses sepsis hours earlier than traditional methods, claiming that its success rate translates to a 20% reduction in associated deaths.

The Targeted Real-Time Early Warning System (“TRTEWS”) combines a patient's medical history with current symptoms and lab results to determine when someone is at risk of sepsis, and then suggests treatment protocols, such as starting antibiotics. In testing, the AI detected sepsis nearly six hours earlier, on average, than traditional methods.
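The internal workings of TRTEWS have not been published in full, so the following is a purely illustrative Python sketch of how a rule-based early warning tool of this general kind might combine vital signs and laboratory results into a risk score while recording which rules fired, keeping the “why” behind an alert visible to the clinician. The field names, thresholds, alert cut-off and the sepsis_risk function are all hypothetical and are not taken from the Johns Hopkins system.

from dataclasses import dataclass

@dataclass
class Observation:
    """One set of patient observations (all field names are hypothetical)."""
    heart_rate: float   # beats per minute
    temperature: float  # degrees Celsius
    resp_rate: float    # breaths per minute
    wbc_count: float    # white blood cell count, x10^9/L
    lactate: float      # serum lactate, mmol/L

def sepsis_risk(obs):
    """Score one observation against fixed, predefined rules.

    Returns the total score plus the list of rules that fired, so a
    clinician can see why the tool raised a flag (the interpretability
    point made about TRTEWS below).
    """
    rules = [
        (obs.heart_rate > 90, 1, "tachycardia (HR > 90)"),
        (obs.temperature > 38.0 or obs.temperature < 36.0, 1, "abnormal temperature"),
        (obs.resp_rate > 20, 1, "tachypnoea (RR > 20)"),
        (obs.wbc_count > 12.0 or obs.wbc_count < 4.0, 1, "abnormal white cell count"),
        (obs.lactate > 2.0, 2, "raised lactate (> 2 mmol/L)"),
    ]
    score = sum(points for fired, points, _ in rules if fired)
    reasons = [reason for fired, _, reason in rules if fired]
    return score, reasons

if __name__ == "__main__":
    obs = Observation(heart_rate=112, temperature=38.6, resp_rate=24,
                      wbc_count=14.2, lactate=2.8)
    score, reasons = sepsis_risk(obs)
    if score >= 3:  # illustrative alert threshold
        print(f"ALERT (score {score}): " + "; ".join(reasons))

The key design point in the sketch is that the output is not a bare score: the list of rules that fired travels with it, which is what makes the recommendation explainable to the treating clinician.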

The medico-legal considerations

Pertinent to the medico-legal analysis is confirmation from the developers at Johns Hopkins that the system allows doctors to see why the tool is making specific recommendations. This avoids the more difficult, albeit hypothetical, legal issues concerning deep learning AI: the ‘black box problem’.

Briefly, weak AI is so called because it relies on predefined rules and algorithms and is not capable of adapting or learning beyond its initial programming. The technology can therefore be seen as an assistive tool rather than one displacing the clinician's function, but it will inevitably give rise to various medico-legal considerations, such as:

  • A likely duty of care to ensure clinicians and other hospital staff are adequately trained in using the AI system and inputting data into it. Those entering patient data may be clinical support staff, and any errors in entering such data may arguably be inherently negligent (i.e. the data input is either correct or incorrect).
  • A likely duty of care to stress test and validate the AI system on an ongoing basis against hypothetical scenarios (a minimal illustration of such a harness follows this list).
  • How disputes between clinician and AI outputs are resolved in the clinical space. We anticipate claims will be brought whenever a clinician did not heed an AI output that is subsequently proved correct. While this ought not to affect the application of the Bolam/Bolitho legal tests, inherent litigation risk is inevitable where the AI has been proven correct.
  • The potential natural progression of over-reliance on the technology, leading to a deskilled medical profession. This is the converse of the situation above: clinicians who follow an AI output that is subsequently proven incorrect are also likely to face claims.
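As a minimal illustration of the ongoing stress testing contemplated in the second bullet, a validation harness might replay a fixed set of signed-off scenarios through the scoring rules and fail loudly if any expected alert decision changes. This sketch reuses the illustrative scorer above and assumes it has been saved as sepsis_scoring.py; that module name, the scenarios and the alert threshold are all invented for illustration.

# Illustrative regression harness: replay fixed clinical scenarios through
# the scoring rules and fail loudly if any expected alert decision changes.
# Assumes the earlier sketch is saved as sepsis_scoring.py (hypothetical name).
from sepsis_scoring import Observation, sepsis_risk

ALERT_THRESHOLD = 3  # illustrative; mirrors the threshold in the sketch above

# Synthetic scenarios paired with the alert decision expected of the system.
SCENARIOS = [
    ("textbook septic presentation",
     Observation(heart_rate=118, temperature=39.1, resp_rate=26,
                 wbc_count=15.0, lactate=3.4), True),
    ("healthy adult",
     Observation(heart_rate=72, temperature=36.8, resp_rate=14,
                 wbc_count=7.0, lactate=1.1), False),
    ("borderline: tachycardia only",
     Observation(heart_rate=95, temperature=37.0, resp_rate=16,
                 wbc_count=8.0, lactate=1.4), False),
]

def run_validation():
    for name, obs, should_alert in SCENARIOS:
        score, reasons = sepsis_risk(obs)
        alerted = score >= ALERT_THRESHOLD
        assert alerted == should_alert, (
            f"{name}: expected alert={should_alert}, got score {score} "
            f"({'; '.join(reasons) or 'no rules fired'})"
        )
    print(f"All {len(SCENARIOS)} scenarios passed.")

if __name__ == "__main__":
    run_validation()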

Summary

The recently published Canadian study is a reminder of the challenges of diagnosing and treating sepsis in a timely manner, which continues to be a rich source of medico-legal claims globally.

TRTEWS may have the capability to reduce claim volumes in relation to delayed diagnosis of sepsis.

However, if hospitals do adopt TRTEWS, they will have to ensure that their staff are well trained in using the new system, as mistakes in data entry are likely to be viewed dimly. More pertinently, where there are discrepancies between AI outputs and clinical judgement, a claim is likely to be all but inevitable: either for failing to follow an AI output subsequently proven correct, or for following an AI output subsequently proven incorrect.

We have specialist medical malpractice lawyers with vast experience in handling claims of this kind and the interplay with healthcare regulatory matters such as inquests and professional disciplinary issues.
