Doctors are rigorously trained to become diagnosticians and to prescribe only the most effective treatments. It is only in the last two decades that we have also focussed on doing these tasks safely. Recent estimates suggest that up to 10% of patients experience some form of treatment-related harm during a hospital admission [1]. Iatrogenic harm is thus a major challenge in healthcare, and progress in reducing rates of such harm remains painfully slow.

When we think of AI, we are naturally drawn to its power to transform diagnosis and treatment planning, and we weigh up its potential by comparing AI capabilities to those of humans. We have yet, however, to look seriously at AI through the lens of patient safety.

What new risks do these technologies bring to patients, alongside their obvious potential for benefit? Further, how do we mitigate these risks once we identify them, so we can all have confidence the AI is helping and not hindering patient care?


A good place to start might be Asimov’s classic laws of robotics. They stipulate first and foremost that an AI should not injure a human nor allow them to be injured. They also require AI to not harm humanity in general, and to always do what it is told as long as it does not transgress these other rules.

The problem for healthcare is that it is very easy to imagine scenarios in which the right answer to a problem is to break Asimov’s rules.

Consider, for example, a patient at the end of their life. The right decision might be to withdraw care and allow the patient to die, avoiding prolonged suffering and unnecessary further treatment. Is it ok to design an AI that decides on withdrawal of care? Equally, is it right for an AI to avoid that choice and prescribe the continuation of painful and futile care?


How would Asimov’s rules handle situations like battlefield or disaster triage, where snap decisions are made about who gets treatment and who does not?

Clearly these are deep questions of ethics, and humans already struggle with the right answers to such questions. The potential to delegate some or all of such decisions to an AI simply adds another new layer to the ethical challenge.

Developing clear guidelines on the right ethical stance for such challenging questions is thus a priority – especially since these questions arise every day in healthcare.

Alongside the ethical risks, AI comes with inbuilt patient safety risks that need to be understood and mitigated.


Firstly, it is well known that if an AI system is created by applying machine learning methods to data, then the quality of its decisions depends strongly on that data. If training data is skewed because it contains an unbalanced sample of a population, this can easily translate into biased decisions [2]. As a result, an AI might overdiagnose a condition, or miss rare conditions, because these are the distorted lessons it learnt from the training data. It might recommend treatments not because they are right for a patient, but because they are the most common treatments it has seen for similar conditions.
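To make the mechanism concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn, which the article does not mention, and entirely simulated data): the same learning algorithm, trained on a sample in which a condition is rare, will usually miss that condition far more often than one trained on a balanced sample.

```python
# Illustrative only: how a skewed training sample can bias a model against a rare condition.
# All data are simulated; no clinical dataset or real model is implied.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

def sensitivity_after_training(condition_fraction):
    # Simulated patients: class 1 = "has the condition", class 0 = "does not".
    X, y = make_classification(n_samples=10000, n_features=20,
                               weights=[1 - condition_fraction, condition_fraction],
                               random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    # Sensitivity: how often patients who truly have the condition are flagged.
    return recall_score(y, model.predict(X))

print("balanced training sample:", sensitivity_after_training(0.5))
print("skewed training sample:  ", sensitivity_after_training(0.01))
```

The exact numbers depend on the simulation, but the pattern is the one described above: the under-represented condition is the one the model learns to overlook.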

It is also easy for poor human practices to be captured in data and then codified through machine learning into de facto guidelines for the management of new patients. Just because we have done things one way in the past does not mean they are necessarily right for future patients.

A different set of safety problems lurk in the way humans and AIs interact with each other and influence each other’s behaviour.

In particular, there is a real risk that humans will become over-reliant on AI, trusting and uncritically following its recommendations.

Known as automation bias [3], this willingness to suspend human effort in verifying whether a computer’s recommendation is appropriate is already well documented in current-generation decision support systems, and can affect crucial tasks such as medication prescription [4].

The remedies to automation bias are currently imperfect and depend on training humans to remain vigilant as they use technology to support their decisions.

Staying ‘in the loop’ is especially important in real-time situations such as surgery and patient management in the ICU, because it is difficult to respond rapidly to a crisis if humans have not been keeping their mental model of the patient up to date.


Risks to patient safety often arise because of unexpected variations in the way care is delivered, perhaps because of human workarounds, or simply a failure to check for events because we did not anticipate that humans would behave in a certain way.

This variation in behaviour across different settings is also a feature of information technology [5]. We see, for example, that the very same technology, when implemented in different settings, achieves different outcomes. In one study of the same physician order entry system, there was as much as a 40-65% difference in clinical outcomes across sites [6].

This variation arises for many reasons stemming from the way a specific technology must integrate into different environments, and thus reflects the diversity in the technology ecosystem of different sites, as well as differences in human practices and patient characteristics.

We will see exactly the same variation in outcome with AI. We should expect that the same algorithm, making the same recommendations on the same data, will achieve substantially different outcomes in different settings. These variations will arise for many reasons, including differences in patient disease profiles, human culture and decision processes, and the way AI is integrated into workflows.
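As a purely illustrative sketch, consider one mechanism behind such variation: an identical algorithm, with fixed sensitivity and specificity, has a very different positive predictive value at sites with different disease prevalence. The figures and site names below are invented for illustration and do not come from the cited studies.

```python
# Illustrative only: the same algorithm (fixed sensitivity and specificity)
# produces very different proportions of false alarms at sites with
# different disease prevalence. All numbers are hypothetical.
def positive_predictive_value(sensitivity, specificity, prevalence):
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

for site, prevalence in [("tertiary referral centre", 0.20),
                         ("community clinic", 0.02)]:
    ppv = positive_predictive_value(sensitivity=0.90, specificity=0.90,
                                    prevalence=prevalence)
    print(f"{site}: {ppv:.0%} of positive calls are true positives")
```

Even before differences in workflow, culture or integration are considered, the same tool will flag mostly true cases in one setting and mostly false alarms in another.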


These are just some of the patient safety challenges we face as AI becomes more embedded in clinical decision making. Dealing with them will require educational programs for all clinicians to explicitly cover not just patient safety, but the risks of information technology and AI. It will require monitoring of AI behaviour and clinical outcomes, and a rigorous safety-first approach to AI design, build and integration.

We are at a moment in time when our enthusiasm for AI is high, and our desire to reap early benefits from it is strong. In such circumstances, making AI safe can seem an impediment to innovation and the delivery of improved patient care.

However, as we have seen repeatedly over the last two decades, making care safe is not optional. Unsafe care and the resultant patient harm are not just disasters for patients. Unnecessary patient harm can quickly escalate into damage to individual clinicians’ careers, and into harm to healthcare providers and technology developers.

There is no more sure-fire way to see AI face a backlash than to have its use harm patients. It is because of our desire to see healthcare improved through AI that we must make AI safety our primary concern.


References

  1. Braithwaite J, Coiera E. Beyond patient safety Flatland. J R Soc Med 2010;103(6):219-25. doi: 10.1258/jrsm.2010.100032
  2. Cabitza F, Rasoini R, Gensini GF. Unintended consequences of machine learning in medicine. JAMA 2017;318(6):517-18.
  3. Lyell D, Coiera E. Automation bias and verification complexity: a systematic review. Journal of the American Medical Informatics Association 2016;24(2):423-31.
  4. Lyell D, Magrabi F, Raban MZ, et al. Automation bias in electronic prescribing. BMC Medical Informatics and Decision Making 2017;17(1):28. doi: 10.1186/s12911-017-0425-5
  5. Kim MO, Coiera E, Magrabi F. Problems with health information technology and their effects on care delivery and patient outcomes: a systematic review. Journal of the American Medical Informatics Association 2017;24(2):246-50.
  6. Metzger J, Welebob E, Bates DW, et al. Mixed results in the safety performance of computerized physician order entry. Health Affairs 2010;29(4):655-63.

 

By Enrico Coiera, PhD

Professor Coiera is an internationally recognised research leader in digital health and health systems science. He has a long reputation for opening up new avenues of research in his field, allowing others to follow and extend his work. He first made his reputation in the mid-1990s, when he was arguably the first scientist in his field internationally to identify the huge potential of the World Wide Web for health service transformation, through a series of seminal papers in the British Medical Journal. His ground-breaking research into clinical communication was the first to describe the interruptive, multitasking nature of clinical work and its implications for patient safety and technology design. He is a co-author of the seminal paper in digital health safety, published in the lead journal JAMIA in 2003, which is now the most highly cited paper in that journal. His 30-year career includes a decade in the Hewlett-Packard Research Labs in Bristol, where he led research and development programs in clinical communication, intelligent patient monitoring, and the anaesthesia workstation project.