A few weeks ago, AIMed reported on physicians’ dilemma in using artificial intelligence (AI) for end-of-life conversations. Some physicians confessed that, at times, they were surprised when the algorithm flagged a patient who appeared healthy but was at high risk of dying. At other times, they were perturbed when a repeatedly flagged patient expressed a wish to continue treatment. The challenge has now extended to other areas of medical practice.

A growing number of medical institutions in the US rely on AI-driven decision support tools to predict whether patients will deteriorate or be at risk of complications over the course of treatment, and whether they can be discharged without the likelihood of readmission. While the efficacy of some of these tools remains questionable, the biggest concern stemming from this trend is that patients and their families are unlikely to be informed that their care was partly influenced by algorithms.

A topic that hasn’t been widely discussed

Glenn Cohen, Professor at Harvard Law School, recently wrote an article to shed light on this scarcely noticed topic. He told Stat that hospitals and clinicians often operate “under the assumption that you do not disclose, and that’s not really something that has been defended or really thought about”. As a result, although AI adoption in clinical settings is growing, the matter has received little attention even in the medical literature.

Yet when, and about what, should patients be informed? Clinicians worry that bringing up the topic may divert patients’ attention from practical plans to improve their health and quality of life. Some physicians also believe that they, not the AI, make the final decisions, so such an “AI talk” is unnecessary.

Some healthcare systems, aware that deploying AI-driven tools in clinical settings is still relatively new, regard them as a way to make hospitals more efficient rather than as part of routine care. On this view, patients gave their consent by virtue of being admitted, and since the AI models are part of hospital operations, there is no need to inform patients explicitly.

On the other hand, some hospitals do inform their patients, explaining why and how AI algorithms are used as part of clinical procedures, and require patients to sign an agreement. Usually, since hospitals purchase the AI models from developers, it is the hospitals that decide on the parameters as well as the legal and ethical guidelines. But because AI-driven tools still sit in a regulatory grey zone, not many hospitals are willing to take the risk.

When decisions to withhold information backfire

Regardless of the approach, staying silent is never ideal. Like many medical procedures, using AI-driven tools in clinical settings carries a certain level of risk. Earlier, AIMed cited a deterioration index generated by electronic health records (EHR) provider Epic. The index gives clinicians a quick overview of the risks each patient faces and is now used by dozens of hospitals in the country to forecast which COVID-19 patients will take a turn for the worse. However, these healthcare institutions are arriving at different conclusions about whether the tool is effective and accurate.

At the moment, there are no defined legal guidelines on who should be held responsible (the AI model, the physician, or the developer) should a mistake or false recommendation occur. Moreover, most algorithms are still riddled with biases, which may result in denying patients the care they deserve.

Some patients feel their rights are undermined if they are not informed of these risks in the first place. Cohen suggests thoughtful discussions among healthcare institutions, AI developers, and patients. He believes that if this is not done early, the lack of transparency and communication may breed distrust in the system over the long run and eventually hamper AI’s progress.

*

Author Bio

Hazel Tang: A science writer with a data background and an interest in current affairs, culture, and the arts; a no-med from an (almost) all-med family. Follow on Twitter.