Recently, some hospitals have started deploying artificial intelligence (AI) to decide when patients should begin palliative care. These systems screen patients' medical records to estimate their chances of surviving the coming year, then notify the physician in charge by email, urging them to brief patients and their families on the kind of care and goals they would like to pursue should their condition continue to deteriorate.

Some physicians are now questioning the approach. They are unsure whether such algorithms are reliable and sensitive enough to justify embarking on a delicate conversation and making choices that may have significant implications. To find out more, Stat spoke with 15 clinicians, researchers, and AI developers at three medical institutions where AI models are being rolled out for end-of-life care.

A tug of war between professional judgements and AI predictions

Although medical professionals feel that weaving new algorithms into their already complex and hectic workflows is challenging, early data suggest that AI does help trigger the conversation before it is too late. It appears to be a technology badly needed by the healthcare system, both to keep resources from stretching too thin and to assist healthcare workers, who often do not receive sufficient training to speak with seriously ill patients about the option of palliative care.

Some physicians added that they have long been doing advance care planning, and that the AI is helping them make sharper judgements and recognise their blind spots, for example in selecting the patients most likely to benefit from the conversations. In the past, ad hoc programs run by individual hospitals were responsible for identifying such patients. That process was less systematic and prone to bias, which meant that at any given time only some of the patients who would benefit from the conversation were reached.

AI is standardizing the process, but it does not disclose the probability or calculation behind its judgement that a patient may die in the next 12 months. Most hospitals with the system in place also remind their physicians not to tell patients that they were identified by an AI. Most importantly, physicians find themselves caught in a tug of war between their own professional judgements and an AI prediction. It is very hard for them to bring up the palliative care option, especially when a patient is not critically ill.

The absence of a centralized system

Some physicians confessed that they are sometimes surprised by the patients the algorithm flags. At other times, they have to decide what to do when they disagree with the AI. Even when they do agree with the algorithm, physicians still have to make up their minds about the best time to introduce the topic to patients and their families. A new struggle begins when patients express their wish to continue medical treatment rather than start palliative care.

Besides, there is no centralized AI system at the moment. Some institutions run the models on cancer patients to identify those with at least a 10% chance of dying in the next six months. Others compare a patient with similar patients in the same database and flag to physicians the top 1% or 2% most likely to die in the next 30 days. Some will not flag more than six patients at once, or will suppress predictions that might cause confusion.
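None of these institutions' models or thresholds are public, but the three flagging rules described above, a fixed risk threshold, a top-percentile cut-off, and a cap on the number of flags, can be sketched in a few lines. The Python sketch below is a hypothetical illustration only; the Patient fields, function names, and probabilities are assumptions, not any hospital's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    record_id: str
    p_death_6mo: float   # model-estimated probability of death within 6 months
    p_death_30d: float   # model-estimated probability of death within 30 days

# Rule 1 (threshold): flag patients with at least a 10% risk of dying in 6 months.
def flag_by_threshold(patients, threshold=0.10):
    return [p for p in patients if p.p_death_6mo >= threshold]

# Rule 2 (percentile): flag the top 1-2% of comparable patients by 30-day risk.
def flag_by_percentile(patients, top_fraction=0.02):
    ranked = sorted(patients, key=lambda p: p.p_death_30d, reverse=True)
    k = max(1, int(len(ranked) * top_fraction))
    return ranked[:k]

# Rule 3 (cap): never surface more than six flags to a physician at once.
def cap_flags(flags, max_flags=6):
    return flags[:max_flags]

if __name__ == "__main__":
    # Synthetic cohort for illustration; real systems would read risk scores
    # produced by a model over the electronic medical record.
    cohort = [Patient(f"pt{i:03d}", i / 200, i / 500) for i in range(100)]
    flagged = cap_flags(flag_by_percentile(flag_by_threshold(cohort)))
    for p in flagged:
        print(p.record_id, round(p.p_death_30d, 3))
```

Even in this toy form, the design choice is visible: each rule trades sensitivity against the number of conversations a physician can realistically have, which is partly why the institutions' systems behave so differently.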

Nevertheless, this is not regarded as a major hurdle, because the predictions surface patients at elevated risk of dying, not patients who are certain to die. Still, these AI systems need to undergo more rigorous testing, comparing outcomes for randomly assigned patients rather than comparing outcomes at a given institution before and after the model was implemented. At the end of the day, they will be used to identify which patients would benefit most from an end-of-life conversation and palliative care, not to pinpoint the odds of their dying.

*

Author Bio

Hazel Tang is a science writer with a data background and an interest in current affairs, culture, and the arts; a no-med from an (almost) all-med family. Follow on Twitter.