We have collaborated with a large hospital in the Pacific Northwest to operationalize and deploy a predictive tool for all-cause 30-day readmission risk using an ML platform. The tool is currently embedded in the clinical workflow of nurses at several outpatient internal medicine clinics. These nurses use the tool when evaluating members of their panels, or of other cohorts, who have recently been or will soon be discharged from the hospital. Our model calculates a color-coded risk score between 0 and 1 for each patient. Feedback from end users and other providers indicates a clear preference for interpretable models, particularly when making care decisions: clinical staff describe wanting to understand the insights that compose a patient-specific risk score.

Model trust is integral if clinicians and other hospital staff are to fully commit to integrating machine learning (ML) into their workflows and decision making. Clinicians are often frustrated by the lack of transparency in model results and would prefer a way to compare the factors that influence an ML model against their own clinical reasoning. It can also be clinically useful to know which contributing factors are actionable or modifiable. A class of interpretability approaches, which includes Bayesian models and Local Interpretable Model-Agnostic Explanations (LIME), addresses this by explaining the rationale behind individual predictions: each approach surfaces a humanly interpretable explanation around the specific instance being predicted. For a clinical problem such as reducing 30-day hospital readmissions, these approaches can be particularly useful. We plan to incorporate explanation-based models into our current predictive tool in order to surface to end users the humanly interpretable reasons for a patient's score. We hypothesize that this may lead to more effective discharge planning, better coordination of care, and, ultimately, improved patient care.
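The local-surrogate idea behind LIME can be sketched in a few lines: perturb the patient's feature vector, query the black-box model on the perturbations, weight each perturbation by its proximity to the patient, and fit a weighted linear model whose coefficients approximate each feature's local contribution to the risk score. The sketch below is illustrative only; `predict_risk`, the feature names, and the Gaussian perturbation scheme are hypothetical stand-ins, not the deployed tool or the official `lime` package.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the deployed readmission model's scoring function;
# in practice this would be a call to the ML platform's prediction endpoint.
_true_w = rng.normal(size=5)

def predict_risk(X):
    """Black-box risk score in [0, 1] for each row of X."""
    return 1.0 / (1.0 + np.exp(-X @ _true_w))

def explain_instance(x, predict_fn, n_samples=5000, kernel_width=0.75):
    """LIME-style local surrogate around a single instance x."""
    # 1. Perturb the instance with Gaussian noise.
    Z = x + rng.normal(scale=1.0, size=(n_samples, x.size))
    y = predict_fn(Z)
    # 2. Exponential kernel: perturbations near x get higher weight.
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / (kernel_width ** 2))
    # 3. Weighted least squares (with intercept) via the normal equations.
    A = np.column_stack([np.ones(n_samples), Z])
    WA = A.T * w                       # broadcasts w across columns
    coef = np.linalg.solve(WA @ A, WA @ y)
    return coef[1:]                    # per-feature local contributions

# Illustrative feature names for a readmission-risk setting.
features = ["prior_admits", "age", "num_meds", "los_days", "a1c"]
x_patient = rng.normal(size=5)
contrib = explain_instance(x_patient, predict_risk)
for name, c in sorted(zip(features, contrib), key=lambda t: -abs(t[1])):
    print(f"{name}: {c:+.3f}")
```

Sorting by absolute contribution mirrors how such explanations are typically surfaced to clinicians: the few features that most drive this patient's score, each signed to show whether it raises or lowers the predicted risk.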


Author: Muhammad Ahmad

Coauthor(s): Carly M. Eckert, MD, MPH; Greg McKelvey, MD, MPH; Ankur Teredesai, PhD

Status: Work In Progress