Deep learning is providing exciting solutions for medical image analysis problems. Healthcare is the hottest AI category for venture capital deals: healthcare startups have raised over $1.8 billion across 270 deals since 2012, and of the companies that raised capital after January 2015, over one-third are working on imaging and diagnostics solutions (CB Insights). As medical and healthcare professionals increasingly rely on deep learning-based imaging and diagnostics systems, it is important to know what the model does not know. Despite careful pre-training and testing, it is impossible to prove that a model will work correctly on future unseen images with unfamiliar features. A deep learning system that encounters patient cases outside its original data distribution can produce incorrect diagnoses and a higher rate of false positives or false negatives. When diagnosing illnesses such as pneumonia or diseases such as breast cancer, this inaccuracy can lead to ineffective treatment plans if physicians do not realize that their deep learning system made a mistake.

Our mission is to protect end users such as doctors from such runtime errors (e.g. an incorrect analysis or diagnosis) and their unintended consequences (e.g. patient death). SafeguardAI is a fully autonomous monitoring tool that startups and research groups can integrate into their products; it quantifies a model's uncertainty by the amount of out-of-distribution data (i.e. surprises) it detects at runtime. To offer more transparency to end users of DL-based imaging systems, SafeguardAI can identify when the model observes something unfamiliar, and what that something is, and communicate, "I'm not sure if I am correct." Given a measure of uncertainty, medical professionals will know when they should closely re-examine their patient's medical images or seek a second human expert opinion.
SafeguardAI's intelligent agents capture discrepancies between the current observation and the model's trained knowledge by comparing the runtime response of the DL network against its learned response during training. SafeguardAI enables systems to autonomously self-monitor for runtime errors without any additional engineering overhead to alter their current models. To improve the quality of future datasets for retraining, SafeguardAI automatically labels unfamiliar data and trends as "surprises" and data with highly correlated features as "redundant," preventing potential model bias. Currently, SafeguardAI handles only uncertainty related to out-of-distribution data and epistemic uncertainty; however, we are extending SafeguardAI to also detect and explain uncertainty in the model parameters that best explain the observed data, structural uncertainty, and aleatoric uncertainty.
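To illustrate the idea of comparing runtime responses against responses learned during training, the sketch below shows one generic way such a monitor could work; it is an assumption for illustration, not SafeguardAI's actual method. It fits a Gaussian to a network's training-time feature vectors, flags runtime samples that fall far from that distribution (by Mahalanobis distance) as "surprises," and flags near-duplicates of existing reference samples (by cosine similarity) as "redundant." The function names (`fit_reference`, `label_sample`) and the thresholds are hypothetical.

```python
import numpy as np

def fit_reference(train_feats):
    """Summarize training-time feature responses as a Gaussian (mean, inverse covariance)."""
    mu = train_feats.mean(axis=0)
    cov = np.cov(train_feats, rowvar=False)
    # Small ridge term keeps the covariance invertible.
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
    return mu, cov_inv

def mahalanobis(x, mu, cov_inv):
    """Distance of a runtime feature vector from the training distribution."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

def label_sample(x, mu, cov_inv, ref_feats,
                 surprise_thresh=3.0, redundant_thresh=0.999):
    """Label a runtime sample as 'surprise', 'redundant', or 'familiar'."""
    # Far from the learned distribution -> out-of-distribution "surprise".
    if mahalanobis(x, mu, cov_inv) > surprise_thresh:
        return "surprise"
    # Near-duplicate of a reference sample (cosine similarity) -> "redundant".
    sims = ref_feats @ x / (np.linalg.norm(ref_feats, axis=1)
                            * np.linalg.norm(x) + 1e-12)
    if sims.max() > redundant_thresh:
        return "redundant"
    return "familiar"
```

In this sketch, "surprises" would be routed to a human expert and queued for retraining, while "redundant" samples would be dropped before retraining to avoid over-weighting already well-represented cases.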

SafeguardAI is being developed by EpiSci, an R&D company that works on artificial intelligence solutions for commercial and defense applications. For more information, please visit and


Author: Epiphany Ryu

Coauthor(s): Bo Ryu, Ph.D., Nadeesha Ranasinghe, Ph.D., Wei-Min Shen, Ph.D.

Status: Work In Progress