Problem

Radiologists are visual people. We're the antithesis of a hematologist: if we can't see it macroscopically, it probably doesn't exist. Certainly a bias, but not an entirely unreasonable one given the strength of current image-based diagnosis.


However, radiology is probably in the earliest stages of its biggest change since the introduction of CT and MRI in the 1980s: the impending entry of narrow artificial intelligence algorithms, namely convolutional neural network classifiers, into radiologic practice for both lesion detection and workflow modification. This provokes equal doses of euphoria and terror among medical imagers: euphoria at the possibility of enhancing our diagnostic skills to 'super-human' levels, and terror at being replaced by a computer. Both possibilities are extreme, and neither is imminent. The future, ten or more years from now, probably lies somewhere in the middle, but I remain on the optimistic side: a world where radiologists are augmented by AI tools that help them practice better than they could on their own.


My radiologist colleagues are visual people, and for them, seeing is believing. It's widely (and inaccurately) perpetuated that the 'black box' of AI is impossible to understand, creating a dilemma for most radiologists: how will they be expected to alter their practice based upon something they can't understand and can't see?

Fortunately, both assertions are false. Radiologists can understand deep learning! But knowing how to train a classifier in PyTorch should not be a necessary hurdle just to use a narrow AI, and let's face it, understanding one isn't the same thing as being a full-stack 'dev'. It is the developer's responsibility to organize the AI's output in ways radiologists, and other physicians, can understand.

 

Solutions

The first efforts toward visualization came from Zeiler, Fergus, and Simonyan. Zeiler and Fergus' 2014 ECCV paper, Visualizing and Understanding Convolutional Networks (1), uses a deconvolutional model and systematically occludes portions of the input image while monitoring the classifier's output. These occlusions yield insight into which portions of the image are necessary for correct classification.
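The occlusion idea is simple enough to sketch without a real network. The toy below (not the paper's code; the grid classifier is purely illustrative) slides a grey patch over an image and records how much each occlusion drops the class score; large drops mark the regions the classifier depends on.

```python
import numpy as np

def occlusion_sensitivity(image, classify, patch=4, baseline=0.0):
    """Slide an occluding patch across the image and record how much the
    classifier's score for the target class drops at each position."""
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    base_score = classify(image)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline  # grey out one patch
            heat[i // patch, j // patch] = base_score - classify(occluded)
    return heat  # large values = regions the classifier needs

# Toy 'classifier' that responds only to the bright top-left quadrant.
img = np.zeros((8, 8))
img[0:4, 0:4] = 1.0
score = lambda x: float(x[0:4, 0:4].sum())
heat = occlusion_sensitivity(img, score, patch=4)
```

Only the occlusion covering the top-left quadrant changes the score, so the heatmap peaks exactly where the classifier is "looking".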


Saliency maps have also been proposed, by Simonyan and colleagues. In a saliency map, the gradient of the output category with respect to the input image is computed; positive values in the gradient highlight the important pixels (2). A saliency map thus shows which pixels matter for the classification of that object.
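For a real CNN, a framework's autograd computes that gradient; the idea can be shown exactly with a toy linear "classifier" (my illustration, not Simonyan's code), where the gradient of the class score with respect to the input is just the weight map, and a finite difference confirms it pixel by pixel.

```python
import numpy as np

# Toy linear 'classifier': the class score is a weighted sum of pixels,
# so d(score)/d(pixel) -- the saliency -- is exactly the weight map.
weights = np.zeros((4, 4))
weights[1:3, 1:3] = 1.0          # only the centre pixels matter
rng = np.random.default_rng(0)
image = rng.random((4, 4))

def class_score(x):
    return float((weights * x).sum())

saliency = weights.copy()        # analytic gradient for this linear model

# Sanity-check one pixel with a finite difference.
eps = 1e-6
bumped = image.copy()
bumped[1, 1] += eps
numeric = (class_score(bumped) - class_score(image)) / eps
```

The numeric estimate for the centre pixel matches the analytic saliency of 1.0, while corner pixels get saliency 0: the map highlights exactly the pixels the score depends on.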

At present, the most commonly used method of visualization is the Grad-CAM class activation map, popularized by the CheXNet paper from Stanford (3). The last convolutional layer is transformed into a heatmap which is overlaid on the underlying image, giving a fairly good representation of what the convolutional network is activating on for that class interpretation (i.e. what it thinks makes it THAT).
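The mechanics of Grad-CAM fit in a few lines: global-average-pool the gradients of the class score to get one weight per feature map, take the weighted sum of the last layer's activations, then ReLU and normalise for overlay. This numpy sketch (hand-made activation and gradient arrays, not a real network) shows the computation:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM: weight each feature map of the last conv layer by the
    global-average-pooled gradient of the class score, sum, ReLU, normalise."""
    # activations, gradients: (channels, h, w)
    weights = gradients.mean(axis=(1, 2))             # one weight per channel
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum of maps
    cam = np.maximum(cam, 0)                          # ReLU: keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                         # scale to [0, 1] for overlay
    return cam

# Two 3x3 feature maps: the first fires bottom-right with positive gradient,
# the second fires top-left but its gradient is negative for this class.
acts = np.zeros((2, 3, 3))
acts[0, 2, 2] = 1.0
acts[1, 0, 0] = 1.0
grads = np.stack([np.full((3, 3), 1.0), np.full((3, 3), -1.0)])
cam = grad_cam(acts, grads)
```

The resulting map lights up only the bottom-right cell: the channel whose gradient opposes the class is suppressed by the ReLU, which is what makes Grad-CAM class-discriminative.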

Recently, Fei-Fei Li's group (Zhe Li et al.) proposed a more sensitive version of Grad-CAM based upon segmenting the original images into multiple smaller patches, using a YOLO (You Only Look Once)-style model with a two-part custom loss function. The Grad-CAM detection model can thereby operate on much smaller regions than the entire image (4).
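The patch-slicing step, stripped of the detection model and custom loss, amounts to dividing the image into a regular grid so each cell can be scored independently. A minimal sketch of that slicing (my illustration only; the paper's pipeline is considerably richer):

```python
import numpy as np

def split_into_patches(image, grid=4):
    """Divide an image into a grid x grid set of equal patches, so a
    localisation model can score each small region instead of the whole image."""
    h, w = image.shape
    ph, pw = h // grid, w // grid
    return [image[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw]
            for r in range(grid) for c in range(grid)]

img = np.arange(64, dtype=float).reshape(8, 8)
patches = split_into_patches(img, grid=4)   # 16 patches of 2x2
```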

Conclusion


Developers: simply outputting a graph in a Jupyter notebook is not going to satisfy the needs of a radiologist. In imaging, time is money and the patient's life, and a well-established ecosystem exists: the Picture Archiving and Communication System, or PACS, interfaces with the Radiology Information System, or RIS, which is tied to the Electronic Medical Record, or EMR, that most hospitals run on. You'll have to use DICOM, the Digital Imaging and Communications in Medicine standard, to communicate between the PACS and your algorithm. And radiologists are unforgiving about UI/UX issues, so make sure you have a good one!
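As a taste of what DICOM plumbing involves: a Part 10 DICOM file begins with a 128-byte preamble followed by the four-byte magic "DICM". The stdlib sketch below does only that cheap sanity check before a file would be handed to a full DICOM toolkit (such as pydicom) or forwarded to the PACS; real interoperability of course requires the full standard.

```python
import io

DICM_OFFSET = 128  # DICOM Part 10: 128-byte preamble, then the magic 'DICM'

def looks_like_dicom(stream):
    """Cheap check that a byte stream starts like a Part 10 DICOM file."""
    stream.seek(DICM_OFFSET)
    return stream.read(4) == b"DICM"

# A stand-in 'file': 128 zero bytes of preamble followed by the magic.
fake = io.BytesIO(b"\x00" * 128 + b"DICM" + b"\x00" * 16)
```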

Yes, Radiologists are a demanding bunch, but when you see the speed of an experienced radiologist manipulating images with a critical eye, even though she might spend less than a second on each one, you will understand how seriously we take our charge and how that plays into patient care.  We will be your partners, but only for good value propositions where we can see and understand how narrow AI improves our practice.  Implementing the above techniques with your model can show us the value of your AI in terms we can understand – visual ones.

For a Deep Dive on AI in medical imaging see the AIMed Magazine: www.ai-med.io/magazine

 


By Stephen Borstelmann MD 

Dr Borstelmann is an interventional radiologist and deep learner in Boca Raton, FL and a graduate of Columbia University.  He wishes he had paid more attention in AP Computer Science.  His research interests lie in Convolutional Neural Networks, Data Augmentation, and Rare Diseases.  He blogs at www.ai-imaging.org.

 

 

 

References

  1. Zeiler M, Fergus R: Visualizing and Understanding Convolutional Networks. ECCV 2014, Part I, LNCS 8689, pp. 818–833, 2014.
  2. Simonyan K, Vedaldi A, Zisserman A: Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. arXiv:1312.6034v2.
  3. Rajpurkar P, Irvin J, Zhu K, et al.: CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning. arXiv:1711.05225v3.
  4. Li Z, Wang C, Han M, et al.: Thoracic Disease Identification and Localization with Limited Supervision. arXiv:1711.06373v2.