We recognize that healthcare needs to provide greater value, but we cannot do so at the expense of efficiency. To succeed at this dual purpose, we need to harness the power of information technology (IT) to augment our processes. Over the past several years, the need for IT to help drive value in healthcare has been recognized, and with that recognition has come substantial capital investment in medical AI applications. In radiology, many of these applications were developed using the data that happened to be available and an incomplete understanding of what radiologists do. Starting from the idea that a radiologist is primarily an image interpreter, and with some large image sets globally available, most AI algorithms targeted image interpretation – specifically pathology detection – akin to creating a CAD v2. However, pathology detection is only a small component of what a radiologist does.

To fully utilize the power of AI in radiology, we must determine where it can provide the most value. To do this, we must first break down radiology workflow into its individual components. Curt Langlotz, a Professor of Radiology and Biomedical Informatics at Stanford, terms the depiction of this breakdown "the lifecycle of an examination." While our components and graphics are different, the idea of breaking down radiology into discrete elements is an essential first step in identifying areas where AI can drive improvement (Figure 1).

From the moment a clinician decides to use imaging to help answer a clinical question, through reporting, communication, and follow-up, each step of this imaging lifecycle can be optimized. With this more global perspective, most imaging informaticists and leaders in radiology agree that applying AI to the segments outside of image interpretation would provide the greatest value. Let's review the segments of the imaging lifecycle and discuss how IT is, and could be, applied to each.


A number of IT solutions are already available in this space as a result of the Protecting Access to Medicare Act of 2014. This legislation requires clinicians to consult a qualified clinical decision support mechanism when ordering CT, MRI, PET, and NM exams. While these solutions are not typically machine learning algorithms, they do use IT to grade the utility of an ordered exam based upon appropriate use criteria and the reason the examination is being ordered.
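As a sketch of how such a decision support mechanism might grade an order, the snippet below maps (exam, indication) pairs to scores on a 1–9 appropriateness scale, the scale the ACR Appropriateness Criteria use. The specific exams, indications, and scores here are invented for illustration and are not actual appropriate use criteria:

```python
# Illustrative clinical decision support (CDS) lookup. The exam/indication
# pairs and their scores are hypothetical, loosely modeled on a 1-9
# appropriateness scale (7-9 usually appropriate, 4-6 may be, 1-3 usually not).
APPROPRIATENESS = {
    ("MRI lumbar spine", "acute low back pain, no red flags"): 2,
    ("CT head", "acute head trauma, GCS < 15"): 9,
    ("CT pulmonary angiogram", "suspected PE, high pretest probability"): 9,
}

def grade_order(exam: str, indication: str) -> str:
    score = APPROPRIATENESS.get((exam, indication))
    if score is None:
        return "no criteria on file - route for manual review"
    if score >= 7:
        return f"usually appropriate (score {score})"
    if score >= 4:
        return f"may be appropriate (score {score})"
    return f"usually not appropriate (score {score}) - suggest alternatives"

print(grade_order("MRI lumbar spine", "acute low back pain, no red flags"))
```

A production system would of course draw on the full published criteria and structured order data rather than a hand-built table.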


As residents, it was our responsibility to protocol each examination. We did this by reviewing the order, examining prior studies, gathering relevant information from the EMR, and in some cases, speaking directly with the ordering physician. In essence, it was the residents' job to collect and integrate all of the necessary information from multiple disparate sources to ensure a proper protocol was provided. While components of this exercise provided some education to the resident, the majority of this time could be better spent on other learning activities. IT systems, if connected to the disparate sources of information, could use Natural Language Processing (NLP) to glean the relevant information needed to create the optimal protocol and provide that information to the radiologist. In fact, these systems could use the information to automatically protocol the majority of studies and forward only the more complicated patients to a radiologist. In addition, image detection AI algorithms embedded on the imaging machine could create on-the-fly protocols, adding additional series based on identified pathology.
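A minimal sketch of such automated protocoling, using simple keyword matching as a stand-in for the NLP described above; the rules, protocol names, and single-match confidence gate are all invented for illustration:

```python
import re

# Toy keyword-based protocol router. A real system would use NLP over the
# order, prior reports, and the EMR rather than hand-written keyword sets.
PROTOCOL_RULES = [
    ({"seizure", "epilepsy"}, "MRI brain epilepsy protocol"),
    ({"pituitary", "prolactin"}, "MRI pituitary protocol"),
    ({"stroke", "weakness", "aphasia"}, "MRI brain stroke protocol"),
]

def auto_protocol(order_text: str):
    """Return a protocol when exactly one rule matches; otherwise defer."""
    words = set(re.findall(r"[a-z]+", order_text.lower()))
    matches = [name for keywords, name in PROTOCOL_RULES if keywords & words]
    return matches[0] if len(matches) == 1 else None

print(auto_protocol("new onset seizure, rule out mass"))  # unambiguous match
print(auto_protocol("seizure with weakness"))             # ambiguous: None, route to radiologist
```

The key design point is the fall-through: the system protocols only the unambiguous majority and forwards everything else to a human.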

For instance, if an indeterminate adrenal lesion is detected by an AI algorithm on a CT of the abdomen/pelvis with IV contrast, the algorithm could add a 15-minute delayed series through the adrenal glands. A washout calculation could then be performed, allowing the lesion to be better assessed when the study is initially reviewed. No longer would the patient need to return for a repeat study and live with the uncertainty of a non-diagnostic result until the repeat exam is interpreted and communicated.
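The washout calculation itself is straightforward. The sketch below uses the standard absolute and relative adrenal washout formulas; the HU values are invented example measurements:

```python
def absolute_washout(unenhanced_hu: float, enhanced_hu: float, delayed_hu: float) -> float:
    """Absolute washout %: requires an unenhanced series."""
    return 100 * (enhanced_hu - delayed_hu) / (enhanced_hu - unenhanced_hu)

def relative_washout(enhanced_hu: float, delayed_hu: float) -> float:
    """Relative washout %: used when no unenhanced series is available."""
    return 100 * (enhanced_hu - delayed_hu) / enhanced_hu

# Example: mean ROI attenuation of an adrenal lesion on each series
awo = absolute_washout(unenhanced_hu=10, enhanced_hu=80, delayed_hu=30)
rwo = relative_washout(enhanced_hu=80, delayed_hu=30)
print(f"absolute washout {awo:.0f}%, relative washout {rwo:.1f}%")
# An absolute washout >= 60% (or relative >= 40%) favors a lipid-poor adenoma.
```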


The risks of the radiation associated with a CT scan are a necessary evil inextricably associated with acquiring the relevant information. A certain level of radiation is needed to produce a diagnostic signal-to-noise ratio (S/N). While there are some dose-lowering techniques, none significantly reduces radiation dose while maintaining quality and the validity of the measurements we perform (e.g., HU). There are, however, evolving AI algorithms that are working toward translating a low-dose, low-S/N exam into a high-S/N exam while maintaining the validity of HU and other measurements, and reportedly increasing the conspicuity of pathology. If successful, the results could be transformative. Imagine decreasing the radiation dose of CT studies to close to that of an x-ray. Similarly, algorithms are being created to decrease image acquisition time for MR. Both types of AI algorithms will add value for patients, while also improving scanner turnaround time and decreasing power output so machine components last longer.
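As a toy illustration of why dose reduction degrades image quality: quantum noise grows roughly as the inverse square root of dose, so S/N scales with the square root of dose. The simulation below (invented numbers, an idealized uniform "tissue" at 100 HU) shows a quarter-dose scan with half the S/N, which is the gap a denoising algorithm must close:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.full(100_000, 100.0)  # idealized uniform tissue at 100 HU

def simulated_scan(relative_dose: float, base_noise_sd: float = 10.0):
    """Toy model: quantum noise scales ~1/sqrt(dose), so S/N ~ sqrt(dose)."""
    sd = base_noise_sd / np.sqrt(relative_dose)
    return signal + rng.normal(0.0, sd, signal.shape)

for dose in (1.0, 0.25):
    img = simulated_scan(dose)
    print(f"relative dose {dose:.2f}: S/N ~ {img.mean() / img.std():.1f}")
```

Full dose yields an S/N near 10 here; quarter dose yields about 5. A low-dose denoising model aims to recover the full-dose S/N from the quarter-dose input while leaving the HU values themselves valid.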


Many algorithms can alter the workflow of critical exams as a byproduct of image detection AI. If exams are processed by an algorithm before the study appears on the worklist, the position of the exam on the worklist could be optimized based on the AI-predicted imaging findings. Unlike image detection algorithms used primarily to assist the radiologist in identifying pathology, workflow algorithms can trade off specificity for sensitivity and still be successful. Some of these algorithms are already FDA approved and being used in practice. Other areas of workflow improvement, however, are less frequently seen. For instance, algorithms could elevate an exam on our worklist if that study is delaying a patient discharge or if the patient has an upcoming physician appointment. It would also be helpful for cases to be directed to certain subspecialists based on identified findings. As an example, a head CT containing pre-identified findings (e.g., mass, edema, postoperative findings) may be better interpreted by, and directed to, a neuroradiologist, whereas the neuroradiologist may not be required for a less complicated head CT.


“Evaluate trauma” – although not ICD-10 compliant, this type of history is not infrequent. Even when a more complete characterization of presenting symptoms is available, some of the relevant data (e.g., cancer history, surgical history, lesions being tracked) is often absent. Similar to the work required to determine the proper protocol, gathering patient information relevant to the exam requires the radiologist to collect and integrate data from different sources.

Unfortunately, in our current fast-paced environment, this time-intensive exercise is usually bypassed, sometimes affecting the utility of the resulting interpretation. An IT system using NLP, however, could gather this disparate but essential information and provide it to the radiologist as part of their workflow.


Most of the available AI algorithms involve detection of pathology, a vital component of image interpretation. However, radiologists are already skilled at image review and pathology identification. This skill is partly a product of our many years of training, but also reflects the fact that the process relies on pattern recognition – a capability that is highly evolved in humans. There are, however, other AI use cases in this imaging segment. For instance, AI tools could be used to segment the anatomy so that muscles, organs, and other structures are clearly delineated. Building on that segmentation, once pathology is identified by the radiologist, an algorithm could volumetrically measure its size, evaluate for interval growth, identify its characteristics on each series/sequence, compare it to a library of similar pathology, provide a differential diagnosis, and pass this information to the voice recognition (VR) system to be placed directly into our report. Names of structures can be automatically reported by the software (e.g., segment 4 of the liver, gracilis muscle, clinoid ICA), and the PACS viewer toolset could be optimized based on the structure being evaluated. As more radiomic data becomes available, algorithms could use these data within pathologic structures to help predict treatment outcomes.
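Two of the quantitative pieces above, volumetric measurement and interval-growth assessment, are easy to sketch. The mask, voxel spacing, and prior volume below are invented example inputs; the formulas (voxel count times voxel volume, and volume doubling time under exponential growth) are standard:

```python
import numpy as np

def lesion_volume_ml(mask: np.ndarray, voxel_spacing_mm) -> float:
    """Volume from a binary segmentation mask: voxel count x voxel volume."""
    voxel_mm3 = float(np.prod(voxel_spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0  # mm^3 -> mL

def doubling_time_days(v_prior: float, v_current: float, interval_days: float) -> float:
    """Volume doubling time, assuming exponential growth between exams."""
    return interval_days * np.log(2) / np.log(v_current / v_prior)

mask = np.zeros((64, 64, 64), dtype=bool)
mask[20:30, 20:30, 20:30] = True               # toy 10x10x10-voxel "lesion"
vol = lesion_volume_ml(mask, (0.7, 0.7, 1.0))  # mm per voxel in x, y, z
print(f"volume {vol:.2f} mL")
print(f"doubling time {doubling_time_days(0.40, vol, 90):.0f} days")
```

In practice the mask would come from the segmentation algorithm and the prior volume from the comparison exam; the point is that once segmentation exists, these downstream measurements are essentially free.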


This segment of radiology is rich with potential AI use cases. While not the totality of our work, the radiology report is its visible product. With that in mind, we should spend time ensuring it is optimized for the best patient care and for the needs of our referring clinicians (the consumers of our report). From a population health perspective, NLP tools can help ensure we are practicing evidence-based medicine by providing the appropriate follow-up recommendation based on the reported findings and the patient metadata. A similar system could ensure billing and MIPS data are completed during the reporting process, thereby limiting follow-up requests to the radiologist for additional information. This system could also help the radiologist customize the report to the desires of the referring clinician by alerting the radiologist to clinician-specific needs based on exam type and identified pathology. A patient-friendly report could also be generated from the physician report using AI as a mechanism to improve patient communication and awareness of the role of the radiologist.
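An evidence-based follow-up recommender can be as simple as a rules lookup keyed on the reported finding and patient risk. The sketch below loosely paraphrases Fleischner-style guidance for a single solid pulmonary nodule; the thresholds and wording are simplified for illustration and are not for clinical use:

```python
def nodule_followup(diameter_mm: float, high_risk: bool) -> str:
    """Simplified, illustrative follow-up rules for a single solid pulmonary
    nodule. Loosely paraphrases Fleischner-style guidance; not clinical advice."""
    if diameter_mm < 6:
        return "optional CT at 12 months" if high_risk else "no routine follow-up"
    if diameter_mm <= 8:
        return "CT at 6-12 months, then consider CT at 18-24 months"
    return "consider CT at 3 months, PET/CT, or tissue sampling"

print(nodule_followup(5.0, high_risk=False))
print(nodule_followup(7.0, high_risk=True))
```

The NLP layer's job is to extract `diameter_mm` and the risk context from the report and EMR so the rule engine can append the recommendation as the radiologist dictates.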


A phone call is one mechanism to ensure important information is received, but this type of communication is neither permanent nor necessarily timed well for the referring clinician and radiologist. An automated system that records and delivers relevant findings to the referring clinician could replace our current, outdated mechanism of communication. Such a system would allow the referring clinician to acknowledge the result at a time convenient for them, closing the communication loop and assisting with proper documentation.


Surprisingly, the majority of follow-up recommendations we make as radiologists are either not performed or not performed within the recommended time frame. This lack of continuity is usually not because our referring clinicians disagree with our recommendations but because our hand-off system is flawed. Instead of relying on a piece of paper or a phone call to ensure short- and long-term follow-up is performed, NLP and reminder/notification systems can be used. Such systems are already being applied in a minority of locations. Using the information in our reports, these programs identify necessary follow-up based on evidence-based medicine and additional radiologist-generated recommendations, store this information in a database, and produce reports for the client site to manage the required follow-up.

As these systems continuously evaluate interpreted exams, they can recognize when the proper follow-up has been performed. In addition, user interfaces can be developed for direct tracking and management of these patients, generating automatic notifications and updates to stakeholders (e.g., nurse navigators, patients, or primary care physicians) as necessary.
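The core of such a tracker is a small data model: each recommendation with a due date and a completion flag, plus a query for open items past due. Everything below (fields, patients, dates) is a hypothetical sketch:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class FollowUp:
    patient_id: str
    recommendation: str
    due: date
    completed: bool = False

def overdue(tracker: list[FollowUp], today: date) -> list[FollowUp]:
    """Open recommendations past their due date - candidates for notification."""
    return [f for f in tracker if not f.completed and f.due < today]

tracker = [
    FollowUp("P1", "CT chest in 6 months", date(2024, 1, 15)),
    FollowUp("P2", "renal ultrasound in 12 months", date(2024, 9, 1)),
]
tracker[1].completed = True  # a matching exam was interpreted, closing the loop
for f in overdue(tracker, today=date(2024, 6, 1)):
    print(f.patient_id, f.recommendation)
```

The closing of the loop is the key step: when the system sees the recommended exam interpreted, the item is marked complete and no notification fires.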


Radiology peer review is starting a long-needed and welcome transition to peer learning. Reviewing potentially missed findings is one of the most effective ways to learn. Frequently this opportunity is lost because of the stigma associated with making a mistake and the limited radiologist capacity to perform these reviews. AI programs, however, have capacity and could be used for educational purposes as a second read. Additional programs could be created to identify areas for targeted learning based on the categorization of a radiologist's potential variances. In addition, AI systems could create case-based teaching materials by collecting and cataloguing exams with different findings. If given access to a pathology database, a definitive diagnosis could be applied to these educational cases, making the resource an even more valuable feedback loop for learning.

Information technology, including AI, has the ability to transform our profession by augmenting the radiologist. To identify the areas of greatest need, it is helpful to break down radiology workflow into its individual components – the lifecycle of an exam. Once identified, these components can be used to direct the growth of AI toward the use cases that are most impactful. Surprisingly to some, many of the most impactful use cases lie outside of image interpretation. Hopefully we will begin to see an expansion of non-interpretive AI algorithms in radiology and, with that, a greater impact on driving value in radiology and healthcare.

Author: Nina Kottler

Nina Kottler, MD, MS is a radiologist with over 14 years of experience in emergency radiology. With a background in applied mathematics and optimization theory, she has been using imaging informatics to improve quality and drive value in radiology. Nina is a VP of Clinical Operations at Radiology Partners, leading their Data Science and Analytics division. She is also the Practice President for Radiology Partners’ remote imaging division and serves internally on their AI, IT, Leadership and Culture, and RCM support boards. Externally Nina chaired the Population Subcommittee of the ACR DSI’s non-interpretive panel, served on the ACR Informatics Commission and serves on the following committees: ACR’s Quality and Safety Conference Planning Committee, SIIM Machine Learning Committee, SIIM Program Committee (Scientific Abstract Reviewer), RSNA Educational Exhibits Committee (Radiology Informatics Subcommittee), RSNA Radiology Informatics Committee (RadLex Steering Committee), and the RADxx Steering Committee. In 2018, Nina received the Trailblazer Award – an award recognizing a pioneering female leader in the field of imaging informatics.