“Everyone has a plan until they get punched in the mouth.”

Mike Tyson


It has been exciting to watch the rapid proliferation and escalating capabilities of large language models (LLMs) and to imagine how these AI tools could fit into the day-to-day tasks performed by clinicians, but it remains to be seen just how useful they will prove once adopted in the clinical setting. One of the most promising uses for these models is supporting clinicians' decision-making during a busy clinical day, including in challenging areas such as the accurate diagnosis of a difficult case.

For clinicians using these AI tools for clinical diagnosis, here are three important caveats:

  • While LLMs are “book” smart, these tools are not necessarily “street” smart. There are many nuances in the clinical setting that these tools do not always take into account. The reliability of answers to questions asked of patients and families often varies, for example, but the seasoned clinician will adjust. In addition, many clinical situations involve multiple organ systems, which can create a clinical conundrum. In short, most cases in the clinical setting are not the “classic” textbook cases in which LLMs excel (as demonstrated by their performance on board examinations).
  • LLMs are less capable when only limited data and information are available. These tools, like clinicians, have difficulty solving difficult cases when the clinical presentation is incomplete and/or early in the disease process (2 of 5 “criteria,” for example). Careful prompt engineering, much like the astute clinician's skill in asking patients and families the “right” questions, is essential to improving the odds of making the correct diagnosis in a timely manner.
  • Accurate diagnosis can sometimes depend on physical examination findings. One potential pitfall of these AI tools is that clinicians may come to rely less and less on their physical examination skills to detect the key findings that increase the likelihood of a correct diagnosis (especially for unusual diseases). Clinicians know that essential perceptive information from the physical examination, such as an unusual sound on auscultation or an atypical skin finding on visual inspection, can sometimes be the key to a diagnosis.

As the famous Mike Tyson quote above suggests, clinicians often face harsh realities in the real clinical world, where the well-organized differential diagnoses and even clinical plans produced by these AI tools are helpful but not always impactful. These AI tools, as capable as they are, cannot and should not take the place of the combined cognitive and perceptive capabilities of clinicians. Having an LLM as a “knowledgeable” partner, however, can empower the busy clinician.

These insights and discussions on AI and clinical work will be a key element of the in-person Ai-Med Global Summit 2024, currently scheduled for May 29-31, 2024 in Orlando, Florida. Book your place now!

See you there!