“I do think there should be some regulations on AI”

Elon Musk

Scott Gottlieb, former commissioner of the U.S. Food and Drug Administration (FDA) and now a researcher with the DC-based public policy think tank the American Enterprise Institute, authored this timely piece on how to safely integrate large language models into health care. He reminds us that some decision-support tools are not regulated as medical devices by the FDA because the clinician’s judgment remains part of the final decision. There are, however, regulated tools, trained on closed data sets, that abide by the traditional regulatory framework. 

Large language models (LLMs) such as ChatGPT, which build on natural language processing, represent a different category of AI implementation in health care. These tools answer queries by calculating the probabilities of words and can understand and generate free-form text; that capability will permeate medicine more deeply, given its high potential for improving and augmenting communication across the health care system. Gottlieb recommends that we “achieve effective adoption of these models by proceeding deliberately, aligning application with improving accuracy of these tools”. 
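To make the phrase “calculating the probabilities of words” concrete, here is a minimal illustrative sketch, not drawn from Gottlieb’s article: given the text so far, a model assigns a score to every candidate next word, those scores are converted into probabilities, and the next word is chosen from that distribution. The vocabulary and scores below are toy assumptions.

```python
# Toy illustration of next-word probability, the core mechanism behind LLM text generation.
import math
import random

def softmax(logits):
    """Convert raw model scores into a probability distribution over next words."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical vocabulary and scores a model might produce after the prompt
# "The patient's blood pressure is" -- purely illustrative numbers.
vocabulary = ["elevated", "normal", "purple", "stable"]
logits = [2.1, 1.3, -3.0, 0.8]

probs = softmax(logits)
next_word = random.choices(vocabulary, weights=probs, k=1)[0]

for word, p in zip(vocabulary, probs):
    print(f"{word:>10}: {p:.3f}")
print("chosen next word:", next_word)
```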

Gottlieb further argues that LLMs are most likely to be integrated successfully in “conditions in which interventions are supported by comprehensive longitudinal data from clinical trials, observational studies, and various real-world data sources”. Chronic diseases such as heart disease and diabetes would benefit from more meaningful communication between the health system and patients. Gottlieb, having been on the regulatory side of these AI tools, understands that the fastest path to AI adoption is to focus on clinical situations with known disease trajectories and well-understood treatment effects. He points out, however, that developers primarily position LLMs as digital assistants and do not extend their use into diagnostic or treatment recommendations, so as to avoid the FDA regulatory process. 

As with most AI projects, however, Gottlieb argues that the expansion of these LLM tools will rely on “innovative methods to unlock and aggregate health care data”. Although he points out that “enhanced collaboration among health care systems, including data sharing and accessibility, will be crucial” for future LLMs, this requirement may not be strictly necessary, especially if federated and swarm learning, which train shared models without moving raw patient data between institutions, become more accepted by health care systems. 
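As a rough illustration of why federated approaches could relax the data-sharing requirement, here is a minimal sketch, assuming a simple federated-averaging scheme on synthetic data: each hypothetical hospital trains a model locally, and only the model parameters, never the patient records, are sent to a coordinator for averaging. None of the names, data, or numbers below come from the article.

```python
# Minimal federated-averaging sketch: sites share model parameters, not raw data.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's local training: a few gradient steps of linear regression on its own data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three hypothetical hospitals, each holding private synthetic data.
true_w = np.array([0.5, -1.0, 2.0])
sites = []
for _ in range(3):
    X = rng.normal(size=(50, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    sites.append((X, y))

global_w = np.zeros(3)
for _ in range(10):
    # Each site refines the current global model locally...
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    # ...and only the resulting parameters travel back to be averaged.
    global_w = np.mean(local_ws, axis=0)

print("learned weights:", np.round(global_w, 2))
```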

Read the full article here 

This fascinating topic of LLMs within health care will be discussed, along with many others, at the annual Ai-Med Global Summit, scheduled for May 29-31, 2024, in Orlando. Book your place now!