“My words fly up, my thoughts remain below: Words without thoughts never to heaven go.”

William Shakespeare, Hamlet.

This recent commentary in npj Digital Medicine is a timely look at the emergence of OpenAI's Generative Pre-trained Transformer 3 (GPT-3) and its potential deployment in healthcare. GPT-3, a language model with 175 billion parameters, was evaluated in few-shot learning settings, in which the model is shown only a handful of examples of a task in its prompt rather than being retrained. While expectations for GPT-3 in healthcare are lofty, there are considerable challenges and considerations for this powerfully transformative natural language processing (NLP) tool. NLP applications in healthcare include virtual assistants, chatbots, voice-recognition documentation, and automated structured chart abstraction.
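To make the few-shot idea concrete, the sketch below assembles a prompt from a handful of labeled examples and leaves the final label for the model to complete, with no retraining involved. This is a minimal illustration only: the triage task, the example messages, and the build_prompt helper are hypothetical and are not drawn from the commentary.

```python
# Minimal sketch of a few-shot prompt for a text-completion model such as GPT-3.
# The task (mapping a free-text patient message to a triage category) and all
# example strings are hypothetical illustrations, not taken from the paper.

FEW_SHOT_EXAMPLES = [
    ("I twisted my ankle and it is swollen.", "non-urgent"),
    ("I have crushing chest pain radiating to my left arm.", "emergency"),
    ("I need a refill of my blood pressure medication.", "administrative"),
]

def build_prompt(new_message: str) -> str:
    """Assemble a few-shot prompt: a short instruction, a handful of labeled
    examples, then the new input with its label left blank for the model."""
    lines = ["Classify the patient message into a triage category."]
    for message, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Message: {message}\nCategory: {label}")
    lines.append(f"Message: {new_message}\nCategory:")
    return "\n\n".join(lines)

# The resulting string would be sent to a text-completion endpoint;
# the model's continuation serves as the predicted category.
print(build_prompt("My child has had a fever of 104 for two days."))
```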

The corpus of publicly available text on which GPT-3 was pre-trained is enormous, but the body of healthcare-specific data, such as communications and exchanges among clinicians, other healthcare stakeholders, and patients, is far smaller. This pre-training gap limits GPT-3 in healthcare, especially in the dynamically changing interactions of real human conversation. Another significant flaw is GPT-3's inability to correct itself after an erroneous prediction, a limitation that can be amplified in sensitive healthcare contexts. Lastly, GPT-3 ultimately lacks humanness in its interactions, since it does not truly know anything ("Dr. GPT-3 is not coming to a clinic near you anytime soon."). The authors insightfully suggest that GPT-3 and its class of NLP tools should start in areas of high value, high feasibility, and low risk for all stakeholders (such as non-critical healthcare system encounters) and, most importantly, be deployed with absolute transparency.

Click here to read the paper.