
I am a pediatric cardiologist and have cared for children with heart disease for the past three decades. In addition, I have an educational background in business and finance as well as healthcare administration and global health: I earned a Master of Public Health degree from UCLA and taught global health there after completing the program.
“So, if this computer is running everything, what am I supposed to do?”
Captain James Kirk in Star Trek, upon being told that the new M-5 supercomputer would be capable of running the Enterprise without him
AI advances in medicine have included analysis of medical images, detection of drug interactions, and identification of high-risk patients, but more recently they have embraced the medical AI chatbot, with its two main components: a general-purpose AI system and a chat interface. The authors (well qualified for this New England Journal of Medicine report, but none of them clinicians) describe an AI system from OpenAI called GPT-4, or generative pre-trained transformer 4, that can be adapted for medicine.
The human user starts the interaction with the chatbot with a query, or prompt, which is followed within seconds by a response relevant to that prompt. This prompt-response exchange can continue back and forth to mimic a conversation, and it is improved through a practice, part art and part science, termed prompt engineering. A noted challenge for the chatbot has been queries that do not have a single correct answer. This impressive AI tool can not only answer a query but also summarize any document, including medical ones. A false response from GPT-4 is termed a hallucination, though perhaps this terminology is a bit unfair to the chatbot, as it carries a negative connotation.
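To make the prompt-response mechanics concrete, here is a minimal sketch of such an exchange, assuming the OpenAI Python client (v1+) and an API key in the environment; the model name, system message, and prompts are illustrative only and are not taken from the article.

```python
# Minimal sketch of a chatbot prompt-response exchange.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The conversation is carried as a growing list of messages, so each
# new prompt is answered in the context of everything said before.
messages = [
    {"role": "system", "content": "You are a careful medical assistant."},
]

def ask(prompt: str) -> str:
    """Send one prompt and return the chatbot's response."""
    messages.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

# Two turns of the same conversation: the second prompt is answered
# in the context of the first, mimicking a dialogue.
print(ask("Summarize the key drug interactions of warfarin."))
print(ask("Now restate that summary for a patient, in plain language."))
```

Prompt engineering, in this framing, amounts to iterating on the wording of the system message and the user prompts until the responses are reliably useful.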
It is important to note that GPT-4 (released in March 2023) is a general-purpose AI tool with broad cognitive skills, not one designed specifically for clinical medicine or healthcare; nevertheless, Microsoft Research and OpenAI have been exploring uses of GPT-4 in healthcare (documentation, education, research, and interoperability). Other AI chatbots being studied for medical applications include Google’s LaMDA and GPT-3.5; like GPT-4, these tools were trained for general cognitive capability rather than focused on healthcare.
The authors then explored three examples of medical use of GPT-4: a medical note-taking task, a typical question from the USMLE examination to probe its innate medical knowledge, and a “curbside medical consultation”. Although the AI chatbot can hallucinate in these examples, it is also capable of catching its own mistakes, a pattern sketched below. In addition, these AI chatbots will continue to improve as they ingest more information in the medical domain. Lastly, the issues raised by these tools, as with any AI tool, include trust and accuracy as well as the balance of good versus harm.
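That self-checking pattern, in which a fresh GPT-4 session re-reads an output and flags unsupported statements, can be illustrated as follows. This is a hedged sketch under the same assumptions as the prior example; the function names and prompts are my own illustrations, not the authors’ actual protocol.

```python
# Sketch of a two-pass self-check: a second GPT-4 call reviews the
# first call's draft note against the source transcript.
# Assumes the OpenAI Python client (v1+); prompts are illustrative.
from openai import OpenAI

client = OpenAI()

def draft_note(transcript: str) -> str:
    """First pass: draft a clinical note from an encounter transcript."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": f"Write a clinical note for this encounter:\n{transcript}",
        }],
    )
    return response.choices[0].message.content

def review_note(transcript: str, note: str) -> str:
    """Second pass: ask the model to catch mistakes in the draft,
    such as statements not supported by the transcript."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": (
                "Compare this clinical note against the transcript and "
                "list any statements not supported by the transcript.\n\n"
                f"Transcript:\n{transcript}\n\nNote:\n{note}"
            ),
        }],
    )
    return response.choices[0].message.content
```

The design choice here is that the reviewing call starts with no memory of the drafting call, so it evaluates the note on the evidence alone rather than defending its earlier output.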
Click here to read the full article.
These observations on artificial intelligence in healthcare will be among the topics of discussion at the in-person AIMed Global Summit 2023, taking place June 4-7, 2023 in San Diego. The remainder of the week will feature other exciting AI-in-medicine events, such as the Stanford AIMI Symposium on June 8th.
We at AIMed believe in changing healthcare one connection at a time. If you are interested in discussing the contents of this article or connecting, please drop me a line – [email protected]