Last week, the Journal of the American Medical Association (JAMA) published a new viewpoint piece outlining twelve clinical, legal, and ethical issues that must be considered before deploying artificial intelligence (AI)-driven conversational agents in healthcare.

The authors, a group of medical and AI experts at the University of Pennsylvania, believe these considerations are critical because chatbots tend to form intimate interactions with patients, a privilege once reserved for licensed human clinicians.

Moreover, the ongoing COVID-19 pandemic has encouraged more health systems to adopt the technology to minimize face-to-face clinic visits, which raises questions about its suitability for meeting the needs of so many patients with diverse medical conditions.

Pointers for consideration

The authors list twelve categories of questions that practitioners ought to consider: patient safety; scope; trust and transparency; content decisions; data use, privacy, and integration; bias and health equity; third-party involvement; cybersecurity; legal and licensing; research and development; governance, testing, and evaluation; and supporting innovation.

For example, if a patient expresses thoughts of harming themselves or others during a conversation, “patient safety” becomes one of the key clinical considerations. Practitioners should ask: “Who monitors the interactions between patients and the chatbot? Does monitoring occur 24 hours/day and 7 days/week or another schedule?”; “Is there a rigorously tested escalation pathway to a human clinician? What scenarios have been configured to initiate the escalation pathway?”; and “How well do chatbots detect subtleties of language, tone and context that may signal a risk for patient harm?”

Ethically, practitioners will need to turn to “trust and transparency”, which asks: “Do clinicians trust chatbots? Do patients? Should they?” and “To what degree do clinicians and patients need to understand the workings of chatbots to use them effectively, intelligently, and ensure the appropriate amount of trust?” Finally, on the legal side, “legal and licensing” raises questions such as “Who is accountable if chatbots fail? The sponsoring health care organizations or clinicians? The chatbot vendors? All of the above?”; “What is the role of insurance in chatbot services?”; and “Will there be required licenses or credentials for chatbots similar to those required for clinicians?”

A framework for decision making

In general, the authors are confident about the effectiveness of chatbots, particularly in facilitating remote patient management, collecting data, and giving physicians more time to focus on other priorities. However, because there is little published literature describing the use of chatbots in clinical settings, they intend their viewpoint to serve as a preliminary framework to assist decision making should the need arise, and to push relevant research forward.

“We need to recognize that this is relatively new technology and even for the older systems that were in place, the data are limited,” said Dr. John D. McGreevey III, Associate Professor of Medicine in the Perelman School of Medicine and the viewpoint’s lead author, in a news release. “Any efforts also need to realize that much of the data we have comes from research, not widespread clinical implementation. Knowing that, evaluation of these systems must be robust when they enter the clinical space, and those operating them should be nimble enough to adapt quickly to feedback.”

“To what extent should chatbots be extending the capabilities of clinicians, which we’d call augmented intelligence, or replacing them through totally artificial intelligence?” added Dr. Ross Koppel, Senior Fellow at the Leonard Davis Institute of Health Economics and Professor of Medical Informatics. “Likewise, we need to determine the limits of chatbot authority to perform in different clinical scenarios, such as when a patient indicates that they have a cough, should the chatbot only respond by letting a nurse know or digging in further: ‘Can you tell me more about your cough?’”

*

Author Bio

Hazel Tang: a science writer with a data background and an interest in current affairs, culture, and the arts; a no-med from an (almost) all-med family.