A new study published in Digital Health, conducted by researchers from the University of Westminster, University College London, and the University of Southampton, found that while most internet users are open to the idea of health chatbots, hesitancy towards new technologies is keeping them from fully engaging. The research team therefore urged designers to adopt user-centered and theory-based approaches when creating artificial intelligence (AI)-led health chatbots, so as to cater to patients' needs and enhance user experiences.

Rationale of the study 

With the digitization of healthcare and the growing influence of AI, the researchers acknowledged chatbots' potential to improve patients' access to medicine, strengthen physician-patient communication, and help manage the unceasing demand for related services. Chatbots have been actively employed in health education and coaching, often coupled with other functions such as symptom checking, online triage, and interactive live feedback.

Nevertheless, there remains limited research on the acceptability of chatbots and what truly motivates individuals to use them. The researchers therefore conducted semi-structured interviews and administered a 24-item online survey via social media. Responses were recorded, transcribed, and systematically analyzed. Three broad themes, "Understanding of chatbots," "AI hesitancy," and "Motivations for health chatbots," were identified, outlining issues around accuracy, cyber-security, and the capability of AI-led services to empathize.

Limited experience and general hesitancy 

Of the 29 interviewees, most could not recall using a chatbot when accessing healthcare services, even though they were aware of the technology. They expressed that, despite media coverage of AI and chatbots, people remain unfamiliar with the technology or fail to understand it. Furthermore, participants were hesitant about whether chatbots can play an effective role in their healthcare. They were unsure whether chatbots would give them trustworthy, accurate, and high-quality healthcare information, because the sources supporting these services are not transparent.

Some participants expressed concerns about miscommunicating with a chatbot. Others feared that their sensitive information might not be adequately protected should the chatbot not be reliably safeguarded. The lack of human presence also made some worry whether a chatbot could demonstrate adequate empathy or understand the emotional needs of its users, especially if it were to be used in a mental health setting. Overall, chatbots were still perceived as inferior to human doctors.

On a positive note, participants said they were willing to use a chatbot for minor health concerns, as a replacement for the traditional medical hotline, to seek rapid guidance, or when they struggle to get through on the phone line in times of need. Admittedly, the study is the first of its kind and did not include a large pool of participants, including those who currently use or have benefitted from chatbot services. However, it does provide some preliminary insight into how people think of new technologies, which is probably what designers and developers would like to know at the moment. As the researchers themselves suggested, chatbots will need to address users' needs in order to succeed.

Author Bio

Hazel Tang

A science writer with a data background and an interest in current affairs, culture, and the arts; a non-med from an (almost) all-med family. Follow on Twitter.