Researchers have found that people may be less likely to take health advice from an AI doctor when the chatbot knows their name and medical history.

The study from Penn State and the University of California, Santa Barbara (UCSB) found that when the AI doctor used patients' first names and referred to their medical history in conversation, participants were more likely to consider the chatbot intrusive and less likely to heed its medical advice.

Conversely, while chatting online with human doctors, patients expected the doctors to differentiate them from other patients and were less likely to comply when a human doctor failed to remember their information.

The findings offer further evidence that machines walk a fine line in serving as doctors, said S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory at Penn State.

“Machines don’t have the ability to feel and experience, so when they ask patients how they are feeling, it’s really just data to them,” said Sundar, who is also an affiliate of Penn State’s Institute for Computational and Data Sciences (ICDS). “It’s possibly a reason why people in the past have been resistant to medical AI.”

Machines do have advantages as medical providers, said Joseph B. Walther, distinguished professor in communication and the Mark and Susan Bertelsen Presidential Chair in Technology and Society at UCSB. He said that, like a family doctor who has treated a patient for a long time, computer systems could — hypothetically — know a patient’s complete medical history. In comparison, seeing a new doctor or a specialist who knows only your latest lab tests might be a more common experience, said Walther, who is also director of the Center for Information Technology and Society at UCSB.

“This struck us with the question: ‘Who really knows us better: a machine that can store all this information, or a human who has never met us before or hasn’t developed a relationship with us, and what do we value in a relationship with a medical expert?’” said Walther. “So this research asks, who knows us better — and who do we like more?”

The team designed five chatbots for the two-phase study, recruiting a total of 295 participants for the first phase, 223 of whom returned for the second phase. In the first part of the study, participants were randomly assigned to interact with either a human doctor, an AI doctor, or an AI-assisted doctor through the chat function.

In the second phase of the study, the participants were assigned to interact with the same doctor again. However, when the doctor initiated the conversation in this phase, they either identified the participant by their first name and recalled information from the last interaction, or they asked again how the patient preferred to be addressed and repeated questions about their medical history.

In both phases, the chatbots were programmed to ask eight questions concerning COVID-19 symptoms and behaviors, and to offer a diagnosis and recommendations, said Jin Chen, a doctoral student in mass communications at Penn State and first author of the paper.

“We chose to focus on COVID-19 because it was a salient health issue during the study period,” said Chen. “One of the reasons we conducted this study was that we read a lot of accounts of how people are reluctant to accept AI as a doctor. They just don’t feel comfortable with the technology and they don’t feel that the AI recognizes their uniqueness as a patient. So, we thought that because machines can retain so much information about a person, they can provide individuation and solve this uniqueness problem.”

However, the findings suggest that this strategy can backfire. “When an AI system recognizes a person’s uniqueness, it comes across as intrusive, echoing larger concerns with AI in society,” said Sundar.

But in a puzzling finding, the researchers discovered that 78% of the participants in the experimental condition featuring a human doctor believed they were interacting with an AI doctor. Attempting to explain this result, Sundar suggested that people may have become more accustomed to online health platforms during the pandemic and may have expected a richer interaction.