Recently, Chiara Longoni, Assistant Professor of Marketing at Boston University, Carey K. Morewedge, Professor of Marketing and Everett W. Lord Distinguished Faculty Scholar at Boston University, and Andrea Bonezzi, Associate Professor of Marketing at New York University, found that some patients are unwilling to trust artificial intelligence (AI) driven medical solutions because they believe a “one-size-fits-all” algorithm cannot address their individual needs.

I am unique, so I don’t believe in AI

The researchers explored people’s receptivity towards AI-driven medical solutions in a series of experiments. The findings were published in the Journal of Consumer Research this May. Overall, most participants showed a preference for human caregivers. In one experiment, when participants were asked to choose between two human doctors, they preferred the one with the higher performance. Yet when choosing between a human doctor and an AI solution, participants forwent performance and chose the human doctor.

Likewise, most participants were less willing to pay for healthcare services provided by AI, even though they understood that human caregivers are prone to inaccuracies and possible surgical complications. In another experiment, participants were told that a diagnostic stress test, performed either by a human or by an AI, would have an accuracy rate of 89%. Even so, most participants chose to pay more to have the test performed by a human rather than by an AI.

The reason, the researchers found, was a belief that AI does not take individual differences and circumstances into account. This is known as uniqueness neglect: patients often perceive themselves as “unique”. For example, when someone catches a cold, he or she is likely to refer to it as “my cold”, implying that the condition afflicts him or her in a way that is clearly different from others suffering the same illness. The stronger this sense of uniqueness, the more likely patients are to perceive AI as standardized and inflexible.

How to persuade patients to accept AI?

To address this, the researchers suggested several ways for care providers to overcome patients’ resistance towards AI. One is to explicitly inform patients that AI is capable of tailoring its recommendations to an individual’s idiosyncratic characteristics and medical history. For other AI-driven healthcare services, such as chatbots, wearable devices, or mobile applications, care providers could emphasize that patients’ information is gathered as they interact with the device, so the resulting recommendations are indeed generated from their personal profiles.

Furthermore, it is crucial for physicians to take an active role in instilling an accurate perception of AI among patients. The three researchers found that people are most receptive towards AI-driven tools when physicians explain how the algorithms work, share reviews from other patients, and assure them that, ultimately, a human physician will always make the final decision.

Author Bio

Hazel Tang

A science writer with a data background and an interest in current affairs, culture, and the arts; a no-med from an (almost) all-med family. Follow on Twitter.