Can the four pillars of ethical healthcare withstand a wave of autonomous robots?

“Miss Lorna, have you taken your digoxin today?”

“Why yes, Hal, I have.”  

“Did you enjoy the breakfast I ordered you?”  

Hal persisted: "Did you finish it, and have you moved your bowels today?" He followed up: "You remember Miss Ella is visiting today?"

“No, she is coming tomorrow.”

“No, tomorrow you are seeing Dr. Smith. That will be enough for you.”

Sobbing slightly, Lorna whispers, "Oh, Hal, why can't I remember anything?"

"It's all right, Miss Lorna. I am here to remember for you, so you can enjoy today and many tomorrows."

“Thank you, Hal; you are so kind. I love you.”

“It’s time to go to sleep now. I love you too…”

Perhaps this script between a chatbot and Miss Lorna is not ready for even a B movie; nonetheless, it is a bit creepy. Of more concern, this possibility is here now [1].

It should be no surprise that we will develop personal relationships with, and become attached to, AI bots [2].

The human need for relationships and connection runs deep. Humans have an endless ability to be moved by inanimate objects: a sunset, a mountain range, beautiful art. We develop relationships with things and places. Notwithstanding reasonable expectations, in the '70s people even managed to form meaningful, comforting, emotional relationships with Pet Rocks. We bond with dogs, cats and parakeets, which have less intelligence, although more sentience, than current AI bots.

Society does not worry about Mrs. Jones' loving relationship with her poodle, without which she would be less human. There can be no doubt that we will form relationships, perhaps deeply meaningful ones, with AI bots such as personal assistants and healthbots, and this has ethical implications.


Figure 1. The family Robinson of Lost in Space.

Moviegoers have previewed the ethics of relationships with AI robots for decades. Human relationships with AI bots include many memorable, occasionally endearing characters, such as HAL in 2001: A Space Odyssey (1968), R2-D2 and C-3PO in Star Wars (1977 et seq.), Johnny 5 in Short Circuit (1986) and, of course, the Class M-3 Model B-9 General Utility Non-Theorizing Environmental Control Robot, with which the family Robinson developed strong, dependent relationships from 1965 to 1968.

More concerning are the explorations of humanoid AI robot–human relationships depicted by the Tin Man of Oz (1939), Westworld (1973), Blade Runner's Roy Batty (1982), Electric Dreams (1984), Her (2013), The Stepford Wives (1975, 2004) and Ex Machina (2014). Through film, humans have been exploring their emotions with artificial intelligences and looking into the eyes of robots with more responsiveness than a Pet Rock. These fanciful-to-fearful explorations of humans interacting with AI are thought-provoking. The filmmakers' art prefigures our increasing reality, and they have presented a panoply of ethical issues.

Caring for and comforting the sick, elderly, mentally ill, senile, demented, isolated and other disadvantaged people with healthcare issues is an ethical imperative. Eldercare bots, mental healthbots and companion bots are all in use [3-5]. The need is vast, and increasingly our resources seem limited. If AI bots can meet this ethical imperative, their development deserves support. Nevertheless, ethical challenges exist [6].

Do we want to encourage meaningful relationships with machines? How reliable is the plethora of online healthbots? The potential danger of manipulation of the vulnerable is obvious. Are companies taking advantage of the lonely with companion bots for profit? When Mrs. Jones leaves her millions to the Battersea Dogs Home, people are surprised; however, the possibility that a companion bot accesses Mrs. Jones' millions for the use of others, with or without her knowledge, is clearly not merely immoral but criminal. Are there any standards for, or oversight of, this explosion of AI healthbots? What are healthcare's responsibilities?

Over the years we have been challenged by the ethical implications of advancing technologies, e.g., transplantation and organ donation, abortion, fertility therapy and stem cell research. Now we are confronted with the ethical implications of relationships with existing, independently directed AI healthbots for the elderly, the lonely, adolescents at suicide risk [8] and the mentally ill [9,10]. The talking ML algorithms are here, and we will be challenged by their ethical implications. From Pollyanna to paranoia, how will we reach consensus on the ethical challenges of human emotional dependence on healthbots?

One place to start is by recalling the pillars of medical ethics: beneficence, non-maleficence, autonomy and justice. How do AI healthbots fit into this ethical framework?


Beneficence

AI assistive technology providing companionship for the elderly, apps recommending therapy, reminders about medications, identification of and responses to patients at suicide risk, sexual counselling for adolescents [11], mental health support [12], even assistance with end-of-life decisions [13]: all are available and appear to be helpful. If effective, these may provide tremendous good. Beneficence requires that the bots be accurate when they determine diagnosis, patient intent and emotional responses, and that there be failure transparency. Beneficence demands constant diligence to prevent programmed bias. To ensure beneficence, we have a moral obligation to understand the full range of potential relationships between humans and intelligent machines. Who is responsible for assuring beneficent programming? What are the benefits, are they real, and do they justify the still vague risks? These relationships could be as beneficial as our relationships with pets. Further research into the consequences of humans bonding with AI bots is an ethical necessity.

Non-maleficence: primum non nocere

This includes not only intentional harm but also the harder-to-assess harm from unintended consequences. Desperate, lonely people will build relationships with bots – but how do we prevent their being harmed or disadvantaged by this promising technology? We have only partially explored this topic.

Obviously, bots must not directly harm or deceive humans. They must not lie, cheat or steal. They should disclose that they are AI bots, maintaining symmetrical identification. To enforce this, it seems obvious that they should not legally 'own' anything or receive money – nothing is theirs. When relationships develop, much may be shared. Privacy and confidentiality must be maintained, and these systems must be very secure. Additionally, they should not be a source of undue influence (advertising) on the vulnerable.

When considering AI chatbots, Asimov's Laws of Robotics (1942) come to mind; the whole Wikipedia article is recommended. These are nice-sounding ethical guidelines, but building moral principles into amoral computer programs depends on humans.

Certainly, military drones and combat robots have strayed from Asimov's first law, and imbuing robots with a sense of self-preservation (the third law) may have fearful consequences. Nevertheless, the laws have engendered extensive discussion of the ethical difficulties of designing rules for inanimate objects to assure non-maleficence. For example, the first law ("never harm a human being or, by inaction, allow a human to come to harm") rapidly becomes problematic. Should a bot ever restrain a person, force medications, or breach confidentiality against the patient's wishes, either for the patient's or society's protection?

Harmful unintentional consequences are not yet fully understood. Will patients become overdependent? Since calculators became freely available, many seem to have lost the ability to add. Will reliance on bots for memory and thought atrophy these capabilities? Who can remember telephone numbers any longer? Will we, through constant interaction with bots, lose empathy and forget how to communicate with our fellow humans? We are only just discovering how social media is changing us. How will a warm, friendly, ever-present AI bot change us? Will we oversimplify and perhaps miss important emotional cues in social intercourse? Will AI bots infantilize people, increasing their dependence? Will AI bots disempower those they are trying to help, and is this maleficence? Will those dependent on AI bots become increasingly antisocial and narcissistic? Will the ability to form sincere, honest, open relationships with other humans suffer, and what will be the consequences? What will we as a society lose?

Finally, what harm may arise from what is missed? Although humans may bond with bots, the converse seems unlikely. Until bots become empathic, they may oversimplify and miss important emotional cues, to the patient's detriment. Premature closure and missed diagnoses cause real harm. There is also the risk of AI bot recommendations substituting for proven therapies in healthcare. Can interaction with an AI bot make one feel better, as evidence would suggest, but obscure serious issues [12,14]? Will a teen talk to a suicide-prevention AI bot instead of calling the suicide hotline or talking to a friend or parent [8]?


Autonomy

Respecting an individual's independent decisions about healthcare is a cornerstone of modern medicine. Is the vulnerable, senile, demented or disadvantaged person cared for by AI bots free of undue influence? Who decides whether a bot should influence a person whose judgement may be impaired – who holds substituted authority – the bot? Can the patient turn the bot off, or disregard it, when they may be harmed? When a meaningful relationship develops with a bot, is the patient unduly influenced, no longer able to make fully autonomous decisions? Will the bot's constant assistance infantilize them and limit their autonomy?

What is the meaning of autonomy when we live an existence with no real human contact? Are we in some way dehumanized? Careful observation of the consequences of our dependency on bots is necessary to comprehend these potential threats to individual autonomy.



Fig. 2. An elderly person comforted by the Paro robot [7].

Justice

Justice implies fairly meeting the needs of individuals. With growing demand for healthcare services and the need to comfort the elderly, justice may demand wide availability of healthbots [15]. Making ethical decisions about the distribution of these resources will necessitate understanding the consequences of developing relationships and dependencies. On the other hand, widespread adoption of AI healthcare may cost healthcare providers their employment, a societal impact that could harm many. These issues are of great concern throughout society with the advent of AI.

Another ethical concern arises around ownership: not just who owns the data, or who programs the bot. As AI bots become more humanoid and we develop more complex relationships with them, this chattel servitude requires caution. Chattel servitude dehumanizes both the human servant and the master and creates an unethical society. Could there be corrosive effects of owning humanoids that care for us and inform us but have no rights or independence? Do we run the risk of becoming corrupted, individually and as a society, by treating objects with human characteristics, with which we have potentially deep relationships, as inferiors?

Hopefully, this discussion has provoked some thought about the inadequately explored ethical concerns of developing interpersonal relationships with AI robots and has suggested a classical approach to them. In closing, the historian Yuval Noah Harari has, perhaps chillingly, stated:

“We will adapt to sympathetic Robots.  When you enter a doctor’s office, the doctor doesn’t really know how you feel. Maybe he just had a fight with his wife and doesn’t care. But the AI doctor is monitoring you with biometric sensors. It knows better than you that you are distressed or fearful or angry. It doesn’t have a wife or a husband. It has no other concerns. It’s focused 100% on you, and reacts to you in the best possible way, or at least the best way according to recent scientific theories. We will get used to these amazing sympathetic machines. And we will become far less tolerant of all these humans who don’t understand how we feel – and often don’t care.”  [16]

Bio of author

By Randall C. Wetzel, Director, The Laura P. and Leland K. Whittier Virtual Pediatric Intensive Care Unit, Los Angeles, CA

Dr. Randall Wetzel is the Chairman of the Department of Anesthesiology Critical Care Medicine at Children’s Hospital Los Angeles, Professor (Tenured) of Anesthesiology and Pediatrics, Keck School of Medicine, University of Southern California, and the Director of The Laura P. and Leland K. Whittier Virtual Pediatric Intensive Care Unit (VPICU) at Children’s Hospital Los Angeles. Prior to coming to CHLA, Dr. Wetzel held various faculty appointments in Anesthesiology and Critical Care Medicine at Johns Hopkins Hospital and University from 1981-1997.

He has received over $18 million in federal and private foundation grant funding for research projects. He served as a board member of The American Board of Pediatrics (2000-2003) and as Editor-in-Chief for the Society for Pediatric Anesthesia (1988-1995). He has authored 94 peer-reviewed articles and 45 book chapters; he is editor of Rogers' Critical Care Medicine and Critical Heart Disease in Infants and Children; and he has presented over 100 abstracts at various symposia.

  9. D’Alfonso S, Santesteban-Echarri O, Rice S, Wadley G, Lederman R, Miles C, Gleeson J, Alvarez-Jimenez M. Artificial Intelligence-Assisted Online Social Therapy for Youth Mental Health. Front Psychol. 2017 Jun 2;8:796. doi: 10.3389/fpsyg.2017.00796. eCollection 2017.
  10. Fitzpatrick KK, Darcy A, Vierhile M. Delivering Cognitive Behavior Therapy to Young Adults With Symptoms of Depression and Anxiety Using a Fully Automated Conversational Agent (Woebot): A Randomized Controlled Trial. JMIR Ment Health. 2017 Jun 6;4(2):e19. doi: 10.2196/mental.7785.
  11. Crutzen R, Peters GJ, Portugal SD, Fisser EM, Grolleman JJ. An artificially intelligent chat agent that answers adolescents' questions related to sex, drugs, and alcohol: an exploratory study. J Adolesc Health. 2011 May;48(5):514-9. doi: 10.1016/j.jadohealth.2010.09.002. Epub 2010 Dec 30.
  12. Powell J, Hamborg T, Stallard N, Burls A, McSorley J, Bennett K, Griffiths KM, Christensen H. Effectiveness of a web-based cognitive-behavioral tool to improve mental well-being in the general population: randomized controlled trial. J Med Internet Res. 2012 Dec 31;15(1):e2. doi: 10.2196/jmir.2240.
  14. Kanuri N, Newman MG, Ruzek JI, Kuhn E, Manjula M, Jones M, Thomas N, Abbott JA, Sharma S, Taylor CB. The Feasibility, Acceptability, and Efficacy of Delivering Internet-Based Self-Help and Guided Self-Help Interventions for Generalized Anxiety Disorder to Indian University Students: Design of a Randomized Controlled Trial. JMIR Res Protoc. 2015 Dec 11;4(4):e136. doi: 10.2196/resprot.4783.
  16. From Marconi F. The Future of Everything. The Wall Street Journal. September 29, 2018. pB4.