Welcome to the Geeky Gynaecologist’s Glog :)

‘First do no harm’, or ‘non-maleficence’, is one of the four ethical principles proposed by Tom Beauchamp and James Childress in their famous textbook, ‘Principles of Biomedical Ethics’. This ethical framework is widely used as a litmus test for evaluating the ethical credibility of a given action, or in some cases inaction, in medicine. The other tenets of their framework are autonomy, justice, and beneficence (do good).

Now, dear Glog readers, you may be wondering if I have mistakenly submitted an article on ethics meant for another journal! Not really. While we get totally immersed in the wonders of the kind of disruption we can deliver, there are valid challenges thrown in that need to be fully understood. I want to bring in the ethics around the use of certain disruptive technologies in healthcare. As we explore and expand the use of robotics and artificial intelligence (AI) in the delivery of healthcare, AI-enabled clinical decision support, based on algorithms fed in by human experts, comes into play. An often-faced scenario in the intensive care setting is that of a clinician who has to make the decision to switch off life support for a paediatric patient when all the clinical parameters point towards the futility of continuing machine-based support. This decision is very challenging and often heartbreaking. It is made only with much clinical wisdom and empathetic handling. In cases where the next of kin, in their grief, disagree with the team after several rounds of mediation, the law may need to intervene.

In this scenario, let’s introduce a machine-enabled, AI-supported decision tool, based upon a number of inputs fed with both biomedical and social data points, which makes a recommendation. It will produce an algorithm-based decision output, using big-data intelligence to search for similar cases, and will generate its recommendation from this mass of inputs fed in by real humans, with feelings and with moral and legal compasses aligned to guidance on good medical practice.
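To make the idea concrete, here is a minimal sketch of what such a similarity-based recommendation step might look like. Everything in it, the field names, the distance measure, the voting rule, is a hypothetical illustration, not any real CDS product:

```python
# Purely illustrative sketch of a similarity-based CDS recommendation step.
# All names, fields, and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class CaseRecord:
    features: list[float]   # combined biomedical + social data points
    outcome: str            # human-assigned label, e.g. "recovered"

def similarity(a: list[float], b: list[float]) -> float:
    """Inverse Euclidean distance: higher means more similar."""
    dist = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return 1.0 / (1.0 + dist)

def recommend(patient: list[float], history: list[CaseRecord], k: int = 5) -> str:
    """Vote among the k most similar historical cases.

    The output is a *recommendation* for the clinical team, never a
    dictum: every case in `history` was labelled by human clinicians,
    so the system only reflects human judgement back, at scale and speed.
    """
    nearest = sorted(history, key=lambda c: similarity(patient, c.features),
                     reverse=True)[:k]
    votes: dict[str, int] = {}
    for case in nearest:
        votes[case.outcome] = votes.get(case.outcome, 0) + 1
    label, count = max(votes.items(), key=lambda kv: kv[1])
    return f"RECOMMENDATION (advisory only): {label} ({count}/{k} similar cases)"
```

Note the last line of the sketch: the system returns advice together with its evidential basis, and the decision itself stays with the humans.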

Are we, as a clinical fraternity, ready to accept such a decision support system? Those who understand how the CDS (Clinical Decision Support) has been designed to produce its recommendations may be more accepting of the idea (with strict provisions that the system has the desired inputs highlighted above and that it will only give out recommendations, not a dictum set in stone). Those who are still unclear on how AI and machine learning work may deride the very idea of an inhuman bulk of metal and wires ‘making’ sensitive decisions such as this.

I’m neither advocating that this use is totally justified nor rejecting it outright, and here is why.

Any such solution will be based on human intervention! The recommendations that come out will depend upon what data has been fed in, how it is aligned, what principles are applied, and what guidance is given for interpreting them. These are all human-enabled processes that the machine merely analyses, in the relevant context, at great speed. One may of course argue about human fallibility in making the very inputs upon which the recommendations are ultimately formed.
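A deliberately trivial sketch makes the point: the same algorithm returns opposite ‘recommendations’ when its human-supplied history changes. The labels below are hypothetical:

```python
# The algorithm is constant; only the human-labelled history differs.
def majority(history: list[str]) -> str:
    """Return the most common human-assigned label."""
    return max(set(history), key=history.count)

# Two hypothetical sets of past cases, labelled by different human teams
print(majority(["continue support", "continue support", "withdraw support"]))
print(majority(["withdraw support", "withdraw support", "continue support"]))
# Same code, opposite outputs: the machine only echoes its inputs.
```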

There are already publications that support this use. Lovejoy et al. suggest in their recent paper that, given the paucity of positive multi-centre prospective randomised controlled trials in the ICU (Intensive Care Unit) setting, clinician decision-making is driven largely by experience and instinct, resulting in significant variability amongst clinicians. AI could reduce this inter-clinician variability and offer other benefits. AI excels at finding complex relationships in large volumes of data and can simultaneously and rapidly analyse many variables to predict outcomes of interest, such as sepsis or mortality. The modern ICU environment is data-rich, providing fertile soil for the development of more accurate predictive models, better decision support tools, and greater personalisation of care[i]. They emphasise the value of AI-supported tools that enable sepsis prediction, severity scoring, and the timing of ventilator removal. The last is a vital decision in the ITU (Intensive Treatment Unit), as both premature extubation and prolonged ventilation are associated with higher mortality rates; yet wide variation in practice is seen, and accurate prediction is challenging.
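For readers curious what ‘predicting outcomes of interest’ looks like in code, here is a toy sketch of the general technique: a probabilistic classifier trained on patient variables. It uses synthetic data, the variable names are hypothetical, and it is not the authors’ method:

```python
# Toy illustration of outcome prediction from ICU-style variables;
# synthetic data only, NOT the model described by Lovejoy et al.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical ICU variables: heart rate, lactate, white cell count, age
X = rng.normal(size=(1000, 4))
# Synthetic "sepsis" label loosely driven by two of the variables
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Probabilistic risk scores, not verdicts: the output supports, rather
# than replaces, the clinician's decision.
risk = model.predict_proba(X_test)[:, 1]
print(f"AUROC on held-out data: {roc_auc_score(y_test, risk):.2f}")
```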

The bottom line is that, as clinicians, we will need to make decisions about adopting technology-enabled delivery of healthcare. This requires us to adapt and become adept, with at least a basic understanding of how these tools work, to ensure we are not a barrier to transformation but an enabler!

Until next time…

Glogging out,

Geeky Gynaecologist

[i] Lovejoy CA, Buch V, Maruthappu M. Artificial intelligence in the intensive care unit. Critical Care. 2019;23:7. https://doi.org/10.1186/s13054-018-2301-9

Author Bio

Naila is a senior clinician who has been affiliated with the NHS for almost 26 years. Her career has evolved not only in her specialty (Gynaecology) but also in medical education, patient safety, and healthcare informatics. She has held several senior leadership posts, such as Associate Dean at the London Deanery, Associate Director for Medical Education, and Lead for the OBGYN undergraduate course at Imperial College. She is a champion for embracing technology in the delivery of high standards of healthcare and is a frequent speaker on disruptive technologies and their place in futuristic healthcare. Recently she was interviewed by HIMSS TV at UK eHealth Week, where she delivered two talks that were very well received.