
Health Education England’s Annabelle Painter and Mike Nix on how different factors influence healthcare workers’ confidence in AI, and what this means for AI design and deployment.
Health Education England (HEE) and the NHS AI Lab have published a collaborative report exploring the factors that influence healthcare workers’ confidence in artificial intelligence (AI) technologies. While the report covers AI technologies used for any task in healthcare settings, this article discusses the challenges of incorporating AI into clinical reasoning and decision making (CRDM). The key consideration is determining the appropriate level of confidence a clinician can place in AI-derived information for a case-specific clinical decision.
What influences clinicians’ confidence during AI-assisted CRDM?
Effective CRDM requires clinicians to make value judgments about the significance and trustworthiness of information whose reliability is either unknown (for example, a patient history) or demonstrated only at a cohort or population level (for example, laboratory test results). They must then combine that information, which may be contradictory, to reach the best decision for each individual patient.
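One way to formalise this weighing of evidence is Bayesian updating, in which each piece of information shifts a probability estimate in proportion to its demonstrated reliability. The sketch below is illustrative only and is not taken from the report; the pre-test probability, sensitivity, and specificity are invented numbers chosen to show the mechanics.

```python
# Illustrative only: combining a clinical prior with one test result
# using Bayes' rule in odds form. All numbers are hypothetical.

def update_probability(prior_prob: float, likelihood_ratio: float) -> float:
    """Update a prior probability with one piece of evidence."""
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Hypothetical pre-test probability from history and examination.
pre_test = 0.10

# Hypothetical test with 90% sensitivity and 80% specificity:
# positive likelihood ratio = sensitivity / (1 - specificity) = 4.5.
lr_positive = 0.90 / (1 - 0.80)

post_test = update_probability(pre_test, lr_positive)
print(f"Pre-test probability:  {pre_test:.0%}")   # 10%
print(f"Post-test probability: {post_test:.0%}")  # ~33%
```

AI-derived information must enter this same weighing process, yet its case-specific reliability is often far harder to characterise than that of an established laboratory test, which is the difficulty discussed below.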
Clinicians who use AI-derived information during CRDM will need to understand the nature and context of this information to assess whether it warrants low or high confidence. Determining an appropriate level of confidence involves a complex set of considerations that depend on both the technology and the clinical scenario; some AI technologies and use cases present lower clinical or organisational risks than others.
However, several factors can influence how clinicians view AI-derived information (their user confidence), potentially leading to inappropriately high or low levels of confidence. These factors relate to clinician attitudes, AI model design, and cognitive biases, as discussed below.
Clinician attitudes
Personal experiences and attitudes to innovation and AI technologies can significantly impact a clinician’s confidence in using AI technologies. These include general digital literacy, familiarity with technologies and computer systems in the workplace, and past experiences with AI or other innovations.
Clinicians are also more likely to trust AI-derived information that does not relate to their own area of expertise. This suggests that less experienced clinicians may place inappropriately high confidence in AI-derived information developed to support their decision making, and may require education in how to critically appraise these technologies.
Conversely, experts (specialists in their area of care) tend to be more sceptical and question AI-derived information despite AI technologies having the potential to enhance their decision-making performance in some situations.
AI model design
Various design characteristics can influence confidence in AI technologies. For example, the way AI predictions are presented (such as diagnoses, risk scores, or stratification recommendations) can affect how clinicians process information and potentially influence their ability to establish appropriate confidence in AI-derived information.
Previous research has suggested that transparency in how an AI model computes and delivers an output, and the possibility of providing a ‘human-like explanation’ for the prediction (referred to as explainability), encourage higher levels of confidence in the technology.
However, explainable AI approaches (XAI) do not currently offer a panacea for assessing confidence in individual AI predictions, and may provide false reassurance to clinicians during CRDM.
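To make this concrete, the sketch below renders the same hypothetical prediction three ways: as a bare label, as a probability, and with a simple per-feature attribution of the kind a linear model supports exactly. The model, weights, and patient values are invented for illustration and are not from the report; the point is that a plausible-looking explanation does not by itself tell the user whether this individual prediction is correct.

```python
import math

# Hypothetical logistic model for illustration only: the weights and
# the example patient's values are invented, not taken from the report.
WEIGHTS = {"age": 0.04, "crp": 0.02, "prior_events": 0.80}
BIAS = -4.0

patient = {"age": 72, "crp": 35.0, "prior_events": 1}

# Linear score and predicted probability.
score = BIAS + sum(WEIGHTS[f] * v for f, v in patient.items())
prob = 1 / (1 + math.exp(-score))

# Presentation 1: a bare label, which invites uncritical acceptance.
print("Prediction: HIGH RISK" if prob >= 0.5 else "Prediction: low risk")

# Presentation 2: a graded probability, which supports calibrated confidence.
print(f"Estimated risk: {prob:.0%}")

# Presentation 3: per-feature contributions to the linear score.
# These are exact for a linear model; for the complex models typical of
# clinical AI, post-hoc attributions are approximations and can reassure
# without guaranteeing that this individual prediction is right.
for feature, value in patient.items():
    print(f"  {feature}: contributes {WEIGHTS[feature] * value:+.2f}")
```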
Cognitive biases
Cognitive biases can affect AI-assisted CRDM. Some of the most common cognitive biases clinicians are susceptible to when using AI-derived information include:
- Automation bias – the tendency to accept the AI recommendation uncritically, potentially due to time pressure, or under-confidence in the clinical task (for example, in non-specialists)
- Aversion bias – the tendency to be sceptical of AI, despite strong evidence supporting its performance at evaluation
- Alert fatigue – ignoring alerts provided by an AI system due to history or perception of too many incorrect alerts (for example, false positives)
- Confirmation bias – accepting AI-derived information uncritically when it agrees with the clinician’s intuition
- Rejection bias – rejecting AI recommendations without due consideration when they contradict clinical intuition
The propensity towards these biases may be affected by choices about where AI-derived information is integrated into the decision-making workflow, and by how that information is presented. Interviewees for this research highlighted that enabling clinicians to recognise their inherent biases, and to understand how these affect their use of AI-derived information, should be a key focus of related training and education. Failure to do so may lead to unnecessary clinical risk or diminished patient benefit from AI technologies in healthcare.
How do we develop clinician confidence in AI?
During clinical decision making, inappropriate levels of confidence in AI-derived information could lead to clinical error or harm in scenarios where the AI underperforms and its output is not properly assessed or checked.
Clinicians need to understand how their current decision-making process could be affected by AI-derived information and understand the importance of retaining a critical eye, to detect potential AI failure cases.
Education and training will be key to developing appropriate levels of confidence during CRDM. We will need to develop and deploy educational pathways and materials for healthcare professionals at all career points and in all roles, to equip the workforce to confidently evaluate, adopt and use AI. During clinical decision making, this would enable clinicians to determine appropriate confidence in AI-derived information and balance this with other sources of clinical information.
AI models can also be designed in ways that optimise user confidence. Further research is needed to understand how particular AI model features influence confidence, and how best to present AI-derived information for CRDM.

This could include investigating ways to minimise the impact of cognitive biases and preserve clinicians’ critical appraisal, for example by delaying the availability of AI-derived information until an initial human opinion has been formed.
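The sketch below illustrates that gating idea. It is not from the report: the class and method names are hypothetical, and a real deployment would need audit trails, emergency overrides, and integration with the clinical record.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class GatedDecisionSupport:
    """Withhold the AI suggestion until the clinician has recorded an
    independent initial assessment (illustrative sketch only)."""
    ai_suggestion: str
    _clinician_assessment: Optional[str] = field(default=None, repr=False)

    def record_assessment(self, assessment: str) -> None:
        """Capture the clinician's own opinion first."""
        self._clinician_assessment = assessment

    def reveal_ai_suggestion(self) -> str:
        """Release the AI output only after a human opinion exists, so
        that agreement or disagreement is noticed rather than anchored on."""
        if self._clinician_assessment is None:
            raise PermissionError(
                "Record an initial clinical assessment before viewing "
                "the AI suggestion."
            )
        return self.ai_suggestion

# Hypothetical usage:
case = GatedDecisionSupport(ai_suggestion="Suspected pulmonary embolism")
case.record_assessment("Likely community-acquired pneumonia")
print(case.reveal_ai_suggestion())  # now permitted; disagreement is visible
```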
These areas, along with others listed in the collaborative report, can contribute towards developing confidence in AI amongst clinicians and promote the safe and effective use of these technologies. A second report by HEE and the NHS AI Lab will outline the educational and training requirements to support this goal.
This article was written by Dr Annabelle Painter and Dr Mike Nix – Clinical AI & Workforce Fellows at the NHS AI Lab and Health Education England, with George Onisiforou – Research Manager at the NHS AI Lab.
The report, Understanding healthcare workers’ confidence in AI, can be accessed here.