Alexis is director of content at AIMed, with responsibility for the research, development and delivery of products across events, digital and publishing. A highly experienced events executive with a career focus on the intersection between healthcare and technology, he is also a school governor leading on teaching, learning, and quality of education.
Frontline healthcare staff will need bespoke and specialised support before they will confidently use artificial intelligence (AI) in their clinical practice, a new report has found.
Published by Health Education England and the NHS AI Lab, the report found that, if patients are to benefit from AI, healthcare workers will need specialised support to use AI safely and effectively as part of clinical reasoning and decision-making. The vast majority of clinicians are unfamiliar with AI technologies and there is a risk that, without appropriate training and support, patients will not equally share in the benefits offered by AI.
The report calls for clinicians to be supported through training and education to manage potential conflicts between their own intuition or views about a patient’s condition and the information or recommendations provided by an AI system.
For instance, a clinician may accept an AI recommendation uncritically, perhaps because of time pressure or a lack of confidence in the clinical task, a tendency known as automation bias.
Deploying AI in a health and care setting will require changes in the ways that the workforce operates and interacts with technology. A second report, to be published later this year, will further clarify the educational pathways and materials needed to equip the workforce, across all roles and levels of experience, to confidently evaluate and use AI.
“Understanding clinician confidence in AI is a vital step on the road to the introduction of technological systems that can benefit the delivery of healthcare in the future”, said Hatim Abdulhussein, National Clinical Lead for AI and Digital Medical Workforce at Health Education England. “Clinicians need to be assured that they can rely on these systems to perform to levels expected to make safe, ethical and effective clinical decisions in the best interests of their patients.”
Brhmie Balaram, Head of AI Research and Ethics at the NHS AI Lab, said:
“AI has the potential to relieve pressures on the NHS and its workforce; yet, we must also be mindful that AI could exacerbate cognitive biases when clinicians are making decisions about diagnosis or treatment. It is imperative that the health and care workforce are adequately supported to safely and effectively use these technologies through training and education.
“However, the onus isn’t only on clinicians to upskill; it’s important that the NHS can reassure the workforce that these systems can be trusted by ensuring that we have a culture that supports staff to adopt innovative technologies, as well as appropriate regulation in place.”
The report argues that how AI is governed and rolled out in healthcare settings can affect the trustworthiness of these technologies and confidence in their use. It outlines the many factors that can affect the workforce’s confidence in using AI, including the leadership and culture within their organisations, as well as clear nationally driven regulation and standards.