Yesterday (29 September), delegates at the AIMed Healthcare Executives virtual conference shared their experiences and thoughts on the key requirements for successfully implementing artificial intelligence (AI) in clinical and healthcare settings.

Start with a problem and how AI can address it

Dr. Mike Fisher, Consultant Cardiologist and Chief Clinical Information Officer at Liverpool University Hospitals National Health Service (NHS) Foundation Trust, highlighted the importance of culture and trust. He said there needs to be a plan to bring the right people and systems on board. Trust fits into the bigger picture of ethics, and it will never be established if clinicians become concerned that AI will make their lives miserable, downgrade them professionally, or even replace them.

Likewise, if AI complicates clinicians’ workflows, especially in high-intensity areas like emergency medicine, adoption will not happen even if the algorithm has been proven safe and beneficial. As such, Dr. Amrita Kumar, Consultant Radiologist and Clinical Lead in AI at Frimley Health, believes it is more appropriate to start by identifying a problem and asking how AI can step in to help solve it. Karl Hightower, Senior Vice President of Enterprise Information Management and Chief Data Officer at Novant Health, echoed that view.

He explained that his institution approaches AI as a system intelligence designed to look at the friction points that exist in clinical and operational workflows. In radiology, for example, time is the friction point, as radiologists need to examine scans and make decisions quickly. Machine learning can step in to raise awareness, pointing out findings that radiologists should prioritize or take note of. This reduces much of the tedious work and speeds up clinical decision making.

Understand that AI is not perfect

Dr. John Brownstein, Professor of Bioinformatics at Harvard Medical School and Chief Innovation Officer of Boston Children’s Hospital, said the ongoing COVID-19 pandemic has been a dramatic force, not only in recognizing where AI can support physicians and patients but also in prioritizing and allocating resources and cutting costs in general. However, Dr. Srinivasan Suresh, Chief Information Officer and Chief Medical Informatics Officer at UPMC Children’s Hospital of Pittsburgh, said it is fair to presume that AI is still not perfect.

Regardless of whether an AI model is a continuous learning system or a static one, if no new data is fed to it, it is not going to help clinicians much. So, to increase buy-in, Dr. Suresh feels we should see AI as an augmentative or assistive tool. Dr. Kumar added that in her specialty of breast cancer imaging, the baseline purpose of AI is to detect more malignancies without raising alerts on things that are not cancerous.

Even as clinicians, they have guidelines to ensure a balance between the two so that there are no variations between practices. At the moment, several companies are publishing internal reviews of the sensitivity and specificity of AI for cancer detection. Nevertheless, at least in the UK, many of these remain at the research level, and there is no real procurement pathway for using AI models in actual clinical practice.

Putting in place a risk assessment plan

Hightower agreed that research and innovation ought to be handled separately. Right now, as people are still very curious about AI, things have to run in an agile manner: attain small wins and establish baseline metrics to measure the kind of improvement that AI can bring about. If AI can only deliver a minor improvement, then one should reconsider whether it is worthwhile to introduce such a big disruption. If the improvement is going to be huge, then the question is how to roll it out effectively and safely for the best outcome.

On the other hand, Dr. Fisher pointed out the danger of not knowing the difference between research and innovation. AI is perceived as innovation now, and many people have the attitude of getting it out there to see whether it works. This is dangerous thinking in healthcare: if AI ever causes a disaster because it was introduced prematurely, it will take a long time to set the course right again. Thus, Dr. Fisher advised treating AI as an opportunity for pragmatic research and looking for simple, clinically relevant outcomes via randomized trials across wards in the same institution or across institutions. There will be major barriers to adoption until credible evidence is available.

As such, Dr. Fisher recommended a risk assessment procedure based on a spectrum. The spectrum starts with simple, transparent systems, such as protocol-driven decision support that relies on fixed guidelines developed by human experts, and runs all the way to highly opaque deep learning models, where one can never comprehend how the AI arrives at its conclusions. With such a risk assessment in place, both professionals and patients will understand and eventually accept how AI is being applied.

The virtual conference is available on demand here.

*

Author Bio

Hazel Tang. A science writer with a data background and an interest in current affairs, culture, and the arts; a no-med from an (almost) all-med family. Follow on Twitter.