I recently attended a seminar on policy building for AI (artificial intelligence) in healthcare for the European Commission. The discussions centred on two questions. Should AI instead be called ML (machine learning), as a number of stakeholders felt that this is what it really is? And should there be a body that defines a code of ethics for the use of AI? These are very pertinent questions, and I would urge all stakeholders to engage in these discussions and build a global consensus around them.

As a clinician, I can say there are many tasks I would be relieved to see taken over by a reliable and efficient ‘entity’. These include screening the results of investigations done for my caseload and flagging up those that are abnormal. A more artificially intelligent system would go further and, alongside the results, give me a summarized list of red flags in that patient’s history and examinations. An even more sophisticated AI system would predict outcomes for the patient under a given set of variables and come up with a holistic, bespoke management plan with timelines and the teams involved, as an example of ‘precision medicine’. All of the above would be expected of a safe and competent clinician. However, with the numbers of patients we are seeing in many healthcare systems in today’s world, these tasks are very challenging, labor-intensive, resource-dependent, and fallible to human error. AI-supported systems would bring great efficiency and, needless to say, would free up the clinician to focus on the sickest patients needing one-to-one attention.
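The first of those tasks — screening results and flagging those outside a reference range — could be sketched very simply. The test names, units, and ranges below are illustrative assumptions for the sake of the example, not clinical guidance:

```python
# Hypothetical sketch: flag investigation results that fall outside a
# reference range. Test names, units, and ranges are illustrative only.

REFERENCE_RANGES = {
    "haemoglobin_g_dl": (12.0, 16.5),
    "potassium_mmol_l": (3.5, 5.3),
    "creatinine_umol_l": (45.0, 110.0),
}

def flag_abnormal(results):
    """Return the subset of results lying outside their reference range."""
    flags = {}
    for test, value in results.items():
        low, high = REFERENCE_RANGES.get(test, (float("-inf"), float("inf")))
        if not low <= value <= high:
            flags[test] = {"value": value, "range": (low, high)}
    return flags

# Example patient: low haemoglobin and raised creatinine should be flagged.
patient = {"haemoglobin_g_dl": 9.8,
           "potassium_mmol_l": 4.1,
           "creatinine_umol_l": 130.0}
print(flag_abnormal(patient))
```

A real system would of course need patient- and context-specific ranges, trend analysis, and clinical oversight; the point is only that the screening step itself is mechanical and well suited to automation.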

The issue is the reliability of the system. An AI system relies on three things: first, the accuracy of the data that is put in (its validity, timeliness, and relevance); second, the alignment of the algorithms that are programmed (as these define what the system is asked to do); and third, the functionality of the IoT (Internet of Things) platform that hosts the system (i.e. how accurately it follows the algorithms and retrieves the answers from the data).

As many AI systems come to market, how does one assure oneself of their reliability and ethical standing? Is there a gauge or metric that categorizes systems by the reliability of their outcomes? AI could have a score for its accuracy, perhaps called the Accuracy of Intelligence Ratio or Matrix. This would depend upon the reliability of the data and the alignment of the algorithms, as well as the functionality of the IoT platform that processes these variables to give the outcomes.
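One way such a composite score could work is to combine the three sub-scores named above — data reliability, algorithm alignment, and platform functionality — each normalized to [0, 1]. The function name and the choice of a geometric mean are my own assumptions for illustration; no such metric is established:

```python
# Hypothetical sketch of the proposed "Accuracy of Intelligence Ratio":
# a composite of three sub-scores, each in [0, 1]. The geometric mean is
# an illustrative assumption: a failure in any one component (a score near
# zero) drags the whole rating down, which matches the intuition that a
# system is only as reliable as its weakest link.

def accuracy_of_intelligence_ratio(data_reliability,
                                   algorithm_alignment,
                                   platform_functionality):
    """Geometric mean of the three sub-scores, each in [0, 1]."""
    scores = (data_reliability, algorithm_alignment, platform_functionality)
    for score in scores:
        if not 0.0 <= score <= 1.0:
            raise ValueError("sub-scores must lie in [0, 1]")
    product = 1.0
    for score in scores:
        product *= score
    return product ** (1.0 / 3.0)

# Good data and platform, somewhat misaligned algorithms.
print(round(accuracy_of_intelligence_ratio(0.9, 0.8, 0.95), 3))
```

An arithmetic mean would be the obvious alternative, but it lets a strong platform mask unreliable data; the multiplicative form penalizes exactly the failure modes the paragraph above is worried about.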

We are moving from narrow AI towards general AI. In a few years, it is predicted, there will be more accurate ML models that will do some of the work clinicians do. This is not necessarily a bad thing from patients' and clinicians' points of view. As long as what is delivered is accurate, efficient, and helpful in looking after patients according to good medical practice, I would welcome it. The challenge for the industry is to show, with evidence, that AI is not going to harm patients.

The rebuttal to this expectation from the industry and ethicists is that we are dealing with an emerging technology. The reliability and ethics of what we have here and now can be scrutinized; how AI-supported solutions will perform in the future, and what ethical dilemmas they will pose, is a matter of conjecture. If I were a sci-fi fan, I could be an optimist and think AI will be the Superman of healthcare. Some of my more pessimistic colleagues will consider it the Alien that comes to destroy and take over Earth. I would rather be the realist and hold that AI is yet another technology which will be what we make of it.

Author Bio


Naila is a senior clinician who has been affiliated with the NHS for almost 26 years. Her career has evolved not only in her specialty (Gynaecology) but also in medical education, patient safety, and healthcare informatics. She has held several senior leadership posts, including Associate Dean at the London Deanery, Associate Director for Medical Education, and Lead for the OBGYN undergraduate course at Imperial College. She is a champion for embracing technology in the delivery of high standards of healthcare and a frequent speaker on disruptive technologies and their place in futuristic healthcare. Recently she was interviewed by HIMSS TV at UK eHealth Week, where she delivered two very well received talks.