At the latest AIMed breakfast briefing, "Experience the Future of AI (Artificial Intelligence) in Radiology", which took place in Chicago on 9 April, Dr. Paul J. Chang, professor and vice-chairman of Radiology Informatics at the University of Chicago School of Medicine, gave an informative presentation on the challenges faced by the present-day healthcare system that may prevent AI and machine learning from reaching their full potential.
Dr. Paul Chang noted the absence of adequate infrastructure to support AI development in medical institutions. Although there is hype around AI in medicine and innovations are sprouting, the general architecture is practically non-existent. Most executives and leaders of medical institutions are not ready to consume AI; as such, there is no sense of orchestration in hospitals when it comes to AI initiatives or overarching development.
AI infrastructure is not contained within particular hardware or software; much of the time, it also means leadership. As highlighted by the guest speakers of an earlier AIMed webinar, to ascertain AI's success, leadership should begin by assessing the organization's analytics maturity. There is a need to reflect on the existing resources within the organization and whether there is executive sponsorship for an AI-related initiative.
These considerations will inform the decision of whether the organization should create or buy an AI solution. They also include competency building: ensuring individuals within the organization have the right technical aptitude to build and drive the AI project to success. Most importantly, effective leadership will change the perspectives of physicians and clinicians, reassuring them that the technology is not there to threaten their roles but to increase efficiency and minimize burnout.
It is impossible to talk about AI and machine learning without mentioning data and storage. Most AI initiatives target the electronic health record (EHR) for its abundance of data, without realizing that more patients are also starting to use wearables or smartphone applications to keep track of their medical conditions. Besides, if a patient consults several specialists at once, all of whom employ different EHRs, the obtained data may only be a partial representation of that patient.
Data silos can create data fatigue. This is especially so when the data comes in different forms, from images to text and scans, as in the case of medicine. Even if a centralized system is in place, there is a need to consider its capacity and reliability. Machine learning algorithms feed on data, and that data has to be structured and cleaned. Moreover, patients should not be identifiable at any stage of data cleaning, to protect their privacy.
Ultimately, all of this boils down to the level of AI the organization plans to use and whether it will need the AI-driven interface to provide real-time feedback. When deciding on a particular data storage and analytics framework, there is also a need to consider its scalability. After all, data is not stagnant, and neither is AI.
At the second AIMed breakfast briefing, "Experience the Future of AI in Radiology", which took place in Boston on 10 April, Sara Gerke, research fellow in medicine, AI and law at the Petrie-Flom Center for Health Law Policy, Biotechnology and Bioethics at Harvard Law School, said we have more important things to tackle than the AI we have in place. One of these is liability. She believes that at the end of the day, even if the technology advances to the point where autonomous robots perform procedures, there should still be a human responsible for what is going on. As such, part of the AI infrastructure should also encompass an ethical framework that outlines medical malpractice in the era of AI and new technology.
A science writer with a data background and an interest in current affairs, culture and the arts; a non-med from an (almost) all-med family.