The Society for Artificial Intelligence in Medicine (AIME) will host its 19th conference in Poznan, Poland, from today (26 June) to 29 June. The organization has supported the use of artificial intelligence (AI) techniques in medical care since 1986 and hosts a biennial conference for researchers and medical professionals to report breakthroughs and significant results.
This year's AIME conference program is just as exciting. AIMed is glad to announce that our Founder and Chairman, Chief Intelligence and Innovation Officer of Children’s Hospital of Orange County, Dr. Anthony Chang, is one of the invited speakers at the conference. He will present a lecture entitled “Common misconceptions and future directions for AI in medicine: A physician-data scientist perspective” tomorrow (27 June) morning.
As the lecture title suggests, despite the hype around AI in medicine and healthcare, some of us may still be struggling to understand what the new technologies can actually do for us. Others may think it is precisely this buzz and these myths that prevent some of us from knowing the real AI. Before we ponder further, we have briefly outlined a list of common misconceptions about AI.
Misconception 1: Clinicians will be replaced by AI
This is probably one of the most commonly heard misapprehensions about AI. The reality, as Dr. Chang asserts, is more likely that clinicians who do not know AI will be replaced in the near future by those who do. At the recent AIMed Cardiology, Dr. Chang said he had already come across radiology programs looking for radiologists with AI or machine learning (ML) experience. Trained clinicians with a data science background are likely to have an advantage in their careers. As such, Dr. Chang encouraged medical professionals to start learning more about AI. One way to begin is to reflect on your present role and responsibilities and how they could be improved or adapted for the AI era.
Misconception 2: You have to know programming in order to contribute
No, because programming is just one of the many parts of AI. Medical professionals who do not know programming can partner with data scientists and provide them with the clinical insights needed to develop the solutions they have in mind. Likewise, they can assist in cleaning the medical data required to build an AI solution. Most of the time, medical data from electronic health records (EHRs) are unstructured, and it can be rather time-consuming for untrained individuals to organize them; this is an area where medical professionals without programming knowledge can help.
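To make the idea of turning unstructured EHR text into organized data concrete, here is a minimal sketch in Python. The note format, field names, and patterns below are illustrative assumptions, not a real EHR schema:

```python
import re

def extract_vitals(note: str) -> dict:
    """Pull a few structured fields out of a free-text clinical note.

    Hypothetical example: assumes vitals appear as "BP 120/80" and
    "HR 72"; real EHR notes are far messier and vary by institution.
    """
    fields = {}
    # Blood pressure written as e.g. "BP 120/80"
    bp = re.search(r"BP\s*(\d{2,3})/(\d{2,3})", note)
    if bp:
        fields["systolic"] = int(bp.group(1))
        fields["diastolic"] = int(bp.group(2))
    # Heart rate written as e.g. "HR 72"
    hr = re.search(r"HR\s*(\d{2,3})", note)
    if hr:
        fields["heart_rate"] = int(hr.group(1))
    return fields

note = "Pt seen today. BP 120/80, HR 72, afebrile."
print(extract_vitals(note))  # {'systolic': 120, 'diastolic': 80, 'heart_rate': 72}
```

Even a toy example like this shows why clinical insight matters: knowing which abbreviations and formats clinicians actually use is exactly the knowledge a data scientist lacks.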
Misconception 3: AI conquered Go so biometrics is easy
No, because the knowledge we have about AI is not current enough, nor is it fully democratized. The same goes for our access to data. AI feeds on data; without data, there will be no information, knowledge, and so on. Even if there is sufficient relevant data to train an AI model, the model needs to be validated over time and must keep growing with new data so that the eventual solution is safe to apply to a diverse group of people. AlphaGo was trained by playing against humans and then against other computers, a tedious process. We should not underestimate the time and effort required to develop such solutions.
Misconception 4: AI is for image-focused subspecialties
Indeed, radiology and cardiology are the areas where we have witnessed an AI boom, because images are readily available data that developers can tap into. However, this is not all. As Dr. Chang noted, there are at least three other types of AI out there. First, assisted AI: systems that automate repetitive tasks, like the UR robots used for blood work in a Copenhagen hospital.
Second, augmented AI, which allows humans and machines to make decisions collaboratively, as seen in Watson for Oncology. Last but not least, autonomous AI, in which decisions are made independently by adaptive intelligence systems. An example is IDx-DR, a self-learning screening tool that has been used on retinal images at the University of Iowa.
Misconception 5: AI will be added to every aspect of healthcare
We have been living in the era of evidence-based medicine for the past 30 years. However, there is a knowledge gap in precision medicine and population health. Medical professionals are struggling with real challenges, from preventing disease, managing chronic conditions, and improving survival rates to battling workforce fatigue and cost. So, yes, AI will eventually be added to every aspect of healthcare to provide new insights for medical professionals to solve pressing problems, but it will happen gradually. There are still many challenges regarding AI safety, regulation, and ethics that need to be addressed.
Misconception 6: Deep learning will be the main AI tool for a long time
No. In the early days, AI with subhuman performance was occasionally used in commercial expert systems, with varying degrees of utility. Now we have narrow, task-specific AI that is just starting to match and, in some instances, exceed human performance in tasks including conversational speech recognition, driving vehicles, playing Go, and classifying skin cancer. Looking at the trend, although deep learning dominates the present AI scene, it is likely to evolve again into a kind of cognitive architecture, whereby general AI will exceed human performance and reasoning in complex tasks, including writing best-selling novels or even performing surgery. Human intelligence will also improve as we learn from AI.
Misconception 7: We need more data for deep learning in medicine
The more accurate way of putting it is that we need more structured data, open sources, and effective tools for data sharing. As mentioned earlier, there are data waiting to be cleaned within EHRs, and this is an area where medical professionals without programming skills can help. However, it is also true that some of these data are not accessible to medical professionals. On top of that, there are challenges arising from privacy rules and regulations that hinder wider data sharing in order to protect individuals from being identified personally. Even though the chances of an individual being identified are slim, the debate continues.
Misconception 8: AI will make clinicians less human
Yes and no. Automation bias is the tendency to rely on or favor automated decision support systems over human cognition. As Dr. Chang noted, he has already started to see younger physicians’ preference for machines, even in their search efforts. There have been studies showing how prolonged screen time may change the structure of children’s brains. Over-relying on automation may eventually compromise physicians’ intellectual capability and dexterity.
On the other hand, although AI exceeds humans in its ability to accumulate and make sense of large amounts of data and in its perfect memory for details, AI is not capable of creative problem solving or complex decision making. As AI moves into the next 10-20 years of development and deep learning reaches a plateau, that is probably when the cognitive components will kick in and more human-machine collaboration will be needed.
Misconception 9: AI is a black box
Presently, yes: most of us are still unsure why AI makes certain decisions, or does things the way it does, after being fed certain data. Nevertheless, much research is going on at the moment to open the black box.
Misconception 10: AI in medicine will be here in the future
No, we believe AI is already here. AI may still be very narrow now, as it only does what we ask it to do. We are in the stage of reinforcement learning and are still finding ways to move towards general AI, the super-smart AI. However, there are solutions that have been approved by regulatory authorities and are already in place for adoption. So, AI is in the present tense.
A science writer with a data background and an interest in current affairs, culture,