The latest AIMed Webinar took place on 26 March at 10 am PST. The hour-long session was facilitated by Dr. Anthony Chang, founder and chairman of AIMed and chief intelligence and innovation officer of Children’s Hospital of Orange County (CHOC). The invited speakers were Molly K. McCarthy, national director, US provider industry and chief nursing officer in the health and life sciences sector for Microsoft, and John Frownfelter, chief medical information officer of Jvion.

The agenda of the webinar was to help attendees understand the Artificial Intelligence (AI) healthcare market, identify the components of operational and successful AI strategies, and learn the important steps to bring AI into their organizations. The session kicked off interactively with a poll: McCarthy asked about the attendees’ backgrounds and their current interest in AI. Most attendees expressed an interest in gathering information about AI and understanding its application in healthcare.

McCarthy cited that the healthcare AI market is likely to grow at a compound annual growth rate (CAGR) of between 47% and 50%, with projections to reach $36.1 billion by 2025. Despite the trend, Frownfelter said that, based on market analyses his company had done earlier, as of 2019 only about 10% of hospitals were deploying AI solutions. However, he foresees that within the next 12 to 24 months, that number will double or triple.

Key components of successful AI strategies 

Frownfelter said that in order to build a successful AI strategy, it is important to assess where the company stands in terms of analytics maturity, who has the power to drive operational initiatives at the highest level, and what resources already exist. AI opportunities can be clinical, operational or financial. Frownfelter believes it is risky for a company to come forward and say, “we want to invest in AI but we are not sure where.” He recommended looking at comparative strategies in the market rather than simply drafting a solution for one particular problem.

McCarthy agreed. She said it is crucial for different departments, such as IT or digital health, to sit down together and define an AI strategy. Capability building should also be taken into consideration, to develop the technical competency of individuals in the organization. At the same time, there is a need to think outside the box. “Not within the organization or healthcare, but look at consumers, patients, even consulting or talking with folks outside of healthcare, in areas like retail where AI has been adopted and been successful,” McCarthy said.

The speakers also discussed the flexibility and cultural change required to assimilate AI into healthcare. Frownfelter highlighted that, to some, AI is both disruptive and threatening. This is especially true for healthcare professionals trained with the mentality to reject anything that cannot be felt physically. That is precisely AI’s pitfall: only the output can be seen, so the validation approach is different.

To think AI is invincible is the beginning of failure

The speakers highlighted that an AI strategy may falter when an enterprise underestimates the complexity of healthcare and of building an AI solution that targets it. It is too simplistic to think that AI is going to do everything or replace something. AI should be regarded as another form of “lab test,” which healthcare professionals interpret in order to make medical decisions and adjust, in small ways, what they would like to do with their patients.

Furthermore, as Frownfelter pointed out, an institution may have a robust data science team that still overestimates its own competency. Such teams think they can do anything and promise to deliver a certain AI solution, yet most of the time they cannot. A discrepancy between what is promised and what can be delivered creates frustration. “There is a need to create some boundaries and focus on who is capable of doing what,” Frownfelter added.

Ultimately, if a company is interested in building its own AI solution, it has to bear in mind its target and the timeline to deliver it. The building process can be long, especially if it is done internally. It may take months or even years, and the process repeats itself at the implementation stage. What often happens is that an organization begins by building something on its own until it realizes the effort is of no avail, and then it resorts to buying.

What is not good for AI? 

Dr. Chang concluded the session with a thought-provoking question: what do you think is not going to be good for AI? McCarthy’s answer was that, apart from thinking about a solution and scaling it, healthcare, at the end of the day, is about people. So regardless of how technology evolves, we do not want to lose that human connection, that human touch. Likewise, Frownfelter expressed that if he were a patient one day, he would not want a machine to give him empathy. He would want a human healthcare professional, because empathy will never be replaced by AI.

The webinar is still available to revisit here. Keep an eye on our blog and magazine for more articles and information on AI and new technologies.

Author Bio

Hazel Tang

A science writer with a data background and an interest in current affairs, culture and the arts; a non-med from an (almost) all-med family. Follow on Twitter.