There were two parts to the latest AIMed webinar on behavioral health: using AI to predict suicide, depression and opioid misuse before it happens. In the first part, Alexis May, AIMed Director of Content, shared the first AIMed research findings on the adoption of AI within US healthcare organizations. More information can be found here.
In the second part, guest speakers Karen Murphy, Executive Vice President and Chief Innovation Officer at Geisinger; Dr. Carlo Viamonte, Medical Officer at Anthem; and Dr. John Frownfelter, Chief Medical Information Officer at Jvion, shared their thoughts on the use of AI in addressing at-risk behaviors like suicide and depression. In particular, they discussed convincing clinicians to trust AI and moving from predicting clinical outcomes to taking actions that influence or even change those outcomes.
From predictive analytics to prescriptive analytics
In his presentation, Dr. Frownfelter said Jvion is not just using machine learning to identify at-risk patients and predict what is going to happen to them down the road. The company is moving from predictive analytics to prescriptive analytics: patients with modifiable risks are also identified, so that clinicians can do something to change their trajectories and, eventually, their outcomes.
He highlighted that behavioral health has been in crisis. The US was challenged by the opioid epidemic for years before the COVID-19 pandemic took over. As people isolate at home, lose their usual ways of interacting, and even become jobless, some of these risk factors may be magnified. In a way, COVID-19 has compounded or increased the incidence of mental illnesses, especially suicide, depression, drug abuse and domestic abuse.
This is why Jvion wishes to understand the relationship between depression and opioid abuse, so that machine learning can identify whether an individual may be at high risk of suicide, depression or self-harm in the next six months, with or without a prior history or family history of mental illness. Although they found that a previous suicide- or depression-related emergency department visit is the biggest predictor of whether an individual will engage in similar behaviors again, some people die on their very first attempt.
As such, the algorithm needs to be smart enough to pick up these individuals. The algorithm Jvion created identifies not only the 0.2% of the population who are high-risk with a prior history, but also the next 6.8% of the population who are at medium risk. These small percentages mean more targeted actions can be taken. This, hopefully, will capture 50% of all occurrences of suicide before they take place. “It is not just about the ability to predict but the ability to impact the outcomes and we are using evidence based guidelines, data and intelligence to take evidence based care to a new level,” Dr. Frownfelter said.
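The tiering approach described here can be illustrated with a minimal sketch. The thresholds, field names and function below are hypothetical, chosen only to mirror the 0.2% / 6.8% split mentioned in the webinar; Jvion's actual model and cut-offs are not public:

```python
# Illustrative risk tiering from model scores (hypothetical sketch).
# The fractions mirror the 0.2% high-risk / 6.8% medium-risk split
# described in the webinar; they are not Jvion's actual cut-offs.

def tier_patients(scores, high_frac=0.002, medium_frac=0.068):
    """Assign 'high', 'medium' or 'low' tiers by ranked risk score.

    scores: dict mapping patient_id -> model risk score (higher = riskier)
    """
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    n = len(ranked)
    n_high = max(1, round(n * high_frac))
    n_medium = max(1, round(n * medium_frac))

    tiers = {}
    for i, (patient_id, _score) in enumerate(ranked):
        if i < n_high:
            tiers[patient_id] = "high"
        elif i < n_high + n_medium:
            tiers[patient_id] = "medium"
        else:
            tiers[patient_id] = "low"
    return tiers
```

In a prescriptive workflow, patients landing in the "high" and "medium" tiers would then be routed to targeted, evidence-based interventions rather than merely flagged.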
The need to change clinicians’ perceptions
Nevertheless, executing prescriptive analytics remains challenging. Dr. Frownfelter believes the challenges are two-fold. There is a need to inform clinicians that their intuitions may not always be right, which some may find hard to digest. He noted that those who trained in medicine in the 90s or early 2000s, like himself, would probably recall that social determinants were never considered key drivers of treatment recommendations. What the physician prescribed was always thought to matter most.
That perception turned out to be untrue. Social and behavioral factors around patients – whether they will follow what is recommended to them; whether they will take time for cancer screening; whether they will answer the phone when the case manager calls, and so on – all matter more than what the doctor prescribed. Often, it is a combination of social data, behavioral data, clinical data and more that gives a holistic picture of the patient. This is something clinicians ought to appreciate.
Besides this, Dr. Frownfelter said there is also a need to prepare clinicians for the likelihood that an AI solution will say the opposite of what they have concluded is true. All of this boils down to making the unfamiliar familiar. It is interesting that physicians rely on technology to navigate the highway but do not use the same kind of technology to make their practice safer and more efficient.
AI is supporting evidence-based medicine
Dr. Viamonte said this leads to the whole concept of trust. He believes physicians are an opinionated group of individuals. The AI black box is not transparent or explainable at the moment; physicians do not know how an algorithm arrives at its conclusions. So it is extremely difficult to ask them to put their trust in AI when they do not know whether it will work, or work in the way they desire. This is something physicians have to overcome as a group.
Murphy reassured the audience that leveraging AI does not equate to deliberate deviation from evidence-based medicine. In fact, AI is working to support the evidence. As such, physicians have to keep an open mind and recognize that work in the AI space is adding to existing evidence rather than replacing it.
Dr. Viamonte added that this is an exciting time. All along, the triple aim has been to improve patient outcomes, improve patient experience, and reduce cost. Some advocates have recently introduced a fourth aim: clinician work-life balance and burnout prevention, which are closely tied to mental health and well-being. Technology, digital solutions and AI all have the potential to enhance each of these areas, as long as we avoid implementing them the way we did electronic health records (EHRs), a tool that did not streamline workflows but obstructed the way physicians interact with patients. Innovations need to be instilled into healthcare in a seamless manner.
More information about the webinar can be found here. The webinar will be available on-demand shortly.