Can a chatbot ever be a scientific way to treat people suffering from mental health conditions?

When an AI company aimed at treating mental health appears in the news all the time, it’s difficult to tell whether it is a fantastic new innovation or whether it has sunk an unhealthy amount of its budget into marketing.

Many press reports in the last year have focused on Woebot, a chatbot that can talk to people suffering from mental health issues and help treat them, especially after Andrew Ng, former VP and Chief Scientist at Baidu, joined the team.

In a blog post after joining the company, he wrote: “Depression is the leading global cause of disability. You probably have friends with depression that don’t talk about it.”

“Apart from the human cost, mental health also imposes a huge economic burden. Heart disease costs the US healthcare system $147B annually; mental health costs even more: over $200B.”

“While a software chatbot will never replace a human therapist, Woebot makes it possible to inexpensively deliver counseling to millions.”

“AI is the new electricity: even with its current limitations, it is already transforming multiple industries. The transformation of mental health care will help millions of people who struggle with their mental health, sometimes through literal life and death decisions.”

The AIMed community is thrilled by emerging technology with the potential to help patients on a global scale, but only if it has science and academic integrity at its core, so we interviewed the Woebot team to find out more about what they’re doing.

AI MED: Can you update our readers on the latest results Woebot has achieved in treating mental health?

Team WoeBot: We did a study that showed symptom reduction in both anxiety and depression after two weeks. Since then we have replicated that effect in another study with a larger group over a longer period, and on average people are maintaining those effects. But what was really incredible was the way participants oriented toward Woebot, and this helped us realize the potential we have to help people. Since we’re a bunch of recovering academics, we still measure success in terms of best-practice outcome measures (we can’t help it), and that’s what we optimize for.

AI MED: How is Woebot ensuring it remains an academic project with science at its heart?

Team WoeBot: Woebot actively works with research universities and government agencies to create versions of the product for use in randomized controlled trials. We also actively collect anonymized data on the efficacy of our product and use these results to fine-tune the offering. Our philosophy is to deliver only evidence-based interventions that are supported by data. As a company founded by academics, we also give back to the research community through publication in peer-reviewed journals.

AI MED: What are the ethical considerations for the Woebot team in treating mental health?

Team WoeBot: There are several ethical considerations in this space, and I believe we should all be talking about them. Since we spent almost two decades in academic medicine as practitioner-scientists, this thinking seeps into everything we do.

As a company, we want to set a higher standard in digital mental health care, so that’s what we’re aiming for all the time. The most pressing ethical issue we face is ensuring that people do not mistake Woebot for an entity that is capable of intervening.

We want to protect our users’ anonymity, so we do not know who anyone is and therefore cannot intervene even if we wanted to, should someone divulge something that would be considered high risk. We deal with that by being completely transparent. The second ethical consideration is around data privacy.

Again, we anchor on informed consent and full transparency, so, for example, our users on Facebook have to acknowledge that they understand that while they’re anonymous to us, they are still subject to Facebook’s data policy. From our perspective, Woebot must establish trust with its users because, after all, that’s the foundation of any good relationship. Transparency is crucial for that.

AI MED: What are the legal considerations for the same?

Andrew Ng (second from right) sitting down with Team WoeBot

Team WoeBot: Many of our ethical considerations align with our legal considerations. We believe in transparency about our services; we work to make it clear from the outset that Woebot is not a doctor or a human, and that the service therefore has limitations. Additionally, informed consent to use the product is a built-in Woebot feature for all first-time users. The consent, delivered conversationally via Woebot, informs users that it is not a crisis service and details its safety-net protocol. If someone is in crisis, Woebot follows a best-practice procedure that we call our Safety Net.

Specifically, Woebot is programmed to detect crisis language; if it detects any, it asks the user to confirm whether they are in crisis. If the user confirms, Woebot offers resources (911, suicide crisis hotlines, international emergency resource services) that were carefully curated with expert consultation.
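Woebot’s actual implementation is not public, but the protocol described here is easy to make concrete. Below is a minimal, hypothetical sketch of such a safety-net flow in Python, assuming a simple keyword trigger and a yes/no confirmation step; the trigger phrases, function names, and resource strings are all illustrative assumptions, and a production system would use a far more sophisticated classifier.

```python
# Hypothetical sketch of a crisis "safety net" flow: detect possible
# crisis language, ask the user to confirm, and offer curated resources.
# Phrases, names, and resources are illustrative only, not Woebot's code.

import re

# Assumed trigger phrases; a real system would use a much richer
# classifier than a keyword list.
CRISIS_PATTERNS = [
    r"\bhurt myself\b",
    r"\bend my life\b",
    r"\bsuicid(e|al)\b",
]

# Illustrative resources, per the interview: 911, suicide crisis
# hotlines, and international emergency resource services.
CRISIS_RESOURCES = [
    "If you are in immediate danger, call 911 (or your local emergency number).",
    "A suicide crisis hotline for your country (e.g., 988 in the US).",
    "A directory of international emergency and crisis services.",
]

def detect_crisis_language(message: str) -> bool:
    """Return True if the message matches any assumed crisis pattern."""
    return any(re.search(p, message, re.IGNORECASE) for p in CRISIS_PATTERNS)

def safety_net(message: str, confirm) -> list:
    """Run the safety-net protocol for one incoming message.

    `confirm` stands in for a conversational turn: it asks the user a
    yes/no question and returns True or False.
    """
    if not detect_crisis_language(message):
        return []  # no trigger; the normal conversation continues
    if confirm("It sounds like you may be in crisis. Is that right?"):
        return CRISIS_RESOURCES  # confirmed: offer the curated resources
    return []  # user said no; resume the regular session
```

In a real deployment, the confirmation step is itself a chat message, and detection would come from an NLP model rather than regular expressions; the sketch only makes the two-step detect-then-confirm structure explicit.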

However, our data indicate that users do not use Woebot for crisis management: about 6.3% of users trigger the safety-net protocol, and only 27% of those confirm that it is indeed a crisis when Woebot asks. We also make all of Woebot’s safety-net procedures readily available in Woebot’s Toolbox (a place that stores helpful bot-related information such as this). Finally, for a service like ours, trust is everything. We value transparency and want to highlight what we do to respect and protect users’ privacy.

Here’s an overview: all data are encrypted, anonymized, and de-identified. We also tell users up front in our onboarding and consent process that all the data they provide (i.e., the conversations with Woebot) are used directly to tailor their conversational experience with the bot. We will never sell data to advertisers.
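That description maps onto familiar engineering practice. As a generic illustration (not Woebot’s actual pipeline), the sketch below stores conversations under a salted one-way hash of the user’s identifier instead of the identifier itself, and encrypts message text at rest; the use of the `cryptography` package and the simplified salt and key handling are assumptions made for the example.

```python
# Generic illustration of pseudonymization plus encryption at rest.
# This is NOT Woebot's pipeline; it only makes the ideas concrete.

import hashlib
import secrets
from cryptography.fernet import Fernet  # third-party: pip install cryptography

SALT = secrets.token_bytes(16)  # in production: a fixed, secret salt
KEY = Fernet.generate_key()     # in production: from a key-management service
fernet = Fernet(KEY)

def pseudonymize(user_id: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()

def store_message(user_id: str, text: str):
    """Return (pseudonymous_id, encrypted_text) ready for storage."""
    return pseudonymize(user_id), fernet.encrypt(text.encode())
```

The point of the design is that the stored record can still be used to tailor the user’s own conversational experience, while no direct identity is kept alongside it.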

AI MED: How does Woebot work with clinicians to ensure its work is aligned with the needs of healthcare providers?

Team WoeBot: Woebot reaches its users through multiple channels, including direct-to-consumer as well as via healthcare providers and self-insured employers. Our product and business development teams are closely intertwined, with clinical researchers and founding team members often the ones who meet directly with healthcare providers.

Much of the core logic behind our virtual assistant is concerned with determining which techniques would be most useful to a user at their time of greatest need, and delivering them. These goals are closely aligned with the needs of healthcare providers, who seek solutions that will help their most in-need and most expensive customers, thereby averting cost while increasing health and happiness.
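The team does not disclose how that selection logic works, but the idea can be made concrete with a toy example. The rule-based selector below picks a hypothetical evidence-based technique label from a couple of self-reported signals; every signal, threshold, and technique name here is an assumption made for illustration, not Woebot’s method.

```python
# Toy illustration of "core logic" that picks which technique to deliver
# next. All signals, thresholds, and labels are made-up assumptions.

from dataclasses import dataclass

@dataclass
class UserState:
    mood_score: int          # self-reported, 1 (very low) to 10 (great)
    mentioned_anxiety: bool  # did the user describe anxious feelings?

def select_technique(state: UserState) -> str:
    """Return a hypothetical technique label for the next exercise."""
    if state.mood_score <= 3:
        # Very low mood: work on challenging negative automatic thoughts.
        return "cognitive_restructuring"
    if state.mentioned_anxiety:
        # Anxiety mentioned: offer a grounding or breathing exercise.
        return "grounded_breathing"
    # Otherwise reinforce positive activity patterns.
    return "behavioral_activation"

print(select_technique(UserState(mood_score=2, mentioned_anxiety=False)))
# -> cognitive_restructuring
```

A real system would replace these hand-written rules with models trained on outcome measures, which is consistent with the team’s statement that best-practice outcomes are what they optimize for.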

For healthcare providers, costs associated with mental health exceed all other categories, and the potential savings easily reach into the hundreds of millions.

AI MED: What is the future of treating mental health via virtual assistants? Where do you see the space moving in the next 3-5 years?

Team WoeBot: Hundreds of millions of people struggle with their mental health and either have no access to trained therapists or cannot afford them. Virtual assistants are well placed to address this problem. Evidence-based mental health interventions have been studied for decades, their efficacy is well understood, and they have been fruitfully converted into digital interventions.

However, typical online or mobile applications are not very engaging. Conversation, and what we describe as “relational technology,” is very engaging. Woebot is already alleviating suffering at scale. In addition, the field is undergoing an AI revolution, with advances in AI and NLP made every day. Over the next 3-5 years, these advances will allow digital interventions to feel even more conversational and natural.

This interview originally appeared in AIMed Magazine issue 05, a Deep Dive on Robotic Technology & Virtual Assistants, which you can read here.