Patricia N. Mechael, co-founder and Policy Lead at HealthEnabled, on the harmful biases being built into AI algorithms in mental health applications and services

Digital health enabled by artificial intelligence is increasingly being used as an effective tool to expand access to and delivery of mental health services. Many applications and services now diagnose, treat, and provide ongoing care for mental health disorders. These range from chatbots that detect and treat anxiety and depression to algorithms that scan electronic medical records to predict the likelihood of mental illness alongside a physical ailment. They also include a new class of digital therapeutics, in which a digital health technology serves as a therapeutic intervention, much like a drug prescribed by a psychiatrist.

While AI is widely perceived to have the potential to benefit mental health and well-being, there are concerns that biases capable of causing harm are being built into AI algorithms in mental health applications and services. These harms stem from two sources. The first is the lack of diversity and inclusion in, and the bias within, the protocols, research, and data used to develop and train models for mental health applications. The second is that machines are not readily capable of factoring in the non-verbal cues that a trained therapist would detect. These cues may vary by gender, race, and age, as well as the intersectionality of all three. Increasingly, these factors are being recognized and addressed within Responsible Data and AI Principles and Practices and included within regulatory approaches to ensuring the quality and safety of digital health applications and services.

In parallel, there is an ongoing debate about whether digital technology enabled by AI, in particular the internet and social media, is helping society become more connected and less lonely or is contributing to and exacerbating mental illness, especially among youth.

Of course, there are many mental health benefits associated with the internet and social media, including the ability to access health information, find emotional and community support, express oneself and one's identity, and build and maintain relationships. However, excessive use of social media is now known to increase anxiety, depression, and a range of other mental health disorders, with significant impact on adolescents, including sharp increases in suicide rates among both boys and girls, but especially girls. It is also known to increase violence among youth in the form of bullying, gang violence, and self-inflicted violence. These dangers, and the studies that have documented them, are well articulated in the recent Netflix documentary The Social Dilemma. Social media has been reported to be more addictive than smoking or alcohol, and it is associated with disrupted sleep and poor body image.

A key challenge is that many of the applications associated with an increase in mental illness are not health applications. As such, they are not subject to the transparency requirements and regulations to which digital health information, technologies, and related AI algorithms are increasingly subject. There are also very strong disincentives for private sector companies to address the problem: much of their revenue is driven by traffic, which grows when their services are made addictive and sticky, irrespective of the impact that the information, or the way it is manipulated and shared, has on people who largely do not pay for the content to which they are exposed. The irony is that many of the AI models used by these companies are designed after the same behavioral frameworks originally developed for health promotion and for building and sustaining positive health behaviors such as improved diet, exercise, and mental well-being. And, like the AI models in mental health applications and services, they are not trained to detect human cues or the adverse effects they might be having on individuals.

Technology is often presented as neutral, but in today’s highly unregulated technology ecosystem the harms are becoming more real and better documented through longitudinal studies. In addition to increased regulation of mental health apps and algorithms, greater effort is needed to ensure that the internet and social media can be harnessed for mental well-being while mitigating the harmful effects we now know they promote. We need stronger warnings, greater transparency, and more awareness for individuals, especially youth and their parents, about what information is processed and shared, how it is used, and the impact it has on mental well-being. In addition, we need policies and regulation that hold companies more accountable for their impact on society, especially in the area of mental health.

We are at a critical inflection point in the proliferation of AI, one at which we need to work to ensure that the mental health benefits outweigh the harms.