A new study led by Ziad Obermeyer, a health policy researcher at the University of California, Berkeley, found that a risk-prediction program made by a Minnesota-based healthcare company was biased against non-white patients.

What went wrong? 

The algorithm used illness-tracking and hospitalization data from patients’ electronic health records (EHRs) to gauge the severity of their medical conditions (e.g., diabetes, hypertension, chronic kidney disease and so on). It then assigned each patient a risk score; when a score signaled trouble, the patient’s primary care physician was alerted to intervene.
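To make that pipeline concrete, here is a minimal sketch of what such a scoring-and-alert flow might look like. The feature names, weights, and threshold are illustrative assumptions, not the vendor’s actual implementation; the one faithful detail, per the study, is that a cost-derived input dominates the score.

```python
from dataclasses import dataclass

# Illustrative alert cutoff; the real program's threshold is not public.
ALERT_THRESHOLD = 0.7

@dataclass
class PatientRecord:
    """Hypothetical EHR-derived features (assumed, not the vendor's schema)."""
    patient_id: str
    num_hospitalizations: int    # past-year hospital admissions
    num_chronic_conditions: int  # e.g. diabetes, hypertension, CKD
    total_claims_cost: float     # prior-year billing, in dollars

def predict_risk(record: PatientRecord) -> float:
    """Toy stand-in for the vendor's model: a hand-weighted score in [0, 1].

    Because models like this were trained to predict *cost*, the
    billing term dominates the resulting risk score.
    """
    score = (0.05 * record.num_hospitalizations
             + 0.03 * record.num_chronic_conditions
             + record.total_claims_cost / 100_000)
    return min(score, 1.0)

def triage(records: list[PatientRecord]) -> list[str]:
    """Return the IDs of patients whose physicians should be alerted."""
    return [r.patient_id for r in records
            if predict_risk(r) >= ALERT_THRESHOLD]
```

In a cost-trained model of this shape, a patient who sees doctors less often generates fewer billed dollars and therefore looks healthier, regardless of actual need.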

What the research team realized was that many black patients were given strangely low risk scores even when their health was deteriorating. The cause, the team uncovered, was the use of medical bills and insurance payouts as a proxy for an individual’s overall health.

Although cost is a relatively common proxy in health-related algorithms in both academic and commercial settings, non-white patients tend to incur lower healthcare costs than their white counterparts, owing to irregular access to healthcare services, less flexible job schedules, greater household responsibilities, or simply living farther from their hospitals.

As a result, black patients assigned the same risk score as white patients actually had more serious chronic conditions, including cancer and diabetes, as well as higher blood pressure and cholesterol levels. The finding was first reported in Science on 24 October.

The challenge of staying objective

The research team suggested a minor modification to the existing algorithm: predicting the number of chronic illnesses a patient is likely to have in a given year, rather than the cost of treating those illnesses, would reduce the racial discrepancy by 84%.
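As a sketch of what that relabeling amounts to in training code (the scikit-learn setup, synthetic data, and variable names here are assumptions for illustration; the paper’s actual models and features differ):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, PoissonRegressor

rng = np.random.default_rng(0)

# Toy features standing in for EHR-derived predictors (assumed).
X = rng.normal(size=(1_000, 5))
cost = rng.gamma(shape=2.0, scale=3_000.0, size=1_000)  # dollars billed
n_conditions = rng.poisson(lam=2.0, size=1_000)         # active chronic illnesses

# Original formulation: the label is future healthcare *cost*,
# which systematically understates need for under-served groups.
cost_model = LinearRegression().fit(X, cost)

# Suggested modification: the label is the *number of chronic
# conditions* expected in a given year -- a count, hence Poisson.
health_model = PoissonRegressor().fit(X, n_conditions)
```

The features need not change at all; swapping the training label from dollars to diagnosis counts is what the team estimated would remove most of the disparity.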

Obermeyer believed this is “an industrywide systematic error” of “putting healthier white patients further ahead in line”, and stressed that a similar bias could occur in other algorithms employed across the country. At the same time, he credited the way the company behind the algorithm responded.

After the researchers sent their findings to the company, it immediately replicated the study and committed to correcting the model. “The algorithms that power these tools should be continually reviewed and refined, and supplemented by information such as socioeconomic data, to help clinicians make the best-informed care decisions for each patient,” the company’s spokesperson said.

Indeed, keeping diversity in mind, continuously revising existing algorithms, and using domain knowledge to compensate for inadequate data are perhaps some of the more appropriate ways to deter such discrepancies at the moment. Nevertheless, in the continued absence of new rules and regulations governing these algorithm-driven tools, staying objective remains a huge challenge.

Author Bio

Hazel Tang

A science writer with a data background and an interest in current affairs, culture, and the arts; a non-med from an (almost) all-med family. Follow on Twitter.