Dr. Anthony Chang, AIMed Founder and Chief Artificial Intelligence (AI) Officer at Children’s Hospital of Orange County (CHOC), opened the recent AIMed webinar, Population Health and Social Inequities, by noting the disproportionate influx of Black and Hispanic COVID-19 patients. The pandemic has arguably presented a unique opportunity for the AI community to reflect on racial bias and social inequity.
Dr. Chang advised that humans ought to work closely with machines to avoid automation bias, using the term “contextual cognitive architecture”. “We need to be aware that, like a child who learns from a parent, a computer also learns from its creator. So we need synergy; we need human oversight for possible bias across the entire model.” He added that the IEEE (Institute of Electrical and Electronics Engineers) already has preliminary work on creating a standard for algorithmic bias considerations. Perhaps it is time for the AI medical community to get involved.
Using AI to help underserved neighborhoods
The webinar featured several speakers, each presenting their area of expertise related to the theme before an open discussion with Sara Gerke, Research Fellow, Medicine, AI, and Law at Harvard Law School. To kick off, Dr. Victor F. Gracia, Founding Director, Trauma Services and Professor of Surgery and Pediatrics at the University of Cincinnati, pointed out that AI can be one of the major assets in addressing persistent inequity.
He highlighted that children born today are not better off than those born during the civil rights era, especially those living in underserved neighborhoods. “Neighborhood is a greater predictor of health inequities than genetics, parental education or one’s own education. Children living in certain ZIP codes will not only achieve less but are effectively designated for certain diseases and cognitive disabilities over a life course,” Dr. Gracia says.
Dr. Gracia’s research involves the use of AI and machine learning to identify leverage points within a complex adaptive system. Because a neighborhood has collective dynamics marked by randomness and nonlinearity, one will not find meaningful value just by looking at the figures unless technology is leveraged.
The role of nursing leadership in health inequities
Next, Brooke Newman, Doctor of Nursing Practice, gave a structured presentation on the implications of nursing leadership for public health inequities. She cited disease burden, gender, race, socio-economic status, education and digital divides as contributing factors to global health inequities. Many large nursing organizations, including the International Council of Nurses, the American Nurses Association California and the Nurses on Boards Coalition, have directed their missions and visions toward combating these disparities.
Newman believes that, from the nursing perspective, bias should be treated as a patient safety issue, so that nurses are able to communicate about it more effectively in their practice. Newman also endorsed collaboration as nurses take on a more active role in technology adoption. “I believe there’s a need to discuss the globalization of nursing and how to incorporate leadership succession training to facilitate knowledge transfer. There’s also a need for quality tutelage on AI, preferably via a leadership platform,” Newman comments.
The meaning of “fair” in algorithms
Data scientist and medical doctor Dr. Candace Makeda Moore recalled having to develop an algorithm for a population she was not familiar with. She said anyone could be in such a situation, and when that happens, it is best to start by reading the health inequity and health disparity literature specific to that population before delving into the data.
Meanwhile, she reminded the audience that fairness issues can sneak into AI algorithms for very unexpected reasons, and that no algorithm can be truly fair. Often, what we think is fair may actually cause trade-offs in other areas. Dr. Moore gave many examples from past research as references, including the reduced accuracy of wearables on individuals with darker skin, which AIMed also reported last year.
“I think when you are designing an algorithm, you need to explicitly list the parameters and think about every data point it’s picking up on and whether it correlates with some sensitive groups. Once you understand all of this, you can proceed to define some kind of fairness goals before you really start coding and building the algorithm,” Dr. Moore suggests.
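Dr. Moore’s suggestion can be sketched in code. The snippet below is a minimal illustration only, with made-up data and hypothetical function names that are not from the webinar: it checks whether a candidate feature correlates with membership in a sensitive group (a possible proxy variable), and measures one common fairness goal, the demographic parity gap between the two groups’ positive-prediction rates.

```python
# Hedged sketch of "list the parameters, check correlations with sensitive
# groups, then define a fairness goal". All data below are toy values.

def correlation(xs, ys):
    """Pearson correlation between two equal-length numeric lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def demographic_parity_gap(predictions, group):
    """Absolute difference in positive-prediction rates between groups 0 and 1."""
    def rate(g):
        members = [p for p, gr in zip(predictions, group) if gr == g]
        return sum(members) / len(members)
    return abs(rate(1) - rate(0))

# Toy data: a ZIP-code-derived feature, sensitive-group membership (1/0),
# and hypothetical binary model predictions.
feature = [0.9, 0.8, 0.85, 0.2, 0.1, 0.15]
group   = [1, 1, 1, 0, 0, 0]
preds   = [1, 1, 0, 0, 0, 0]

# A correlation near 1 flags the feature as a proxy for the sensitive group;
# the fairness goal would be to keep the parity gap small.
print(correlation(feature, group))
print(demographic_parity_gap(preds, group))
```

In practice one would use audited libraries and real fairness criteria chosen with domain experts; this sketch only shows where such checks sit in the workflow, before model building begins.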
Focus on the algorithms, not the data
Last but not least, Dr. Alberto E. Tozzi, Chief Innovation Officer and Research Area Coordinator at Bambino Gesù Children’s Hospital in Rome, Italy, talked about bias in epidemiology. He candidly emphasized that data will never be unbiased once it passes through human interpretation, because we mediate both events and their measurement. So we should be more mindful of the performance of algorithms instead of being overly preoccupied with clean data. Algorithms to be used in healthcare should be tested across different sub-populations.
“Inclusivity is the key word here. We should be very careful about the representation of minorities and the inclusion of people of color; this is one of the very first steps that we sometimes miss,” Dr. Tozzi asserts. He also noted that many hospitals and healthcare institutions are trying to develop their own algorithms and working hard to prove that they work on large groups of people.
The better approach, he argued, is to connect with other institutions and other databases so that the accuracy of the algorithms can be improved. Overall, Dr. Tozzi believes more science and research need to be done in AI and related technologies, and that this work should not be limited to merely comparing the performance of algorithms against human capabilities.
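Dr. Tozzi’s point about testing in different sub-populations can be illustrated with a short sketch. The subgroup labels and numbers below are invented for illustration, not data from the webinar; the idea is simply to report performance per group rather than one aggregate score, so that gaps between groups stay visible.

```python
# Illustrative sketch: evaluate an algorithm per sub-population instead of
# relying on a single aggregate accuracy. Data here are toy values.
from collections import defaultdict

def accuracy_by_subgroup(y_true, y_pred, subgroups):
    """Return {subgroup: accuracy}, making between-group gaps explicit."""
    hits, totals = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, subgroups):
        totals[g] += 1
        hits[g] += int(t == p)
    return {g: hits[g] / totals[g] for g in totals}

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Group A: 3/4 correct; group B: 2/4 correct. The aggregate score (5/8)
# would hide this disparity between the two sub-populations.
print(accuracy_by_subgroup(y_true, y_pred, groups))
```

The same per-group breakdown applies to any metric (sensitivity, calibration, and so on), which is why pooling data across institutions, as Dr. Tozzi advocates, matters: small subgroups in one hospital’s data become testable at scale.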
What would the ideal scenario look like?
Having listened to the many hurdles, Gerke asked her fellow speakers whether there will ever be “an ideal scenario” and, if so, what it would look like. Dr. Moore answered that it would perhaps be helpful for more hospitals and institutions to open up their databases. She said many “data activists” like herself are keen to explore these data from different perspectives and, in the process, may bring in new ideas. “I think these institutions realize that their data are extremely valuable, but it’s much harder to understand what kinds of biases and decisions have been made when you can’t even access the data sets,” Dr. Moore says.
On the other hand, Dr. Chang feels there is a need for more human oversight, especially when it comes to monitoring AI outputs. “If we just use our common sense, looking at the output of AI may influence how we look at the data in the first place.” Gerke agreed: just as medical devices have a centralized body to check their efficacy, AI algorithms need a comparable starting point for addressing bias. However, it is not so straightforward, and we all need to work together. The webinar is now available to revisit here.