“If we are not careful, AI will perpetuate the bias in this world. Computers learn how to be racist, sexist, and prejudiced in a similar way that a child does. The computers learn from their creators: us.”   Aylin Caliskan, computer scientist


Racism. Even its definition is now being revised to include not only the use of certain offensive words but also the far more destructive systemic oppression of targeted groups.

Symbols of racism are literally coming down (like statues of Confederate leaders), but perhaps we should replace them with the new heroes of this nascent anti-racist era. Even as we finally make progress in this movement, we also need to maintain some tempered calm and compassion for the many police who are genuinely dedicated to their jobs but are now unfortunate collateral damage.

I am so very proud of all the people who have come together to make these protests a global movement against racism. I personally had some exposure to racism as a Chinese-American boy growing up in New York City, where I repeatedly heard derogatory comments about my race; upon returning to Asia, Chinese boys teased me incessantly about my American (or foreign) ways.

In a way, we all could be victims of a derogatory label (‘obese’, ‘queer’, ‘nerd’, ‘hillbilly’, etc.). Yet it is precisely this diversity that makes any activity and gathering a much better one. Perhaps we have inadvertently learned from our current viral overlords: the virus seems to self-organize without a central dominant leader (like these movements) and yet is able to emerge with a cohesive impact, except that the virus achieves this without disorder or conflict.

There is an underlying irony: even in a pandemic that can be indiscriminately lethal to any human on this planet, there is an obvious and disheartening racial disparity in morbidity and mortality, with Blacks and Latinos disproportionately affected.

While there are myriad ways that artificial intelligence can automate or perpetuate historical discrimination in healthcare, perhaps there is also a way for artificial intelligence to neutralize this injustice. Although the ethics of algorithms is in its infancy, there is some existing work in the form of the IEEE P7003 Standard for Algorithmic Bias Considerations, presently under development as part of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.

This effort is aimed at methodologies to eliminate negative bias in the creation of algorithms so that characteristics such as race, gender, sexuality, etc. are protected.

The possible solutions lie principally in the three functional elements of machine and deep learning: the input of data, the algorithm itself, and the resultant output. If artificial intelligence can learn autonomously from the input of human-derived data, it may be very difficult (if not impossible) to implement some sort of data audit to minimize bias, since big data increasingly relies on artificial agents (the so-called “paradox of artificial agency”). Other issues with data input include unbalanced populations and sample size disparities.
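As a minimal sketch of the kind of data audit described above, one could at least measure how unbalanced the groups in a dataset are before training. The group labels, dataset, and tolerance threshold below are all hypothetical illustrations, not a standard auditing method:

```python
from collections import Counter

def audit_representation(records, group_key, tolerance=0.4):
    """Flag demographic groups that are under-represented in a dataset.

    A group is flagged when its share of the records falls below
    `tolerance` times an even split across all groups. The threshold
    is an illustrative choice, not an established fairness criterion.
    Returns a dict mapping each flagged group to its actual share.
    """
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    expected_share = 1.0 / len(counts)  # share under a perfectly even split
    return {
        group: count / total
        for group, count in counts.items()
        if count / total < tolerance * expected_share
    }

# Hypothetical patient records with a self-reported group label
data = (
    [{"group": "A"} for _ in range(80)]
    + [{"group": "B"} for _ in range(15)]
    + [{"group": "C"} for _ in range(5)]
)
flagged = audit_representation(data, "group")
print(flagged)  # group "C" holds only 5% of the records and is flagged
```

A real audit would of course go well beyond raw counts (label quality, sampling provenance, intersectional subgroups), but even this simple check surfaces the sample size disparities mentioned above.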

Another possible solution to mitigate bias is to have total transparency of the algorithms so that these algorithms can be monitored closely for any propensity for bias; this process, however, is exceedingly difficult and tedious to accomplish due to the explainability challenge of the more sophisticated methodologies (like deep learning) and would depend on some degree of public algorithmic literacy. While causal reasoning can be applied to algorithms to detect bias, this is not always possible.

Finally, a more feasible solution may be regulation of the algorithm's output to ensure equity, especially in the context of how these outputs are used in decision-making. This is the final and most critical step for safeguarding equity and justice, and it mandates pairing human cognition with machine intelligence.
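One simple, commonly used way to monitor outputs for equity is to compare favorable-outcome rates across groups (a demographic parity check). The decisions and group labels below are hypothetical, and the acceptable gap is a policy choice rather than a fixed number:

```python
def demographic_parity_gap(predictions, groups):
    """Largest difference in favorable-outcome rates between any two groups.

    `predictions` are binary model outputs (1 = favorable decision) and
    `groups` are the corresponding group labels. A gap near 0 suggests
    similar output rates across groups; what gap is tolerable is a
    policy decision, not something this function fixes.
    """
    by_group = {}
    for pred, group in zip(predictions, groups):
        by_group.setdefault(group, []).append(pred)
    positive_rates = {g: sum(v) / len(v) for g, v in by_group.items()}
    return max(positive_rates.values()) - min(positive_rates.values())

# Hypothetical triage decisions for patients from two groups
preds = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
grps = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, grps)
print(round(gap, 2))  # 0.6: group A receives favorable decisions far more often
```

Demographic parity is only one of several competing fairness metrics (equalized odds and calibration are others, and they can be mutually incompatible), which is exactly why this last step requires human judgment alongside the machine.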

Perhaps we need to approach this bias conundrum through the combination of all three of these elements, pairing machine intelligence with human intelligence and data science with ethics. It is also critical that we increase the diversity of everyone involved in promoting equity in artificial intelligence.

As the paradigm of artificial intelligence transitions from statistical deep learning to contextual cognitive architecture, it is even more vital that this anthropomorphizing of artificial agents strives to be much more fair and just than the best of its human counterparts.

In the meantime, at AIMed, we will continue to not only talk about this issue but will also institute measures that will foster group diversity and gender equality. We stand in solidarity with every group that feels disadvantaged and we plan on incorporating these themes as part of our ongoing programs. All members of our AIMed team feel very strongly and passionately about equality and will strive harder to emphasize this in all our endeavors.