On-Demand: Population health and social inequities
July 28, 2020 @ 3:00 pm - 5:00 pm EDT
“If we are not careful, AI will perpetuate the bias in this world. Computers learn how to be racist, sexist, and prejudiced in a similar way that a child does. The computers learn from their creator – us.”
Aylin Caliskan, computer scientist
The COVID-19 pandemic and the protests for racial equality continue unrelentingly as dual forces driving change. There is an underlying irony here: even though the virus itself is indiscriminate, capable of killing any human on this planet, there is an obvious and disheartening racial disparity in morbidity and mortality.
While there are myriad ways that artificial intelligence can automate or perpetuate historical discrimination in healthcare, perhaps there is also a way for artificial intelligence to neutralise this injustice. Although the ethics of algorithms is in its infancy, work is already under way in the form of the IEEE P7003 Standard for Algorithmic Bias Considerations, presently being developed as part of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. This effort aims to establish methodologies for eliminating negative bias in the creation of algorithms, so that characteristics such as race, gender, and sexuality are protected.
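To make the idea of "negative bias" concrete, one simple notion fairness practitioners use is the demographic parity gap: the difference in a model's positive-prediction rates between demographic groups. The sketch below is purely illustrative, not part of the IEEE P7003 standard; the group labels, predictions, and scenario are hypothetical.

```python
def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest positive-prediction
    rates across demographic groups (0.0 means parity)."""
    rates = {}
    for g in set(groups):
        group_preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(group_preds) / len(group_preds)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Hypothetical example: a model approving loans for two groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(preds, groups)
print(gap)  # group "a" approved at 0.75, group "b" at 0.25 -> gap 0.5
```

A large gap does not by itself prove unfairness, but audits like this are one of the measurable starting points that standards such as P7003 seek to formalise.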
In this exclusive webinar, our expert panel discusses the context, the challenges, and possible solutions to the bias conundrum, weighing the roles of machine and human intelligence as well as those of data science and ethics.