It started with a tweet, then it turned into a storm. The mission of the newly established Institute for Human-Centered Artificial Intelligence (HAI) at Stanford University, "The creators and designers of AI must be broadly representative of humanity," was being called into question. The reason: none of its 121 faculty members is Black.

This is not the first time the AI/tech community has been challenged over a lack of representativeness. Some time ago, The Partnership on AI, a consortium launched by Google and several other industry giants that pledged to examine the transparency, fairness, and ethics of AI, was also scrutinized for the absence of any Black board member. Nearly three years have gone by, and if you happen to revisit its list of board members, nothing has really changed.

We have probably already witnessed the consequences of biased AI algorithms: software that gives Black individuals a higher score for the likelihood of committing future crimes, an AI beauty-contest judge with a tendency to regard White contestants as prettier, and a mortgage algorithm that exhibits racial discrimination.

Meredith Broussard, data journalist and assistant professor at New York University, writes in her book "Artificial Unintelligence" that "algorithms are designed by people and we are biased". Broussard calls this technochauvinism: overpraising technology at the expense of human judgment. So who else should be included in the AI diversity conversation, apart from people of color, mixed-race individuals, LGBTQ+ people, and women?

Religious leaders? 

This February, the Vatican held a workshop entitled "Roboethics: Humans, Machines and Health" during the Assembly of the Pontifical Academy for Life. The workshop was divided into three sessions over two days, during which participants discussed robotics research and development, how robotics changes socio-anthropological relations, and various ethical implications.

While the Pope may not be the first person we approach when we think about AI and related technology, he delivered an opening message cautioning against technological advancement that shows no concern for humanity and our society. The meeting might be taken as a sign of epochal change. Surely the idea of keeping religious views separate from science and technology may need to change too.

Patients? 

AIMed has been trying to include patients in discussions of AI and technology's impact on medicine. Information proliferates in this digital era, and more patients are forming support groups on social media to share their medical conditions. AIMed believes medicine is gradually decentralizing as patients are no longer passive recipients of care. While a power struggle is unlikely, involving patients in the AI diversity conversation helps to build trust. It will enable them to realize the real value and benefits of AI in medicine.

Domestic chief medical officer? 

A meta-analysis of 55 papers describing 52 distinct studies published between 1966 and 2011 showed that most parents expressed interest in sharing medical decisions with care providers. The analysis concluded that influences on final decisions could be diverse and their impact on medical outcomes remained unclear, but this does not diminish the role of parents. This is especially so in paediatric medicine.

The BMJ has published strategies that physicians can adopt for shared decision making. A more recent paper also illustrated the importance of shared decision making using an extreme incident in which a child's life was at risk. This dialogue is still taking place; perhaps it is time to include AI, and to consider how the present shared decision-making process may shift as AI gains more attention.

*

Author Bio

Hazel Tang: A science writer with a data background and an interest in current affairs, culture, and the arts; a non-med from an (almost) all-med family. Follow on Twitter.