At the recent AIMed webinar “Data rights in the age of machine learning”, speakers highlighted several challenges unique to artificial intelligence (AI), one of which is transparency. “Do we need to open the AI blackbox?” and “Can AI-driven solutions remain in a blackbox?” were among the questions raised. In response, Sara Gerke, Research Fellow in Medicine, AI, and Law at the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School, noted that physicians prescribed aspirin for more than 70 years without knowing the mechanism behind the drug. All we knew was that it worked.

Why is there a blackbox debate?

Perhaps, if there are sufficient clinical trials and evidence to prove an AI product safe and effective, opening its blackbox may not be necessary. Nevertheless, if we wish to unlock the true potential of AI, we will need adaptive algorithms that can learn and adjust to new conditions over time. In that case, we will need additional precautions and continuous monitoring to ensure the AI product remains safe and trustworthy, a concern that does not exist with a drug like aspirin.

Dr. Mark Hoffman, Chief Research Information Officer at Children’s Mercy Hospital, added that the concern around the AI blackbox is a very real one. From a research perspective, it puts researchers’ abilities to the test: with high-volume, high-velocity data, if one runs an algorithm today and again in two months’ time, the output is likely to be somewhat different. Much work therefore remains to be done before AI can reproduce the same, or even similar, results under the influx of new data.
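Hoffman’s reproducibility point can be illustrated with a toy sketch (the model, data, and numbers below are invented for illustration, not from the webinar): an adaptive algorithm whose parameters keep updating on incoming data can give a different answer for the very same input at two points in time.

```python
import random

class AdaptiveThresholdModel:
    """Toy 'adaptive algorithm': flags a reading as abnormal if it exceeds
    the mean of all data seen so far. The decision boundary therefore
    shifts as new data streams in."""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def update(self, readings):
        # Fold new inbound data into the running mean.
        for r in readings:
            self.total += r
            self.count += 1

    def predict(self, reading):
        threshold = self.total / self.count
        return "abnormal" if reading > threshold else "normal"

random.seed(0)
model = AdaptiveThresholdModel()

# "Today": the model has seen data centred around 100.
model.update(random.gauss(100, 5) for _ in range(1000))
first_answer = model.predict(104)   # 104 sits above the current mean

# "Two months later": new inbound data shifts the distribution upward.
model.update(random.gauss(110, 5) for _ in range(1000))
second_answer = model.predict(104)  # same input, different verdict

print(first_answer, second_answer)
```

Nothing here is broken in the software sense; the output changed simply because the data did, which is exactly why continuous monitoring of adaptive AI products is needed.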

Indeed, perhaps it’s alright not to know how AI outwits humans in a game of Go or predicts our next Amazon purchase. However, things are not so straightforward when it comes to medical decision-making, law enforcement, and driverless cars. Accountability, above all, forms the basis of the AI blackbox debate.

We need to understand how machines arrive at decisions and know when they could be wrong, so that we can either salvage the situation or prepare for the worst outcome that no one wishes to happen. At the same time, people want to be in control; they feel the need to be able to question or turn down an automated decision. Otherwise, they may reject the technology altogether, much as facial recognition systems installed in public areas are being criticized.

The problems with AI glassbox

Some researchers are working towards building an AI glassbox, a simplified version of a neural network, so that others can trace and track what is influencing an AI system, making the technology more explainable. Even so, if data is disorganized or comes in the form of complex images and text, a relatively opaque AI is inescapable.
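As a loose illustration of the glassbox idea (my own sketch, with invented feature names and data, not drawn from the article): in a small linear model, the contribution of each input feature to a prediction can be read directly off the fitted weights, something a deep neural network does not offer.

```python
def fit_ols(xs, ys):
    """Fit y = w1*x1 + w2*x2 by solving the 2x2 normal equations
    of ordinary least squares (no intercept, for brevity)."""
    s11 = sum(x[0] * x[0] for x in xs)
    s12 = sum(x[0] * x[1] for x in xs)
    s22 = sum(x[1] * x[1] for x in xs)
    sy1 = sum(x[0] * y for x, y in zip(xs, ys))
    sy2 = sum(x[1] * y for x, y in zip(xs, ys))
    det = s11 * s22 - s12 * s12
    w1 = (sy1 * s22 - sy2 * s12) / det
    w2 = (sy2 * s11 - sy1 * s12) / det
    return w1, w2

# Hypothetical records: (age in decades, dosage in mg) -> risk score.
xs = [(3, 10), (5, 20), (6, 15), (8, 25), (4, 30)]
ys = [2 * a + 0.1 * d for a, d in xs]  # ground truth the model should recover

w_age, w_dose = fit_ols(xs, ys)
# The model's reasoning is fully inspectable: each decade of age adds
# w_age points of risk, each mg of dosage adds w_dose points.
print(f"weight(age)    = {w_age:.2f}")
print(f"weight(dosage) = {w_dose:.2f}")
```

The trade-off the article goes on to describe is that such transparent models are limited: messy, high-dimensional inputs like images and free text rarely fit them.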

Besides, a 2018 study found that people are naturally able to follow an AI’s predictions when there is a smaller number of input features. Yet interpretability does not improve with transparency; in fact, the more transparent the AI, the more mistakes people made in following its predictions, due to information overload. To overcome this challenge, a group of researchers from the University of Michigan and Microsoft Research added visualization tools that reveal only the main features and underlying data of an AI model, so that one can spot problems quickly. Eleven AI professionals with different educational backgrounds and working experience participated in the study.

What surprised the researchers was that while the visualization tool enabled participants to spot biases and missing data, there was a tendency towards over-trust, which led to misinterpretations or inappropriate assumptions. It appeared that when AI becomes too transparent, it instills false confidence; people will accept an AI’s predictions even if the explanation or output is nonsense. A similar result was found in a separate group of 200 machine learning professionals. The researchers blamed automation bias: we are instinctively inclined to trust machines.

As such, a new wave of human-centered AI research is now focused on getting AI to explain its decisions out loud. In that way, humans will also be involved in AI design right from the start, because they need to supply an AI system with the many explanations that translate a decision into action. In the long run, researchers hope this will make AI more down to earth, rather than a technology that convinces us it will always outsmart us.


Author Bio

Hazel Tang: A science writer with a data background and an interest in current affairs, culture, and the arts; a non-med from an (almost) all-med family. Follow on Twitter.