“If you had all of the world’s information directly attached to your brain, or an artificial brain that was smarter than your brain, you’d be better off.”

Sergey Brin, Google co-founder

For this week’s AIMed publication of the week, we discuss Sandeep Reddy’s comment in the Lancet on explainability in AI in medicine, written in response to an earlier Viewpoint article (Lancet Digit Health 2021; 3: e745-750).

The author reminds us that AI adoption has had its challenges, and that one main issue has been the “scarce transparency associated with specific AI algorithms, especially black-box algorithms”. This issue looms large, especially since evidence-based medicine, the accepted paradigm for decision making in medicine, is built on transparency. To counter this issue, explainable AI has emerged as an important dimension of AI, especially in the context of deep learning. The author also reviews the earlier article and its main premise: that stakeholders should be asked to trust AI in medicine by relying more on validation than on explainability. The author correctly points out that the previous article’s framework, which grounds explainability in validation approaches, is specious at best.

The author concludes that “ignoring or restricting explainable AI is detrimental to the adoption of AI in medicine as few alternatives exist that can comprehensively respond to accountability, trust, and regulatory concerns while engendering confidence and transparency in the AI technology.” Perhaps we can eventually find the sweet spot between AI researchers, with their algorithms and machine intelligence, and clinicians, with their clinical practice and human cognition. A maturing synergy between deep learning models and human cognition within a Bayesian framework could be an ideal paradigm for clinical decision making in the future.

By the way, are we humans, with the “pink bag” we call a brain, too critical of the “black box” of AI?

Read the full papers here: