Jill G Klein
I wrote this article in late 2004. As a social psychologist I was very aware of the role of decision-making biases in our judgments, and found it interesting that there wasn’t much awareness or discussion of these biases in medicine. The biases I covered in the paper included:
The overconfidence bias, which is our tendency to think our judgments are correct more often than they actually are. Thus, we might believe our diagnostic decisions are more likely to be correct than is actually the case.
The availability heuristic, which is our tendency to judge something as more likely or prevalent because it comes easily to mind. We remember events that are more recent or vivid than others, and these events therefore have a greater influence on our decisions than they ought to have. If we have recently seen a patient suffering from a particular affliction, we may be more likely to think that a patient with similar symptoms has the same affliction.
Confirmatory bias, which is our tendency to search for, believe, and remember information that fits with what we desire or expect. Thus, when we have an initial diagnosis in mind, our questioning, the tests we run, and our interpretation of the evidence can all make us more likely to confirm the diagnosis than to disconfirm it.
Illusory correlation is our tendency to see unrelated events as related. This can lead us to see a treatment as causing symptom relief when in fact there is no connection.
To illustrate illusory correlation, I gave the example of a homeopath seeing a relationship between giving a homeopathic treatment and symptom improvement. The practitioner is more likely to remember patients who improved, and thus to perceive a causal link between treatment and symptom relief. For years after the article was published, I received emails from indignant homeopaths. I responded by asking for even a single peer-reviewed article in a reputable journal that supports the effectiveness of homeopathy. I never received one in reply.
A second example is more troublesome to me. In discussing the availability bias, I wrote that because stories of opioid addiction are very vivid, the likelihood of addiction may be overestimated in treatment decisions. Further, I wrote that the risk of addiction is actually low, particularly for slow-release formulations. This statement has obviously not stood the test of time. It was based on research published in reputable journals, but we now know that the dangers of slow-release opioid addiction are higher than was then thought (particularly when these drugs are overmarketed and overprescribed). In hindsight, I very much wish I had used a different example to illustrate the availability bias.
The role of decision-making biases has now attracted a great deal of attention in medicine, and there is a growing body of research investigating when and why biases occur and what remedies can make them less likely to influence our judgments. When I teach clinicians, I suggest a couple of remedies that take little time. Once a provisional diagnosis is reached, it can be helpful to ask oneself two questions: ‘What symptoms don’t fit this diagnosis?’ and ‘If it turns out this diagnosis is incorrect, what else could be underlying these symptoms?’
As humans, we all have biases that affect our decisions, and because medicine is a high-stakes environment, these biases can cause harm. It is important to be aware of cognitive bias in diagnostic and treatment decisions, as well as of the remedies that help overcome these biases.
A view from
Anthony Chang, MD, MBA, MPH, MS
“In this age of artificial intelligence and deep learning, it is more essential than ever before to comprehend human cognitive processes, specifically the many clinician biases and heuristics that can result in poor judgment or bad decisions in biomedicine. This timeless article not only delineates the psychology of decision-making, but also discusses the most common heuristics, such as the representativeness heuristic, the availability heuristic, and confirmatory bias. This is a must-read for any clinician early (and late) in their career, so that they can think about “thinking” and avoid the pitfalls in decisions about diagnosis and prescribing.”