I am a pediatric cardiologist and have cared for children with heart disease for the past three decades. In addition, I have an educational background in business and finance as well as healthcare administration and global health – I earned a Master's degree in Public Health from UCLA and taught global health there after completing the program.
“By updating our initial beliefs with objective new information, we get a new and improved belief.”
Sharon Bertsch McGrayne in The Theory That Would Not Die
Thomas Bayes, the Presbyterian minister after whom the renowned theorem is named, first introduced his mathematical expression close to 300 years ago. The Bayesian approach is the mathematical formulation of the idea that one can continually update an initial belief with new data and evidence. The recent surge in computational power and machine learning, with one of its popular methodologies (“naive” Bayes), has reinvigorated this centuries-old theorem and its theoretical framework of prior and posterior probabilities.
The Bayesian vs Frequentist statistical schools
Bayesian and Frequentist statistics are the two major competing philosophies in statistical analysis. In the former, one uses prior knowledge (which can be subjective and is therefore a main source of criticism) to estimate the probability of future events. In the latter, probabilities are based solely on observations, so prior information is ignored. In other words, the Bayesian advocate updates beliefs based on new data and is content with knowledge evolving over time. The more “objective” Frequentist, on the other hand, considers only the observed frequency of outcomes. In spite of this fundamental difference, there seems to be a rapprochement between these two statistical schools in the current artificial intelligence milieu.
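The Bayesian idea of updating a prior belief with new evidence can be sketched in a few lines of code. The example below is a minimal illustration (not from the article; the scenario, function names, and numbers are mine): a Beta prior over an unknown test-positivity rate is updated after observing new results, using the standard Beta-Binomial conjugate update.

```python
def update_beta(a, b, positives, total):
    """Conjugate Beta-Binomial update: prior Beta(a, b) plus
    `positives` successes in `total` trials gives the posterior."""
    return a + positives, b + (total - positives)

def beta_mean(a, b):
    """Posterior mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Initial (prior) belief: weak prior centered near 10% positivity.
a, b = 1, 9
print(round(beta_mean(a, b), 3))  # prior mean: 0.1

# New objective information: 30 positives out of 100 tests.
a, b = update_beta(a, b, 30, 100)
print(round(beta_mean(a, b), 3))  # posterior mean: 0.282
```

The posterior mean sits between the prior belief (10%) and the observed frequency (30%), which is exactly the "new and improved belief" the opening quote describes; as more data arrive, the data increasingly dominate the prior.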
The Medical Test Paradox
This paradox relates to the observation that an accurate test is not always as predictive of disease as one would think. An example to illustrate this paradox is as follows:
If a person tested positive for COVID-19 in a population that has a prevalence of 1% for the infection, and the test's sensitivity and specificity are 90% and 91% respectively, what proportion of those who tested positive actually have the disease?
After all the calculations, it may surprise some that only about 1 in 11 who tested positive actually have COVID-19. If the prevalence is much higher at 10%, then the chance of actually having the disease in those who tested positive is much higher (around 50%). Of note, the real-world sensitivity of the COVID-19 PCR test is only around 80%, so multiple tests are necessary to increase the positive predictive value.
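The calculation behind this paradox is a direct application of Bayes' theorem. A minimal sketch (function name and structure are mine, the numbers are the article's):

```python
def ppv(prevalence, sensitivity, specificity):
    """Positive predictive value via Bayes' theorem:
    P(disease | positive) = P(positive | disease) * P(disease) / P(positive)."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# The article's example: 1% prevalence, 90% sensitivity, 91% specificity.
print(round(ppv(0.01, 0.90, 0.91), 3))  # 0.092 -- roughly 1 in 11
# At 10% prevalence, the same test is far more predictive:
print(round(ppv(0.10, 0.90, 0.91), 3))  # 0.526 -- around 50%
```

The intuition: at 1% prevalence, the 9% of healthy people who test falsely positive (about 89 per 1,000) vastly outnumber the true positives (about 9 per 1,000), so most positive results are false alarms.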
Both statistical schools are essential in clinical medicine, so perhaps we should leverage the advantages of both of these methodologies while mitigating the relative weaknesses of either.
In addition to Bayes’ theorem and other interesting aspects of intelligence-based medicine, many other topics will be discussed at our in-person AIMed Global Summit on May 24-26 of this year, to be held at the Westin St. Francis in San Francisco. We are fortunate to be partnering with Stanford’s AIMI as the AIMI Symposium will be the day before AIMed in Palo Alto. Representatives of many centers of AI in medicine will be participating at this meeting in addition to the diverse attendees.
See you there! Find more information here.