“Don’t find customers for your product. Find products for your customers.”

Seth Godin, American business executive and author


This manuscript, by authors from Europe and the United States, addresses the interesting and timely topic of direct-to-consumer applications of medical machine learning and artificial intelligence. One of the authors is our frequent AIMed faculty member Sara Gerke from the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School.

Direct-to-consumer (DTC) applications of medical artificial intelligence are increasingly being used for a myriad of diagnostic assessments in telemedicine and home healthcare, especially during the COVID-19 pandemic, which has driven an exponential rise in demand for these applications.

This global market is already valued at above US $10 billion, and a well-publicized example is the Apple Watch feature for atrial fibrillation detection, which has approval as a Class II (moderate-risk) device. Most DTC AI devices to date, however, do not have FDA approval.

One of the main arguments the authors make is that the regulatory landscape for these applications needs to differ for clinician use versus personal use, as this market is being explored both for clinicians and for the public.

DTC applications are directed at users who may have limited medical and statistical literacy (especially regarding the Bayesian relationship between test results and disease, and the concept of base rate neglect, of which the authors provide a good example) and who may be risk averse about their health outcomes. These applications therefore differ from DTC devices that provide a biomedical measurement such as blood pressure or temperature.
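
To make the base rate point concrete, consider a purely illustrative Bayes calculation (the sensitivity, specificity, and prevalence figures below are assumptions chosen for illustration, not the authors' own example): suppose a test is 95% sensitive and 95% specific for a condition with a prevalence of 1%.

\[
P(\text{disease} \mid +) = \frac{\text{sensitivity} \times \text{prevalence}}{\text{sensitivity} \times \text{prevalence} + (1 - \text{specificity}) \times (1 - \text{prevalence})} = \frac{0.95 \times 0.01}{0.95 \times 0.01 + 0.05 \times 0.99} \approx 0.16
\]

Under these assumed numbers, only about 16% of positive results reflect true disease; roughly five out of six are false positives, which is precisely the kind of base rate neglect a lay user is prone to overlook.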

The authors further describe the impact of false positive tests, which can be quite common; the negative externality they create can add to the burden on the healthcare system and its clinicians (I have personally experienced this many times). Another issue is the liability associated with a false negative test, as a favorable result may provide false reassurance that leads to a late or delayed diagnosis, thus compromising a potentially positive outcome.

In addition, the authors raise the issue of how regulatory review of these applications should consider the social costs of widespread use. This is an extremely important point: while discovering the few true positive findings of certain disease states in a few patients may be valuable, we should question the true value to society as a whole, factoring in the many false positives and the added burden on an already taxed health system, especially during this pandemic era.

False positive findings may displace care from those who truly need it for real medical conditions. Moreover, even true positive findings can fall under the category of overdiagnosis, in which either the condition is relatively benign and can simply be followed, or the risk of medical treatment outweighs the benefit.

It is therefore imperative that regulators of these DTC medical AI/ML apps have additional statutory authority to examine not only a system's accuracy but also its aggregate social costs.

Finally, the authors suggest that regulators adopt the following three strategies:

1) adopt a systems view, especially of the human/AI environment

2) couple DTC AI/ML devices with clinician virtual visits

3) stratify some DTC medical AI/ML apps so that they are not available directly to the public

While these are laudable strategies, even the authors readily admit that this set of recommendations is difficult to execute.

Perhaps the DTC AI/ML app market should not adhere too closely to the aforementioned adage from business guru Seth Godin.