AI-augmented radiology promises to transform healthcare diagnostics. But what are the legal implications of new, disruptive medical technologies, and what will the new defensive medicine look like?

Doctors are human. And humans make mistakes. And while scientific advancements have dramatically improved our ability to detect and treat illness, they have also engendered a perception of precision, exactness and infallibility. When patient expectations collide with human error, malpractice lawsuits are born. And it’s a very expensive problem.

In 2010, the Harvard School of Public Health published a study revealing that medical malpractice cost the United States $55.6 billion per year [1] – or 2.4% of US healthcare spending. The study estimated that about $45 billion of that total was spent on defensive medicine: tests and procedures that doctors order not because patients need them, but to protect themselves in the event of litigation.

The high cost of medical errors is no doubt top of mind when conversations about AI augmentation turn to the question of liability. “Who will be liable for errors?” is a popular question from doctors and patients, as well as from journalists who cover medical technology. And with good reason.

Medical errors are more frequent than anyone cares to admit. In radiology, the retrospective error rate is approximately 30% across all specialities [2], with real-time error rates in daily practice averaging between 3% and 5% [3].

Most of these errors are attributed to perceptual misses – a radiologist simply not seeing a particular abnormality [4]. Each missed diagnosis is a missed opportunity for treatment. And while radiologists don’t top the list of most frequently sued physicians, they are among the most frequently named co-defendants [5], a fact that highlights their central role in the detection and diagnosis of disease.

The short answer is that the same people who are liable today will be liable tomorrow: doctors and hospitals, and, through them, the insurers who will continue to underwrite the risk of new treatments, drugs and technologies. But this is a superficial answer, because AI augmentation stands to disrupt medical liability alongside the practice of healthcare itself.

To understand why, it’s important to understand how medical errors become negligence in the first place. For negligence to exist, four conditions must be met: a duty of care, a breach of that duty, causation, and harm.

Most medical malpractice lawsuits over a missed or mistaken diagnosis allege a breach of duty – a breach of the standard of care. This is typically defined as the level and type of care that a reasonably competent and skilled medical professional would provide under circumstances similar to the ones that led to the alleged malpractice.

This does not mean that all doctors are required to provide the same care: diverging but respectable minority opinions about treatments or best practices are not a basis for a successful malpractice lawsuit.

Geography matters, too: smaller hospitals with fewer resources are not expected to provide the same care as larger state-of-the-art medical institutions.

The legal understanding of the standard of care as we know it can be traced back to a landmark 1974 malpractice case: Helling v. Carey.

A young patient visited her eye doctor complaining of nearsightedness. The doctor prescribed contact lenses, but her condition worsened. Nine years later, at the age of 32, the patient was diagnosed with glaucoma and sued her doctor. Glaucoma is rare in young people, and professional guidelines in ophthalmology at the time didn’t require routine pressure tests for patients under the age of 40. The doctor, the defence argued, had adhered to professional standards, and as such there was no breach of the standard of care.

The court disagreed. The test for glaucoma is easy, cheap and highly effective, and the persistence of the patient’s complaints warranted further inquiry. Even though professional guidelines didn’t call for routine screening, adherence to those guidelines alone was not sufficient to shield the doctor from liability. And with that, the era of defensive medicine was born.

Of the manifold promises of AI augmentation in radiology – early detection, improved triaging, better allocation of resources, lower costs, greater precision – the promise of reducing errors resonates the most.

In fact, retrospective analysis and secondary review are among the most popular initial deployments in radiology. The AI isn’t used to diagnose, but rather to review – ingesting images and radiology reports and comparing them to its own analysis to flag cases for a second look. And while it’s too early to have firm numbers, the results are promising.
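
To make that workflow concrete, here is a minimal sketch of the discordance check at the heart of such a deployment. It is purely illustrative – the Finding structure, the flag_for_second_look function and the 0.8 confidence threshold are assumptions made for this article, not any vendor’s actual interface:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One abnormality, as named either in the radiologist's written
    report or by the imaging model. Hypothetical data model for
    illustration only."""
    label: str         # e.g. "pulmonary nodule"
    confidence: float  # model confidence in [0, 1]; 1.0 for human findings

def flag_for_second_look(report_findings, ai_findings, threshold=0.8):
    """Return AI findings that the written report does not mention.

    The study is flagged for human re-review when the model is confident
    about an abnormality the report is silent on. Nothing is auto-diagnosed;
    the output only queues a second look."""
    reported = {f.label for f in report_findings}
    return [f for f in ai_findings
            if f.confidence >= threshold and f.label not in reported]

# Illustrative run: the report notes cardiomegaly but misses a nodule
# that the model scores highly, so the case is queued for re-review.
report = [Finding("cardiomegaly", 1.0)]
ai = [Finding("cardiomegaly", 0.91), Finding("pulmonary nodule", 0.88)]

for finding in flag_for_second_look(report, ai):
    print(f"second look: {finding.label} (confidence {finding.confidence:.2f})")
```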

Simply triggering a second look at cases where the AI disagrees with the findings of the radiologist can go a long way toward reducing missed diagnoses. Such a technology could also provide a valuable affirmative defence against allegations of malpractice: if an AI trained on billions of images and outcomes couldn’t spot an abnormality, it’s unreasonable to expect that a human would.

This no doubt explains, at least in part, the eagerness of medical networks and insurers to see this technology applied as a quality management tool. But in their zeal for new error-reducing technology, clinicians and administrators may have missed an important component of the standard of care.

AI deployments for secondary review are relatively easy. And because the decision-making process keeps a human in the loop, the regulatory bar is lower in many jurisdictions around the world.

Recall that geography has traditionally bounded the standard of care. The democratizing power of AI tools delivered through the cloud means locality is no longer an impediment to a higher standard: once the technology is cheaply and widely available, even smaller hospitals may be expected to adopt it.

In the coming months and years, as results of clinical trials are made public, patient expectations for AI in healthcare will begin to take shape. It is not difficult to imagine a scenario in which a patient sues their doctor for failing to adopt AI-augmented secondary review technology for the screening and detection of abnormalities.

The question for hospitals isn’t “who will be liable if we use AI?” but rather “how soon will we be liable if we don’t?”

Sally Daub is a technology entrepreneur, investor, and CEO.

Bio:

Sally Daub (B.A.Sc., LLB) is a technology entrepreneur, investor and CEO. As Founder and Managing Partner of Pool Global Partners, a deep technology venture capital firm, she draws on her depth of experience and expertise to foster the next generation of AI innovation.

An experienced operator with a track record of turning innovative ideas into market solutions, Sally advises numerous Silicon Valley and Canadian companies, including Enlitic, a San Francisco-based AI medical imaging and diagnostics company, where she serves as a board member and executive advisor.

A frequent lecturer and guest speaker at universities and events around the world, Sally is also a Fellow at Creative Destruction Lab, a leading deep technology innovation pipeline, where she provides mentorship to entrepreneurs.

Sally has been recognized with numerous awards and appointments, including the Women’s Executive Network’s Top 100 Most Powerful Women (2010–2013), induction into WXN’s Hall of Fame (2014), the RBC Trailblazer (2010) and Profit Magazine’s Top Entrepreneurs (2010–2014).

Footnotes:

  1. Michelle M. Mello et al., “National Costs of the Medical Liability System,” Health Affairs, Vol. 29, No. 9 (2010) (https://www.healthaffairs.org/doi/abs/10.1377/hlthaff.2009.0807)
  2. Cindy S. Lee et al., “Cognitive and System Factors Contributing to Diagnostic Errors in Radiology,” American Journal of Roentgenology, Vol. 201, No. 3 (September 2013) (https://www.ajronline.org/doi/abs/10.2214/AJR.12.10375)
  3. Ibid.
  4. M. J. Smith, Error and Variation in Diagnostic Radiology (Springfield, IL: Thomas, 1967), 4, 71, 73–74, 144–169
  5. Carol Peckham, “Medscape Malpractice Report: 2015,” Medscape, 2016 (https://www.medscape.com/features/slideshow/malpractice-report-2015/radiology#page=2)