Saurabh Jha, Associate Professor of Radiology at the University of Pennsylvania and a scholar of Artificial Intelligence (AI) in Radiology, believes AI is not only stretching the limits of present-day medicine but also testing the waters of our legal boundaries.

Machines are now capable of highlighting abnormalities in computed tomography (CT) scans, making diagnostic predictions with accuracies matching human professionals, extracting information from electronic health records (EHRs), assisting patients in managing their medical conditions, and more. As AI becomes more widely accepted, it is bound, at some point, to make an error that harms a patient. When that happens, Jha asked in a recent op-ed published in Stat, who or what should be held responsible?

Creator of the algorithm

In a hypothetical scenario, Jha wrote that if an algorithm fails to detect an obvious pneumonia and the patient dies of septic shock as a result, the creator of the algorithm should be liable for the error.

If the algorithm was developed by the medical institution that administers it, and the institution did not deploy a human medical professional to monitor or verify what the AI had done, the case would fall under enterprise liability. This means the medical institution bears all the risks associated with the use of AI; in this scenario, its effort to increase efficiency and lower medical costs has claimed a patient's life.

If the algorithm was developed by an external vendor, it has most likely received approval from the US Food and Drug Administration (FDA). Under the doctrine of pre-emption, that approval will somewhat shield the vendor from assuming full responsibility, at least in state courts. However, this protection may not hold at the federal level, depending on the kind of FDA approval the AI received.

The nature of the algorithm

If the AI was cleared through a less rigorous path, such as the expedited 510(k) process, rather than full premarket approval, a federal court may not exempt it from litigation. How a court treats an AI algorithm therefore depends on whether the algorithm was regulated as a drug or a medical device and on the kind of approval it received.

As of now, most AI algorithms have been cleared via the expedited route, and they tend to be static: the algorithm itself cannot learn and evolve as more data comes in. If an AI algorithm does change, it is no longer the same one the FDA originally approved, and the agency is still deciding on an appropriate plan for approving this kind of adaptive algorithm. Until then, vendors may still have to absorb most, if not all, of the liability.

Conversely, AI has also introduced a new form of liability: a medical professional who chooses to ignore, or fails to adhere to, the recommendation of an AI algorithm that turns out to be correct. Jha believes many of these uncertainties will only be settled after the first major AI litigation occurs. At the end of the day, the costs and consequences of using AI will not necessarily be lower than what we have at the moment.

*

Author Bio

Hazel Tang: A science writer with a data background and an interest in current affairs, culture, and the arts; a non-med from an (almost) all-med family. Follow on Twitter.