“Enigma is an extremely well designed machine. Our problem is that we are only using men to try to beat it (Enigma). What if only a machine can defeat another machine?”

Alan Turing to Commander Alastair Denniston at Bletchley Park

There is currently an impressive panoply of disruptive technologies in healthcare: extended reality, robotic surgery, stem cell therapies, genomics and gene editing, 3D printing, blockchain, and artificial intelligence. This JAMA Health Forum manuscript by Scott Gottlieb, former commissioner of the Food and Drug Administration (2017-2019), discusses strategies for regulating such disruptive medical technologies, with particular relevance to artificial intelligence.

Gottlieb frames the discussion as 3 lessons:

Lesson 1: Begin with Established Processes

Gottlieb advocates using a familiar regulatory pathway for a novel technology. While this strategy can indeed allow innovators to enter the market more quickly than configuring a new pathway would, AI presents special challenges because it does not fit neatly into the “software as a medical device” mold. AI can, and often needs to, change continually in its algorithmic form, so a more traditional device-oriented process is not necessarily a good fit after a product’s inception. Gottlieb is correct, however, in stating that this strategy can “provide the FDA with opportunity (and time) to incrementally modify its regulatory policies to best accommodate an innovation based on practical experience.” He further describes how the FDA took a “firm-based approach” to the premarket regulation of the Apple Watch, although this approach may present its own challenges and limitations.

Lesson 2: Take a Risk-Based Approach to Regulation

Gottlieb further suggests that low-risk activities perhaps need to be set aside during the evaluation process so as to minimize unnecessary oversight. He notes that this strategy failed with stem cell clinics: after an initial period of relaxed regulation, it became obvious that these clinics’ practices were indeed causing harm. As the strategy evolved, the FDA decided to exercise enforcement discretion over certain stem cell procedures. There is a place for this approach in artificial intelligence, especially for automation-oriented uses in healthcare.

Lesson 3: Use Existing Authoritative Benchmarks

Finally, Gottlieb delineates the use of existing authoritative bodies for the initial evaluation of a new technology, citing the FDA’s new policy for regulating next-generation sequencing as an example. Artificial intelligence may not be able to benefit from this lesson, however, as such bodies with a history of regulation in this domain do not yet exist.

While some of these insights are valuable and even relevant, artificial intelligence, especially the current developments in deep learning and generative artificial intelligence, is unfortunately unlike any novel technology that has come before (even the aforementioned disruptive technologies).

The exponential improvement of AI and the velocity of its adoption are markedly different from those of other disruptive technologies in healthcare. Any regulatory process (which is usually relatively slow and “linear” in its evolution) that falls short of an exponential configuration is unlikely to be sufficiently robust or expedient for artificial intelligence in healthcare. Perhaps AI should regulate itself to some degree.

Read the full article here
