“Many people who try to do big bold things in the world find out it’s not about the money or the technology: It’s about the regulatory hurdles that will try and stop you.”

Peter Diamandis, Greek American physician and entrepreneur


While artificial intelligence methodologies and tools in clinical medicine and healthcare are rising exponentially in capability and hold great future potential, regulatory processes remain on a slower trajectory, yet they will need to be in place to monitor these advances and ensure wide adoption and trustworthiness. The following is a discussion of the regulatory frameworks that will oversee AI in healthcare. Next week, the FDA's regulatory processes for AI tools will be delineated in detail.

The regulatory frameworks 

An artificial intelligence algorithm is often conveniently labeled by regulators as software as a medical device (SaMD), commonly defined as software that is intended to be used for a clinical purpose without being part of a hardware medical device. The following is a brief review of the history and the current regulatory frameworks for artificial intelligence-based tools:

Global Harmonization Task Force (GHTF). This group (with representation from the European Union, United States, Canada, Japan, and Australia) was formed in 1993 to promote convergence in the safety, effectiveness, performance, and quality of medical devices through common requirements and regulatory processes. This global harmonization effort classified medical devices into four risk classes:

  • Class I: low risk (devices that are deemed to be noninvasive)
  • Classes IIa and IIb: medium risk
  • Class III: high risk (devices that affect functioning of vital organs or life-support systems)

International Medical Device Regulators Forum (IMDRF). This forum was formed in 2012 and proposed a framework for medical devices that included the aforementioned definition of SaMD as well as guidance on clinical evaluation and investigation. Under the IMDRF framework, regulatory scrutiny of AI-enabled algorithms is based on risk categories determined by the severity of the healthcare condition and the significance of the information the software contributes to the clinical decision. In addition, clinical effectiveness is evaluated in three phases: valid clinical association, analytical validation, and clinical validation.

Food and Drug Administration (FDA). The FDA also has a classification system for medical devices, one that originated from both the GHTF and the IMDRF:

  • Class I: devices with minimal potential for harm to the user, and therefore the least stringent regulatory oversight. These devices are exempt from premarket notification
    • Examples include: arm slings or handheld surgical instruments
  • Class II: devices that require special controls such as labeling requirements and performance standards, as well as postmarket surveillance. These devices require premarket notification through submission and FDA review of a 510(k) clearance-to-market application (although a few devices are exempt from this requirement)
    • Examples include: physiological monitors or X-ray systems
  • Class III: devices that lack sufficient information on safety and effectiveness, support or sustain human life, and may present a higher potential risk of illness or injury to the patient. This class of devices usually requires a premarket approval (PMA) submission before marketing and is subject to the most stringent regulatory control
    • Examples include: replacement heart valves or implanted cerebellar stimulators

In addition to regulatory processes in clinical AI, many other topics will be discussed at our in-person AIMed Global Summit on May 24-26 of this year at the Westin St Francis in San Francisco. We are fortunate to be partnering with Stanford's AIMI, whose AIMI Symposium will take place in Palo Alto the day before AIMed. Representatives of many centers of AI in medicine will be participating in the meeting, along with a diverse group of attendees.

See you soon! Find more information here.