I am a pediatric cardiologist and have cared for children with heart disease for the past three decades. In addition, I have an educational background in business and finance as well as healthcare administration and global health – I earned a Master's degree in Public Health from UCLA and taught Global Health there after completing the program.
“Software as a Medical Device is defined as software intended to be used for one or more medical purposes that perform these purposes without being part of a hardware medical device.”
As defined by the International Medical Device Regulators Forum (IMDRF)
Over the past few weeks, we discussed the history of medical device regulation as well as some aspects of the current regulatory framework. We will cover additional aspects this week and next week (part IV).
Following public discussion, the FDA's discussion paper led to the AI/ML-Based Software as a Medical Device (SaMD) Action Plan, a multi-dimensional approach to the FDA's oversight strategy that is aligned with the work of the Digital Health Center of Excellence.
The following is a summary of the aforementioned Action Plan:
- Develop an update to the proposed regulatory framework presented in the AI/ML-based SaMD discussion paper, including the issuance of a Draft Guidance on the Predetermined Change Control Plan
- Strengthen FDA’s encouragement of the harmonized development of Good Machine Learning Practice (GMLP) through additional FDA participation in collaborative communities and consensus standards development efforts
- Support a patient-centered approach by continuing to host discussions on the role of transparency to users of AI/ML-based devices. Building on the October 2020 Patient Engagement Advisory Committee (PEAC) Meeting focused on patient trust in AI/ML technologies, hold a public workshop on medical device labeling to support transparency to users of AI/ML-based devices
- Support regulatory science efforts on the development of methodology for the evaluation and improvement of machine learning algorithms, including for the identification and elimination of bias, and on the robustness and resilience of these algorithms to withstand changing clinical inputs and conditions
- Advance real-world performance pilots in coordination with stakeholders and other FDA programs, to provide additional clarity on what a real-world evidence generation program could look like for AI/ML-based SaMD
The FDA report also describes three types of modifications to an AI/ML-based SaMD (it should be noted that these types are not mutually exclusive):
- Performance: Modifications related to clinical and analytical performance, with no change to inputs or intended use (see below); these can include changes such as model retraining. This type of modification can be handled by an update from the manufacturer without changing the original claims about the algorithm
- Inputs: Modifications related to the inputs used by the AI/ML algorithm, with no change to the intended use, such as expanding the SaMD's compatibility with other sources of data or adding different input data types; these can also be managed without changing the original claims about the algorithm
- Intended Use: Modifications that result in a change in the significance of the information provided by the SaMD, or in the healthcare situation or condition it addresses; these will be limited in scope by the pre-specified performance objectives as well as the algorithm change protocols
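Because these three modification types are not mutually exclusive, a single proposed change can fall into more than one category. A minimal sketch of that taxonomy follows; this is purely illustrative (the `Modification` fields, category names, and logic are assumptions for this example, not part of any FDA process or tooling):

```python
from dataclasses import dataclass

# Hypothetical sketch of the FDA-described modification taxonomy.
# Field names and logic are illustrative assumptions, not FDA tooling.
@dataclass
class Modification:
    changes_intended_use: bool   # e.g., new clinical condition or significance of output
    changes_inputs: bool         # e.g., new data sources or input data types
    changes_performance: bool    # e.g., model retraining

def classify(mod: Modification) -> list[str]:
    """Return all applicable categories (they are not mutually exclusive)."""
    categories = []
    if mod.changes_intended_use:
        categories.append("intended use")  # scope limited by pre-specified objectives
    if mod.changes_inputs:
        categories.append("inputs")        # manageable without changing original claims
    if mod.changes_performance:
        categories.append("performance")   # e.g., handled via manufacturer update
    return categories

# A retraining-only change falls solely under "performance":
print(classify(Modification(False, False, True)))  # ['performance']
```

Returning a list rather than a single label reflects the report's point that the categories overlap: a change that adds a new input type and retrains the model would be tagged with both "inputs" and "performance".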
This approach is aligned with the FDA's proposed Total Product Lifecycle (TPLC) Regulatory Approach for AI/ML-based SaMD. It is designed for AI/ML-based SaMD that require premarket submission, but not for those that are exempt from premarket review (such as Class I or Class II exempt devices).
This approach also includes four general principles:
- Establish clear expectations on quality systems and good ML practices (GMLP)
- Conduct premarket review for those SaMD that require premarket submission to demonstrate reasonable assurance of safety and effectiveness and establish clear expectations for manufacturers of AI/ML-based SaMD to continually manage patient risks throughout the lifecycle
- Expect manufacturers to monitor the AI/ML device and incorporate a risk management approach and other approaches outlined in “Deciding When to Submit a 510(k) for a Software Change to an Existing Device” Guidance in development, validation, and execution of the algorithm changes (SaMD Pre-Specifications and Algorithm Change Protocol)
- Enable increased transparency to users and FDA using postmarket real-world performance reporting for maintaining continued assurance of safety and effectiveness
In addition to the regulatory processes in clinical AI, many other topics will be discussed at our in-person AIMed Global Summit on May 24-26 of this year, to be held at the Westin St. Francis in San Francisco. We are fortunate to be partnering with Stanford's AIMI, as the AIMI Symposium will be held in Palo Alto the day before AIMed. Representatives of many centers of AI in medicine will be participating, in addition to a diverse group of attendees.
See you soon! Find more information here.