This April, US Food and Drug Administration (FDA) Commissioner Scott Gottlieb announced steps toward a new regulatory framework targeting artificial intelligence (AI) driven medical devices. In the announcement, Gottlieb acknowledged the impact of AI algorithms in providing treatment suggestions and assisting in the screening of certain medical conditions. In fact, as of 2018, the FDA had given its green light to two AI-based devices: one that detects diabetic retinopathy and another that alerts providers to a potential stroke in patients.

Despite the authorization, the devices that received FDA clearance use "locked" algorithms, meaning AI that does not continue to learn and adapt each time it is used. The manufacturers are responsible for modifying these algorithms at intervals, feeding and re-training them with new data. In other words, these devices must undergo constant verification and validation performed manually, and further changes may be subject to separate approval. This leaves out machine learning algorithms that adapt and learn over time with little human intervention.

As such, the FDA hopes to develop a new framework that takes into consideration the evolving nature of AI and machine learning algorithms. The framework will have to ensure that, through all changes along the way – from pre-market development to post-market performance – the devices meet the gold standard for safety and effectiveness. The FDA has also released a separate discussion paper to capture feedback from different sectors.

Recent response from AMIA 

In response to the request, the American Medical Informatics Association (AMIA) offered its insights. In a letter addressed to Dr. Norman Sharpless, the Acting Commissioner, published on 3 June, AMIA expressed support for the FDA's initiative and its engagement with stakeholders.

At the same time, AMIA suggested a stronger focus on, and understanding of, the differences between "locked" algorithms and continuous learning algorithms. It also called for discussions on how new data inputs will affect an algorithm's output; on assessing the impact of data breaches, hacking, or data manipulation on algorithm outputs; and on how manufacturers should address algorithm-driven bias and ensure their products do not facilitate or promote such bias when in use.

AMIA recommended that the FDA include requirements for periodic evaluations that are independent of planned updates or re-training, and that it solicit additional opinions to form a basis for when those periodic evaluations should occur. AMIA highlighted AI's susceptibility to poor or biased training data and its inability to explain the decisions it offers.

Will there be a limit to innovation?

3 June was the deadline to comment on the FDA's proposed steps toward a new regulatory framework. As Scott Gottlieb stepped down as FDA Commissioner this April, some fear losing a strong ally in introducing new technologies into the medical and healthcare field. However, the new Acting Commissioner, Dr. Norman Sharpless, is no stranger to healthcare entrepreneurship, so continuity is expected.

As Gottlieb said in the April release, "our approach will focus on the continually evolving nature of these promising technologies. We plan to apply our current authorities in new ways to keep up with the rapid pace of innovation and ensure the safety of these devices."

Author Bio

Hazel Tang

A science writer with a data background and an interest in current affairs, culture, and the arts; a no-med from an (almost) all-med family. Follow on Twitter.