I am a pediatric cardiologist and have cared for children with heart disease for the past three decades. In addition, I have an educational background in business and finance as well as healthcare administration and global health – I earned a Master of Public Health degree from UCLA and taught Global Health there after completing the program.
“Risk more than others think is safe. Care more than others think is wise. Dream more than others think is practical. Expect more than others think is possible.”
Claude Bissell, former President of the University of Toronto
This short summary comes from a multinational consortium comprising the FDA of the United States, Health Canada, and the UK’s Medicines and Healthcare products Regulatory Agency (MHRA), which together have put forward 10 guiding principles for the development of Good Machine Learning Practice (GMLP). This guide, along with work from the International Medical Device Regulators Forum (IMDRF), international standards organizations, and other collaborative bodies, is meant as a starting set of principles to guide medical devices that use artificial intelligence and machine learning, with their accompanying challenges and nuances.
The 10 guiding principles carry important themes: humans need to be involved (“focus is placed on the performance of the human-AI team”), and multi-disciplinary expertise is essential (“multi-disciplinary expertise is leveraged throughout the total product life cycle”). A few are both predictable and general (“good software engineering and security practices are implemented” and “training data sets are independent of test sets”), but others are more helpful as a reminder for every stakeholder (“deployed models are monitored for performance and re-training risks are managed” and “users are provided clear, essential information”).
Overall, this is a solid list of principles to raise awareness and increase discussion about the nuances of AI/ML in clinical medicine and healthcare. I remain uneasy about terms such as “software as a medical device” and “locked algorithm,” as they are not consistent with the desired agile nature of artificial intelligence and machine learning; I am more comfortable with terms such as “intelligence- or data-as-a-service,” as these do not place constraints on data and AI as a resource.
Finally, my observation is that many AI/ML projects have too little clinical relevance and too little patient impact, and future guidelines need to set an even higher expectation: delivering improved outcomes.