A panel discussion dedicated to regulation, ethics and legal aspects of healthcare AI took place on the second day of the latest AIMed Clinician Series, Imaging. Here’s what our experts had to say on this ever-important topic…

“A recent survey by the Pew Research Center of AI experts from various sectors, including healthcare, found that 68% doubted that ethical AI would be the norm within the next decade,” said Dr. Hamilton Baker, Associate Professor of Pediatric and Congenital Cardiology at the Medical University of South Carolina. “The majority of these AI experts worried that the evolution of AI by 2030 will continue to focus on optimizing profits and social control rather than achieving consensus about ethics,” he added. Dr. Baker also brought up the top 10 legal considerations for the use and development of healthcare AI, published in the National Law Review last February.

“We can see that this is not a small list, and it highlights the breadth of the challenge,” Dr. Baker continued. “There’s no shortage of headlines, reviews and reports addressing not just bias but also liability, privacy, trust, explainability, transparency and other issues, some not yet identified, in the development, application and impact of healthcare AI. It’s precisely this scrutiny that pushes ethical, legal and regulatory bodies to put in the effort to avoid or combat these issues as the field progresses.”

Going beyond data

Katy Haynes, Managing Director of Nightingale Open Science, believes AI models are only as good as the data they are trained on. As such, she emphasized the need for more open data in the space, so that those working on the regulatory side can understand what’s going on and have more visibility into how algorithms are created. Haynes also cited a Stanford paper released last September criticizing the fact that most healthcare AI tools were built using data from just three US states: California, New York and Massachusetts.

“Because access to data is so siloed and not many institutions have the resources to build the infrastructure needed to make data widely available, we are not getting a diverse cross-section of patient data to create algorithms,” Haynes observed.

Cynthia Stamer, a health data legal expert, agreed. She added that healthcare data is historically biased: much of it was collected in an era when representation was not clearly understood or appreciated. Moreover, some of the factors that have contributed to present healthcare disparities, including poverty and cultural biases, did not originate within the healthcare system itself.

“I think there’s also a need to think about statistical reality, outliers and how we are aggregating big data,” Stamer noted. “We need to balance disparity control with many other challenges so that the care for disparate groups is not undermined or overwhelmed. This is extremely difficult, especially if we take the issue to the policy level, where responses tend to be heavy-handed.”

Stamer and Dr. Baker also addressed how AI would impact the present patient-clinician relationship. “Sometimes, patients’ expectations and the expectations detailed in the legal framework may not be aligned,” Stamer said. “This further complicates the question of whether AI should be used, and how, in the clinical setting.” Dr. Baker added that it’s worth highlighting how informed consent is obtained. “We may reach a point where fully understanding how an AI system works is nearly impossible,” he said. “But things like explainable AI help, to the best of our ability, to ensure that healthcare remains credible.”
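
To make the explainability idea concrete: one widely used family of techniques estimates how much each input feature drives a model’s predictions. The sketch below uses scikit-learn’s permutation importance on a fabricated classifier; the feature names and data are illustrative assumptions, not a system discussed on the panel.

```python
# Minimal sketch of one explainability technique: permutation feature
# importance. All data and feature names here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = ["age", "systolic_bp", "hba1c", "prior_admissions"]

# Fabricated patient-level data, purely for illustration.
X = rng.normal(size=(1000, len(features)))
y = (X[:, 3] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean_imp in sorted(zip(features, result.importances_mean),
                             key=lambda t: -t[1]):
    print(f"{name:>18}: {mean_imp:.3f}")
```

Approaches like this don’t open the black box entirely, but they give clinicians and regulators a concrete, auditable signal about what a model relies on.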

The gap between progress and regulations

Sarah-Jane Green, Head of AI Regulations and Policy at NHSx, said the UK is handling the challenges in two main ways. “First, we ensure we have an external validation process to test these algorithms,” Green explained. “We want to ensure we have good machine learning practices, training algorithms on a broad range of datasets. Second, we try to maintain meaningful human control when we are testing or using these algorithms. We have oversight of the safety aspect to help eliminate bias should there be any.”
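
To illustrate the first point: external validation means evaluating a model on data from a site or population it never saw during training, rather than on a held-out slice of the same dataset. The two-site setup below is a hypothetical sketch of that idea, not NHSx’s actual process.

```python
# Sketch of external validation: train on one site's data, then evaluate
# on a different site whose patients the model has never seen. All data
# here is fabricated; real validation uses genuine multi-site datasets.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def make_site(n, weights):
    """Fabricate one hospital site; `weights` encodes how features relate
    to outcome at that site (relationships can drift between sites)."""
    X = rng.normal(size=(n, 5))
    logits = X @ weights + rng.normal(scale=1.0, size=n)
    return X, (logits > 0).astype(int)

w_internal = np.array([1.0, 1.0, 1.0, 1.0, 1.0])
w_external = np.array([1.0, 1.0, 1.0, 0.2, -0.5])  # drifted relationships

X_int, y_int = make_site(2000, w_internal)  # development site
X_ext, y_ext = make_site(500, w_external)   # unseen external site

model = LogisticRegression().fit(X_int, y_int)

# Comparing the two scores shows how well the model travels to a
# population it was never trained on.
print("internal AUC:", roc_auc_score(y_int, model.predict_proba(X_int)[:, 1]))
print("external AUC:", roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1]))
```

The gap between the internal and external scores is exactly what an external validation process is designed to surface before an algorithm reaches patients.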

Moreover, Green said the UK Medicines and Healthcare products Regulatory Agency (MHRA) is also looking at using synthetic datasets to replicate healthcare data at scale without linking it to individual patients, providing a virtually unlimited stream of training data that could help root out bias. “I don’t want to use the term ‘minimal standard’,” Green said. “But I believe all AI models going into the clinical setting should meet a level that everyone has agreed upon, be it how the data was obtained or how the algorithms are trained.”
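
As a rough illustration of what “synthetic data” can mean in practice: a simple approach fits a statistical model to the real records and then samples new, artificial records from it, so no generated row corresponds to a real patient. The multivariate-Gaussian stand-in below is an assumption chosen for brevity, not the MHRA’s actual method; production generators are far more sophisticated and come with privacy audits.

```python
# Toy sketch of synthetic data generation: fit a simple statistical model
# to real records, then sample artificial records from it. This Gaussian
# stand-in is purely illustrative.
import numpy as np

rng = np.random.default_rng(42)

# Placeholder for a real patient table (columns: age, BMI, systolic BP).
real = rng.multivariate_normal(
    mean=[62.0, 27.5, 135.0],
    cov=[[120.0, 5.0, 40.0],
         [5.0, 16.0, 8.0],
         [40.0, 8.0, 250.0]],
    size=500,
)

# 1. Estimate the joint distribution of the real data.
mu = real.mean(axis=0)
sigma = np.cov(real, rowvar=False)

# 2. Sample as many synthetic records as needed from the fitted model.
#    None of these rows maps back to an individual patient.
synthetic = rng.multivariate_normal(mu, sigma, size=10_000)

print("real mean:     ", np.round(mu, 1))
print("synthetic mean:", np.round(synthetic.mean(axis=0), 1))
```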

Green also noted that regulation is not keeping pace with technological progress. Stamer echoed the point, saying the gap became especially prominent during the COVID-19 pandemic, when the science evolved rapidly but the regulations did not. The gap is even wider in the real world, where it can mean patients not getting the treatment they need if it isn’t reimbursed.

Brad Cunningham, Associate Office Director, Office of Product Evaluation and Quality at the US Food and Drug Administration (FDA), admitted the regulatory body has experienced challenges. “One thing I want to stress is that the FDA doesn’t necessarily get too heavy-handed when it comes to what the dataset looks like,” Cunningham explained. “However, we are stricter with training rounds and validations. We want to know what those validations may look like and tease out as many disparities as we can. It’s not a foolproof approach, and we do have black-box algorithms that we don’t completely understand. It’s tough to know how they may play out when an AI model is released into the wild. Nonetheless, the FDA always looks forward to this type of discussion to help develop a framework for handling these challenges.”