The latest AIMed webinar, “Continuing the discussion for successful translation of AI (artificial intelligence) into clinical radiology”, supported by Pure Storage, took place yesterday (13 November) at 11 am (PST). As the title suggests, it is a continuation of the previous session, “Breaking through the bottlenecks: Successful Translation of AI into clinical radiology”, in which speakers discussed various challenges hindering the real adoption of AI in clinical radiology. 

Yesterday’s webinar was moderated by Emily Watkins, Solution Architect at Pure Storage, who helps companies streamline their data pipelines and scale their AI projects from infancy to significant outcomes. Speakers included: Dr. Tessa Cook, Assistant Professor of Radiology at the Perelman School of Medicine, University of Pennsylvania; Dr. Samir S. Shah, Vice President of Clinical Operations, Radiology Partners; and Dr. Ross Filice, Associate Professor and Chief of Imaging Informatics in the Department of Radiology, MedStar Georgetown University Hospital. 

What if AI is capitalized on? 

Once again, speakers emphasized the need to collaborate. Technology vendors currently working on AI tools should seek feedback from radiologists, understand their workflow, and develop solutions that can be integrated into existing PACS (Picture Archiving and Communication System) platforms. As Dr. Cook highlighted, radiologists have a hectic and highly stressful workflow; diverting part of their attention to different workstations and constantly switching between PACS and an AI solution is simply not going to work. 

At this point, Dr. Shah raised an interesting question: could radiologists who helped develop certain AI tools one day limit others’ rights to use them, exercising a troubling kind of capitalization on the technology? Dr. Filice was not surprised; he replied that this is how healthcare enterprises handle certain resources at the moment. 

“There certainly won’t be absolute capitalization, but it’s reasonable for us to account for the IP (intellectual property) value of these tools and for others’ access to them. The US has a capitalist healthcare system, and that creates challenges in many spheres, because anything we share carries a marketable value,” Dr. Filice explained. Watkins agreed. Although she had never encountered such an extreme incident in her professional career, she noted that in a capitalist society no AI solution can avoid competition, the opportunity for practices to set themselves apart, or scrutiny by the authorities. 

“From a purely practical standpoint, if you develop something in your organization and use it within your organization, you don’t need to seek FDA (Food and Drug Administration) approval. But as soon as you try to share it and use it somewhere else, whether you are selling it or giving it away for free, that changes,” Watkins said. 

The whole issue of ethics, again 

Dr. Filice believes the argument stems from the fact that AI is still relatively new to radiology, which is why institutions and companies have yet to find a sound standpoint, much like the way data sharing is being handled right now. He thought entities should establish a position on data sharing, on sitting on data, and on co-developing with companies, so that, at the very least, they can begin to assure patients that their information is being used ethically, which is a reasonable expectation. 

Dr. Cook agreed. She said technology is perhaps evolving a little faster than legal requirements, so some organizations will have to define what is right for them, much as they decide on a company code of conduct. Watkins noted that the conversation often comes back to patients saying, “You have got the data because of me, who needed care.” Is there, therefore, a need to report to patients how their data has been used, especially in cases where some data is being sold? Some IRB (Institutional Review Board) guidelines suggest that patients do not need to be notified if their data is used in de-identified form. 

Dr. Shah took the discussion further by asking: if the developed AI solution makes a mistake, does the patient have the right to file a lawsuit against the AI? Or should the responsibility fall on the radiologists and the developers? Watkins added that the problem could become trickier when collaboration and co-design of an AI solution are involved. “Full integration should not do away with the full picture and the defining of ethics.” 

You may revisit the webinar session here.

Author Bio

Hazel Tang

A science writer with a data background and an interest in current affairs, culture, and the arts; a non-med from an (almost) all-med family. Follow on Twitter.