AIMed is unsure if you are one of those counting down to the premiere of “The Inventor: Out for Blood in Silicon Valley”, an investigative documentary by Alex Gibney that debuted on 18 March. But earlier, at the European Congress of Radiology (ECR) 2019, we seized the opportunity to ask some artificial intelligence (AI)-healthcare company leaders for their opinions on medical fraud, and whether we have given enough thought to integrity and responsibility as the field has progressed.
In case you do not know, the documentary unfolds the story of Theranos, a company which claimed to have a game-changing product but turned out to be one of the biggest deceptions in medical history. Back in 2003, Stanford dropout Elizabeth Holmes founded Theranos. She claimed to have devised a printer-sized machine capable of running hundreds of different tests with just a drop of blood.
Despite the absence of formal results and scientific proof, Holmes managed to convince many people, including prominent investors and politicians, to come on board to support the company. At one point, the company was worth $9 billion and the media had called Holmes “the next Steve Jobs”. It was not until a piece published in the Wall Street Journal in 2015, which questioned the technology, that the whole company began to falter.
Transparency and validations
Many company and healthcare leaders told AIMed that transparency is probably the key to preventing a similar incident from happening again. Lennart Thurfjell, chief executive officer of Combinostics, said the system which AI or machine learning (ML) supports has to be clear. “It’s important for users to see how the collated data will lead them to the suggestion, so that clinicians are confident of what the system is capable of performing,” said Thurfjell.
In order to uphold that transparency, there is a need to collaborate with medical centers, to run validation studies and to publish the methods. “So, if you have a black box (i.e., data goes into the algorithm and results are churned out, but there is no knowledge of what goes on in between) which cannot explain why the system indicates something, that will make transparency challenging,” Thurfjell added.
Michele Debain, senior director for business development at iCAD, expressed a similar view. She said it’s reasonable for people to get skeptical when the data used to train an algorithm comes from a particular area, yet the company claims the algorithm works for everyone. “It’s crucial to set the criteria: let others know how and where your data came from, how large your database is, how you are training your algorithm and who you are working with. These are questions that physicians or any user should ask,” Debain said.
Is it time to regulate people?
Ulli Waltinger, head of the machine intelligence research group and technology head of the Siemens AI Lab, believes one notorious failure should not be the driver for us to think about responsibility. It should be something genuine to begin with: “It’s about trust, how do we make sure that we use technology for the right purpose,” he said. He added there is also a need to think about sustainability: what do we want to develop with technology and how do we deploy it? “How do we ensure optimization without harming the system?” he noted.
At the end of the day, it all boils down to people. Debain said there should be active discussions on ways to guide people toward the companies that actually build their products with integrity. Dr. Anthony Chang, founder and chairman of AIMed and chief intelligence and innovation officer of Children’s Hospital of Orange County (CHOC), once suggested that, on top of regulating medical devices and AI, perhaps it is time for the US Food and Drug Administration (FDA) to start regulating people.
Indeed, if we ridicule the idea of testing many conditions with a drop of blood, then there is no reason for us to assume that once a product receives official approval, it is safe. After all, it may land in the wrong hands. In an age where quick success is perhaps valued more highly than the technology itself, it’s hard to ignore any lucrative return, even if it’s fabricated or short-lived. If so, why can’t protocols be instituted to ensure people are as credible as these accredited tools? Till then, maybe somehow, we are all black sheep.
A science writer with a data background and an interest in current affairs, culture and the arts; a no-med from an (almost) all-med family. Follow on Twitter.