Innovations are meant to change our lives, but that is not always the case. Earlier this year, market intelligence platform CB Insights tabulated the 155 biggest product failures of all time. Every name on the list failed for a reason: born too early, poor marketing, failure to deliver what was promised, too much similarity to competing products, uncompetitive pricing, and so on.

In view of the growing prominence of artificial intelligence (AI) and related new technologies, the medical community now worries that a similar fate may befall the field. Some of its prominent figures, including Eric Topol, cardiologist and Executive Vice President of Scripps Research; Mildred Cho, Professor of Pediatrics at the Stanford Center for Biomedical Ethics; Steven Nissen, Chairman of Cardiology at the Cleveland Clinic; and Bob Kocher, partner at the venture capital firm Venrock, expressed their concerns in a recently published Scientific American article.

The medical community is worried sick

Fundamentally, according to an AI report released by the National Academy of Medicine on 17 December 2019, there is little or no research proving that the 320,000 medical applications available to the public actually benefit health. Often, only AI developers who collaborate with medical professionals gain real insight into clinical challenges and generate solutions that fit clinicians' workflows. Elsewhere, the tech sector leans towards a "fail fast and fix it later" mentality, which poses risks to patients and makes it harder for regulators to draw boundaries.

Besides, although the US Food and Drug Administration (FDA) has given the green light to over 40 AI-driven products so far, these solutions tend to be "locked" and problem-specific: they do not grow or evolve as new data come in. For some, this devalues the true potential of AI; others are more perturbed by the limited generalizability of systems built on data from a single institution. Previous incidents have shown that an algorithm's performance can deteriorate when it encounters different patient profiles or patients from different ethnic groups.

Some experts believe the flaws of AI will not become apparent until these tools are deployed widely on large numbers of patients. Before that happens, the medical community wants AI to undergo sufficient randomized clinical trials to ascertain its safety. The tech industry should also be encouraged to stop or minimize stealth practices, such as touting the accuracy of AI solutions in press releases, news reports, or promotional events, and to allow members of the medical community to critically review their creations.

In need of a breakthrough

Many believe transparency will not only overcome the AI black box challenge, the inability to explain why and how an AI algorithm arrives at an answer, but also pave the way to a new "gold standard" that tells people what works, ensures software developers comply with it, and requires them to apply for regulatory clearance or approval. At the moment, some AI developers are still not interested in carrying out the expensive, time-consuming but crucial clinical trials needed to ensure the safety of their solutions. At the same time, there is an absence of incentives for developers to verify their solutions and to encourage the medical community to adopt them once they are proven safe.

Most importantly, many AI developers rely on electronic health records (EHRs) for patient information without recognizing that EHRs were created initially for billing, not patient care. As a result, the records can be riddled with errors or missing data, and what AI developers are doing now amounts to building a tower on a broken foundation.

However, this is not the first time AI has been critically questioned. AI experienced its winter in the mid-1970s and was revived in the early 1980s on the back of commercial successes in expert systems. As such, the Gartner Hype Cycle is sometimes used to describe the present state of AI development: even if a downturn is imminent, a new breakthrough can always bring the field back on track.

Author Bio

Hazel Tang: a science writer with a data background and an interest in current affairs, culture, and the arts; a non-med from an (almost) all-med family. Follow on Twitter.