The Wall Street Journal reported that it took the landline telephone 75 years to hit 50 million users. YouTube, Facebook and Twitter hit that 50-million-user mark in four, three and two years, respectively. But these figures are positively sluggish when you consider that the video game app “Angry Birds” took a mere 35 days to reach 50 million users.

So it’s clear that the difference between previous introductions of new technologies and current times is the speed with which change is occurring. Disruptive and sustaining innovations are overwhelmingly fast in their development and introduction to markets. What’s more, their uptake and implementation are equally rapid.

Like any other industry, healthcare is a lucrative market for the application of these technologies. According to a report by the market research and strategy consulting firm Global Market Insights Inc., the Healthcare Information Technology (IT) market is predicted to be worth over $441.8bn by 2025. This trend is driven by the fact that the disease burden of an aging population and chronic diseases is simply unsustainable through traditional means. No wonder there is enthusiasm amongst providers to welcome any technological innovation that brings efficiency and reduces cost without jeopardising quality of care.

Amongst the many emerging technologies, such as virtual reality, mixed reality, augmented reality, blockchain and 3-D printing, the ones that have made headline news lately are artificial intelligence (AI), machine learning, robotics and the Internet of Things (IoT).

AI solutions exhibit the ability to learn and store knowledge; undertake analysis, identify patterns and make recommendations; sense and interpret the external world; and interact using natural language. According to PricewaterhouseCoopers (PwC), AI will ultimately underpin how all businesses interact with each other and with their customers. Examples include face recognition at passport control, financial fraud detection, virtual assistants on smartphones or via devices in homes, messaging chatbots on social media, content recommendations for video streaming services and, increasingly, driver support in vehicles.

When these technologies are used in process management to streamline healthcare-related administrative tasks, there is no real ethical dilemma. However, when it comes to their use in clinical decision-making and data handling, there are serious discussions to be had about the ethical standing of a clinician who uses them in clinical management.

In medicine, the basic ethical principle is non-maleficence, i.e. “first, do no harm”. Any intervention made in clinical practice must have sound scientific credibility, clinical validity and reliability of predicted outcomes. In the biopharma industry, the time it takes to develop a new drug and take it through clinical trials is, on average, around 10–15 years. Medical devices are classified according to the risk stratification of their uses and have to go through rigorous regulatory scrutiny.

In April 2019, the FDA published a discussion paper, “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device”. This opens up an area of rigorous and pertinent discussion: who will regulate the use of AI, which ethical framework will it conform to, and is there a need to create a bespoke ethics regulatory body? Such a body would consist of multi-disciplinary stakeholder representatives who understand the potential as well as the challenges, and who could form a set of metrics and measures that must be met before an application can be used in healthcare.

Having been part of a recent brainstorming session on “Ethics in AI use in healthcare” at Barcelona’s University Pompeu Fabra, held for the purpose of making recommendations to the European Parliament, I recognized that these areas are still unclear when it comes to regulation and checks. On one hand, I can understand the “gold rush” drive to embed these technologies to handle the enormous challenges we face in health-related issues globally. On the other hand, we need to ensure we do not end up harming patients, whether through direct consequences, such as AI-based solutions giving inaccurate judgements drawn from corrupted data, or indirect consequences, such as inaccurate recognition of personal data because a machine learning algorithm fails to differentiate between similar variables belonging to more than one individual. One may argue that these inaccuracies are due to lapses in human input rather than to the technologies themselves. However, as long as human input is required, it is part of the risk profile of the application as a whole.

So what are the solutions? AI and machine learning are certainly here to stay, and the entrance of emerging technologies into healthcare is not a new phenomenon. But there now needs to be a globally recognized body that drives consensus building on the ethical and legal aspects of using these technologies clinically. The Academy of Medical Royal Colleges (UK) has called for joined-up regulation, which it sees as the key to ensuring that AI is introduced safely, as currently there is too much uncertainty about accountability, responsibility and the wider legal implications of the use of this technology.

I’d like to see the United Nations, the World Summit on the Information Society (WSIS) and the World Health Organization (WHO) collaborate to form a high-level forum on ICT emerging technologies in healthcare. It could consist of specialists alongside legal and ethical experts from industry and academia, who would jointly and effectively develop these urgent requirements.

Author Bio: Dr. Naila Siddiqul Kamal is a senior gynecologist and senior lecturer at Imperial School of Medicine, London.

This article is also available between pages 18 and 19 of AIM Magazine Volume 2, Issue 3.