Artificial intelligence (AI) is promising, but if its development follows Moravec’s paradox (i.e., what appears simple and automatic for humans is challenging for AI, while what appears simple and automatic for AI is challenging for humans), AI will probably remain assistive for some time to come. This is especially so as scientists continue to struggle to find suitable methods for teaching machines emotional understanding, ethics, creativity, empathy and many other qualities that are far from negligible to humans.

AI is exceptionally good at systematic thinking, finding insights or patterns in a sea of data, and executing routine tasks. This means soft skills will play a more dominant role in the future, not a lesser one as many would assume. As we enter an era in which algorithms increasingly tell us what is appropriate, and we rarely question those decisions, it is crucial to have a human, armed with the right soft skills, mediating the process, particularly in healthcare.

An increased focus on soft skills

For example, in an unprecedented situation like the ongoing COVID-19 pandemic, some AI tools were rapidly developed or re-purposed. The emergency created a kind of short-sightedness that prevented many from fully understanding the value and impact of these AI tools. In a situation where few choices are presented, people will follow the recommendations given by these algorithms without questioning them much. However, there is insufficient evidence that AI will bring real benefits without introducing new risks.

There is also not enough awareness of how AI biases contribute to or aggravate existing inequalities in medicine. Concerns around patient privacy are limiting the amount and quality of data being shared to improve algorithms. A recent finding showed that race and socio-economic status played a huge role in the number of individuals who were tested for, contracted and died from COVID-19. On average, African Americans are dying at 2.5 times the rate of their White counterparts. AI may not be able to point out these disparities, because its conclusions were derived from COVID-19 testing sites and hospitals that vulnerable communities avoid in the first place.

Many who are not properly insured fear unexpected fees from testing; they are also afraid of losing their jobs if they test positive and are unable to return to work. It will take a human being, one who understands empathy, diversity and the gray areas in life, to address these hidden concerns. Unlike machines, which only distinguish black from white, humans have a moral compass and intuitions, learnt through education and upbringing, that help us navigate a global health crisis like this one.

Learning why, where and how to use AI

Nevertheless, AI is still at its dawn, and many healthcare leaders may face challenges in making a sound business case for developing and/or deploying AI in clinical practice. It is very easy to be swayed by statistics and survey results on the growing percentage of healthcare systems adopting AI without understanding, in depth, how AI may uniquely benefit the hospital or institution one is heading.

Algorithms feed on data, and each healthcare system has its own patient population generating a particular set of data. As such, healthcare leaders need to define the value that AI can bring to their system and how its adoption will make them stand out from competitors who are also using AI. To do so, healthcare leaders will have to develop a thorough comprehension of what algorithms can do and assess their advantages and disadvantages.

More importantly, healthcare leaders need to decide which parts of the business process they want AI to automate. They need to ensure that involving AI will not change the work culture in a way that makes people feel they are being supervised by a machine, or that they themselves are being treated like robots.

The differences between management and leadership

Professor David De Cremer, Founder and Director of the Centre on AI Technology for Humankind at the National University of Singapore Business School, wrote about making a distinction between managers and leaders in the AI era. To manage is to end chaos, and AI is a great candidate for management because managing is about introducing structure and bringing order and consistency to a place. AI is already doing this in certain workplaces, including healthcare, via predictive analytics and advice that others can follow.

Leadership, on the other hand, is about change; it requires one to be agile, resilient, creative and adaptable to cope with the demands of an ever-changing world. It is also about making decisions that are valuable to humans: the complete opposite of management, yet something that goes hand in hand with it, and something AI cannot achieve as of now. In sum, it is the need to bring synergy between humans and machines, rather than exploiting their differences, that will allow us to make technology more efficient without undermining humanity, and this is what the healthcare leaders of tomorrow should know and embrace.

*

Author Bio

Hazel Tang is a science writer with a data background and an interest in current affairs, culture, and the arts; a non-med from an (almost) all-med family. Follow on Twitter.