Artificial intelligence (AI) has gained attention in recent years as it shows promise in the detection, triage and treatment of various medical conditions, whether by giving early warning signs or by sifting out effective interventions. The potential of AI is being further explored in the ongoing COVID-19 pandemic, particularly in the areas of disease prevention, allocation of resources, drug and vaccine development, and the drafting of public health policies and responses. Yet this growing reliance on a new technology, and the speed at which it is being assimilated, means ethics and safety are being challenged.

A new approach to ethics

Last week, researchers at the University of Cambridge’s Centre for the Study of Existential Risk and the Leverhulme Centre for the Future of Intelligence published a comment piece in Nature Machine Intelligence. They wrote that if AI is to be leveraged in a global health crisis for societal benefit, we may need a new kind of ethics. This is especially so if the use of AI and big data puts individuals’ privacy and civil liberties at risk, or if we overestimate its efficacy, undermining public trust and hampering AI development in the long run.

The research team termed this new approach “ethics with urgency”. Broadly, it encompasses thinking ahead and tackling problems proactively; incorporating rigorous processes to ascertain AI safety; and endorsing independent assessments to garner public trust. They believe ethical considerations need to be part of the AI development process, not an afterthought. Questions such as “what data is needed and what issues may arise” as well as “how should the model be built to address key challenges” have to be addressed throughout development. At the same time, it is crucial to ensure ethics and risk assessment experts are on board from the beginning.

Moreover, the research team noted that if AI is to be used in healthcare or other areas where safety is paramount, the algorithms will need to be stringently verified and validated. Governments are encouraged to fund this process and to ensure the AI is reliable under different circumstances, including high-risk ones. Most importantly, at the core of “ethics with urgency” is the need to build public trust: if an algorithm is error-prone or generates additional problems, the public will question whether AI is truly beneficial or necessary.

The research team proposed the use of “red teaming”, a method widely adopted in security settings, whereby an independent body is employed to thoroughly and rigorously examine an AI system. By deliberately looking for flaws, the red team challenges developers on their blind spots, so that any defects can be fixed before the AI is deployed and the respective stakeholders are prepared for what is to come.
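In software terms, red teaming can be pictured as an independent, adversarial test harness run against a model before deployment. The sketch below is a minimal illustration in Python under assumed conditions; the predict interface, the noise-based perturbation and the accuracy threshold are hypothetical choices of ours, not details from the comment piece.

# Hypothetical red-team harness: an independent party stress-tests a
# trained model with deliberately difficult inputs before deployment.
import random

ACCURACY_THRESHOLD = 0.95  # assumed minimum acceptable pass rate

def perturb(features, noise=0.1):
    """Add small random noise to a numeric feature vector, simulating
    real-world measurement error the developers may not have tested."""
    return [x + random.uniform(-noise, noise) * abs(x) for x in features]

def red_team_audit(predict, labelled_edge_cases):
    """Probe predict() with curated edge cases and noisy variants,
    reporting failures rather than fixing them (fixing remains the
    developers' job)."""
    failures = []
    for features, expected in labelled_edge_cases:
        for candidate in (features, perturb(features)):
            if predict(candidate) != expected:
                failures.append((candidate, expected))
    total = 2 * len(labelled_edge_cases)
    passed = (total - len(failures)) / total >= ACCURACY_THRESHOLD
    return passed, failures

The essential feature is independence: the people curating the edge cases and running the audit are not the people who built the model, which is what surfaces blind spots.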

Why is a new ethics needed?

In an interview with the MIT Technology Review, Jess Whittlestone, Senior Research Associate and one of the researchers involved in the comment piece, said that in her experience of reviewing AI ethics initiatives, AI ethics is not very practical in general. Compared with biomedical ethics, for instance, AI ethics tends to focus on high-level principles, and beyond stating that AI can be used for good, it does not always address what happens when those principles come into conflict.

Furthermore, AI ethics tends to address existing problems rather than prepare the public to anticipate new ones. People continue to debate privacy and algorithmic bias, and AI’s efficacy is at times overhyped; all of this suggests current AI ethics is not robust enough. Although AI plays a part in medical diagnosis and resource allocation, present machine learning systems are not mature enough to play a major role in the ongoing global health crisis.

“With this pandemic we’re suddenly in a situation where people are really talking about whether AI could be useful, whether it could save lives. But the crisis has made it clear that we don’t have robust enough ethics procedures for AI to be deployed safely, and certainly not ones that can be implemented quickly,” Whittlestone says.

As such, there is a need to think about ethics differently; perhaps the word “ethics” itself should change. In Whittlestone’s view, scientists and engineers need to be trained to think about the implications of what they are building and how it will behave when applied in a real-world setting. The research team is now working with early-career AI scientists to encourage them to think about the societal impact of their work.

“Maybe instead of saying, ‘Oh, let’s have this ethics board and that oversight board,’ people will be saying, ‘We need to get this done, and we need to get it done properly.’”

*

Author Bio

Hazel Tang is a science writer with a data background and an interest in current affairs, culture and the arts; a no-med from an (almost) all-med family. Follow on Twitter.