The discussion of ethics in artificial intelligence is back on the table after researchers from the Massachusetts Institute of Technology (MIT) Media Lab published the results of the Moral Machine experiment in Nature last week. The experiment began in 2014, when a game-like platform was set up to crowdsource people’s opinions on how a self-driving car should react when faced with different Trolley problem scenarios.

The classic Trolley problem, devised by philosopher Philippa Foot in 1967, involves a moving trolley and two separate tracks. Five people are tied to the first track and one person to the second. The participant, standing by the lever, can either do nothing, letting the trolley move forward and kill the five people, or pull the switch to divert the trolley and kill the lone person instead.

The Moral Machine experiment adapted the setup into nine different scenarios, and the results showed that people’s answers are highly correlated with culture and economics. Researchers believe the data not only provide an insight into the collective ethical priorities of different cultures but will also help in drawing the ethical boundaries for AI in the near future.

Who has control in the trolley problem?

Judith Jarvis Thomson, another philosopher, once questioned the Trolley problem beyond who ought to be killed; she wanted to know why the participant was given the power to decide who should live. In fact, the situation no longer seems bizarre if the participant is a doctor and the people tied to the track are patients. Often, doctors are the ones in absolute control of the treatments and medications that patients duly follow.

A common complaint arising from such a passive approach is that patients find doctors neither listening to their experiences nor taking their wishes into account. While life may be extended at the end of the day, quality of life is overlooked and end-of-life decisions become moot. Right now, there is no regulation granting patients the right to own their medical records. If this changes in the coming years, the development of artificial intelligence (AI) may be hindered by insufficient data.

Even if that future does not arrive and diagnoses become more personalized and precise, will doctors use AI as a tool to persuade patients to undergo or give up certain treatments, citing reasons like “the program says you have an 80% chance of recovering if you use this drug” or “the algorithm says the success rate of this operation is below 50%, you may wish to consider something else”?

A unique trolley problem for AI in medicine

The likelihood of doctors causing harm is low, but some good actions may have harmful side effects. Usually, morally acceptable actions are those whose side effects have been foreseen and regarded as unintentional. Perhaps the most appropriate solution is to strike a balance.

Nevertheless, if there were ever a Moral Machine experiment for AI in medicine, the problem would probably read: “You are tied to a track. There is a moving trolley and two separate switches. One switch is operated by an AI, which runs on data and algorithms generated from past health records and the present situation. The other is operated by a doctor, who has years of experience and understands the consequences the trolley may have on you.

Neither switch will stop the trolley completely, but each will minimize the impact the trolley is about to have on your body. There is no absolute guarantee; you may still die, or you may survive unharmed. Whom will you choose to press the switch?”

*

Author Bio

Hazel Tang: A science writer with a data background and an interest in current affairs, culture, and the arts; a non-med from an (almost) all-med family. Follow on Twitter.