The da Vinci surgical system is one of dozens of robots approved by the US Food and Drug Administration (FDA), and it has received CE marking to be sold in the European Economic Area (EEA). In general, the dexterity of most of these robotic surgeons is not in question. Thus, the debate has been whether it is ideal to employ them alone, to encourage collaboration between human surgeons and artificial intelligence (AI), or to have both in place.

Recently, the debate has escalated to a whole new level as researchers have begun to teach the “fight or flight” response to robots to make them better drivers. When faced with a potential threat, the human response system prompts an individual either to tackle the situation head-on (i.e., fight) or to run away (i.e., flight). Fear also manifests through physiological reactions such as an increased heart rate and sweaty palms, which keep us vigilant.

To replicate that kind of response in machines, researchers at Microsoft placed sensors on volunteers’ fingers to capture and measure their arousal while they were put in a driving simulator. The data were fed into an algorithm that predicts a person’s average pulse amplitude at each moment of the drive. With that, every time the machine is driving and is met with a “fear signal”, it realizes that it has done something wrong.
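One way to picture this idea is as reward shaping: the machine’s usual driving reward is penalized by a predicted human fear response. The toy simulator, function names, and numbers below are illustrative assumptions, not Microsoft’s actual implementation.

```python
# Hypothetical sketch of fear-based reward shaping for a driving agent.
# predicted_arousal() stands in for a model trained on volunteers'
# pulse-amplitude data; the formula here is invented for illustration.

def predicted_arousal(speed, distance_to_obstacle):
    """Proxy fear signal: arousal rises when the car moves fast
    while close to an obstacle (capped at 1.0)."""
    return min(1.0, speed / (distance_to_obstacle + 1e-6))

def shaped_reward(progress, speed, distance_to_obstacle, fear_weight=0.5):
    """Task reward (forward progress) minus a penalty proportional
    to the predicted human fear response."""
    return progress - fear_weight * predicted_arousal(speed, distance_to_obstacle)

# A cautious action near an obstacle scores higher than a reckless one,
# so a learner maximizing this reward is steered away from "scary" states.
cautious = shaped_reward(progress=1.0, speed=0.2, distance_to_obstacle=0.5)
reckless = shaped_reward(progress=1.0, speed=1.0, distance_to_obstacle=0.5)
```

Under this framing, the “fear signal” works like an extra teacher: the agent is punished not only for crashing, but for behavior that would frighten a human passenger.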

Fear is not enough

Researchers found that an AI trained with this method had 25% fewer crashes than a non-fearful AI. Instilling emotions, on top of cognitive and mechanical abilities, could lead to greater autonomy in machines. A similar approach could also be taken with automatons employed in medical and healthcare settings. However, granting complete independence to robotic surgeons is still far from possible.

Since 2012, Digital Surgery, a technology company based in London and the US, has analyzed operations as chronological sequences of events. It has also developed an application in which surgeons can rehearse their skills. The robust data accumulated from both avenues were fed into an AI algorithm that allows the machine to predict the next step of a surgery. Nevertheless, the information still lacks tactility.
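The article does not describe Digital Surgery’s actual model, but one minimal way to predict the next step from chronological sequences of events is a first-order Markov (bigram) model over recorded procedures. The step names below are invented purely for illustration.

```python
from collections import Counter, defaultdict

def train_next_step_model(procedures):
    """Count which step most often follows each step
    across a corpus of recorded procedures."""
    transitions = defaultdict(Counter)
    for steps in procedures:
        for current, following in zip(steps, steps[1:]):
            transitions[current][following] += 1
    return transitions

def predict_next_step(transitions, current_step):
    """Return the most frequent successor of the current step, if any."""
    followers = transitions.get(current_step)
    return followers.most_common(1)[0][0] if followers else None

# Toy data with hypothetical step names.
recorded = [
    ["incision", "dissection", "clipping", "closure"],
    ["incision", "dissection", "irrigation", "closure"],
    ["incision", "dissection", "clipping", "closure"],
]
model = train_next_step_model(recorded)
```

Real systems would use far richer sequence models and video input, but the sketch shows the core idea: the prediction is learned from what has historically followed each step, which is exactly why it carries no information about force or touch.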

The AI has no grasp of how much force to use, how hard to press, or how quickly to change direction. There is still a gap between visual observation and hands-on performance of the action. Often, surgeons, athletes, and musicians make many attempts, and fail many times, before perfecting a particular move. The kind of tactile sensation built up through years of training is what the present generation of surgical robots lacks, and it is something data cannot deliver at the moment.

Who should be liable? 

It is expensive to train surgeons, and about five billion people around the world do not have adequate access to safe and affordable surgical care. In most cases, surgeons belong to an institution, and either the institution, the surgeon, or both will be liable should any medical malpractice occur. However, there remains a heated debate over who should be accountable when a robotic surgeon is involved in such malpractice. At the recent AIMed Breakfast Briefing – Experience the Future of AI in Radiology, held in Boston, guest speakers noted that liability laws are likely to differ among the member countries of the European Union and among different states in the US.

At the same time, assigning liability to the machine would imply robots working independently without the supervision of medical professionals, which may not be possible even in the near future. As such, the final decision should always be made by the surgeon, and the surgeon should be accountable.

Author Bio

Hazel Tang

A science writer with a data background and an interest in current affairs, culture, and the arts; a non-med from an (almost) all-med family. Follow on Twitter.