Over the past few years, deep neural networks have been applied to tasks ranging from speech recognition and computer vision to social network filtering and playing strategy games. In medicine, deep learning is mainly used to monitor vital signs and notify doctors should a patient drift off the expected trajectory. It also assists clinicians in detecting abnormalities, as in the case of an electrocardiogram (ECG) acquisition system. Deep learning models are trained to look out for signs of irregular heart rhythms such as atrial fibrillation, but recently a group of researchers from New York University, Evidation Health, and NYU Langone Health found that these models may be vulnerable to adversarial attacks too.

Staging an attack

To test the limits of these deep learning driven electrocardiograms, the researchers staged a hypothetical attack. They began by obtaining a set of ECG recordings and sorting them into four groups: normal rhythm, noise, atrial fibrillation, and other rhythms. These data were subsequently used to train the deep neural networks. The researchers then introduced a minute perturbation, one too subtle for a human being to notice but which an artificial intelligence (AI) system may pick up and interpret as atrial fibrillation.
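The study's attack code is not reproduced here, but perturbations of this kind are commonly generated with gradient-based methods such as the fast gradient sign method (FGSM). Below is a minimal sketch of the idea in Python; the toy 1-D convolutional classifier, sampling rate, class labels, and epsilon value are illustrative assumptions, not details taken from the paper.

```python
# Sketch of an FGSM-style adversarial perturbation on an ECG signal.
# The model, signal length, and class labels are illustrative assumptions,
# not the architecture used in the study.
import torch
import torch.nn as nn

NUM_CLASSES = 4  # normal, noise, atrial fibrillation, other

# A stand-in 1-D convolutional classifier for single-lead ECG windows.
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(16, NUM_CLASSES),
)
model.eval()

def fgsm_attack(signal, true_label, epsilon=0.01):
    """Add a small signed-gradient perturbation that nudges the classifier
    away from the correct label. epsilon controls how subtle the change is."""
    signal = signal.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(signal), true_label)
    loss.backward()
    # Step a small amount in the direction that increases the loss.
    return (signal + epsilon * signal.grad.sign()).detach()

# Example: a fake 10-second ECG sampled at 300 Hz, labeled "normal" (class 0).
ecg = torch.randn(1, 1, 3000)
label = torch.tensor([0])
adversarial_ecg = fgsm_attack(ecg, label)
print(model(ecg).argmax(dim=1), model(adversarial_ecg).argmax(dim=1))
```

In words, the attack nudges every sample of the recording slightly in whichever direction most increases the classifier's error, which is why the change can remain imperceptible to a human reader while still flipping the model's prediction.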

As the research team had foreseen, the deep learning driven ECG acquisition system mistakenly recognized normal ECGs as examples of atrial fibrillation up to 74% of the time. When the same adversarial attack was presented to human clinicians, only 1.4% of them made an error in their readings. The researchers concluded that although it would be challenging for hackers to replicate such an attack in the real world, since they do not have direct access to the raw data, that does not rule out the generation of other adversarial examples.

Indeed, there is no quick fix for adversarial attacks at the moment. As AIMed reported earlier, facial recognition turned into a dystopian joke during the COVID-19 pandemic, thanks to face masks printed with AI generated images. Driverless cars are also easily tricked into seeing non-existent objects by small manipulations of road signs. Some researchers blame this on the way AI is trained: most deep neural networks are trained on labeled data, and a tiny tweak to the input can lead to a very serious mistake.

Changing the way AI learns

As such, researchers are now trying to change the way AI learns. Reinforcement learning is a relatively unexplored area in which an AI is taught “how to behave”. Each time the AI behaves appropriately in a given situation, it is rewarded, and eventually it picks up a “policy”, a set of skills that enables it to plan its actions.
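To make the reward-and-policy idea concrete, here is a minimal sketch of tabular Q-learning, one of the simplest reinforcement learning algorithms, on a toy “corridor” environment; the environment and hyperparameters are illustrative assumptions and are not drawn from the research described here.

```python
# Minimal sketch of reinforcement learning: tabular Q-learning on a toy
# 5-cell corridor where the agent is rewarded for reaching the rightmost cell.
# The environment and hyperparameters are illustrative assumptions.
import random

N_STATES, ACTIONS = 5, [-1, +1]          # move left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2    # learning rate, discount, exploration rate

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally, otherwise act greedily on current estimates.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0  # reward for "behaving appropriately"
        # Q-learning update: nudge the estimate toward reward plus discounted future value.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# The learned "policy": the preferred action in each non-terminal state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)  # expect +1 (move right) in every non-terminal state
```

The only feedback the agent ever receives is the reward signal, yet the table of values it builds up amounts to a plan for reaching the goal, which is what “picking up a policy” means in practice.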

Nevertheless, reinforcement learning is not bulletproof. For example, Adam Gleave, a PhD candidate at the University of California, Berkeley, and his colleagues have been training virtual stick figures to play different two-player games, including kicking a ball at a goal. They trained a first set of bots to master the task, then trained a second set of bots to exploit the first. Through reinforcement learning, the second set of bots soon uncovered ways to sabotage the actions of the first.

Gleave and his research team noted that adversaries do not necessarily have to outsmart the victim AI system; all they need to do is break its “policy”. Hence, with reinforcement learning, it may be easier for researchers to fine-tune potential adversarial targets to account for “off policy” behaviors, something like catching foul play the way humans do.

*

Author Bio

Hazel Tang is a science writer with a data background and an interest in current affairs, culture, and the arts; a no-med from an (almost) all-med family. Follow on Twitter.