At the beginning of the month, researchers from the Massachusetts Institute of Technology (MIT) unveiled a technique for assessing how robust convolutional neural networks (CNNs) are across different tasks. It works by detecting when the networks make mistakes they should not make.

In general, CNNs are designed to process and categorize images for computer vision and many other tasks. Unlike humans, a CNN's perception or classification of an object can be heavily influenced by the environment or by the slightest modification to the input. Inputs altered in this way are known as "adversarial examples", and they have helped researchers understand how vulnerable artificial intelligence (AI) can be to inconsistent and unexpected inputs in the real world.

For example, a 2018 study found that placing black and white stickers on a stop sign could fool a driverless car into misclassifying the sign. The car would then fail to stop as required, potentially causing serious harm. Until now, however, there has been no practical way to evaluate how resilient CNNs are to adversarial examples: the evaluation methods that do exist do not scale to the complexity of modern CNNs.

The new method 

The method developed by the MIT researchers will therefore either find an adversarial example or guarantee that all allowed inputs are correctly classified. Fundamentally, it checks every possible alteration to each pixel of an image. If the CNN assigns the correct classification (e.g., "bridge") to every altered image, no adversarial example exists for that particular image.
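
To make the property being checked concrete, here is a minimal sketch in Python, assuming a toy NumPy classifier and a per-pixel brightness budget, both invented for illustration. Note that randomly sampling alterations, as this sketch does, can only stumble on adversarial examples; it cannot prove that none exist, which is exactly the guarantee the MIT method aims to provide.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy classifier: a single linear layer over a flattened 28x28 image.
W = rng.normal(size=(10, 784))   # 10 digit classes, MNIST-sized inputs
b = np.zeros(10)

def predict(x):
    """Return the predicted class for a flattened image x with pixels in [0, 1]."""
    return int(np.argmax(W @ x + b))

def find_adversarial(x, true_label, eps, tries=10_000):
    """Look for an altered image, within +/- eps per pixel, that changes the label.

    Returns an adversarial example if one is found, else None. A result of None
    does NOT certify robustness -- only an exhaustive method (such as the
    mixed-integer programming sketch further below) can rule them out.
    """
    for _ in range(tries):
        delta = rng.uniform(-eps, eps, size=x.shape)   # random brightening/dimming
        x_alt = np.clip(x + delta, 0.0, 1.0)           # keep it a valid image
        if predict(x_alt) != true_label:
            return x_alt
    return None

x0 = rng.uniform(0.0, 1.0, size=784)                   # stand-in for a real image
adv = find_adversarial(x0, predict(x0), eps=0.05)
print("adversarial example found" if adv is not None else "none found (not a proof)")
```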

The technique behind this method is "mixed-integer programming". The researchers set a limit on how much every pixel in each input image may be brightened or dimmed, so that any altered image within that limit still resembles the original input image; this keeps the search focused on the kind of subtle change that constitutes an adversarial example. Mixed-integer programming is then used to find the smallest change to the pixels that causes the CNN to make a mistake.
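
For readers who want to see roughly what such an optimization looks like, below is a minimal sketch using the open-source PuLP modeling library and its bundled CBC solver. The tiny two-pixel "image", the hand-picked network weights, and the big-M encoding of the ReLU activations are all assumptions made for illustration; this is not the researchers' actual formulation, which handles full-sized CNNs.

```python
import pulp

# Hypothetical 2-pixel input, 2 hidden ReLU units, 2 output classes.
W1 = [[1.0, -1.0], [0.5, 1.0]];  b1 = [0.0, -0.2]
W2 = [[1.0, -1.0], [-1.0, 1.0]]; b2 = [0.1, 0.0]
x0 = [0.8, 0.2]              # original "image"; this toy network labels it class 0
true_cls, other_cls = 0, 1

prob = pulp.LpProblem("smallest_misclassifying_change", pulp.LpMinimize)

eps = pulp.LpVariable("eps", lowBound=0)     # how far any pixel may move
prob += 1.0 * eps                            # objective: the smallest such change

# Perturbed pixels stay valid and within +/- eps of the original.
x = [pulp.LpVariable(f"x{i}", lowBound=0, upBound=1) for i in range(2)]
for i in range(2):
    prob += x[i] - x0[i] <= eps
    prob += x0[i] - x[i] <= eps

# Encode each ReLU h = max(0, a) exactly, using a binary switch and big-M bounds.
h, M = [], 10.0
for j in range(2):
    a = pulp.lpSum(W1[j][i] * x[i] for i in range(2)) + b1[j]
    hj = pulp.LpVariable(f"h{j}", lowBound=0)
    dj = pulp.LpVariable(f"d{j}", cat=pulp.LpBinary)
    prob += hj >= a                  # h is at least the pre-activation
    prob += hj <= a + M * (1 - dj)   # if the unit is "on", h equals a
    prob += hj <= M * dj             # if the unit is "off", h equals 0
    h.append(hj)

logits = [pulp.lpSum(W2[k][j] * h[j] for j in range(2)) + b2[k] for k in range(2)]

# Require a mistake: the wrong class must score at least as high as the true one.
prob += logits[other_cls] >= logits[true_cls] + 1e-4

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print("smallest per-pixel change that flips the decision:", pulp.value(eps))
```

In this sketch, the minimized change would be compared against the brightness limit set beforehand: if the smallest misclassifying change exceeds that limit, or none exists at all, the image has no adversarial example within the limit, mirroring the guarantee described above.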

By doing so, the robustness of various CNNs can now be determined. One of the CNNs tested was designed to classify handwritten digits from the MNIST dataset, which contains 60,000 training images and 10,000 test images. The researchers found that around 4% of the inputs could be slightly perturbed to produce adversarial examples that lead to the wrong classification. The findings were presented at the International Conference on Learning Representations (ICLR), which took place at the turn of the month.
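
As a rough illustration of how a figure like that 4% could be tallied, the short sketch below counts the fraction of test inputs for which some per-image search (here a hypothetical find_adversarial routine, such as either sketch above) turns up an adversarial example within the budget.

```python
def vulnerable_fraction(images, labels, eps, find_adversarial):
    """Fraction of inputs for which a change within +/- eps per pixel flips the label."""
    flipped = sum(
        1 for x, y in zip(images, labels)
        if find_adversarial(x, y, eps) is not None
    )
    return flipped / len(images)

# For the MNIST classifier described above, the researchers report a value of
# roughly 0.04 (4%) over the inputs they examined.
```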

Algorithm surveillance 

Algorithm surveillance is probably one of the most important factors keeping AI from reaching its full potential in medicine and healthcare. It is already a real struggle to ensure that an algorithm will work once it is deployed, and securing its safety and functionality over time will further strain the resources of medical institutions.

In a discussion paper released this April, the US Food and Drug Administration (FDA) proposed a risk-based approach to assuring algorithm safety. Manufacturers are held responsible for the development of the algorithm. They are also expected to perform risk assessments and to verify that risks remain reasonably mitigated once the algorithm has received its approval.

The paper remains open for discussion until 3 June. Hopefully, the eventual responsibility for algorithm surveillance will extend beyond the manufacturers. At the same time, more guidance should follow on proper data use and on the robustness of AI algorithms.

Author

Andrew Johnson