The mention of GANs, or Generative Adversarial Networks, may bring some of us to think of the first algorithm-generated portrait ever auctioned at Christie’s. Others may think of the notorious deepfakes: authentic-looking digital content that turns out to be fabricated. GANs were the brainchild of Ian Goodfellow, the present Director of Machine Learning at Apple, who was challenged by his colleagues back in 2014 to make a computer that could create photos on its own.

The idea was not novel; researchers were already working on the topic at the time. They had tried to design faces using neural networks but met with little success, as the results tended to be blurry or came with fundamental defects like missing ears. What Goodfellow proposed was to have two neural networks work against each other. Both would be fed the same training data, but each would be given a separate task.

The first network (the generator) is responsible for producing artificial outputs that imitate the training examples. The second network (the discriminator) then decides whether those outputs are real by comparing them with the training examples. Whenever the discriminator rejects an output produced by the generator, the generator goes back and tries again. The process repeats until the discriminator can no longer tell whether an output is a genuine training example or a fabricated one.
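To make that adversarial loop concrete, here is a minimal sketch in PyTorch. It assumes simple fully connected networks over flattened images; the architecture, hyperparameters, and data are illustrative placeholders, not any published model.

```python
# Minimal GAN training loop (illustrative sketch only).
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28  # e.g., flattened 28x28 grayscale images

generator = nn.Sequential(           # maps random noise to a fake image
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
discriminator = nn.Sequential(       # maps an image to a "looks real" probability
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images):
    """One adversarial step; real_images is a [batch, img_dim] tensor scaled to [-1, 1]."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Discriminator: separate real training examples from generated ones.
    fake_images = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = bce(discriminator(real_images), real_labels) + \
             bce(discriminator(fake_images), fake_labels)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Generator: produce images the discriminator classifies as real.
    g_loss = bce(discriminator(generator(torch.randn(batch, latent_dim))), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

Repeating this step over many batches is the back-and-forth described above: the discriminator keeps rejecting imitations, and the generator keeps improving until the two become hard to tell apart.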

From the creation of synthetic data to drug discovery

The inception of GANs made Goodfellow a “celebrity”, often referred to as the “Father of GANs”. GANs also became one of the most eagerly anticipated inventions in the field of artificial intelligence (AI). In medicine, GANs are chiefly associated with the creation of synthetic healthcare data, which lets AI projects work around data scarcity and accessibility challenges.

For example, in 2018, technology company Nvidia partnered with the Mayo Clinic and the MGH & BWH Center for Clinical Data Science to construct abnormal MRI (Magnetic Resonance Imaging) scans with GANs that could be used to train deep learning models. Researchers involved in the project believed that GANs would not only produce low-cost, diverse data free of privacy concerns that could be shared across institutions, but would also let them change the size or location of a tumor to come up with millions of different combinations that would otherwise be hard to obtain from real images.
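The project’s actual architecture is not described here, but the underlying idea of varying a tumor’s size or location can be sketched as a conditional generator: attributes are concatenated with the noise vector so they can be dialled independently at sampling time. Everything below (the dimensions, the `sample` helper, the conditioning vector) is hypothetical, not the Nvidia/Mayo model.

```python
# Sketch of a conditional generator: tumor attributes are extra inputs alongside noise.
import torch
import torch.nn as nn

latent_dim, cond_dim, img_dim = 128, 3, 64 * 64  # cond = [size, x, y], all placeholders

cond_generator = nn.Sequential(
    nn.Linear(latent_dim + cond_dim, 512), nn.ReLU(),
    nn.Linear(512, img_dim), nn.Tanh(),
)

def sample(size, x, y, n=1):
    """Generate n synthetic 64x64 slices with the requested (hypothetical) tumor attributes."""
    cond = torch.tensor([[size, x, y]] * n, dtype=torch.float32)
    noise = torch.randn(n, latent_dim)
    return cond_generator(torch.cat([noise, cond], dim=1)).view(n, 64, 64)

# Varying the conditioning vector yields many combinations from one trained model:
scans = [sample(size=s, x=0.3, y=0.7) for s in (0.1, 0.3, 0.5)]
```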

Around the same time, Insilico Medicine, an American biotechnology company specializing in drug discovery and aging research, combined GANs with reinforcement learning to build an Adversarial Threshold Neural Computer (ATNC) model for the design of de novo organic molecules with the required pharmacological properties, facilitating the drug discovery process as a result.

Researchers also demonstrated at the 2018 Institute of Electrical and Electronics Engineers (IEEE) International Conference on Healthcare Informatics (ICHI) that an AI model trained on GAN-generated synthetic data for tissue recognition can reach an accuracy (98.83%) equal to, if not better than, that of human experts.

Others chose to use GANs as a data augmentation tool. A group of researchers from the National Institutes of Health Clinical Center used a GAN to convert contrast CT (computerized tomography) images into non-contrast ones. They then trained a U-Net (a convolutional network for biomedical image segmentation) once on the authentic CT images alone and once on a combination of authentic and GAN-generated images, and found that the latter performed better on various CT segmentation tasks.
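A rough sketch of that augmentation strategy, assuming PyTorch, looks like the following: real and GAN-generated slices are simply pooled into one training set for the segmentation network. The random tensors and the tiny stand-in network are placeholders, not the NIH group’s actual U-Net pipeline.

```python
# Augmentation sketch: train a segmentation network on real + GAN-generated slices.
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Placeholder (image, mask) pairs standing in for real and GAN-generated CT slices.
real_ds = TensorDataset(torch.randn(100, 1, 128, 128),
                        torch.randint(0, 2, (100, 1, 128, 128)).float())
synthetic_ds = TensorDataset(torch.randn(100, 1, 128, 128),
                             torch.randint(0, 2, (100, 1, 128, 128)).float())

combined = ConcatDataset([real_ds, synthetic_ds])   # authentic + GAN-generated data
loader = DataLoader(combined, batch_size=8, shuffle=True)

segmentation_net = torch.nn.Sequential(             # stand-in for a real U-Net
    torch.nn.Conv2d(1, 16, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(segmentation_net.parameters(), lr=1e-3)
loss_fn = torch.nn.BCEWithLogitsLoss()

for images, masks in loader:                        # one illustrative epoch
    optimizer.zero_grad()
    loss = loss_fn(segmentation_net(images), masks)
    loss.backward()
    optimizer.step()
```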

Challenges to be resolved in this new decade

Like their AI counterparts, GANs have their own unique set of challenges. First of all, both the generator and the discriminator may forget strategies they employed earlier during training, which prevents them from improving over time. The networks may also experience “mode collapse”, where the generator learns only a subset of the training data and repeatedly creates the same kind of output (e.g., one round chair instead of chairs of different shapes and sizes).

On top of that, one network may progress faster than the other (i.e., overpower it), to the point where productive learning can no longer take place. Indeed, GANs differ from conventional deep learning models: they do more than recognition and classification, genuinely creating an alternative reality based on reality. Nevertheless, GANs can be rather complex and memory-hungry, putting added demands on hardware and computational power.

As such, the medical and healthcare community faces a dilemma: should they spend time and effort training GANs to produce synthetic data, or should they invest in collecting and cleaning real-world clinical data? These may be challenges that resolve themselves in this new decade as technology progresses, especially with the emergence of quantum computing. At the end of the day, however, the question remains: how robust can our AI become, given the hardware we have now?

*

Author Bio

Hazel Tang is a science writer with a data background and an interest in current affairs, culture, and the arts; a no-med from an (almost) all-med family. Follow on Twitter.