“In deep learning, the algorithms we use now are versions of the algorithms we were developing in the 1980’s, the 1990’s. People were very optimistic about them, but it turns out they didn’t work too well.”
Geoff Hinton, cognitive psychologist and computer scientist

Deep learning has made a major impact in medical imaging in fields like radiology, pathology, dermatology, ophthalmology, and cardiology; a major drawback, however, has been its reliance on large amounts of biomedical data. This reliance leads to a myriad of challenging issues, including data privacy, potential monopolies, bias, and inconsistent data quality. In addition, deep learning has limitations in capturing spatial hierarchies among dimensions and objects (so that it lacks “intuition”) as well as in explainability (so that it falls short of “transparency”).

Just as a child does not need to see hundreds of thousands of pictures of an object to know what that object should look like, deep learning and AI will eventually need the same capability. It has been 60 years since Frank Rosenblatt published his work on the precursor of the neural network, the perceptron, so it is time for cognitive elements in learning (such as generative adversarial networks, transfer learning, temporal convolutional networks, recursive cortical networks, and one-shot and even zero-shot learning) to converge and push the envelope of artificial intelligence to its fullest, achieving the faster adaptation needed in biomedicine.
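The one-shot learning idea invoked by the child analogy above can be sketched very simply: given a single labeled example per class, classify a new input by nearest-neighbor matching in an embedding space. The toy vectors and the class labels below are purely illustrative stand-ins for the outputs of a pretrained encoder, not any particular published method.

```python
import math

def cosine(u, v):
    # cosine similarity between two embedding vectors
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def one_shot_classify(query, support):
    # support maps each label to ONE embedding (a single example per class);
    # the query is assigned the label of the most similar support example
    return max(support, key=lambda label: cosine(query, support[label]))

# toy embeddings standing in for a pretrained encoder's outputs
support = {"nodule": [0.9, 0.1, 0.0], "cyst": [0.1, 0.9, 0.2]}
print(one_shot_classify([0.8, 0.2, 0.1], support))  # → nodule
```

In practice the embeddings would come from a network trained on a related task (transfer learning), which is precisely how these cognitive elements complement one another.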