Anthony Chang looks forward to the next paradigm in AI in medicine: Neuroscience-inspired integrated cognitive architecture

 

Artificial intelligence has made robust advances in the past decade, especially in the sub-domains of deep learning (convolutional neural networks in medical image interpretation), reinforcement learning (deep reinforcement learning in defeating the human Go champion), and natural language processing (generative pre-trained transformers such as GPT-3), but further advances may be thwarted by significant limitations of these methodologies.

Deep learning has been duly impressive in narrow tasks such as pattern recognition and image classification, but higher-level challenges such as abstract reasoning, learning flexibility, and knowledge transfer remain unsolved in the current state of artificial intelligence. In other words, deep learning can answer the “what” but too often not the “how” or the “why”.

Even the leading authorities in the deep learning sector readily acknowledge that deep learning has significant limitations: the requirement for large amounts of human-curated data or simulations for training, as well as a limited ability to generalize to new tasks or settings. There are, however, ongoing discussions of brain-inspired modifications as part of the next stage of deep learning’s development; this strategy is exemplified by the work of Geoff Hinton and his concept of the capsule network, designed to better capture spatial and rotational relationships in images.

There are many interesting differences between a brain and a machine: the brain can learn from very little data or very few examples and is good at creating solutions to problems that are not explicitly outlined (deploying attention, metacognition, concept formation, and memory), while machines cannot match these capabilities of the human brain but are able to perform at a high level in other tasks, such as perception (see figure below).

On the other hand, the brain is prone to many biases and heuristics, while the machine can be more objective owing to its von Neumann computer architecture. A continually deepening understanding of neuroscience, therefore, is absolutely essential for the future development and innovation of artificial intelligence, especially artificial general intelligence (AGI).

 

Integrated Cognitive Architecture

According to Joseph Voss, there have so far been two waves of artificial intelligence: the first wave of traditional, rule-based programming, followed by the second wave of neural networks and deep learning. The third wave of cognition-based AI, with cognitive architectures that embody capabilities of perception, attention, learning, action selection, memory, reasoning, and others, will be by far the most difficult and challenging AI wave, but also the one with the most promise and reward (see figure below). It is this third wave that will be even more inspired by neuroscience as we move closer to AGI.

An early neuroscience-based approach to artificial intelligence was Jeff Hawkins’ hierarchical system for storing and applying memory, the cortical learning algorithm. This effort was an early attempt to transition from the statistical learning of neural networks to the contextual adaptation of the cognitive architectures of the future. Understanding not only how these deconstructed modules of the cognitive process function but also how these components interact with one another will provide insight into the complex functions of the human brain.

There are dozens of integrated cognitive architectures being studied; among them are ACT-R and SOAR. Adaptive Control of Thought-Rational (ACT-R) is a unified model of cognitive architecture intended to encompass higher-level human cognition with declarative knowledge (factual information) and procedural knowledge (the use of knowledge to solve problems). SOAR has a similar cognitive architecture, with a central working memory positioned between long-term memories and the environment, as sketched below.
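To make the declarative/procedural distinction concrete, here is a minimal sketch of the production-rule cycle that architectures such as ACT-R and SOAR share. It is not the actual ACT-R or SOAR software; the memory contents, rules, and clinical labels are entirely hypothetical and chosen only for illustration.

# Minimal sketch of an ACT-R/SOAR-style production cycle (hypothetical, illustrative only).
# Declarative memory holds factual "chunks"; procedural memory holds if-then production rules;
# working memory mediates between long-term knowledge and the (simulated) environment.

declarative_memory = {
    ("fever", "high"): "possible_infection",
    ("possible_infection", "confirmed"): "start_antibiotics",
}

procedural_memory = [
    # Each production pairs a condition on working memory with an action that updates it.
    (lambda wm: wm.get("symptom") == "fever" and wm.get("severity") == "high" and "hypothesis" not in wm,
     lambda wm: wm.update({"hypothesis": declarative_memory[("fever", "high")]})),
    (lambda wm: wm.get("hypothesis") == "possible_infection" and wm.get("lab") == "positive" and "action" not in wm,
     lambda wm: wm.update({"action": declarative_memory[("possible_infection", "confirmed")]})),
]

def cognitive_cycle(working_memory, max_cycles=10):
    """Repeatedly match productions against working memory and fire the first one that applies."""
    for _ in range(max_cycles):
        fired = False
        for condition, action in procedural_memory:
            if condition(working_memory):
                action(working_memory)
                fired = True
                break
        if not fired:  # quiescence: no rule matches, so the cycle stops
            break
    return working_memory

# Example: perceptual input from the "environment" enters working memory and is elaborated.
print(cognitive_cycle({"symptom": "fever", "severity": "high", "lab": "positive"}))

Running this toy cycle adds a hypothesis and then an action to working memory, illustrating how procedural rules draw on declarative chunks through a shared working memory rather than through end-to-end statistical learning.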

In addition, dimensionality reduction and feature selection, which enable both high- and low-level abstraction, are necessary components of an integrated cognitive architecture, simplifying the complex problems it faces; a brief illustration follows.
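As a rough illustration of these two simplification steps (using scikit-learn purely for convenience, with random data and hypothetical "sensory" features rather than anything tied to a specific cognitive architecture):

# Minimal sketch: dimensionality reduction (PCA) and feature selection (univariate scoring)
# as simplification steps inside a hypothetical cognitive architecture.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))           # 200 observations, 50 raw "sensory" features
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # toy label driven by only two of the features

# Low-level abstraction: compress the 50 raw inputs into 10 latent components.
X_reduced = PCA(n_components=10).fit_transform(X)

# High-level abstraction: keep only the features most relevant to the current goal.
selector = SelectKBest(score_func=f_classif, k=5).fit(X, y)
X_selected = selector.transform(X)

print(X_reduced.shape)                              # (200, 10)
print(X_selected.shape)                             # (200, 5)
print(np.flatnonzero(selector.get_support()))       # indices of the retained features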

The future of a neuroscience-inspired integrated cognitive architecture holds great promise. One strategy is to incorporate into AI methodologies the way a child learns: with innate knowledge and common sense, and with much less reliance on trial and error or on large amounts of data and experience. Examples of tools that carry this kind of contextual meaning include interaction networks, the neural physics engine, and recursive cortical networks.

Another strategy, neurosymbolic AI, is to develop intelligent agents that embody a myriad of AI tools, such as neural networks alongside knowledge representation and abstract reasoning. In a recent issue of AI Magazine, a Standard Model of the Mind was proposed to researchers in cognitive architectures as a common computational framework across artificial intelligence, cognitive science, neuroscience, and robotics.
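A hedged sketch of the neurosymbolic idea follows: a stand-in "neural" perception module (here just a stubbed scoring function, not a trained network) emits soft labels, and a small symbolic rule base reasons over them. Every function name, scan identifier, threshold, and recommendation below is hypothetical.

# Minimal neurosymbolic sketch (hypothetical): a sub-symbolic perception step produces
# probabilistic facts, and a symbolic rule base performs abstract reasoning over them.

def neural_perception(image_id):
    """Stand-in for a trained neural network: returns label probabilities for an input."""
    fake_outputs = {
        "scan_001": {"mass_present": 0.92, "calcification": 0.15},
        "scan_002": {"mass_present": 0.08, "calcification": 0.71},
    }
    return fake_outputs[image_id]

# Symbolic knowledge: (premise label, probability threshold, inferred recommendation)
rules = [
    ("mass_present", 0.9, "recommend_biopsy"),
    ("calcification", 0.7, "recommend_followup_imaging"),
]

def symbolic_reasoner(percepts):
    """Apply the rule base to the soft outputs of the perception module."""
    conclusions = []
    for label, threshold, conclusion in rules:
        if percepts.get(label, 0.0) >= threshold:
            conclusions.append(conclusion)
    return conclusions or ["no_action"]

for scan in ("scan_001", "scan_002"):
    print(scan, symbolic_reasoner(neural_perception(scan)))

The design point is the division of labor: the sub-symbolic component handles perception and uncertainty, while the symbolic layer supplies explicit, inspectable reasoning, which is what a common computational framework across these fields would need to accommodate.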

Artificial intelligence has made robust advances in the past decade, but further progress will require a major paradigm shift toward a better understanding of the neurosciences. In the upcoming era of integrated cognitive architecture, we could see a convergence of the entire portfolio of AI tools that eventually transcends what the human brain can do, seemingly without effort.

Just as humans in the past realized the yearning to fly by observing birds in flight and devising the principles of aerodynamics, we can configure an increasingly capable machine intelligence portfolio by delineating and adopting the complex and wondrous cognitive functions of the human brain. This integrated cognitive architecture of brain-machine synergy will propel the brilliant innovations of artificial intelligence to come.

 
