“If you work as a radiologist you’re like the coyote that’s already over the edge of the cliff but hasn’t looked down” and “Deep learning is going to be able to do everything” 

Geoff Hinton, AI expert

The article for this week is on deep learning, from the science and technology magazine Nautilus. When Geoff Hinton (co-winner of the 2018 Turing Award for his work on AI) made the first of the bold statements above in 2016, radiologists disagreed with him, simply because they know their work involves far more tasks and decisions than interpreting medical images alone.

As AI in medical image interpretation matures, its value lies more in complementing the work of radiologists (and other image-focused subspecialists) and reducing their burden than in threatening to replace them. A good example has been the real-time application of deep learning during invasive procedures such as colonoscopy to help rule out malignant disease.

Gary Marcus points out that AI has been overhyped: it has repeatedly fallen short of its promises and expectations. In healthcare, IBM Watson and its promise to help cure cancer was a particularly noteworthy failure. Deep learning has indeed made real contributions, but their size and impact will probably diminish over time.

Marcus also points out that deep learning is particularly good at relatively low-stakes tasks where perfect results are not expected. When the stakes are higher, as with interpreting radiologic images or driving autonomous cars, deep learning needs an additional cognitive ("smarter") element for real-world, real-time application, particularly in outlier cases.

The same limitations and mistakes exist, Marcus claims, in the impressive transformer language models such as GPT-3. Problems with toxic language and misinformation are increasingly documented in these models and related work, and some AI researchers have gone so far as to call these tools "stochastic parrots".

The trustworthy AI that healthcare so desperately needs will require not only much more meaningful data but also richer cognitive architectures, such as symbol manipulation (operating on internal encodings the way classical computers do), that can put AI on a different plane of capability and understanding for healthcare solutions.
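To make the symbol-manipulation idea concrete, here is a toy Python sketch of the kind of hybrid ("neurosymbolic") setup Marcus advocates: a learned component emits symbolic findings, and an explicit rule layer applies domain knowledge on top. Every function name, label, and threshold below is hypothetical, purely for illustration, not a real clinical system.

```python
# Toy sketch of a hybrid (neurosymbolic) pipeline: statistical perception
# feeding an explicit, inspectable symbolic rule layer.
# All labels, names, and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class Finding:
    label: str          # symbolic label emitted by the perception model
    confidence: float   # model's estimated probability for that label

def neural_perception(image) -> list[Finding]:
    """Stand-in for a deep learning model that maps raw pixels to
    symbolic findings. Returns canned output for illustration."""
    return [Finding("lung_nodule", 0.92), Finding("pleural_effusion", 0.40)]

def symbolic_triage(findings: list[Finding], prior_malignancy: bool) -> str:
    """Explicit rules over the model's symbolic output. Unlike network
    weights, these constraints can be read, audited, and edited."""
    for f in findings:
        if f.confidence < 0.50:
            continue  # too uncertain to act on alone
        if f.label == "lung_nodule" and (f.confidence > 0.90 or prior_malignancy):
            return "flag for urgent radiologist review"
    return "routine reading queue"

if __name__ == "__main__":
    findings = neural_perception(image=None)
    print(symbolic_triage(findings, prior_malignancy=True))
```

The point is not these toy rules themselves but the division of labor: the statistical layer handles perception, while the symbolic layer carries knowledge that can be inspected and corrected, something end-to-end deep learning alone does not offer.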

In short, I agree with Marcus that we need more "intelligent" AI that can handle new and outlier situations, and that this cannot come from ever more data and deep learning alone.

Read the full paper here: https://nautil.us/deep-learning-is-hitting-a-wall-14467/