“A ship in a harbor is safe, but that is not what ships are for.”

Grace Hopper, U.S. Navy rear admiral and computer scientist

I have the very special privilege of talking with many clinicians at a myriad of meetings (AIMed Clinician Series and Global Summits, review courses of the American Board of AI in Medicine, the Medical Intelligence Society, etc.) about how clinicians can become more involved in AI in medicine in the future. These clinicians range from premedical students to physician executives. I have three suggestions for clinicians who want to leave their comfort zone of clinical medicine and enter the less familiar domain of AI in healthcare and clinical medicine:

1. Learn about AI concepts and review related domains as your first step

There is an understandable yearning to rush to learn the Python or R programming languages so as to feel actively engaged in this new domain. It takes a few hundred (perhaps a few thousand) hours to become proficient at programming at a high level for healthcare. One could argue that learning AI concepts (such as the types of deep learning, natural language processing, and cognitive computing), as well as the ethics and limitations of AI, is even more important than learning to program. In addition, clinicians can review related fields such as health informatics and biomedical statistics, which are excellent foundational knowledge domains for AI.

2. Stay in clinical medicine and learn the many nuances of medicine

There are too many projects and ideas that have too little clinical relevance and/or impact, and this chasm is very costly. Silicon Valley AI in healthcare companies employ a number of people who have never spent a single day in a healthcare setting or hospital. Experience in healthcare as a clinician is a very valuable asset for AI in healthcare, and without this perspective, projects and interpretations may miss the mark. It takes a long time to develop this special experience and wisdom in clinical medicine and healthcare, and it is easy to underestimate how long.

3. Study how humans and clinicians think and make decisions

In some ways, even more important than learning about artificial intelligence in healthcare is developing an innate understanding of how humans (and, in particular, clinicians) think. Humans have myriad biases and heuristics that render them vulnerable to making errors. One example is confirmation bias, which many seasoned clinicians unknowingly harbor. In addition, the advent of AI can create automation bias, or overreliance on automated decisions. Learning about decision analysis is particularly useful, as current ML/AI projects will need richer cognitive architectures in the future.