Tushar Ranchod MD, a retinal surgeon at Bay Area Retina Associates and Founder and Chief Medical Officer at BroadSpot Imaging, discusses the future of diagnostics and AI across ophthalmology and beyond
Tushar M. Ranchod, M.D., is the Founder, Chairman and Chief Medical Officer of BroadSpot Imaging Corporation, as well as a partner and vitreoretinal surgeon at Bay Area Retina Associates. Under his leadership, BroadSpot invented a retinal imaging platform that combines portability, performance and scalability in order to leverage machine learning.
Over the last twenty years, Dr. Ranchod has developed medical devices with supporting patents and also worked for a variety of early-stage technology companies, bringing his expertise in clinical care, as well as the business and logistics of medicine, to his efforts to transform medicine using retinal machine learning biomarkers.
He is an assistant editor for RETINA, the leading scientific journal in the field of vitreoretinal surgery and disease, and also the chair of the annual Business of Retina Meeting, held by the American Society of Retina Specialists.
What did you want to be when you were a child?
My father is a pathologist and I always admired his approach to medicine, particularly the intellectual side of patient care. I loved the idea of solving a meaningful problem that not everyone could solve. Also, I grew up in Palo Alto breathing the air of Silicon Valley. I grew up thinking it was normal to follow multiple professional threads at once, and well before I went into medicine, I started to see medicine as a platform that interconnects with other fields such as engineering, law and art. So as a child, I think I knew that I was headed for medicine but unsure of how the branches would grow from the tree.
Could your career have taken a different direction?
It almost did! Right after high school, I worked for a tiny startup company which grew, and several years later that company was acquired by Yahoo and became Yahoo! Mail. I went to college and continued to work for that company during my vacations, but more importantly the CEO of that company mentored me and inspired me to think about starting a company one day. That spark never left me. After undergrad at Stanford, I worked for another startup. I built the GUI for the prototype, and when the founders raised funding, a friend and I became the first employees. That company had a great idea at its inception but failed miserably for a variety of reasons. It actually turned out to be a valuable education. I saw what to avoid, and that only made me more excited about entrepreneurship. Then when I was in medical school, I worked as a consultant and built the early GUI for an education startup. While my contributions probably had little to do with that company’s subsequent success, the experience gave me confidence in tackling complex product-user interactions.
Each of these experiences made me pause and think about whether to continue on my path to medicine or whether to jump sideways into some sort of venture. Ultimately, I continued with medicine then ophthalmology and then retinal surgery, but there are several points at which my career could easily have changed course. Years later when I started a company, I used everything I learned along the way as I formulated a business plan, raised capital, created IP, prototyped hardware designs, and mocked up software interfaces to get the company going.
Where do you see the next big advances in AI in healthcare?
I believe the next big advances will come from revising how we think of diagnostic instruments. AI has enormous potential for enhancing clinical diagnosis, and the time has come for us to adopt a methodical approach to the possibilities of deep learning.
Look at the application of AI to a common diagnostic tool, the electrocardiogram. The first step was to improve upon the current crude automated algorithms that are built into many ECG machines by using AI to provide more human-like interpretations. This has been done, and it’s a nice little incremental improvement. The next step was to use the same ECG machine and feed its relatively unmodified output to a neural network with a rich clinical data set to see what else it can identify. Now that’s been done, AI can look at an ECG and identify gender as well as clinical conditions that humans cannot identify. This is a larger incremental improvement, and it can be done for cross-sectional relationships as well as prediction of disease progression.
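The second step described above — feeding relatively unmodified waveform output, together with clinical labels, into a neural network — can be sketched in miniature. This is purely an illustration, not any published ECG model: the "waveforms" below are synthetic sinusoids, the labels are invented, and a real system would use deep architectures trained on large clinical data sets. The sketch trains a tiny one-hidden-layer network to map a raw signal to a binary label.

```python
import numpy as np

# Illustrative sketch only: a tiny one-hidden-layer network mapping raw
# ECG-like waveforms to a binary clinical label. The synthetic signals and
# labels here are placeholders for real waveform data and clinical outcomes.
rng = np.random.default_rng(0)
n_samples, n_timesteps, n_hidden = 200, 120, 16

# Synthetic "waveforms": noisy sinusoids whose frequency encodes the label.
labels = rng.integers(0, 2, n_samples)
t = np.linspace(0, 1, n_timesteps)
freq = np.where(labels == 1, 8.0, 5.0)[:, None]
X = np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal((n_samples, n_timesteps))

# One hidden layer (tanh) with a sigmoid output, trained by gradient descent.
W1 = 0.1 * rng.standard_normal((n_timesteps, n_hidden))
b1 = np.zeros(n_hidden)
W2 = 0.1 * rng.standard_normal(n_hidden)
b2 = 0.0

def forward(X):
    h = np.tanh(X @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))  # probability of label 1
    return h, p

for _ in range(300):
    h, p = forward(X)
    grad_out = (p - labels) / n_samples  # d(cross-entropy)/d(logit)
    W2 -= 0.5 * (h.T @ grad_out)
    b2 -= 0.5 * grad_out.sum()
    grad_h = np.outer(grad_out, W2) * (1 - h**2)
    W1 -= 0.5 * (X.T @ grad_h)
    b1 -= 0.5 * grad_h.sum(axis=0)

_, p = forward(X)
accuracy = float(((p > 0.5) == labels).mean())
print(f"training accuracy: {accuracy:.2f}")
```

The point of the exercise is the workflow, not the model: the network is handed the signal largely unmodified and learns whatever separates the labeled groups, which is exactly why such systems can pick up relationships humans don't look for.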
Next, we can incrementally modify the ECG machine to acquire data that humans currently have no use for, but which AI can mine for information not previously available. In the ECG example, this could mean attaching leads at hundreds of points instead of twelve and feeding the results, with a rich clinical data set, into a neural network to find out what relationships emerge. Or perhaps this means casting a wider net by collecting ECG data from non-cardiac patients who have other serious metabolic and hemodynamic problems.
The step beyond is really exciting, and that is to develop diagnostics that have no current basis in human interpretation. This means generating hardware design inputs that are optimized for machine learning with little regard for what humans can detect today. In the ECG example, perhaps we could reconceptualize the human body as a highly integrated neuromuscular electrical system that deep learning can tap into, and identify several bets to place. We would collaborate with colleagues in physics and biochemistry to identify several entirely novel ways to evaluate the dynamic human electrical field and pursue those which are most likely to be repeatable, reliable and accessible.
Further along, we can develop multiple diagnostics that inform each other and thereby triangulate to improve accuracy. There are obvious downsides to such inter-dependency, but also new possibilities.
There are many ways to think about this evolution of diagnostics with AI, and right now there is value in having conversations about the concepts that map to our research on the ground.
The future diagnostics sound complex. What are the barriers to pursuing them?
Many current machine learning applications use unmodified diagnostic devices with labels and data sets rich in clinical and demographic variables from other sources. Much of this work can be done retrospectively, and doing it prospectively isn’t prohibitive because the diagnostics are often already deployed at scale.
The future diagnostics are challenging because you have to prospectively collect new diagnostic inputs along with the other variables. The cost is higher and the complexity is greater in just about every way you can imagine. In many ways the future applications of diagnostic instruments are more like pharmaceutical development than today’s diagnostics.
In pharmaceutical development, the hypotheses are often complex and the time and capital requirements are daunting. The work is approached in sequential phases, with larger clinical trials at each phase. Developing a pharmaceutical requires taking on substantial risk, but the rewards can be enormous and that’s why investors place those bets. By contrast, the development of a traditional diagnostic device is simpler. The risk is lower and the rewards are historically lower.
The future AI-enabled diagnostics are more like pharmaceutical development. Prospectively collecting data in a high-volume clinical trial or series of trials with a modified diagnostic will take time and capital separate from the actual hardware development. Traditional diagnostic device investors may not be comfortable with this additional risk. On the other end, AI investors may shy away from the risk of hardware modification. The pursuit of future AI-enabled diagnostics will require the community of researchers, entrepreneurs and investors to talk in a common language about this category of device that combines novel hardware and more complex applications of deep learning.
What excites you most about the future of AI in ophthalmology?
I’m really excited about new biomarkers that will change the way we make diagnoses in ophthalmology, but these new biomarkers will not be limited to ophthalmology. They will reshape the approach to diagnosis in fields such as cardiology and nephrology. For example, you can take the same process I just described for the electrocardiogram and apply it to fundus photography.
First, diabetic retinopathy screening with fundus photography was automated. Next, clinical studies showed that fundus photos can reveal things like gender and smoking status that humans can’t see, along with cardiovascular risk. That expanded the playing field enormously – everyone started to wonder what else deep learning could identify that humans can’t see in an image.
The step after that – the step in which we collect entirely new data sets prospectively – is just starting to take place in ophthalmology. Fundus photography and optical coherence tomography are being used to collect data in kidney failure, with some promising early results. I envision a day when deep learning and fundus photography can direct the management of heart failure and kidney failure in the ED or the ICU or even in the eye clinic. That day isn’t far away.
How do you spend time out from ophthalmology?
I create to relax. Sometimes I write poetry. Sometimes I make a surprise card to put in my daughter’s lunch box the next morning. Sometimes I push new projects in my clinical practice. It doesn’t really matter as long as it’s something that’s never been created before. And sometimes I just zone out and do nonsense of course, to let the wheels spin in the background. Everybody needs a little time to let the mind digest its lunch quietly in the back room.
Finally, what would you tell your younger self?
I would advise my younger self to study mechanical engineering as an undergraduate, instead of economics. Without studying any formal engineering discipline, I think I’ve improved my work by incorporating engineering processes and principles. I love the phrase “engineering is life” because every complexity can be thoughtfully unpacked and addressed using logical processes, and that approach energizes me. I also see that many of the physicians pulling medicine into the future are engineers. I imagine that if I’d studied a core engineering discipline before going into medicine, I could do even more with my ideas and energy today.
Dr. Tushar Ranchod will be speaking at AIMed’s virtual multi-track CME-accredited event, ‘Imaging’ on 29th and 30th June.
View the full, exciting two-day agenda and book here.