Alexis is director of content at AIMed, with responsibility for the research, development and delivery of products across events, digital and publishing. A highly experienced events executive with a career focus on the intersection between healthcare and technology, he is also a school governor leading on teaching, learning, and quality of education.
Dr. Michael Abramoff, legendary ophthalmologist, computer scientist and entrepreneur, talks exclusively about his remarkable journey, the future of AI and the pain of being called ‘The Retinator’…
Born in the Netherlands, Dr. Michael D. Abramoff, MD, PhD, is an internationally renowned neuroscientist, fellowship-trained retina specialist, entrepreneur and computer engineer. He is Founder and Executive Chairman of Digital Diagnostics (formerly IDx), the first company ever to receive FDA clearance for an autonomous AI diagnostic system. He is also the Watzke Professor of Ophthalmology and Visual Sciences at the University of Iowa. He brought together the intellectual property, including 16 patents on AI algorithms and sensors, the autonomous AI concept and its liability implications, and its bioethical and health economics foundations, inspiring the vision that brought Digital Diagnostics to where it is today.
As a physician-scientist, Dr. Abramoff continues to treat patients with retinal disease and trains medical students, residents, and fellows, as well as engineering graduate students at the University of Iowa. Dr. Abramoff has published over 300 peer-reviewed journal articles on AI, image analysis, and retinal diseases, which have been cited over 32,000 times, as well as many book chapters. He is the inventor on 17 US patents and 5 patent applications on AI, medical imaging and image analysis, and was one of the original developers of ImageJ.
Firstly, can you give us a flavour of what you’ll be talking about at AIMed’s upcoming Imaging virtual event?
I don’t want to give too much away but I’ll be touching on the fact that if you want AI to benefit patients, you need to address all the concerns people have – the performance and the safety, on the clinical side as well as the ethical side. I like to compare it to the Wright brothers. They came up with a plane and engineered it, but then let others, like the aviation authorities, develop the safety rules and the airlines develop the business model. That doesn’t work with healthcare and AI. You’re essentially building the plane while you’re flying it, but you also need to develop the business model and the regulatory, ethical and safety aspects at the same time. You cannot do it sequentially, one at a time. That’s the argument I will be making. It should be an interesting and thought-provoking session.
Did you always have ambitions to get into this field?
From an early age, I always wanted to become a rocket engineer. So here’s a coincidence. Just yesterday, my middle son graduated from the Air Force Academy and will be joining the space force as an aeronautical engineer. So as of yesterday, my middle son achieved my childhood ambition of becoming a rocket engineer!
You’ve clearly been an influence on him. Who’s been the biggest influence on you?
My parents had a great influence on me. They were focused on business – no-one had a medical degree in my family. But they always instilled in me the importance of doing the right thing and creating a legacy. That’s always stuck with me. I never forget that technology is a tool and using it to make people’s lives better is what it’s all about. That’s the sort of legacy worth creating.
I’ll add this. In the past, my family greatly suffered from authoritarian regimes. That’s all I’ll say on that – but that’s ultimately why we ended up here in the US. I came here 17 years ago to do something that’s good for people. It was lucky that I had some insight into technology, was able to work it out and found people who supported that. I have my family to thank for that. They’ve always shown me the value of great persistence, and maybe I’ve got a bit of obstinacy, but that’s a really useful trait to have. So I can’t repay the good things people did for me in the past but I can do it now for other people. I like to think I’m paying it forward.
How did you end up getting into medicine?
When I was growing up in the Netherlands I was always interested in engineering. But as I got older, medicine began to appeal. Later still, I became interested in the patient and human aspect. This interest never left me and so I tried to find a way of combining the skills and learning of engineering with specialised medicine. I found large gaps in medicine, gaps we all recognise, that I thought technology could bring hugely effective solutions to.
I studied AI during my residency and later began applying it directly to patients who couldn’t get access to diabetic eye exams. I thought this is a problem that AI and neural networks should be able to solve. So let’s do it! I started doing research from there, ultimately founding what is now Digital Diagnostics and the rest, as they say, is history. Years later it’s so exciting that we’re now able to practise medicine with AI, because ultimately I see it as a way to solve the big problems we have in healthcare, like health disparities, and to give everyone access to high quality medicine.
What excites you most about the future of AI in ophthalmology?
I’m excited that the future is already here! Right now we’re doing it. We have a computer making a medical diagnosis for a diabetic eye exam. It’s not such a big problem in the UK, where there’s a large NHS screening programme, but in many countries, including the Netherlands and the US, it’s a giant problem, with people not having access to high quality diabetic eye exams. It’s really about solving that across the country. But it’s happening and there’s a lot more to come. So it’s really exciting to not be talking about being able to do something in the future but to actually be doing it today.
Are you concerned about the ethical side of using AI in medicine?
I’m very concerned about that. I worked really hard on FDA approval – and I’m proud that in 2018 we received the first ever FDA approval for an autonomous AI system that detects diabetic retinopathy in primary care. Essentially the healthcare system is a very complex system – you have a lot of stakeholders, patients, physicians, the government, the regulators – they all need to come together to make these changes in healthcare. If you don’t have agreement, it doesn’t work because you’re all coming at it from such different angles.
The typical concerns about AI ethics have always been about bias and doctors losing their jobs. Ten years ago an editorial piece about me in Ophthalmology Times dubbed me ‘The Retinator’. It was about the fact that what I was doing with AI was supposedly going to destroy ophthalmologists’ jobs. So early on there was some pushback from my colleagues but now they’re very supportive. The American Academy of Ophthalmology is probably the strongest supporter of AI anywhere in medicine as is the American Medical Association. It just shows you that if you try and ram something like this through, you will just get resistance and you won’t change anything. So that led me to create an ethical framework that I’ve been pushing a lot. My main scientific job right now is the ethics. Are we losing jobs? Is there racial bias? Is this actually improving patient outcomes? There’s a lot of what I call ‘glamour AI’ out there which is technologically meaningful and exciting but does it actually improve patient outcomes and can you show that? We have to be able to demonstrate that if we want people to accept it.
The other important aspect of AI is how are we going to pay for this? And more importantly, who is going to pay for it? We don’t want to make healthcare more expensive, we want to make it more affordable. In the past, a lot of technology has just led to rising costs. We also have to address the issue of who’s liable for it. The AI will make errors – it’s not perfect. I’m not perfect either, and I’ve been practising for years, yet it’s better than me. But if it makes an error, who’s liable for that? I said early on that the liability lies with the creator of the AI. And many AI creators are trying to avoid that. So we need to solve all these issues and then create a really good ethical framework around it.
How long will it take to fully overcome those challenges?
Well, we are solving it already. We wouldn’t have got the FDA’s first ever approval for an autonomous medical AI system if we hadn’t had this giant ethical framework in place. I worked incredibly hard to get everyone aligned. If you don’t have buy-in from physicians, you can be the Uber or the Airbnb of healthcare but it won’t get you very far. So we’ve shown those challenges can be overcome. It’s great because it now means that other AI creators can look at what we’re doing and say, ‘Okay, if I do these things, I get these points that show my AI is ethically grounded.’
What’s been the greatest challenge you’ve overcome?
Being dubbed ‘The Retinator’ was a good learning moment. It was very painful to be called ‘The Retinator’ by your colleagues but it was a good thing to overcome. I was trying to do good but it shows you really need to get support from everywhere.
Healthcare is a very complex and conservative system because it’s ultimately about patient safety and patient outcomes. So we don’t change things just for the hell of it. We want to make sure it benefits patients. There’s a lot of exciting technology, but if you push it in the wrong way, without thinking about patient benefit, racial bias and so on, you get nicknames like ‘The Retinator’. You’re pushing something where it cannot go. We’ve seen it with gene therapy. There was a moratorium on gene therapy for decades. It was a very promising technology, but young people died in very unethical trials. That didn’t just delay it, it set it back years. It demonstrates the importance of doing things in the right way from the very start.
What advice would you give to someone starting out in a career in medical AI?
Don’t dabble in it. If you decide to learn to code, do it seriously. Try to develop high quality code. Everyone can code. It’s like writing. We can all write a note, but to write well takes serious commitment. So take it seriously, because there’s a tremendous amount to do. If you want to become a physician and then study AI, I think it’s worthwhile to do a residency. Because until you do a residency and become a specialist, you do not truly understand the depths of what it is to be practising medicine and improving patients’ lives. If you seriously want to change things, you cannot dabble.
What would you tell your younger self?
I love these questions! If I bumped into my younger self in the street, I’d say, ‘It’s going to be alright.’ That would have been really worthwhile because at the time I was worried about a lot of stuff. So a simple, ‘It’s going to be okay’ would have gone a long way!
Where are the next big advances in tech medicine going to be?
We’re only just starting with AI. With my company, it’s currently one disease, one diagnosis, one very small step in the process to prevent people from going blind. But there are so many more diseases. So I’m excited about working towards receiving regulatory approval for the other AI applications my company is working on for glaucoma and macular degeneration, so we can touch even more patients’ lives. AI offers so much potential to improve the quality and accessibility of care. It’s all about reaching as many patients as possible to have the biggest impact on healthcare.
Finally, what do you consider your greatest achievement?
The fact that I’ve been able to utilise AI, not as a concept or as a technology, but have used it to improve patients’ lives, right now. I think that’s pretty cool.
Dr. Michael Abramoff is a keynote speaker at AIMed’s Global Summit taking place in Laguna Beach, CA on January 18-20, 2022.
View the full agenda and book here.