Ahead of his headline talk on responsible AI at the AIMed Global Summit 2023, Seth Dobrin, President of the Responsible AI Institute, on taking a human-centric approach to AI in healthcare.

Seth, you trained as a human geneticist. What led you to specialize in artificial intelligence?

My doctoral research focused on neuropsychiatric disorders. At the time, the late 1990s and early 2000s, the amount of data needing to be analyzed exploded: we went from small numbers of genetic markers across relatively few samples to tens of thousands, hundreds of thousands, and soon after millions of markers across ever-increasing cohort sizes. Additionally, these experiments were being conducted on a new technology at the time — microarrays. These were assessed as multiple images that had to be read, normalized, and interpreted. The resulting image and genetic marker data often ended up being tens of gigabytes or more. This was one of the points of origin of what we today call ‘big data’; the other was astrophysics.

New tools, R and Python, were developed, with R becoming the tool of choice in the life sciences and Python in the physical sciences. We also had to discover new algorithms for analyzing this data: the higher resolution made traditional parametric and non-parametric methods (typically LOD scores) obsolete, and we had to account for and adjust results for the sheer number of markers and samples being tested, since naive application of traditional statistical methods often hid the actual results.
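To make that multiple-testing problem concrete, here is a minimal sketch in Python. The marker count and the use of statsmodels are illustrative assumptions, not a reconstruction of the original analyses: with a hundred thousand markers, a naive p < 0.05 cutoff “finds” thousands of spurious hits, while a false-discovery-rate correction such as Benjamini-Hochberg removes them.

```python
# Sketch of the multiple-testing problem with many genetic markers.
# All markers here are simulated nulls (no true association), so every
# naive "hit" is a false positive; marker counts are illustrative.
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_markers = 100_000
p_values = rng.uniform(size=n_markers)  # null p-values are uniform

naive_hits = (p_values < 0.05).sum()  # roughly 5,000 spurious associations
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
corrected_hits = reject.sum()         # near zero after FDR control

print(f"naive hits: {naive_hits}, FDR-corrected hits: {corrected_hits}")
```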

This led us to begin using both supervised and unsupervised machine learning methods. I focused on applying these tools, as well as inventing new ways of scaling the lab work and the analysis, for the first 15 years of my career in both human and plant genetics. During my time at Monsanto, I moved from using these tools purely to answer biological questions to using them to answer business questions as well, which is where I have been applying my craft since about 2011.

You have a reputation for leading exponential change. What are the top 3 tips you would give anyone tasked with driving organizational change?

The technology problem is typically the easiest part of the change. Almost any engineering problem can be solved with the right team and enough money and/or time. The hard part is implementation and adoption, yet people often only start thinking about this at the end. In this context, especially in the medical field, I frame my advice as taking a human-centric approach in the following ways:

  1. Human-centric approach: Start with the humans involved – both those using the AI and those being impacted by it. They may or may not be the same set of humans.
  2. Business value: What business or medical problems are you going to solve to make human lives better and care cheaper or faster? What real, tangible value do they provide, typically measured in cost savings or new revenue? Show value quickly, in 4-6 weeks, even if it is incremental.
  3. Feasibility: This is an especially challenging problem in the medical field, as there are so many stakeholders – patients, providers, caregivers, medical institutions, payers, policymakers, and regulators. Once you understand the use cases and the business value of each, decide which are the most likely to be adopted, so you can show value and build credibility.

In the medical field, the best places to start are almost certainly the least interesting ones – those that require the fewest stakeholders. For instance, ‘yes’ to operational problems with high-value, short-term impact; probably ‘no’ to using AI to make or aid in diagnoses. Some examples of places I would start are OR and ED room and staff utilization, and patient, staff, and equipment scheduling.

The application of AI in clinical practice has enormous promise to improve healthcare, but it also poses ethical issues. How can AI systems prioritize “no harm” whilst making healthcare decisions? 

Any time AI impacts the health, wealth, or livelihood of a human, this concept of ‘first do no harm’ is critical. Since late November 2022, with the exponential rise of generative AI tools popularized by ChatGPT, this has become an even bigger issue, one driving an existential crisis in the AI field overall. It has led to institutions banning these tools and to proposed moratoriums, neither of which is a good or even viable idea. Banning tools that add value leads to people finding creative ways to use them outside sanctioned systems, without any training or guardrails. Government-imposed moratoriums are unrealistic at best: governments do not act that fast, are not educated enough in this domain to make these decisions, and, specifically around AI, lack the regulatory infrastructure that would be required to implement and enforce any such moratorium.

Back to the question at hand, though: the best way to do no harm is to start with the humans involved – both the humans who will be using the AI and the humans impacted by it. In its most impactful form in healthcare, the human users would be the providers and/or payers, and the humans impacted would be the patients. Starting from here, you can understand and account for things such as disparate impact (bias in the outcome), equity, inclusion, diversity, etc., throughout the entire process, from scoping to operationalizing. I think the best way to illustrate this is through an example:

I want to build an AI tool that improves patient outcomes by leveraging my EMR to predict sepsis earlier.

Human using the AI — me, a hypothetical provider

Human impacted by the AI — patients walking through my door

I ask you to pause and make a list of the considerations needed to ‘first do no harm’.

You probably came up with:

  • Accuracy of diagnosis, maybe even the balance of false positives and false negatives. In this case, you would tolerate a high number of false positives and want the number of false negatives to be as close to zero as possible
  • Precision and recall – how many of the flagged patients are truly septic, and how many truly septic patients the model actually catches (a short sketch of computing these follows this list)
  • Bias – gender, race, ethnicity, maybe even economic situation or insurance status
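Here is a minimal sketch of that trade-off, using scikit-learn. The labels and scores stand in for a hypothetical fitted sepsis classifier (nothing here comes from a real EMR): lowering the decision threshold drives false negatives toward zero at the cost of more false positives, exactly the balance described above.

```python
# Evaluating a hypothetical sepsis model's error balance at two thresholds.
# y_true and scores are illustrative stand-ins, not real patient data.
import numpy as np
from sklearn.metrics import confusion_matrix, precision_score, recall_score

y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])   # 1 = sepsis
scores = np.array([0.1, 0.4, 0.35, 0.2, 0.8, 0.05, 0.3, 0.6, 0.15, 0.45])

for threshold in (0.5, 0.3):          # a lower cutoff means fewer missed cases
    y_pred = (scores >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    print(f"threshold={threshold}: "
          f"precision={precision_score(y_true, y_pred):.2f}, "
          f"recall={recall_score(y_true, y_pred):.2f}, "
          f"false negatives={fn}, false positives={fp}")
```

At the 0.5 cutoff this toy model misses a septic patient; at 0.3 it catches all of them while flagging more healthy patients, which is the tolerable direction for this use case.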

Did you think about the sparsity of the data in your EMR for certain populations? Black and Brown communities are known to seek care at significantly lower rates than White communities, and those who are economically disadvantaged or underinsured are similarly predisposed. We know there are differences in symptoms and responses based on environment, age, gender, and race, so these are important parts of the problem and thus need to be addressed in the solution. How do you make sure you ‘do no harm’ to these communities?
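One concrete way to surface that sparsity and its effects, sketched here with pandas on hypothetical data (the groups, labels, and counts are assumptions for illustration): break the model's performance out per subgroup before deployment, and treat a small record count or a markedly worse recall for any group as a red flag.

```python
# Per-subgroup audit of a hypothetical model's predictions.
# Sparse groups show up as small n; disparate impact shows up as low recall.
import pandas as pd

df = pd.DataFrame({
    "group":  ["A", "A", "A", "A", "A", "A", "B", "B", "B"],
    "y_true": [1,   0,   1,   0,   1,   0,   1,   0,   1],
    "y_pred": [1,   0,   1,   0,   1,   0,   0,   0,   1],
})

for group, g in df.groupby("group"):
    positives = g[g["y_true"] == 1]                  # true sepsis cases
    recall = (positives["y_pred"] == 1).mean()       # share of cases caught
    print(f"group {group}: n={len(g)}, recall={recall:.2f}")
```

In this toy data, group B has a third of the records and half the recall of group A, the pattern you would expect when a population is under-represented in the EMR.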

How can healthcare professionals manage perceptions around trust of artificial intelligence?

This one is simple. Always, always, always think about the human impacted by the AI being developed as a design principle, not an afterthought, and make sure the team building these systems has a diversity profile that represents the community being served. Be honest with the humans involved in your systems about where, when, and how AI is being used. This even means that if the objective is to reduce or replace jobs, you are honest with yourself and those impacted from the beginning.

How can we harness technology for good? What role do you see AI playing in the drive to address inequity in healthcare?

At its core, AI is just math and computer science that learns from data.

Math is not inherently biased; the data is, because it is a record of human decisions, and humans are inherently biased. In fact, all of this talk about the bad things in AI is really just AI holding a mirror up to the bad that is part of our society – racism, misogyny, hate, misinformation. AI can account for and adjust for these if we start with the humans.
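One standard way a pipeline can “account for and adjust” for skewed data, sketched here as a generic technique rather than anything the interview itself prescribes (the groups, labels, and counts are illustrative assumptions): reweigh training samples so under-represented group/label combinations are not drowned out.

```python
# Sample reweighing: weight each (group, label) cell inversely to its
# frequency so a minority group's outcomes carry comparable influence
# during training. A generic mitigation, shown on toy data.
import pandas as pd

df = pd.DataFrame({
    "group": ["A"] * 8 + ["B"] * 2,            # group B is under-represented
    "label": [1, 0, 1, 0, 1, 0, 0, 0, 1, 0],
})

cell_counts = df.groupby(["group", "label"]).size()
weights = df.apply(
    lambda r: len(df) / cell_counts[(r["group"], r["label"])], axis=1
)
# `weights` can be passed as sample_weight to most scikit-learn estimators.
print(weights.round(2).tolist())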

I end every interview, talk, and podcast I do with the same statement:

“AI can lead us down one of two paths — it can propagate and magnify the past bad decisions of humans, or if done right it can actually reduce these and make the world a better place. We have a choice to make today, which one do we want?”

We were thrilled to have you speaking about responsible AI at the 2023 AIMed Global Summit. What did you enjoy about the event?

Since my background is in human genetics, I am always interested in staying engaged and learning what is state of the art in the field. This event was a great opportunity to do that. 

You have already accomplished a great deal in your career. To what do you attribute your success?

I have changed careers multiple times, which forces me to continually learn. This desire and drive to continually learn is especially important in today’s world, where everything changes so fast. A decade ago this was a career advantage; today your job and your success depend on it.

We believe in changing healthcare one connection at a time. If you are interested in the opinions in this piece, in connecting with the author, or in submitting an article of your own, let us know. We love to help bring people together! [email protected]

Dr. Seth Dobrin is a globally recognized leading expert in AI. He is currently the CEO of Trustwise, a company that helps organizations implement generative AI in a safe and secure manner. He was IBM’s Global Chief AI Officer and previously held senior positions at Fortune 500 companies. He believes AI has the potential to solve the world’s most pressing problems and advocates for its responsible use. He is a sought-after speaker and advisor.