Nicoleta J. Economou, PhD, Director of Governance and Evaluation of Health AI Systems for Duke AI Health and Director of Algorithm-Based Clinical Decision Support (ABCDS) Oversight, on the necessity of fairness, transparency, and inclusivity during the development, implementation, and monitoring of health AI

How has your early career experience as a biochemist informed your approach to medical AI?

My extensive background in protein biochemistry, structural and molecular biology, and macromolecular crystallography has greatly influenced my approach to health AI. A significant milestone in my career involved investigating the binding mechanism of bacitracin, a vital component in topical antibiotics like Neosporin. During my PhD, I delved deeply into macromolecular crystallography, encompassing both wet lab techniques to obtain high-quality protein crystals and statistical and computational processes for refining and validating models of macromolecular structures.

As a basic scientist, I embrace a problem-solving approach that involves viewing complex issues as “black boxes” from various angles, considering different variables, parameters, and interactions to reveal potential solutions. My early experience in biochemistry has also equipped me to dissect problems into manageable components, uncovering hidden connections and mechanisms that might otherwise go unnoticed. Basic research likewise gave me a profound understanding of the intricate workings of biological systems, contributing to a unique perspective in developing robust, high-quality AI solutions for healthcare, explaining AI-enabled solutions to a wide audience, and communicating the importance of testing AI within the systems in which it is embedded. Although my journey into the realm of clinical algorithms was not initially apparent, my expertise as a biochemist, coupled with curiosity and a versatile skill set, continues to guide my efforts in improving the accuracy and impact of health AI, revolutionizing patient care at Duke University and beyond.

How and when was ABCDS Oversight founded?

The Algorithm-Based Clinical Decision Support (ABCDS) Oversight framework took shape more than three years ago, in 2020, through a collaborative effort between the Duke University School of Medicine and the Duke University Health System. Months earlier, I had initiated discussions with Dr. Michael Pencina, the Director of Duke AI Health and Vice Dean for Data Science, and other experts who envisioned an evaluation process for clinical algorithms resembling the regulatory approval process for medical devices. At the time, no formal mandate existed for overseeing clinical algorithms in healthcare. Despite the challenge of raising the bar for quality and ethics without stifling innovation, Duke leadership, including the Dean of the School of Medicine, the Chancellor, and the Chief Quality Officer, issued a mandate to implement governance for all AI and machine learning algorithms and tools in the Duke Health System. Supported by university and health system leaders, the ABCDS Oversight Committee was formed in January 2021. Together, we established the ABCDS Oversight framework, a comprehensive people, process, and technology framework for governing and evaluating clinical algorithms at Duke Health. As the founding program director, I contributed my expertise in software development best practices, working alongside other Duke leaders to conceptualize checkpoints throughout algorithmic tool development and deployment to ensure the effective execution of plans.

How are you able to determine the impact that ABCDS Oversight is making? Is there a success story that you’re particularly proud of?

ABCDS Oversight is an ongoing initiative focused on continuous quality improvement. Currently, our portfolio comprises 52 registered algorithmic tools at various stages of their lifecycles, including 14 AI/ML-enabled tools already deployed at Duke Health and 20 more preparing for deployment. We have observed significant benefits through our regular reviews and the establishment of our framework. Over the past two years, implementing this framework has allowed us to achieve transparency, leading to notable advancements in fairness and equity. Our framework has resulted in enhanced compliance with internal processes, such as the Institutional Review Board (IRB), reducing regulatory risks when appropriate. By fostering accountability, we ensure clear responsibilities for algorithm development, testing, and clinical ownership. Furthermore, we have gained a comprehensive understanding of how to deploy these tools effectively, ensuring they are fair, safe, efficacious, and aligned with our patients’ needs. As we continue to learn and refine our processes, we aim to consistently improve the quality of care we provide to our patients.

In terms of success stories, the ABCDS Oversight Committee has successfully identified various discrepancies related to fairness, bias, accuracy, and reliability in the health AI algorithms it evaluates. Through meticulous evaluations, the committee has raised concerns about algorithmic bias associated with factors such as religion and gender, as well as data curation issues. The committee considers recognizing and addressing these inconsistencies crucial to establishing reliable and impactful AI solutions in healthcare. By prioritizing patient safety and promoting fairness, the committee plays a vital role in ensuring the trustworthiness and effectiveness of health AI algorithms.

What are the key steps healthcare organizations should be taking to ensure responsible AI development?

At Duke AI Health, we believe that fostering a culture of transparency and accountability is key. Healthcare organizations should be transparent about the limitations and risks associated with health AI solutions, engage in open dialogue with patients and other stakeholders, and establish mechanisms for feedback, reporting, and addressing concerns in order to promote responsible AI development that prioritizes patient well-being. Speaking from my experience at Duke, I also think healthcare organizations must lay down clear guidelines and establish ethical frameworks for health AI development to ensure that all AI/ML-powered systems align with what should be universal values of clinical impact, safety, transparency, fairness, and inclusivity. Prioritizing equity and diversity in health AI teams, encouraging interdisciplinary collaboration, and involving stakeholders from diverse backgrounds can help both small and large health systems identify and mitigate biases in AI/ML algorithms and develop more equitable and fair health AI solutions. Robust evaluation and validation processes before deployment, followed by continuous monitoring, are also essential to ensure the safety, accuracy, and effectiveness of health AI technologies.

AI is seen by many as the key to true democratization of healthcare. What is your perspective on this?

As the Director of Governance and Evaluation of Health AI Systems for Duke AI Health, my perspective on the notion of AI as the key to true democratization of healthcare is one of cautious optimism. AI has the potential to transform healthcare by improving access, efficiency, and quality of care for all, regardless of socioeconomic status or geographic location. By streamlining decision-making, health AI solutions can free up healthcare professionals’ valuable time, allowing them to focus on more complex and critical patient and caregiver needs, while also relieving burnout. AI/ML-powered diagnostic systems can also boost accuracy and speed in detecting diseases, leading to early, life-saving interventions and improved outcomes. However, we must tread cautiously to ensure that the benefits of health AI are equitably distributed, enabling less-resourced healthcare delivery systems in the community to evaluate the health AI solutions they may be acquiring. Democratization of healthcare through AI also requires addressing key challenges such as data bias: AI algorithms are only as good as the data they are trained on, and local validation of algorithms is important. It is crucial to ensure representative and diverse datasets to avoid perpetuating existing healthcare disparities. Robust governance frameworks, such as the ABCDS Oversight framework being deployed at Duke, must be in place to address ethical concerns such as privacy, security, and informed consent in health AI technologies. Transparent evaluation processes and ongoing monitoring of health AI systems are crucial to maintaining trust and mitigating potential risks. All in all, I believe health AI holds great promise in democratizing healthcare, but its development, implementation, and monitoring must be guided by principles of fairness, transparency, and inclusivity to truly benefit all individuals and communities.

Nicoleta J. Economou, PhD, serves as the Director of Governance and Evaluation of Health AI Systems for Duke AI Health and the Director of Algorithm-Based Clinical Decision Support (ABCDS) Oversight, leading the operations and framework design effort for the governance, evaluation, and monitoring of ABCDS software at Duke. Dr. Economou also leads all Duke AI Health initiatives relevant to the evaluation and governance of health AI technologies, along with leading operations of the Coalition for Health AI, a coalition establishing guidelines and guardrails for health AI technologies. Previously, Dr. Economou led projects supporting a learning health system at Duke, working alongside faculty and health system leadership to bring together the people, processes, technologies, and data streams required to drive evidence-based continuous improvement and innovation in healthcare delivery and operations. Before joining Duke, she worked in the life sciences and pharmaceutical industry, where she managed clinical analytics portfolios to drive data-informed decisions for drug development, with a key focus on clinical data review, clinical safety review, and clinical operations. Additionally, she developed, validated, and deployed risk models and helped design and define metrics for monitoring clinical trials using analytics software. Dr. Economou completed her postdoctoral training at the UNC Eshelman School of Pharmacy and received her PhD in Biochemistry from Drexel University College of Medicine.

This fascinating topic of the governance of AI, along with others, will be discussed at the annual Ai-Med Global Summit, scheduled for May 29-31, 2024, in Orlando. Book your place now!
