A trio of Stanford researchers has called for physicians to be hands-on and set the agenda when it comes to implementing AI in healthcare and medicine.
In a paper titled ‘Implementing Machine Learning in Health Care — Addressing Ethical Challenges’, published in the New England Journal of Medicine in March, the authors warned that biased data sets and profit-motivated programmers, companies, or health care systems could result in AI that gives unethical clinical recommendations.
David Magnus, PhD, one of the authors and the Thomas A. Raffin Professor of Medicine and Biomedical Ethics at Stanford, said, “You can easily imagine that the algorithms being built into the health care system might be reflective of different, conflicting interests.
“What if the algorithm is designed around the goal of saving money? What if different treatment decisions about patients are made depending on insurance status or their ability to pay?”
The authors argued that to realize the enormous potential benefit of AI, physicians need a basic understanding of how algorithms are created, the data sets they are based on, and how they reach their decisions, so that they can critically assess the decision support these systems provide.
They wrote, “Remaining ignorant about the construction of machine-learning systems or allowing them to be constructed as black boxes could lead to ethically problematic outcomes.”
As an example of ideal physician involvement, they point to a study where Stanford researchers are applying AI to routinely collected electronic health record (EHR) data to predict mortality of patients so that palliative care can be administered at the right time.
In this case, Magnus said, physicians and designers work closely to ensure that the incorporation of the predictions into the care equation includes guarantees that the physician “has a full understanding that the patient problems are answered and well-understood.”
Article extracted from the News section of AIMed Magazine.