BY: SARA GERKE, DANIEL B. KRAMER AND I. GLENN COHEN

Artificial intelligence (AI) offers new opportunities to improve diagnosis and treatment across a wide spectrum of cardiovascular conditions. In theory, algorithms driven by AI can interpret the torrent of physiologic data emerging from implantable and wearable devices to refine the diagnosis of conditions such as atrial fibrillation (a common arrhythmia that can increase the risk for stroke) and congestive heart failure (i.e., when the heart muscle cannot pump effectively enough to meet the body’s metabolic demands), in which timely identification can lead to meaningful treatment changes. AI may also help mine new and existing data sources to improve the precision of treatment delivery, identifying patients most and least likely to benefit from medications designed to prevent blood clots and strokes, or from devices such as implantable cardioverter-defibrillators (ICDs) and cardiac resynchronization therapy (CRT). However, applying AI to these areas of critical clinical need raises key questions for regulators, clinicians, and patients. This article therefore discusses the promise of AI in cardiology alongside its legal and ethical challenges.

POTENTIAL OF AI IN CARDIOLOGY

Pacemakers are implantable devices that treat abnormally slow heart rhythms. Wires are placed into the heart through the venous system and attached to a “generator” placed under the skin on the chest, which includes the battery and software that governs device function. ICDs are similar, but can also deliver high-voltage shocks to restore a normal heart rhythm if patients develop a life-threatening, abnormally fast heart rhythm. In some patients with heart failure, a pacemaker or ICD system can include an additional wire placed on the back of the heart to “resynchronize” the pumping function of the heart, a treatment called CRT. The generators for each of these device types collect massive amounts of physiologic information.

This includes data central to the function of these devices, such as battery and wire measurements, as well as more tangential information about patients’ heart rate profile, physical activity, temperature, and ambient arrhythmias. Devices implanted in patients with heart failure may also report transthoracic impedance (a way to approximate fluid retention) and other metrics of disease status. Complex analyses of these data sets have yielded mixed results: algorithms designed to target early problems with ICD leads, for example, can successfully identify device malfunction before patients are harmed.

By contrast, clinically meaningful heart failure interventions based on device data have not been widely integrated into patient care despite promising studies.

AI could also be leveraged to resolve two intractable problems related to ICDs and CRT in particular. Clinical guidelines for implantation of both device types draw upon modestly sized clinical trials, with professional society recommendations largely driven by the average treatment effects realized in these pivotal studies.

However, observational data demonstrate marked heterogeneity of treatment response to both ICDs and CRT. Most ICD recipients never receive an appropriate therapy from their devices, yet incur the lifelong expense and risk of complications. Similarly, a quarter of CRT recipients do not respond to treatment, and no current algorithms meaningfully improve that yield.

AI applied to existing data sources, or to new ones such as electronic health records or pre-implantation wearable diagnostics, could potentially improve the selection of device recipients and the associated cost-effectiveness of these expensive therapies. Pacemakers and ICDs often also include software for diagnosing new cases of atrial fibrillation. The simplest of these records high atrial rates and displays the corresponding electrical signals for manual review. More sophisticated approaches, including the proprietary algorithm underlying the Apple Watch atrial fibrillation diagnostics, rely heavily on the variability in heart rate. Millions of patients with implantable and wearable devices evaluating these arrhythmia patterns could potentially benefit from an AI-driven approach that improves the sensitivity and specificity of device-adjudicated atrial fibrillation.
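To make the heart-rate-variability idea concrete, the sketch below flags a window of beat-to-beat (RR) intervals as possible atrial fibrillation using RMSSD, a standard variability metric. It is a minimal illustration only: the function name and threshold are hypothetical, and it does not reflect the proprietary Apple Watch algorithm or any cleared device.

```python
import numpy as np

def flag_possible_afib(rr_intervals_ms, rmssd_threshold_ms=100.0):
    """Flag possible atrial fibrillation from heart-rate variability.

    rr_intervals_ms: sequence of beat-to-beat (RR) intervals in milliseconds.
    rmssd_threshold_ms: illustrative cutoff, not a clinically validated value.
    """
    rr = np.asarray(rr_intervals_ms, dtype=float)
    diffs = np.diff(rr)  # successive beat-to-beat changes
    rmssd = np.sqrt(np.mean(diffs ** 2))  # root mean square of successive differences
    return rmssd > rmssd_threshold_ms

# Example: an irregularly irregular rhythm produces high variability.
print(flag_possible_afib([800, 620, 950, 710, 1020, 660]))  # True
print(flag_possible_afib([800, 805, 798, 802, 801, 799]))   # False
```

An AI-driven approach would replace this fixed threshold with a model trained on labeled recordings, which is precisely where the gains in sensitivity and specificity, and the training-data concerns discussed below, arise.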

BIAS AND FAIRNESS

AI applications in these and other areas of cardiology will only be as good as the training data fed to them. While an algorithm can learn to identify new arrhythmias, guide selection for new device implants, or predict key outcomes if trained with the right data, the results may be limited for all or selected patients if the original derivation draws from biased data. Experts are concerned that AI could simply automate human biases, such as gender and racial biases, rather than remove them, especially if those biases infect the training data.

For example, an algorithm intended to optimize device placement for cost-effectiveness may reinforce or amplify socioeconomic disparities if patient factors such as medication compliance, frequency of doctor visits, and insurance status markedly influence the clinical outcomes of interest. Algorithms whose training data come from homogeneous populations may also simply be inaccurate when generalized to more diverse cohorts. For this reason, it is essential to diversify the training data for AI intended to be broadly applicable, but building safeguards against the amplification of bias remains an open problem.
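One basic safeguard is a subgroup audit: measuring a model’s performance separately for each demographic group before deployment. The sketch below, which assumes hypothetical column names (“y_true”, “y_pred”, “sex”) and uses pandas and scikit-learn, compares sensitivity (true-positive rate) across subgroups; a marked gap would signal the kind of inaccuracy described above.

```python
import pandas as pd
from sklearn.metrics import recall_score

def subgroup_sensitivity(df: pd.DataFrame, group_col: str = "sex") -> pd.Series:
    """Sensitivity (true-positive rate) of a model per patient subgroup.

    Assumes hypothetical columns: 'y_true' (actual diagnosis, 0/1) and
    'y_pred' (model output, 0/1), plus a demographic column to group by.
    """
    return df.groupby(group_col).apply(
        lambda g: recall_score(g["y_true"], g["y_pred"])
    )

# Toy example: the model misses every true case in one subgroup.
df = pd.DataFrame({
    "sex":    ["F", "F", "F", "M", "M", "M"],
    "y_true": [1,   1,   0,   1,   1,   0],
    "y_pred": [0,   0,   0,   1,   1,   0],
})
print(subgroup_sensitivity(df))  # F: 0.0, M: 1.0
```

The same grouping can be applied to specificity or calibration; the point is that aggregate accuracy alone can hide exactly the disparities at issue here.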

INFORMED CONSENT

The use of AI to assist cardiologists in diagnosing or treating patients also brings new challenges for the patient’s right to informed consent. How can clinical and patient/consumer-facing health AI accommodate informed consent as currently conceived and practiced, and what alterations to either AI or informed consent might be necessary? It is time to squarely face the question of whether informed consent is the appropriate paradigm for the use of AI in cardiology, or whether modifications to informed consent are needed for this use. FDA-cleared approaches to arrhythmia identification incorporate proprietary “black box” computation, which asks clinicians to consider the extent to which they need to understand not just whether a specific technology works, but how it achieves those results.

How understandable should AI be when introduced into clinical practice, whether by physicians or by patients? How should its inclusion in therapeutics or decision-making be incorporated into informed consent? Does the use of AI alone require specific attention in an already-crowded informed consent process? A related but separate question is what information about the patient an AI should be allowed to access. In general, the more an algorithm knows about the patient, the more effective it is likely to be in achieving the outcome of interest. Patients might have a special sensitivity to AI access to particular kinds of data generated by the health care system or outside it (e.g., Google search results, geolocation, etc.). Some pacemakers and ICDs can connect via Bluetooth with an app on the patient’s smartphone or tablet.

The use of apps to communicate with pacemakers, ICDs, or wearables, such as the Apple Watch, will likely mean the involvement of user agreements that most patients/users have difficulty understanding and usually do not read. When can user agreements suffice, as opposed to true informed consent?

In addition, frequent updates will make it even more difficult for patients/users to keep track of what is being changed in the software.11 The potential integration of AI into these data relationships makes confronting questions around consent even more urgent, as the inscrutability of these algorithms will make effective education for patients regarding risks and benefits particularly difficult.

DATA PRIVACY

In general, usage of patients’ medical data demands assurance of appropriate privacy protection. In the U.S., the Health Insurance Portability and Accountability Act (HIPAA) is the main legal safeguard against unauthorized use and disclosure of health information (though in some instances the Federal Policy for the Protection of Human Subjects (Common Rule) will also apply). HIPAA restricts some uses and disclosures of individually identifiable health information generated by “covered entities,” such as insurance companies and health care providers, and their “business associates.” Importantly, technology companies such as Google, Facebook, and Amazon are not HIPAA-covered entities, and thus health data collected through wearables such as the Apple Watch are generally not protected under HIPAA. In contrast, the EU’s new General Data Protection Regulation (GDPR) is broader in scope and applies to all personal data, including “data concerning health,” such as non-health information that supports inferences about health. Does the U.S. need to move to a privacy regime more like Europe’s? It is also worth noting that in the U.S., state privacy laws may play an increasing role, especially the California Consumer Privacy Act of 2018 (CCPA), which will be operative from 1 January 2020 and will apply alongside HIPAA.

CYBERSECURITY

The new technologies used in the AI space are also vulnerable to cyber-attacks. Consider, for example, FDA’s August 2017 recall (in the form of a firmware update) of around 500,000 pacemakers due to fears that they could be hacked to alter the patient’s heartbeat or to run down the batteries. In April 2018, FDA released the Medical Device Safety Action Plan, which, among other things, aims to advance cybersecurity to promote safer innovative technologies. While these initiatives are laudable, it is also crucial to have an internationally enforceable cybersecurity framework in place, since cyber-attacks pay no heed to national frontiers.11 Machine learning systems are particularly vulnerable to manipulation in health AI applications: one small alteration in how inputs are presented to a system can entirely change its output, causing it, for example, to classify a mole as malignant with 100 percent confidence.
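To illustrate the kind of manipulation at issue, below is a minimal sketch of the well-known fast gradient sign method (FGSM), assuming a differentiable PyTorch image classifier. It is a generic textbook technique, not an attack on any specific medical device: each input value is nudged slightly in whichever direction most increases the model’s error, producing a change imperceptible to humans that can nonetheless flip the classification.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.01):
    """Craft a small adversarial perturbation of input x (e.g., an image)
    so that the classifier `model` is pushed away from the true label y.

    epsilon bounds the per-value change; even tiny values can flip outputs.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)  # how wrong the model currently is
    loss.backward()                      # gradient of the loss w.r.t. the input
    # Step each input value slightly in the direction that increases the loss.
    return (x + epsilon * x.grad.sign()).detach()
```

Defenses such as adversarial training exist, but the example underscores why cybersecurity for health AI must cover not only unauthorized access but also the integrity of the inputs themselves.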

In conclusion, AI has the potential to transform health care, including cardiology. However, these innovations will raise ethical and legal challenges, such as bias and fairness, informed consent, data privacy, and cybersecurity. Stakeholders, especially AI makers and health care providers, need to address such challenges at the earliest possible stage to ensure the successful implementation of AI in cardiology and thus build patient/consumer trust.

Acknowledgements
Sara Gerke’s and I. Glenn Cohen’s research is supported by a Novo Nordisk Foundation grant for a Collaborative Research Programme (grant agreement number NNF17SA027784). Daniel B. Kramer’s research is supported by the Greenwall Faculty Scholars Program.

For references please see this article featured in the AIM Magazine Volume 2 Issue 2.