Attorney Sarah Swank highlights five common fraud and abuse pitfalls to avoid when structuring AI solutions in healthcare.

The physicians

Clinical AI services often involve physicians. But what if the physician or a physician’s family member owns part of the AI company that is contracting with a hospital?

In this scenario, there are possible Stark Law and Anti-Kickback Statute implications. This is especially true where a physician has ownership in a company that is contracting with a hospital to which that physician refers patients and that provides designated health services as defined by those laws. These relationships must be reviewed closely, and depending on the arrangement, an exception or safe harbor is often required.

Other financial arrangements between a physician and a hospital for AI services include medical director services, administrative services, clinical services, and other financial relationships. Generally, these arrangements should be in writing and signed by the parties before services start. Even arrangements that do not involve a physician, and instead involve healthcare organizations or technology companies contracting with each other, must consider the requirements of the Anti-Kickback Statute, an anti-bribery statute that carries possible civil and criminal penalties for non-compliance.

The technology

For data scientists, coders, informatics-focused physicians, and market disruptors, the technology of AI is an exciting game changer in healthcare. In healthcare transactions, companies should review their financial relationships regarding the technology. It is important to first determine the intended use of the AI, because that use will drive the fraud and abuse analysis. AI is a term that encompasses several types of technologies and cuts across both clinical and administrative functions in healthcare; the technology could include, for example, real-time data for clinical decisions. Concepts such as arm’s-length transactions may be new to those who usually contract outside the healthcare industry, but depending on the transaction, they are key concepts in the fraud and abuse analysis.

Let’s explore examples of how technology arrangements implicate the fraud and abuse laws. If the AI solution is an imaging technology integrated into the clinical practice of the hospital, then the fraud and abuse analysis regarding the AI will likely be no different from the decision to buy a new MRI. The physicians using the AI technology are doing so at the hospital as part of the clinical services of the imaging department. If the same technology, however, is provided by the hospital to a physician for use in the physician’s office, the analysis changes. In this latter case, if the hospital provides a benefit to a referring physician for use outside the hospital, then a possible fraud and abuse concern exists both for the hospital providing the technology for free and for the physician receiving it. Healthcare organizations should review the laws before giving away technology, including AI, to physicians or other healthcare entities.

The patients

The ultimate goal of some AI is better patient outcomes and clinical decision making. Some believe more widespread patient-centered AI is not far off. This would mean putting AI into the hands of patients through wearable devices, smartphones, tablets, or otherwise. Providing free goods or services to Medicare beneficiaries, however, can create a fraud and abuse issue called beneficiary inducement under the Civil Monetary Penalties Law.

The concern under this law is that providing free goods and services to Medicare beneficiaries will steer or influence them to seek healthcare services from that same provider. An example of a transaction that would need review is a hospital providing Medicare beneficiaries with free tablets containing embedded AI software.

The reimbursement

AI itself is not separately reimbursed but may be integrated into clinical or operational services. Taking a page from telehealth legislation, technology adoption in healthcare appears to be tied in part to the ability to be reimbursed for the services related to that technology. Reimbursement in healthcare is transitioning from fee-for-service to value-based payments.

As part of this transition, healthcare organizations, including accountable care organizations, are entering into shared-savings and risk-contracting arrangements that seek to provide high-quality, cost-efficient care. AI can move the needle in these areas, especially at its intersection with population analytics. Some relief from these laws is available within Centers for Medicare & Medicaid Services (CMS) and CMS Innovation Center payment reform programs, each of which has a set of waivers of the fraud and abuse laws. Do not assume that because a waiver is available, any arrangement will be acceptable under it.

On the flip side of AI and reimbursement is AI related to reimbursement itself, such as with electronic health records (EHRs). AI in this area can create potential fraud and abuse risk under the False Claims Act. If a claim submitted to the government is not accurate, such as when the services were not performed, were not documented, or were upcoded (that is, given a higher acuity resulting in higher reimbursement), there is a potential for a false claim. AI developers can learn two lessons from earlier EHR rollouts and adoption. First, once software is built to bill a claim incorrectly, it bills that claim incorrectly every time, consistently, for every patient, potentially creating a widespread billing problem. This leads to the second lesson: sample, audit, and test claims impacted by AI to ensure their accuracy and to confirm that correct assumptions have gone into the review of data and the eventual billing of claims.
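To make that second lesson concrete, below is a minimal sketch, in Python, of one way a periodic claims audit might be structured. The claim records, field names (claim_id, ai_code, documented_code), and sample size are hypothetical, not drawn from any real billing system; an actual audit would pull claims from the billing system and route flagged claims to certified coders for review of the underlying documentation.

    import random

    # Illustrative only: field names and values are hypothetical.
    claims = [
        {"claim_id": "C001", "ai_code": "99214", "documented_code": "99213"},
        {"claim_id": "C002", "ai_code": "99213", "documented_code": "99213"},
        {"claim_id": "C003", "ai_code": "99215", "documented_code": "99214"},
    ]

    def audit_sample(claims, sample_size, seed=42):
        """Randomly sample AI-impacted claims and flag coding mismatches."""
        rng = random.Random(seed)
        sample = rng.sample(claims, min(sample_size, len(claims)))
        mismatches = [c for c in sample if c["ai_code"] != c["documented_code"]]
        return mismatches, len(mismatches) / len(sample)

    mismatches, rate = audit_sample(claims, sample_size=3)
    print(f"Mismatch rate in sample: {rate:.0%}")
    for c in mismatches:
        print(f"Flag {c['claim_id']} for coder review: billed {c['ai_code']}, "
              f"documentation supports {c['documented_code']}")

Because, as the first lesson notes, software repeats the same billing error for every patient, even a small random sample like this tends to surface a systematic coding problem quickly.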

The marketing

Co-branding and marketing strategies are common in technology arrangements, including telehealth and, likely in the future, AI. When marketing a new AI venture, the technology and healthcare organizations must determine whether either or both of their logos and organizational names will be used. In the case of a joint venture and the creation of a new company, the use of the new company’s name and logo should be considered, as should the payment of marketing costs. In certain cases, the use of the healthcare organization’s name is a key consideration in the marketing of the deal.

The concept of marketing cost was discussed in Department of Health and Human Services Office of Inspector General (OIG) Advisory Opinion No. 11-12 as it related to telehealth. Although the advisory opinion is not a law or a regulation, it is informative as to the division of marketing costs for a technology solution that has clinical value but no separate reimbursement. Under the facts of that advisory opinion, each of the hospitals paid its own marketing costs. The OIG found this division of costs favorable in its review of the arrangement.

New regulations on the horizon

CMS and the OIG released proposed regulations in Fall 2019 that many consider a major overhaul of the fraud and abuse regulations. Others wonder whether the broad exceptions for categories of activities will instead create narrow lanes for innovation, since they may be difficult to apply. Although the proposals are not AI specific, more innovative AI transactions may be permitted through governmental program waivers of the fraud and abuse laws or through new regulations.

Conclusion — pushing the go button

We are at the precipice, looking out at the vast potential applications of AI in healthcare along with its challenges. It is an amazing view. Many of the legal frameworks of privacy, security, device regulation, standard of care, and healthcare fraud and abuse have not caught up with the exponential changes in medicine and technology. That said, legal considerations cannot be ignored, including the fraud and abuse laws. When entering into AI transactions in healthcare, remember that healthcare is a highly regulated industry, and build compliance into your business models. Once a culture of compliance is established, it is easier to push the go button.

Sarah Swank is an attorney in Nixon Peabody’s Healthcare group, providing strategic, regulatory and operational advice to health systems, hospitals and academic medical centers, as well as large national and regional physician organizations and telehealth and other startups. Her areas of experience include artificial intelligence, telehealth, fraud and abuse, compliance, payment reform and ACOs.