As AI Use Cases Grow in Healthcare, Executives Scramble to Grab the Reins

By Eric Wicklund

As healthcare organizations move swiftly to embrace AI, leaders are struggling to figure out how to keep governance from being pushed aside.

But what does governance really mean in a hospital or health system? And who gets to decide how and where AI is used?

At the recent HIMSS AI in Healthcare Forum in Boston, issues of compliance and liability were front and center for health system executives looking to chart a clear and effective AI strategy. Sunil Dadlani, chief information and digital officer for the Atlantic Health System, said AI regulation must be handled carefully, so that it doesn’t curb innovation.

The challenge lies in deciding where innovation has to take a step back so that compliance and liability can be addressed.

As Albert Marinez, chief analytics officer at the Cleveland Clinic, said, AI introduces “the art of the possible” to healthcare. “We know that there are problems that we can solve with generative AI that we could never solve before,” he said at the HIMSS event.

“Healthcare should be proactive in the establishment and enforcement of AI governance and guidelines,” Jim Barr, MD, Atlantic Health’s vice president of physician value-based programs and CMO of ACOs, said in an e-mail to HealthLeaders. “Governmental oversight will occur, but those in healthcare should display our ability to fully understand the issues and regulate ourselves.”

“Your reason to use AI tools can’t be just the need to say we’re on the cutting edge,” he added. “With ACOs the challenge is designing and managing successful implementation while continually measuring impact and ROI. You need to take into consideration the existing pain points for clinicians, practices and patients, their willingness to change, deploy a transparent QA/validation process to build trust, and a clear customer value proposition.”

Developing a governance strategy

So where does governance fit into a health system’s strategy?

Ravi Parikh, MD, MPP, an assistant professor of medicine and health policy at the University of Pennsylvania, assistant professor of medical ethics and health policy at the Perelman School of Medicine, and director of the Human-Algorithm Collaboration Lab, says federal efforts to establish a governance framework have resulted in vague guidelines that are a good starting point, but not enough.

“They’re sort of general guidelines on monitoring for bias and monitoring for performance drift,” he says. “But how that gets operationalized is actually really variable.”
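To make that variability concrete, consider what "operationalizing" drift monitoring could look like in practice. The sketch below is a hypothetical, deliberately simplified Python check, not a description of any system cited in this article: it compares a model's current performance against the score recorded at validation and flags a governance review when the drop exceeds a tolerance the committee has agreed on.

```python
from dataclasses import dataclass

# Hypothetical, simplified drift check: compare a model's recent performance
# metric (for example, AUROC) against the value recorded at validation and
# flag a review when the degradation exceeds an agreed tolerance.

@dataclass
class DriftCheck:
    baseline_score: float    # performance measured when the model was validated
    tolerance: float = 0.05  # how much degradation the committee will accept

    def evaluate(self, current_score: float) -> str:
        drop = self.baseline_score - current_score
        if drop > self.tolerance:
            return f"ALERT: performance dropped {drop:.3f}; governance review required"
        return f"OK: performance within tolerance (drop of {drop:.3f})"

# Illustrative numbers only: a risk model validated at 0.82, now measuring 0.74.
check = DriftCheck(baseline_score=0.82)
print(check.evaluate(current_score=0.74))
```

The point of even a toy check like this is that the thresholds, the metric, and the cadence of measurement are all local decisions, which is exactly the variability Parikh describes.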

The first step for many healthcare organizations is the creation of a governance committee, charged with managing how the health system negotiates vendor contracts as well as how AI is developed, tested, used and—most importantly—monitored.

Parikh says current committees are “very ad-hoc,” with a mixture of executives from areas such as clinical care, IT, legal, and finance. Few include the patient voice, which could be a critical oversight as AI products flood the consumer marketplace and patients ask for AI capabilities to plan and manage their healthcare.

Patrick Thomas, director of digital innovation in pediatric surgery at the University of Nebraska Medical Center, wondered at the HIMSS event whether healthcare leadership is even ready to govern AI for its patients. Patients and providers are doing their own research, he noted, forcing decision-makers to try to keep up.

Understanding the value of data

Beyond its makeup, a governance committee's key function is to understand data and data analytics, especially when outsourcing AI technology.

In dealing with vendors, health systems need to understand what datasets are used and how that data can affect outcomes. For instance, a company whose models are trained on data from a predominantly white population may be of little use to a hospital or health system whose patient population is ethnically diverse.
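One way a committee might probe that concern is to ask a vendor for performance broken out by patient subgroup and compare it against the overall figure. The minimal sketch below uses hypothetical numbers and a hypothetical tolerance, not data from any vendor mentioned here, simply to show the kind of check involved.

```python
# Hypothetical subgroup check: compare a model's accuracy across patient
# groups and flag any group that falls well below the overall average.

subgroup_accuracy = {          # illustrative numbers only
    "White": 0.91,
    "Black": 0.78,
    "Hispanic": 0.80,
    "Asian": 0.88,
}

overall = sum(subgroup_accuracy.values()) / len(subgroup_accuracy)
max_gap = 0.05                 # tolerance the committee agrees on up front

for group, accuracy in subgroup_accuracy.items():
    if overall - accuracy > max_gap:
        print(f"Flag: {group} accuracy {accuracy:.2f} trails overall {overall:.2f}")
```

A gap like the one this toy example would flag is precisely the signal that a model trained on an unrepresentative population may not transfer to a more diverse patient base.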

And when errors, such as hallucinations, occur, it may be hard to get a vendor to correct them.

A governance committee also has to be a standing, ongoing body, and that will cost time and money that smaller organizations don't have. Many of the standards now being considered cover basic AI functions rather than generative or predictive AI, which haven't yet matured enough for use in healthcare. But those tools will arrive soon, and the rules for governing them will have to evolve.

Parikh isn’t convinced that health systems or the federal government will be able to draft standards for an ever-evolving AI landscape. Instead, he expects organizations like the Coalition for Health AI (CHAI), the Trustworthy & Responsible AI Network (TRAIN), or the Digital Medicine Society (DiME) to create standards and adjust them as the technology evolves.

He also says the federal government could, in time, require healthcare organizations to become accredited to use different types of AI, possibly as part of a quality improvement program or even payment policy.

“We [could] have these accreditation systems that signal to developers which institutions are robust for both validating and deploying [AI] technology and which of those might not be certified for large language model generation … but might be more certified for other types of predictive AI solutions,” he says. “I suspect that people are going to realize that some health systems just have more capacity for governance and more data availability to be deploying these tools. And that’s a good thing for patients because we don’t want to be rolling these things out for patients where errors might be promulgated.”

Eric Wicklund is the associate content manager and senior editor for Innovation at HealthLeaders.