There’s Value to Generative AI in Healthcare—if Leaders Understand Its Limits
By Bevey Miner
Health systems have a front-row seat when it comes to identifying the issues that matter most in healthcare and developing real-world solutions, not just theoretical approaches.
“You could literally sit at a nursing station and within five days pick up pretty meaningful insights around what needs to be done,” Chris Waugh, vice president, chief design and innovation officer for Sutter Health, shared during ViVE.
So it’s easy to understand why ideas for AI innovation in health systems are exploding with use cases from clinical teams, especially when it comes to generative AI. But while large language models (LLMs) like ChatGPT are poised to make substantial contributions to patient care, their immediate value likely won’t be derived in the ways clinicians and healthcare leaders think.
A recent study suggests LLMs “still exhibit notable limitations and challenges,” including difficulty generating relevant responses and avoiding potential biases. And while leaders in digital innovation are making big bets on AI—something that came through loud and clear at ViVE—it’s also important to acknowledge the risks that come with relying on a technology that is still largely untested in healthcare.
Certainly, LLMs offer strong potential for lifting the administrative burden from clinicians, such as by capturing clinical notes for nurses and physicians and—with clinical oversight—generating discharge summaries and care plans. The technology also paves the way for communications that are more personalized or persona-driven.
“We have a deep bench of people evaluating these tools and deciding what gets implemented,” John Brownstein, senior vice president and chief innovation officer, Boston Children’s Hospital, shared during ViVE.
But while this technology—and AI in general—could make a substantial difference in easing clinicians’ workflows, we haven’t reached the point where generative AI could be used for clinical diagnosis or treatment direction. And, at a time when many healthcare organizations continue to struggle with margin pressures and labor shortages, it’s important to look for practical, low-risk ways to gain value from AI—and not just generative AI.
Here are key considerations for leaning into the power of AI while ensuring a responsible approach.
Recognize that AI is only as good as the data it draws from. “So long as there are clipboards in hospitals, unstructured data is going to remain a problem leaders will have to address,” said Mike Mosquito, CHCIO, PMP, CDH-E, who leads emerging technology and innovation special projects at Northeast Georgia Health System. Today, 30% of the world’s data is generated by healthcare—more than any other industry—yet only 57% of this data is used to make decisions. That’s because most healthcare data is unstructured, consisting of text in open note fields, handwritten text—including on digital faxes—and images. Without the ability to transform this content into structured data, healthcare organizations lose important opportunities to gain a more complete view of the patients and populations they treat.
“As we add the AI label to everything, we’re going to have this unstructured data problem,” Mosquito said during ViVE, adding, “You have to find a way to archive that data integration at some point. You have to have that system.”
One area where AI can help solve this problem is through the use of natural language processing (NLP) and machine learning (ML) for intelligent data extraction. Leading health systems use NLP and ML to transform unstructured data from sources such as PDFs, TIFFs, paper-based documents, scans, and images into structured data and deliver it to clinicians and staff directly within their workflows. This significantly expands access to critical information that may be needed for care. It also avoids delays in treatment that could potentially impact outcomes.
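To make the idea of intelligent data extraction concrete, here is a minimal Python sketch of turning free-text clinical note fragments into structured fields. This is purely illustrative and uses simple rule-based patterns; the production systems the article describes rely on trained NLP/ML models (for example, named-entity recognition) rather than hand-written rules, and the field names and note format below are hypothetical.

```python
import re

def extract_structured_fields(note: str) -> dict:
    """Pull a few structured fields out of a free-text clinical note.

    A rule-based sketch for illustration only; real intelligent data
    extraction uses trained NLP/ML models, not regular expressions.
    """
    fields = {}

    # Date of service, e.g. "Date: 2024-03-15"
    m = re.search(r"Date:\s*(\d{4}-\d{2}-\d{2})", note)
    if m:
        fields["date_of_service"] = m.group(1)

    # Blood pressure reading, e.g. "BP 138/86"
    m = re.search(r"BP\s*(\d{2,3})/(\d{2,3})", note)
    if m:
        fields["systolic_bp"] = int(m.group(1))
        fields["diastolic_bp"] = int(m.group(2))

    # Medication lines, e.g. "Med: lisinopril 10 mg"
    fields["medications"] = re.findall(r"Med:\s*([A-Za-z]+ \d+ ?mg)", note)

    return fields

note = """Date: 2024-03-15
Patient seen for follow-up. BP 138/86.
Med: lisinopril 10 mg
Med: metformin 500 mg"""

print(extract_structured_fields(note))
```

Once note content is structured this way, it can be routed into clinical workflows and queried like any other discrete data, which is what makes the downstream AI analysis the article describes possible.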
Until recently, the technology to address healthcare’s unstructured data problem didn’t exist, aside from tools that extract coding from clinical notes. Now, with NLP and ML, organizations can extract structured data from unstructured documents of any type. That’s a game changer for clinical care and efficiency and a vital tool for strengthening the power of AI analysis.
Establish the right governance for AI in health systems. Today, fewer than one out of five health systems that incorporate AI have AI-specific governance policies in place, a survey by KLAS Research and the Center for Connected Medicine found. Even fewer have policies that specifically address generative AI. Ideally, such policies would address the use of AI within the organization as well as data access considerations.
What’s holding health systems back from establishing AI governance in their organizations? Lack of internal expertise may be a factor. Now is the time to both lean into learnings from external health tech partners and build up internal knowledge to establish guardrails for AI innovation. As Bradley Crotty, MD, vice president and chief digital officer, Froedtert & Medical College of Wisconsin, shared during ViVE, “You definitely need a core competency of technology strategy in your organizations, even if you partner and outsource a lot of it, so that you can put things together. When you underinvest in that talent and that sort of ecosystem, you actually can’t drive the strategic results that you want.”
Ensure AI outputs rely on an identified source of truth. The need for a source of truth for AI-driven insight is one reason why some organizations are pursuing AI and data integration initiatives in tandem. Given that up to 80% of healthcare data is unstructured, this is an area where intelligent data extraction, powered by AI, can surface insights directly from source documents, a different approach than prompting a generative AI model. Workflow solutions also help eliminate the errors that result from manual data entry, which, until now, has been the only way to derive value from unstructured documents. Also important: this technology provides transparency into where the information behind AI outputs comes from, since there is always an unstructured source document to reference.
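The transparency point above amounts to keeping provenance: every structured value an AI system produces should carry a pointer back to the unstructured source document it came from. A minimal sketch of what such a record might look like follows; the type names, document identifiers, and field names are hypothetical, not from any particular product.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ExtractedField:
    """One structured value plus a pointer back to its unstructured source."""
    name: str
    value: str
    source_doc_id: str   # identifier of the scanned page, fax, or PDF
    source_snippet: str  # the exact source text the value was pulled from

def audit_trail(fields):
    """Render a human-readable provenance line for each extracted value."""
    return [
        f'{f.name} = {f.value!r} (from {f.source_doc_id}: "{f.source_snippet}")'
        for f in fields
    ]

# Hypothetical values extracted from one faxed page
fields = [
    ExtractedField("diagnosis", "type 2 diabetes",
                   "fax-2024-0117-p3", "Dx: Type 2 Diabetes Mellitus"),
    ExtractedField("a1c", "7.2",
                   "fax-2024-0117-p3", "HbA1c 7.2%"),
]

for line in audit_trail(fields):
    print(line)
```

Because each value retains its `source_doc_id` and the exact snippet it was extracted from, a clinician or auditor can always trace an AI output back to the original document, which is the kind of single source of truth the article argues for.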
“You’ve got to be able to have good communication as part of a connected community to serve the patient,” Mosquito said. Establishing the infrastructure for a single source of truth in AI analysis is one step toward ensuring all organizations benefit from a mature data view.
Build AI literacy within healthcare leadership teams, clinical teams, and the board. The ability to make the right decisions around AI of any form depends on educating team members and key stakeholders on what AI can do—and what it can’t. Generative AI carries a great deal of hype, and, increasingly, healthcare leaders believe that hype is warranted. But making the right decisions around whether and when to deploy any type of AI and how to appropriately oversee its use depends on the ability to cut through AI myths and misinformation. Only then can organizations operate from a place of credibility in their AI decision making.
By balancing the imperative to innovate with the need for a thoughtful approach to AI deployment, health systems can make decisions that provide true value for their organizations and those they serve.
Bevey Miner is Executive Vice President, Healthcare Strategy and Policy for Consensus Cloud Solutions.