Looking at 2025 Trends for AI in Healthcare
By Heather Bassett, MD
As healthcare organizations explore the potential of AI, they face a complex challenge: balancing innovation with caution. While some embrace AI as a solution to numerous problems, others remain concerned about its risks, particularly in high-stakes areas like patient care. How will healthcare leaders navigate this tension in 2025?
In the coming year, healthcare organizations will show increased interest in developing long-term AI strategies, optimizing workflows, managing risks, and ensuring responsible use of the technology. A key theme throughout the year will be finding the right balance, both across the industry and within individual business units, between embracing AI's promise and safeguarding against its potential drawbacks, especially in critical areas such as data privacy and patient safety.
Emerging long-term AI strategies
Within healthcare organizations that experimented with generative AI (GenAI), two distinct camps emerged: one approached GenAI believing it could solve nearly any problem the industry could throw at it; the other preached caution, worried about the risks and focused on the drawbacks. Now we have a better understanding of the practical healthcare problems GenAI can solve, and teams can lean on that understanding to develop long-term AI strategies.
For many, that will involve making informed decisions about partnering with AI vendors. Both parties will need to vet each other's protocols, priorities, and mutual interests before signing any contractual agreements. Organizations with a clearly defined long-term AI strategy will find it easier to identify vendors whose visions align with their own, and a firmer grasp of the technology will make for more concrete, actionable dialogue.
Optimizing workflows
Having a long-term strategy is great, but each unit within an organization must start somewhere on its journey with AI. For many, that will require being inefficient first and becoming efficient later. Setting your organization up for future success will require investments of time and resources in the near term.
Exactly how much time and how many resources will differ for every business unit. Doctors, nurses, and clerical staff, for example, will each have to adapt to AI workflows in their own way. Ultimately, the payoff will come in the form of more efficient processes that reduce the potential for employee burnout and mitigate staffing shortages, both now and in the future. A few early examples include shortening the time it takes to hire job applicants, automating documentation of in-home clinician visits, and automating data-organization tasks.
Balancing risk vs. reward
The race to adopt and adapt AI tools demands a cost/benefit analysis in every industry. That analysis is especially critical in healthcare, where a high-risk gamble can truly be a matter of life and death. But there are certain problems AI can solve where a mistake will not result in, for example, a patient being prescribed the wrong medication or dosage. Figuring out where to draw the line of risk tolerance will be a critical challenge for organizations in the year ahead.
Of course, hospital medication errors happen without AI. Humans make mistakes. Part of setting that balance point between risk and benefit is combing the current landscape for processes that can be improved, not perfected, via AI. Retrieval-Augmented Generation (RAG) is a method by which large language models (LLMs) dynamically retrieve relevant information and incorporate it during the generation process, significantly reducing the incidence of hallucinations, or factually incorrect outputs. The risks go down as the technology improves; in 2025, organizations will weigh which processes are ready to be improved with today's technology against their own risk tolerance.
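To make the retrieve-then-generate pattern concrete, here is a minimal sketch in Python. The toy knowledge base, the keyword-overlap retriever, and the prompt format are all assumptions for demonstration; production systems use embedding-based search and send the augmented prompt to a vetted LLM endpoint.

```python
import re

# A toy "knowledge base" of reference snippets (invented content,
# for illustration only).
DOCUMENTS = [
    "Amoxicillin: typical adult dose is 500 mg every 8 hours.",
    "Lisinopril: typical starting dose is 10 mg once daily.",
    "Metformin: typical starting dose is 500 mg twice daily.",
]

def tokenize(text: str) -> set[str]:
    """Lowercase, punctuation-insensitive word tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query; a stand-in
    for the embedding similarity real retrievers use."""
    q_tokens = tokenize(query)
    ranked = sorted(
        DOCUMENTS,
        key=lambda doc: len(q_tokens & tokenize(doc)),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved text so the model answers from source
    material rather than from memory; this grounding is what
    reduces hallucinated facts and dosages."""
    context = "\n".join(retrieve(query))
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

# In a real pipeline, this augmented prompt would go to an LLM.
print(build_prompt("What is the usual starting dose of metformin?"))
```

The design choice worth noting is that the model is asked to answer only from the retrieved context, which is the mechanism that limits factually incorrect output.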
Interoperability and data-sharing
The industry-wide move toward interoperability carries many challenges but also immense potential: improving access to data, creating more complete longitudinal patient datasets, and ultimately improving medical decision-making. Generative AI will help with data extraction, particularly from unstructured data, and with communication. As more systems buy in, our ability to access relevant patient data improves.
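As a hedged illustration of that extraction step, the sketch below shows how an LLM might turn a free-text visit note into structured fields. The note, the JSON schema, and the call_llm stub are hypothetical placeholders, not any particular vendor's API; the point is the pattern of unstructured text in, structured record out.

```python
import json

# A hypothetical free-text visit note (invented for illustration).
NOTE = (
    "Pt seen at home 3/14. BP 142/88. Reports taking lisinopril "
    "10 mg daily. Mild ankle swelling noted. Follow up in 2 weeks."
)

PROMPT = (
    "From the note below, extract JSON with keys 'blood_pressure', "
    "'medications', and 'follow_up'. Note:\n" + NOTE
)

def call_llm(prompt: str) -> str:
    """Placeholder for whatever vetted model endpoint an
    organization uses; a canned response stands in here so the
    sketch runs end to end."""
    return json.dumps({
        "blood_pressure": "142/88",
        "medications": ["lisinopril 10 mg daily"],
        "follow_up": "2 weeks",
    })

# The unstructured note becomes structured, queryable data.
record = json.loads(call_llm(PROMPT))
print(record["medications"])  # ['lisinopril 10 mg daily']
```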
This trend also points to the importance of vetting third-party vendors for how they use AI. Learning how potential business partners will protect your organization’s data—especially your patients’ data—is critical. Expect regulatory restrictions to evolve to both encourage interoperability and keep patient data secure.
Responsible AI, new AI regulations
With changes in personnel underway at the federal and state levels, potential new government policies will stir unpredictability in the AI regulatory environment. A primary concern among government agencies is what organizations can do with AI and Medicare patients' data. States are leading the way, and more regulations are expected as people become more familiar with the consequences of AI use cases in healthcare.
In addition to new regulations, the industry's best-practice standards and recommendations will evolve. The Coalition for Health AI (CHAI), an independent nonprofit, has been proactive about promoting and defining responsible AI use in collaboration with industry stakeholders. Expect more conversations around maintaining a "human in the loop": proceeding with caution rather than fully automating important processes in healthcare.
Dr. Heather Bassett is the Chief Medical Officer at Xsolis, an AI-driven health technology company with a human-centered approach. With more than 20 years' experience in healthcare, Dr. Bassett oversees Xsolis' data science team, denials management team, and physician advisor program. She is board-certified in internal medicine.