Improving patient safety is one of the most urgent issues facing healthcare today. Patient Safety and Quality Healthcare (PSQH) is written for and by people who are involved directly in improving patient safety and the quality of care.
PSQH welcomes original submissions from all healthcare professionals on topics related to safety and quality. To reflect the breadth of work being done in this field, PSQH publishes a variety of articles: case studies, surveys, research, book and technology reviews, guest editorials, essays, and letters to the editor.
Now available online, the latest issue of Patient Safety & Quality Healthcare offers articles on a wide range of topics and a continuing focus on education. The cover story, “A Model for Simulation-Based Interprofessional Team Learning,” is by Brian Patterson, a fourth-year medical student at Wright State University’s Boonshoft School of Medicine in Dayton, Ohio. Patterson’s article and another article in the issue, “Training for Integrated Multidisciplinary Care,” by Ramon Cancino, MD, share a connection to the Institute for Healthcare Improvement’s Open School. Patterson and Cancino are among more than a dozen members of the Open School who responded to a call for essays, which has developed into a series of articles in print and online that highlight interprofessional education.
The series began in the May/June issue with “Interdisciplinary Education: More Than Just Buzzwords?” by Josh Adams, a student at the California School of Podiatric Medicine at Samuel Merritt University in Oakland. The series continues this month online, with “Labor and Management Working Together to Improve Patient Satisfaction,” by Preeti Jadhav, MD, a resident in internal medicine at Bronx Lebanon Hospital in Bronx, New York.
In addition to education, the current issue offers articles on improving care in behavioral health. Laura Young reports on a first-of-its-kind health information exchange in Arizona devoted to mental health information. Young, who is executive director of the Behavioral Health Information Network of Arizona (BHINAZ), explains,
Sponsored by seven nonprofit behavioral health organizations, BHINAZ gathers, routes, and queries data from a wide range of service providers, including substance abuse programs, crisis professionals, general mental health practitioners, and children’s behavioral health specialists, in three separate repositories for clinical data, documents, and patient consent.
In the second article, Judith Shields and her co-authors describe a patient satisfaction survey process developed by Liberty Healthcare to monitor and improve the quality of acute and primary care delivered to patients with behavioral health needs in a variety of settings, including in prisons.
By Victor Lee, MD
The Latin phrase “primum non nocere” means “first do no harm” and is a shared goal for clinicians across a wide variety of healthcare professions. It should also be a shared goal for all health information technology (IT) professionals. Health IT solutions, while intended to improve the quality, safety, and efficiency of patient care, may sometimes result in patient harm. Although a recent study shows an overall benefit from using health IT, it is still imperative that we continue to better understand and minimize any unintended consequences of these solutions.
One of the goals of the Food and Drug Administration Safety and Innovation Act (FDASIA) was to strike the right balance between fostering health IT innovation and optimizing patient safety. As required by FDASIA, the Food and Drug Administration (FDA), Office of the National Coordinator for Health IT, and Federal Communications Commission released a Health IT Report on Proposed Risk-Based Regulatory Framework in April 2014. The report places most clinical decision support (CDS) solutions under a health IT category called “health management health IT functions.” It states that the FDA does not intend to focus its regulatory oversight on such functions because CDS is generally associated with a favorable benefit-risk profile. Instead, the FDA recommends that health IT stakeholders work together to mitigate safety risks.
To that end, the report also included a recommendation for the formation of a national Health IT Safety Center envisioned since the 2012 Institute of Medicine report as a public-private entity. It would convene stakeholders committed to using health IT to make care safer and to continuously improving the safety of health IT. On July 17, 2015, ONC released its long-awaited Health IT Safety Center Roadmap, which proposes the following four focus areas:
- Collaborate on solutions to address health IT safety-related events and hazards.
- Improve identification and sharing of information on health IT-related safety events and hazards.
- Report evidence on health IT-related safety and on solutions.
- Promote health IT-related safety education and competency.
The Health IT Safety Center will depend on public-private stakeholder collaboration to succeed. Since the Health IT Safety Center will not conduct regulatory enforcement activities, and participation will be voluntary, we will only harvest the wisdom of the crowds if we can engage stakeholders in open dialogue about safety issues and best practices that can be shared non-punitively and disseminated broadly.
At a minimum, we need representation from providers and health IT implementers within healthcare organizations. We also need representation from the vendor community, including EHR, CDS, and other health IT vendors who will work alongside researchers and policymakers. Our experiences with patient safety are likely to be more similar than different, and our collective learnings could be extrapolated across the industry.
All healthcare organizations must work toward developing and enhancing a culture of safety. While the Health IT Safety Center may work toward the four focus areas, it is incumbent upon all constituents in the health IT ecosystem to leverage the work produced and to be engaged and committed to advancing patient safety through their own local programs.
This call to action must be communicated to organizations through the actions of executive leadership. Will they engage in the Health IT Safety Center? Are they staffing roles that are accountable for patient safety? Do they support a systems approach to patient safety through transparent organization-wide processes as opposed to blaming individuals? Are safety issues being addressed in a timely manner? These are some potential barometers of a safety culture.
In my workplace, we embrace the Swiss cheese model of system accidents that says a systems approach to understanding healthcare errors is far more likely to result in productive change than blaming individuals. Hence, there is a strong orientation toward internal process definition and change for CDS development and implementation.
For example, medication orders entered in order set templates may create patient safety issues if written erroneously. Simply entering a decimal point in the wrong position or selecting the incorrect unit of measure from a pick list can result in prescriptions that are orders of magnitude different from the intended doses. Therefore, we invest heavily in quality assurance processes to review all medication orders, both from a standalone perspective (e.g., does this medication order fall within normal dose limits for any condition and any population?) and in the context of a given order set (e.g., an antibiotic dosage might be acceptable for an adult with community-acquired pneumonia but not for a pediatric patient).
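The two-level review described above (standalone dose limits plus order-set context) can be sketched as a simple range check. This is a minimal illustration only, not actual QA tooling; the drug, the limits, and the context labels are all invented for the example and are not real clinical reference data.

```python
# Minimal sketch of a two-level dose check: an absolute plausibility
# bound for any use of the drug, plus a tighter bound for a specific
# order-set context. All values below are hypothetical, not clinical data.

STANDALONE_LIMITS = {
    "ceftriaxone": (0.125, 4.0),  # plausible grams-per-dose for any indication
}

CONTEXT_LIMITS = {
    # (drug, order-set context) -> acceptable range in that context
    ("ceftriaxone", "adult_cap"): (1.0, 2.0),       # adult community-acquired pneumonia
    ("ceftriaxone", "pediatric_cap"): (0.05, 0.1),  # hypothetical pediatric range
}

def check_order(drug, dose, context=None):
    """Return a list of safety flags; an empty list means the dose passed."""
    flags = []
    lo, hi = STANDALONE_LIMITS.get(drug, (None, None))
    if lo is not None and not (lo <= dose <= hi):
        flags.append("outside absolute limits for any indication")
    if context is not None:
        ctx = CONTEXT_LIMITS.get((drug, context))
        if ctx and not (ctx[0] <= dose <= ctx[1]):
            flags.append(f"outside limits for context '{context}'")
    return flags
```

Note how a dose can pass the standalone check yet still be flagged in context: `check_order("ceftriaxone", 2.0, "pediatric_cap")` is within the absolute limits but far outside the pediatric range, which is exactly the order-set-level catch described above. A real CDS pipeline would draw its limits from a curated drug knowledge base and handle weight-based and indication-specific dosing.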
Processes have also been developed to escalate the evaluation of potential patient safety issues and the rapid implementation of corrective actions if necessary. As an example, our order sets and plans of care can be transferred electronically to electronic health record (EHR) systems. Whenever updates are applied to our software or to that of our EHR vendor partners, subtle changes can sometimes produce unexpected results as we test the integration between systems. Our employees are empowered to trigger our evaluation process to call on clinical and technical resources to evaluate whether potential patient safety issues are present—and if so, their severity and likelihood. This work then informs subsequent steps to correct and prevent the issue.
We look forward to elaborating on these and other best practices with the health IT community via the Health IT Safety Center, as well as learning other safety best practices from our peers. Openness and transparency in this forum can go a long way toward filling the holes in patient safety and minimizing unintended consequences of CDS and other health IT solutions.
What an expert diagnostician and physician podcasters have learned from radio’s favorite car mechanics.
One can only imagine how radio personalities Ray Magliozzi and his late brother, Tom, would react to news that physicians are taking their approach to solving automotive problems and applying it to improving medical diagnosis. Recently, patient safety experts have begun to pay more attention to understanding and preventing diagnostic failure; those efforts often focus on the cognitive process—clinical reasoning—that physicians use in diagnosis. Looking beyond traditional lectures and textbooks, some experts find that the Magliozzis’ method of car repair—build a relationship, take the history, generate a hypothesis, and eventually arrive at a diagnosis and treatment plan—also works well in medicine.
In weekly episodes of Car Talk on National Public Radio, the Magliozzis entertained listeners and themselves with real-time analysis of problems described by callers in need of help with their cars and, sometimes, life’s other problems. The program became a local favorite soon after it debuted in 1977 on WBUR, Boston University’s radio station. Ten years later, it went national and developed cult status.
In JAMA in 2011, expert diagnostician Gurpreet Dhaliwal, MD, analyzed the Magliozzis’ cognitive method and recommended that medical students and faculty learn from “media’s finest example of clinical reasoning” (p. 918).
More recently, two internists in New Zealand weave references to the Magliozzis’ Car Talk—including folksy theme music and friendly style—through a new podcast series called IM Reasoning, “conversations to inspire critical thinking in clinical medicine and education.” The series begins with a brief introduction focused on hosts Nic Szecket and Art Nahill, both of whom settled in NZ as adults. Nahill is originally from the Boston area, which may explain the series’ close ties to Car Talk. As of early August, three episodes are available: “Setting the Stage, How Drs Think,” “Biases,” and “Differential Diagnosis and Problem Representation.” On the website, links to supporting articles accompany each episode.
Listening to the podcasts, I am reminded of the universal appeal and relevance of many lessons in safety and quality improvement. Szecket and Nahill don’t dumb down their discussion of clinical reasoning and case-based examples, but their approach—like the Magliozzis’—is honest and kind in a way that should appeal to anyone with an interest in how humans think and work together to solve problems.
(Hat tip: I learned about the podcast series through messages posted to a Listserv moderated by the Society to Improve Diagnosis in Medicine.)
In July, investigative news organization ProPublica published a “Surgeon Scorecard,” an interactive database that supplies consumers with information about the performance of individual surgeons for specific operations. To construct the scorecard, ProPublica used five years of Medicare data (2009–2013) covering 2.3 million surgical procedures and 17,000 surgeons. Patients were enrolled in Medicare’s fee-for-service program, and results were risk-adjusted for the patient’s age and health, as well as the hospital where the surgery took place. Patients admitted from emergency departments or nursing homes were excluded. The scorecard covers eight common surgeries, including total knee and hip replacements, and uses inpatient mortality and 30-day readmissions as indicators of patient harm.
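ProPublica’s published methodology is considerably more sophisticated than can be shown here, but one idea behind risk-adjusted surgeon ratings is easy to illustrate: raw complication rates are unreliable for low-volume surgeons, so estimates are pulled (“shrunk”) toward the overall mean in proportion to how little data a surgeon has. The sketch below is a hypothetical toy, not ProPublica’s model; the rates and the `prior_weight` value are invented.

```python
# Toy illustration of shrinkage: blend a surgeon's raw harm rate with
# the overall rate, weighting the overall rate as if it contributed
# `prior_weight` extra cases. Not ProPublica's actual methodology.

def shrunk_rate(harms, cases, overall_rate, prior_weight=50):
    """Empirical-Bayes-style blended estimate of a surgeon's harm rate."""
    return (harms + prior_weight * overall_rate) / (cases + prior_weight)

overall = 0.03  # assumed overall harm rate across all surgeons

# Surgeon A: 1 harm in 10 cases. Raw rate is 10%, but the sample is tiny,
# so the blended estimate is pulled strongly back toward 3%.
low_volume = shrunk_rate(1, 10, overall)

# Surgeon B: 30 harms in 1,000 cases. Raw rate is 3%, and with ample data
# the blended estimate stays essentially at the raw rate.
high_volume = shrunk_rate(30, 1000, overall)
```

The design point is that a single bad outcome should not brand a surgeon who has performed only a handful of Medicare cases, while a surgeon with a thousand cases is judged almost entirely on his or her own record.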
In addition to giving consumers access to information, the scorecard and related materials give policy experts and healthcare professionals a springboard for deeper conversations about how best to measure and publish comparative data about provider performance. ProPublica.org includes links to the interactive scorecard and supporting materials:
Surgeon Scorecard by Sisi Wei, Olga Pierce, and Marshall Allen for ProPublica
In addition to the searchable database, the scorecard website guides consumers through understanding what information is available and how it was constructed. The website links to many other resources, including:
How We Measured Surgical Complications by Olga Pierce and Marshall Allen
A brief description of the methodology used to create the Surgeon Scorecard
Assessing Surgeon-Level Risk of Patient Harm During Elective Surgery for Public Reporting by Olga Pierce and Marshall Allen
Detailed analysis of the development of the Surgeon Scorecard
Why We Are Publishing Surgeons’ Complication Rates by Stephen Engelberg
‘Dr. Abscess’ and Why Surgeon Scorecard Matters by Stephen Engelberg
Ashish Jha, MD, who consulted on the project, acknowledges that the scorecard is imperfect but believes it is an important, disruptive first step toward transparency.
By Paul Batalden and Earl Conway
The poet William Stafford describes the way that words sometimes seem, like magic, to come together and live together. In some ways, his simile—“like magic”—remains true about “Every system is perfectly designed to get the results it gets,” as noted in the Editor’s Notebook column in the July/August 2008 issue of Patient Safety & Quality Healthcare. We write today to offer some additional context.
We worked together as co-chairmen of the U.S. Quality Council of the Conference Board in the 1970s. While together, we learned that both of us had been tremendously influenced by the thought and practice of Dr. W. Edwards Deming. His invitation to think in terms of results generated by the systems within which we all work was deeply important to us then—and now. His teachings build on the Theory of Variation and the need to redesign (i.e., improve) the system to narrow variation as the principal cause of error.
Earl had the benefit of knowing and working with David Hanna at Procter and Gamble. Hanna, a student of and expert in organization development, eventually put his thoughts together in a book, Designing Organizations for High Performance, part of the Addison-Wesley [now Prentice Hall] OD series of volumes. He shared with Earl (and in his book, p. 36) the insight of his P&G colleague from the U.K., Arthur Jones: “All organizations are perfectly designed to get the results they get!” Earl brought that insight to the attention of the U.S. Quality Council in one of our meetings.
As Paul thought about the profound truth of that insight and of its use for work with health professionals, some of whom were less interested in “organizations” than in “systems,” Paul created a corollary to the Jones words: “Every system is perfectly designed to get the results it gets” and has shared the words with many others.
We write to share our joy in bringing these additional words of context to others—but in some ways still appreciating the poet’s insight that words come together and live together in what seems—at some level—to be “magic.”
Saying, “We can do better,” Jim Bagian declared that root cause analysis (RCA)—the subject of a new report by the National Patient Safety Foundation (NPSF)—offers uncommon potential to improve safety and that, in general, healthcare has not used it well and wasted opportunities to prevent future harm.
Bagian and Doug Bonacum were co-chairs of the working group that developed the new report for NPSF. They also led “Patient Safety Science: Successful Practices to Optimize Root Cause Analysis,” an all-day workshop offered the day before NPSF’s annual Patient Safety Congress, held earlier this year in Austin, Texas. Through their questions and lively discussion, those who attended the workshop endorsed Bagian’s sentiments about “doing better.”
Following introductions, the workshop began with attendees sharing their success stories and disappointments related to RCAs. Some described their organizations’ RCA processes as being well developed and effective, while others expressed frustrations related to poor training, inadequate leadership, and lack of follow-up. Bagian and Bonacum identified action, improvement, and measurement as crucial components of RCAs and regretted that the name “root cause analysis” leaves out the most important steps.
The report, RCA2: Improving Root Cause Analyses and Actions to Prevent Harm, renames the process root cause analysis and action—“squaring” the A—and says that its only purpose is to prevent harm:
If actions resulting from an RCA2 review are not implemented, or are not measured to determine their effectiveness in preventing harm, then the entire RCA2 activity may be pointless. (p. 2)
The report, which was developed with support from The Doctors Company Foundation, is available for free download on the NPSF website. Jim Bagian and Doug Bonacum will discuss the report during a free webcast at 1 p.m. ET on Wednesday, July 15, 2015. Shortly thereafter, a recording of the webcast will be available on the report’s webpage.
Ensuring that clinicians are treating the patient they intend to treat—accurate patient identification (ID)—is one of many perennial patient safety problems. Some of the perennials, including handwashing, wrong-site surgery, and patient ID, seem simple enough on the surface. They remain unsolved, however, because they are deceptively complex. It is understandable that “improvement fatigue” may set in for organizations and individuals who have been addressing an unsolved problem, sometimes unsuccessfully, for years.
The annual conference known simply as HIMSS will take place in Chicago next week, April 12–16. By far the largest educational program and exhibition focused on health information technology each year, HIMSS is daunting and exciting. Each year, preparing to attend HIMSS begins to demand attention shortly after New Year’s. The conference usually takes place in February. When it is held in Chicago—where the parent organization, the Healthcare Information and Management Systems Society, is based—the schedule is moved back to April for obvious reasons. Of course, last time HIMSS was held in Chicago in April, it snowed. This year, few of us will be shocked by cold precipitation after the winter we’ve had.
This is Patient Safety Awareness Week (PSAW), which is focused on the theme “United for Safety.” The National Patient Safety Foundation (NPSF) explains that the theme is meant to rally all stakeholders—from care providers, corporate executives, and vendors, to patients and their families—around a commitment to “keeping patients and those who care for them free from harm.” The campaign emphasizes patient engagement, communication among patients and clinicians, and the way the quality of relationships affects the delivery of care. United for Safety is also the theme of NPSF’s Patient Safety Congress, to be held next month in Austin, Texas.
On January 28, the Institute for Safe Medication Practices (ISMP) published an alarming report exposing the dirty secret about drug safety in America. Its report properly chronicled that pharmaceutical companies are largely responsible for collecting and reporting adverse drug events to the Food and Drug Administration (FDA) and, most notably, that they’re doing a substandard job at it.
Although consumers and providers may also report adverse events, the FDA Adverse Events Reporting System (FAERS) relies primarily on reports from pharmaceutical companies. Those pharmaceutical companies view this task as burdensome and expensive. According to the Wall Street Journal, they have outsourced it for the lowest possible cost or allocated the fewest resources to it internally. Clearly, there’s a conflict of interest in a system that relies on self-regulation. And that conflict is only amplified when those tasked with monitoring post-approval drug safety turn around and outsource the responsibility to the lowest bidder. The result is a drug safety monitoring system that, according to ISMP, “suffers from a flood of low quality reports from drug manufacturers.”
As many expected, a malpractice lawsuit was filed on Monday, January 26, against the physicians and endoscopy clinic that treated the late Joan Rivers. CNN reported that the family of the 81-year-old comedian wants to "make certain that the many medical deficiencies that led to Joan Rivers' death are never repeated by any outpatient surgery center."
As this lawsuit unfolds in the months ahead, it will examine alleged errors made at Yorkville Endoscopy in New York City, including failing to identify Ms. Rivers’ deteriorating vital signs and respond in a timely manner. Significantly, the lawsuit will also review numerous deficiencies cited by the New York State Department of Health regarding basic patient rights, including failing to obtain informed consent for each procedure performed.
The high-profile nature of this case should be expected to increase the scrutiny of healthcare organizations’ informed consent processes and documents. Articles in the lay press will likely advise the public to do the following...
In the title of every post in his long-running blog, Mark Neuenschwander offers readers a snapshot of what’s on his mind. From “I’ve been thinking about Tina, Sarah, Slinkies, and how improv may improve your group’s productivity,” to “…Dodgers, Webinars, leadership, and the importance of sitting near an exit,” Neuenschwander covers a wide swath of what’s important in life while always zeroing in on medication safety, point-of-care technologies (especially barcoding), and foundational principles of patient safety. He’s always entertaining, real, and informative.
Most recently, in “…drugs, wars, Christmases, and your hospital,” Neuenschwander describes a medication error that killed Loretta Macpherson in December 2014. He points out that although more than two-thirds of hospitals in the United States now (finally!) barcode scan patients and medications at the bedside, only five or six percent scan to verify the component ingredients in compounded products.
PSQH editor Susan Carr shares her experience hearing Don Berwick, who recently ran for governor of Massachusetts and previously served as CEO of the Institute for Healthcare Improvement and as administrator of the Centers for Medicare and Medicaid Services, speak on election night.
I pay close attention to efforts to quantify the number of errors and amount of harm caused by medical error, and I am often uncertain what to make of the results, at least as they are sometimes reported.
Numbers are powerful, especially when conveyed in concrete, familiar units.
“…180,000 die each year partly as a result of iatrogenic injury, the equivalent of three jumbo-jet crashes every 2 days” (Leape, 1994).
…the results of the study in Colorado and Utah imply that at least 44,000 Americans die each year as a result of medical errors. The results of the New York study suggest the number may be as high as 98,000 (Institute of Medicine, 2000).
Those numbers caught everyone’s attention years ago and became mantras for the patient safety movement. More recently, John James’ estimate that between 210,000 and 440,000 patients die prematurely each year due to preventable harm renewed discussion about quantifying the magnitude of the problem (James, 2013).
Our responses to the news that Ebola had been diagnosed in the United States for the first time reveal gaps in our understanding of how to protect others and ourselves from Ebola and other infectious diseases. When we overreact in fear and take comfort from actions that don’t actually make us safer, we may overlook aspects of our systems and institutions that really do put us at risk.
The case of Thomas Duncan, diagnosed in Dallas with Ebola on September 26, 2014, reveals how unreliable our systems can be, especially under stress. The actions of Texas Health Presbyterian Hospital Dallas, where Duncan went for emergency care when he first became ill, reveal broad problems with implications that reach beyond the immediate response to one patient with Ebola.
First, I must disclose a conflict of interest: I am co-author of one chapter in this edited collection, and the editor, Lorri Zipperer, is a close friend and colleague. I was predisposed to like this book, and as I spend more time with the other chapters, my respect for Lorri’s vision and the resulting text continues to grow.
Efforts to improve patient safety should be informed by the best evidence, information, and knowledge (EI&K) available, but often they are not. This is a familiar, if unexamined, problem in this time of “information overload,” but first I should review how the terms EI&K are defined and used in the book. These terms are common but not often used in the safety and quality improvement literature as precisely as they are in Patient Safety:
- Evidence is the result of research, of tested hypotheses, such as trials and studies published in peer-reviewed publications across all disciplines (not just medicine).
- Information is data that has been analyzed, organized, and printed/presented for a specific use.
- Knowledge is what individuals know, either implicitly or explicitly. Knowledge is dynamic, with elements of action or experience.
Many of us are familiar with the challenge posed by the abundance of evidence, information, and knowledge currently available about all things. It is exhilarating that we live in a time of rich and increasingly available resources, but it is rarely self-evident how best to access the EI&K we need or easy to feel confident that we’ve found the best advice on a given subject. How do we know, for example, that what we really need is on page 10 or 25 of our Google search results, or will appear only if we use a particular search word? Social media such as Twitter and email discussion groups have made experts more accessible than ever, but knowing who has the answer to your question, having the time to search, and simply knowing where to begin are not always easy. These challenges exist in patient safety, too, with potentially profound implications for patients and all who are involved in their care.
I had the radio on as I drove to the market, but I wasn’t really listening until I heard “It's very important to have a culture of safety that says, if you've got a problem, talk about it.” I didn’t recall ever having heard the phrase “culture of safety” outside of safety improvement circles.
AdverseEvents’ primary customers are health plans, PBMs, health systems, and hospitals. We provide these healthcare decision makers with important insight on drug safety concerns that were not revealed during clinical trials and are not being communicated by the manufacturer.
New guidance from the Centers for Medicare & Medicaid Services (CMS) recommends monitoring of patients receiving opioids.
A recent CDC report found that 1 in 25 hospital patients develop healthcare-associated infections (HAIs). According to the report, about 75,000 of these patients die during their hospital stay.