Peer Review: Why Current Models Undermine Safety Culture

November / December 2012

The field of radiology is known for its rapid innovations in technology. We continually offer up exciting new ways to image the body, but when it comes to improving the accuracy of professional interpretations, little meaningful progress has been made in the last 50 years. This is true in part because current radiology peer review models are insufficient and, in some circumstances, even harmful to quality improvement efforts. To achieve our most important purpose—the provision of safe and high-quality healthcare—radiologists must find a new and more effective way to conduct peer review.

Models for Peer Review
Despite the lack of substantive improvement in diagnostic accuracy, all radiology groups are under increasing pressure from hospitals, the Joint Commission, and the payor community to participate in some form of peer review. Most commonly, radiologists conduct peer review using the model developed by the American College of Radiology (ACR), known as RADPEER™. In the RADPEER model, radiologists evaluate each other's work in the course of their normal reading activity, comparing prior studies to the current study being interpreted. Cases are self-selected, and the participants rate and document their level of agreement with the colleague's interpretation of the prior study. The results of these reviews are then self-reported to the ACR for aggregation and analysis.

RADPEER
The RADPEER™ model is efficient and cost-effective because the peer review activity is easily embedded into normal radiology workflow. A significant limitation of the model is its lack of a statistically valid sampling methodology. Bias is also a problem because evaluators are not blinded to the identity of the original radiologist. These factors create doubt about the reliability of the program, and when its results are compared to other published data, it appears that the program substantially underestimates error rates. While research suggests that RADPEER goes too easy on its participants, limiting opportunities for effective learning, the draconian approach taken in more traditional peer review inhibits learning for a different reason.

Traditional Medical Peer Review
The traditional medical peer review model is entirely reactive, seeking to evaluate a known mishap that in most cases was associated with some harm to the patient. When an incident is referred for peer review, a multidisciplinary committee convenes to perform a holistic evaluation of the case, identifying all of the events that may have contributed to the patient's poor outcome. Typically there are multiple factors involved, some of which may help mitigate the physician's "fault" for the medical error. Ultimately, it is the responsibility of the committee to determine whether the physician deviated from the standard of care and, if so, what the consequences should be. The process, while described by many as an educational experience, is in fact more similar to a judicial proceeding in which the facts are investigated, testimony is heard, judgment is passed, and punishment is assigned in the form of written warnings or the restriction or revocation of privileges.

The American Medical Association’s Code of Medical Ethics addresses the need for peer review:

Medical society ethics committees, hospital credentials and utilization committees, and other forms of peer review have been long established by organized medicine to scrutinize physicians’ professional conduct. At least to some extent, each of these types of peer review can be said to impinge upon the absolute professional freedom of physicians. They are, nonetheless, recognized and accepted. They are necessary, and committees performing such work act ethically as long as principles of due process are observed.

The AMA's writings on due process go on to confirm that peer review is essentially a judicial proceeding, describing fair hearing procedures and stating in clear terms that the purpose of the process is to enable physicians to "pass judgment on a peer."

Proponents of the traditional medical peer review process point to its inherent purpose, which is patient protection; the need to protect patients from medical error is undeniable and well documented. In its seminal 1999 report, To Err Is Human: Building a Safer Health System, the Institute of Medicine asserted that at least 44,000 people, and perhaps as many as 98,000 people, die in hospitals each year as a result of medical errors that could have been prevented. Thirteen years later, the Institute of Medicine's new report, Best Care at Lower Cost: The Path to Continuously Learning Health Care in America (2012), warns that the healthcare system has failed to make adequate progress in the arena of patient protection. The report suggests that up to one-third of hospitalized patients are harmed during their stay, and it further estimates that 75,000 deaths could have been prevented in 2005 if all states had delivered care on par with best practices.1

1This analysis is based on a broad scorecard of indicators, such as healthcare access, prevention, population health, and mortality, that ranked the states' performance against one another. Across all the dimensions evaluated, Vermont was the highest-ranking state; if all states had delivered healthcare as well as Vermont, 75,000 deaths could have been prevented.

The recommendations in the report seem improbable, if not impossible, when attempted within the confines of the traditional medical peer review model. In addition to its premise that financial incentives should be designed to encourage safer care, the report also points to the need for transparency about performance and a cultural commitment to learning and safety. Traditional peer review’s focus on punishment undermines the willingness of physicians to freely admit and examine their mistakes. While hospital leaders would dispute the idea that the purpose of peer review is punitive rather than educational, the facts tell a different story.

Research by the Agency for Healthcare Research and Quality (AHRQ) indicates that the majority of physicians feel their organizations employ a blame-oriented rather than a solutions-oriented approach to error prevention. In AHRQ's Hospital Survey on Patient Safety Culture: 2012 User Comparative Database Report (2012), more than half a million staff members representing 1,128 hospitals were surveyed. The majority of respondents—including physicians—reported feeling that their mistakes are held against them. This is particularly interesting when considered against the backdrop of the recent Office of Inspector General (OIG) report entitled Hospital Incident Reporting Systems Do Not Capture Most Patient Harm (2012). Based on its study of 2008 claims, the OIG concluded that 86% of incidents that harmed Medicare beneficiaries were not reported to the hospital's incident reporting system. The report named the following reasons for this significant failure in reporting: lack of staff training on what harm should be reported, time constraints, and the belief that someone else would take care of the reporting. Although the OIG did not cite it as a possible reason for non-reporting, in light of AHRQ's findings it is highly probable that fear of punishment also kept physicians from reporting incidents.

Case Study
To illustrate the depth of the problem, consider the following actual case, in which only individual and institution names have been changed for confidentiality reasons. Dr. Jones is a teleradiologist with a large group that holds contracts to read for a number of county hospitals. He specializes in reading for emergency departments and interprets approximately 19,000 x-ray, ultrasound, and CT cases per year, a volume within the typical range for a radiologist with his case mix. Based on double-blind peer review of more than 1,400 randomly selected cases in 2011, it is known that Dr. Jones has a clinically significant error rate of 0.9 percent. This rate compares favorably to industry research indicating that the range of clinically significant interpretive error for radiologists in community practice is between 0.8 and 9.2 percent. Extrapolated out, a typical radiologist reading Dr. Jones' caseload would be expected to make between 152 and 1,748 errors per year; Dr. Jones makes about 171. Dr. Jones receives feedback on all errors detected in his group's peer review program and discloses all of his clinically significant errors to the referring physician and hospital to enable course correction in the patient's treatment if needed. The approach that Dr. Jones' radiology group takes to peer review is consistent with the recommendations of healthcare quality thought leaders, who emphasize the importance of continual, objective assessment of performance. Dr. Jones' decision to disclose his errors is consistent with the Institute of Medicine's recommendations in favor of transparency and accountability.
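The extrapolation above is simple arithmetic: annual case volume multiplied by error rate. As a minimal sketch, assuming a flat volume of 19,000 cases per year and using only the rates quoted in this case study (the function name is purely illustrative), the figures can be reproduced in a few lines of Python:

    # Back-of-the-envelope check of the error extrapolation in the case study.
    # All figures come from the article itself; nothing here is real patient data.
    ANNUAL_CASES = 19_000  # Dr. Jones' approximate yearly reading volume

    def expected_errors(error_rate: float, cases: int = ANNUAL_CASES) -> int:
        """Expected clinically significant errors per year at a given rate."""
        return round(cases * error_rate)

    # Published community-practice range of 0.8 to 9.2 percent:
    print(expected_errors(0.008))  # 152 errors per year at the low end
    print(expected_errors(0.092))  # 1748 errors per year at the high end

    # Dr. Jones' measured rate of 0.9 percent:
    print(expected_errors(0.009))  # 171 errors per year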

Taking all of those facts into account, consider the following correspondence sent to the group by the chief quality officer for one of the county hospitals for which Dr. Jones reads.

Steve [administrator, radiology group]:

Concerning Dr. Jones, the Medical Staff requested that he no longer provide service to County Hospital after peer review results were reported to the Medical Executive Committee (MEC) in which a subdural hemorrhage, intraventricular hemorrhage and subarachnoid hemorrhage were missed on a 61-year-old motor vehicle accident patient. The patient also had a blood clot and was placed on anticoagulant therapy when the brain CT scan was reported as negative. The error in reading the brain CT scan was found 48 hours after the initial read by Dr. Jones. Anticoagulant therapy was stopped immediately and thankfully, there were no adverse outcomes for the patient. Julie [radiology group’s quality director] was a tremendous help in facilitating the peer review, so she is aware of the case.

Thank you for your attention to the concerns of County Hospital. Since Dr. Jones is due for reappointment in June, it would be to his advantage for the group to send a letter stating privileges are no longer needed at County Hospital rather than having the MEC deny his reappointment, which he would always be required to disclose in the future when applying/reapplying for privileges at all other facilities.

Thanks very much,
Jennifer
[hospital’s chief quality officer]

Consider for a moment the implications of this correspondence for Dr. Jones, for his radiology group, for the hospital, and for future patients. Dr. Jones' error was discovered 48 hours after the initial reading through the group's internal peer review program, in which cases are randomly selected and evaluated through an objective, double-blind review process. Neither the patient's referring physician nor County Hospital had suspected the error, and the hospital would likely have detected it only if the patient's condition had seriously worsened, prompting additional brain imaging. Because Dr. Jones' group detected the error, and because he rapidly disclosed it once it was detected, the unsafe course of anticoagulation therapy was discontinued. This is the classic "no harm" situation, in which the patient received improper treatment as a result of the interpretive error but harm was narrowly avoided.

By looking for errors, the radiology group may have avoided the malpractice liability associated with a bad outcome, but in the process it exposed itself to pressure from the hospital to remove Dr. Jones from the account. This was difficult for the group to manage administratively because it relied heavily on him for coverage at the hospital. By disclosing the error, Dr. Jones also lessened the chances of a malpractice suit, but in exchange he suffered a loss of income from this hospital and was threatened with a negative reference that could limit or preclude his securing other employment in the future. The hospital's actions created liability for the institution by threatening the credentials of a physician without due process. Its threatened credentialing action was based on a false allegation that the physician's professional conduct and competence fell short of the usual standard of care, an allegation not supported by the compendium of peer review results, which demonstrates that Dr. Jones performs better than most radiologists.

Assuming that most individuals or organizations will act in accordance with their own self-interest, this case could have a devastating effect on the care of future patients at County Hospital. Every physician member of the hospital’s Medical Executive Committee got a stark reminder during their deliberations that making a mistake—even one—can have drastic negative consequences. The not-so-subtle message that a case like this sends to providers throughout the hospital is that it is prudent to conceal mistakes if you wish to avoid punishment.

But the case could have broader implications beyond the walls of County Hospital. Based on this experience, the large radiology group might reasonably conclude that by conducting proactive peer review it is simply inviting trouble with clients by needlessly calling attention to its own mistakes. Should the group suspend the program, more than a thousand significant errors per year would go undetected. While this would certainly increase malpractice risk, the group could simply conclude that the increased premium expense associated with a very occasional catastrophic case is a good trade-off for retaining its reputation and hospital contracts. At the individual provider level, Dr. Jones might conclude that his own best course of action is to hide future mistakes to prevent further damage to his career. This will surely interfere with his learning and performance improvement, as has been written about extensively in the literature on just culture in medicine. Cases like this reinforce the problem identified by the Institute of Medicine: despite all we know about quality improvement, our healthcare system is still riddled with barriers, pitfalls, and disincentives that hinder serious progress toward safer and more effective care.

If neither the prevailing peer review model in radiology (RADPEER) nor the traditional model of peer review can achieve real improvements in the safety or quality of care, what then is the alternative? A next-generation peer review model that addresses the need for statistical validity, objectivity, and expedience is imperative to create a culture of safety and improvement.

The model should be based on evidence, encourage learning for providers, and save patient lives. It should help hospitals make fact-based credentialing decisions and avoid the negative financial ramifications that inevitably occur when radiology care is chronically substandard. The model of radiology peer review described in Dr. Jones' radiology group actually exists, but it is not widely accepted, for the reasons illustrated in the case study: although it is an evidence-based and effective process, when applied in most settings it leads to punishment in the form of economic harm and diminished professional standing. The scenario described in the case study is not an isolated occurrence, but rather a circumstance that plays out repeatedly in varying degrees of severity. For this reason, wide adoption of substantive and meaningful radiology peer review seems doubtful in the near term.

The challenge facing radiology is not how to make patient care safer; we already know how to do that using models like the one described in this case study. Instead, the challenge is to figure out how to thrive during the transition. Effecting this kind of change isn't an easy business, as the Institute of Medicine acknowledges in its ambitious new report. The preface begins with a reminder that the physician who discovered the profoundly significant impact of hand washing on patient survival was not celebrated for his discovery, but was instead professionally ridiculed until he left the practice of medicine and later died in a mental institution. The sad postscript to that story is that half of physicians today still do not wash their hands before seeing patients. The status quo is a powerful force to be reckoned with, but despite the odds, we owe it to patients to move forward. It is time for hospital leaders to establish a legitimate culture of safety in their institutions. It is also time for radiologists, as well as other healthcare providers, to take the high road when it comes to peer review and simply do it right.

Teri Yates is the executive director of the Radiology Quality Institute, a research organization dedicated to the identification and promotion of radiology quality standards and process improvements. She is also the chief quality and risk officer for Radisphere, a national radiology group serving community and rural hospitals with a standards-based model that improves quality and physician satisfaction. Yates may be contacted at teri.yates@radispheregroup.com.

References
Agency for Healthcare Research and Quality. (2012, January). Hospital survey on patient safety culture: 2012 user comparative database report. Rockville, MD.
American Medical Association. (n.d.). Code of medical ethics. Retrieved November 3, 2012, from http://www.ama-assn.org/ama/pub/physician-resources/medical-ethics/code-medical-ethics.page
Institute of Medicine. (1999). To err is human: Building a safer health system. Washington, DC: National Academy Press.
Institute of Medicine. (2012). Best care at lower cost: The path to continuously learning health care in America. Washington, DC: National Academies Press.
Larson, D., & Nance, J. (2011). Rethinking peer review: What aviation can teach radiology about performance improvement. Radiology, 259(3), 626-632.
Larson, P., Pyatt, R., Grimes, C., Abudujeh, H., Chin, K., & Roth, C. (2010, December). Getting the most out of RADPEER™. Journal of the American College of Radiology, 543-548.
Office of Inspector General, Department of Health and Human Services. (2012, January). Hospital incident reporting systems do not capture most patient harm. Office of Evaluation and Inspections.
Siegle, R. L., Baram, E. M., Stewart, R. R., et al. (1998, March). Rates of disagreement in imaging interpretation in a group of community hospitals. Academic Radiology, 5(3), 148-154.