
May / June 2012

American College of Surgeons

Four Critical Elements of an Effective Quality Improvement Process

More than a decade since the Institute of Medicine’s (IOM) landmark To Err Is Human report put a spotlight on quality improvement and patient safety, there has been little reduction in the rate of adverse events, according to a study in The New England Journal of Medicine (Landrigan et al., 2010).

Despite all of the programs to measure, assess, and improve quality since that report, the quality needle has been stubbornly hard to move.

Increasingly, clinicians and researchers understand why. First, most of the data used in quality improvement programs are inadequate to measure and improve quality (Ingraham et al., 2010; Davenport et al., 2009). Second, quality improvement programs are inconsistent in requiring hospitals and providers to set and meet standards, and to develop and maintain the appropriate processes and infrastructure to meet those standards.


This is the first in a series of columns by Dr. Clifford Ko of the American College of Surgeons. In this column, Ko discusses the key tenets of quality improvement and then specifically breaks down an important element of any quality program: data collection. Data collection is one of four key principles of continuous quality improvement, but many healthcare organizations today may not be collecting the best data or using their data effectively. This series will focus on the elements of robust data and what they mean for quality improvement: clinical versus administrative data sources, risk-adjustment, post-discharge outcomes measurement, and national benchmarking, as well as the power of collaboration and sharing data to improve care.
 

We know hospitals cannot improve quality if they cannot measure quality, and they cannot measure quality without valid, robust data. Physicians and hospitals need data strong enough to yield a complete and accurate understanding of the quality of surgical care compared with that provided by similar hospitals for similar patients. Collecting the right data is also a key step to improving care, one of four principles of continuous quality improvement. Quality improvement programs work best when hospitals and providers are held accountable to these four principles:

  1. Set and meet relevant and significant standards.
  2. Build the right resource infrastructure to treat their patients.
  3. Collect robust data.
  4. Submit to a process of verification in which they allow their facilities, processes, and outcomes to be examined in an external audit.

Lessons Learned in Pursuing Quality for 100 Years
A century ago, a surgeon in Boston first had the idea to track patient outcomes and evaluate care. Dr. Ernest Codman tracked patients using “End Result Cards,” on which he noted patient demographics, diagnosis, treatments, and the outcome of each case. A pioneer in quality, Codman went on to help found the American College of Surgeons and its Hospital Standardization Program (later to become The Joint Commission). From these early efforts sprang the first initiative to track and improve cancer care, and the Commission on Cancer was formed in 1922. In 1950, the Committee on Trauma was created to improve all phases of care for injured patients and prevent injuries. And in the early 2000s, recognizing the critical importance of robust data collection to quality improvement, surgeons established the ACS National Surgical Quality Improvement Program (ACS NSQIP®) in the private sector.

Based on a nearly century-long history of quality improvement, ACS has found these four principles are key to improving quality and establishing a system of continuous quality improvement.

1. Set the standards.
The core of any quality improvement program is to establish, follow, and continuously reassess and improve best practices. It could be as fundamental as ensuring that surgeons and nurses take precautions to protect the patient from infection during an operation; as urgent as assessing a critically injured patient in the trauma center; or as complex as guiding a cancer patient through treatment and rehabilitation. In each case, it is important to establish and follow best practices as they pertain to the individual patient and, through constant reassessment, to keep getting better.

For instance, the Commission on Cancer (CoC) has continually improved the treatment of cancer by setting higher and higher clinical standards based on collected outcomes data and other scientific evidence.* The CoC sets education requirements and qualifications for clinicians who practice in CoC-accredited cancer programs, and establishes the eligibility standards, qualifications, and categories of accreditation for participating cancer programs as well. The CoC’s new cancer program accreditation standards, introduced this year, emphasize a patient-centered, coordinated care approach. The CoC accredits more than 1,500 cancer programs that care for more than 70% of cancer patients in the United States and Puerto Rico.

2. Build the right infrastructure.
To provide the highest quality of care, hospitals must have in place appropriate and adequate structures, such as staffing levels, number and type of specialists, appropriate equipment, and robust IT systems.

For instance, the ACS Committee on Trauma (COT) has established for trauma centers (Levels I–IV) the appropriate staffing levels and expertise, processes, and facilities and equipment needed to treat seriously injured patients (2006). The most advanced (Level I) centers must have certain specialists, including orthopedic surgeons and neurosurgeons, on call 24 hours a day, as well as specialized imaging equipment and other tools. There is good reason that trauma systems direct the most challenging cases to the nearest Level I center: care at a Level I trauma center has been shown to reduce the risk of death by 25% (MacKenzie et al., 2006). Having the right infrastructure in place is vital to improving care and saving lives.

3. Collect robust data.
Studies show that quality programs based on administrative data miss half or more of all complications. A clearer picture of the patient’s care comes from clinical data abstracted from medical charts, tracked after the patient leaves the hospital, and maintained in a continuously updated database. Data should also be risk-adjusted to account for both the condition of the patient and the risk of the procedure performed.
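
As a concrete illustration, the sketch below (in Python) fits a toy logistic risk model to simulated cases so that each patient receives an expected complication probability reflecting both preoperative condition and procedure risk. Every variable and coefficient here is hypothetical; actual ACS NSQIP models draw on far richer clinical data.

    # A toy risk-adjustment model: hypothetical risk factors, hypothetical
    # coefficients. Illustrative only, not an ACS NSQIP specification.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Simulated cases: [age in decades, ASA class, procedure risk score].
    X = rng.normal(loc=[6.5, 2.5, 1.0], scale=[1.5, 0.8, 0.5], size=(1000, 3))
    logit = -7.0 + 0.3 * X[:, 0] + 0.8 * X[:, 1] + 1.2 * X[:, 2]
    y = rng.random(1000) < 1 / (1 + np.exp(-logit))  # simulated complications

    model = LogisticRegression().fit(X, y)
    expected = model.predict_proba(X)[:, 1]  # per-case expected complication risk
    print(f"observed rate {y.mean():.3f} vs. mean expected risk {expected.mean():.3f}")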

These are the principles of data collection upon which ACS NSQIP is built. The program has its roots in the Veterans Health Administration, where surgeons saw a 27% decrease in post-operative mortality, a 45% decrease in post-operative complications, a reduction in average length of stay from nine days to four, and increased patient satisfaction (Khuri et al., 2002).

In 2001, ACS developed a version of the program for private-sector hospitals, and today more than 400 hospitals of all sizes, types, and surgical volumes around the country participate. ACS NSQIP hospitals assign a trained clinical staff member, called a Surgical Clinical Reviewer (SCR), to collect clinical 30-day outcomes data for randomly selected cases. Data are risk-adjusted and nationally benchmarked, so that hospitals can compare their results with those of hospitals of all types, in all regions of the country. The data are fed back to participating sites through a variety of reports; guidelines, case studies, and collaborative meetings help hospitals learn from their data and implement steps to improve care.
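
Risk-adjusted benchmarking of this kind is often summarized as an observed-to-expected (O/E) ratio: observed complications divided by the number the risk model predicts for that hospital’s case mix. The sketch below uses made-up case counts and risks to show the idea; it is not the actual ACS NSQIP methodology.

    def oe_ratio(observed_events, expected_risks):
        # Observed complication count divided by the number of events the
        # risk model expected for this hospital's case mix.
        return observed_events / sum(expected_risks)

    # Hypothetical hospital: 12 observed complications across 220 cases
    # whose modeled expected risks sum to 15.2 events.
    expected_risks = [0.05] * 200 + [0.26] * 20  # 200 low-risk, 20 higher-risk cases
    print(f"O/E = {oe_ratio(12, expected_risks):.2f}")  # 0.79: fewer than expected

An O/E ratio above 1.0 suggests more complications than expected for that case mix; below 1.0, fewer. This lets a small community hospital and a large academic center be compared fairly.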

ACS NSQIP hospitals have seen significant improvements in care; a 2009 Annals of Surgery study found that 82% of participating hospitals decreased complications and 66% decreased mortality rates. Each participating hospital prevented, on average, between 250 and 500 complications a year (Hall et al., 2009). Given that major surgical complications have been shown in a University of Michigan study to generate, on average, $11,626 in extra costs, such a reduction in complications not only improves outcomes and saves lives, but greatly reduces costs (Dimick et al., 2006).
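
A rough back-of-the-envelope calculation, using only the figures cited above, shows the scale of the potential savings:

    # Savings implied by the figures above: 250-500 complications prevented
    # per hospital per year, at an average of $11,626 in extra costs per
    # major complication (Dimick et al., 2006).
    COST_PER_COMPLICATION = 11_626
    for prevented in (250, 500):
        savings = prevented * COST_PER_COMPLICATION
        print(f"{prevented} complications avoided -> ${savings:,} saved per year")
    # 250 -> $2,906,500; 500 -> $5,813,000

That is roughly $2.9 million to $5.8 million per hospital per year, before counting any downstream savings from avoided readmissions.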

4. Verify through a third party.
Hospitals and providers must allow an external authority to periodically verify that the right processes and infrastructure are in place, that outcomes are being measured and benchmarked, and that hospitals and providers are proactively responding to these findings.

For instance, ACS NSQIP data are regularly audited by external reviewers to ensure that the data and the collection methods are consistent with the processes and definitions established by ACS. Likewise, the CoC reviews its participating cancer programs every three years as part of its re-accreditation process, while ACS assigns expert reviewers to verify that ACS trauma centers meet the appropriate criteria for their designated level.

The Continuous Quality Loop
Given the intensifying focus on requiring hospitals and providers to be accountable for improving care through measurement, public reporting, and pay-for-performance programs, it is critically important that the data used to measure performance are fair, accurate, and robust. These data should also be used as part of a four-step cycle: analyze reliably collected, well-defined, and risk-adjusted data; use the analysis to identify clinical improvement opportunities; carry out quality improvement with best-practice guidelines and case studies; and then evaluate the improvement with further data collection (Sachdeva & Blair, 2004). In this way, surgeons and hospitals become learning organizations that consistently improve their quality.

By improving quality, we protect the safety and well-being of our patients, who often depend on us for their lives and their quality of life. At the same time, better quality care is often more efficient care. By avoiding complications, errors, and readmissions, we not only serve the patient better, we also reduce the financial burden on the overall healthcare system.

Clifford Ko serves as director of the American College of Surgeons Division of Research and Optimal Patient Care. He is a practicing surgeon and serves as professor of surgery and health services at the UCLA Schools of Medicine and Public Health, director of UCLA’s Center for Surgical Outcomes and Quality, and a research scientist at the RAND Corporation. He holds a medical degree, a BA in biology, and an MS in biological/medical ethics from the University of Chicago, and an MSHS in health services research from the University of California. Dr. Ko can be contacted at cko@facs.org.

* The CoC is a consortium of professional organizations ACS helped to organize that is dedicated to improving survival and quality of life for cancer patients. There are 1,433 CoC-accredited cancer programs across the country, overseen by a network of 1,600 physician volunteers who treat more than 70% of cancer patients each year in the United States and lead state and local quality improvement initiatives.

References
American College of Surgeons Committee on Trauma. (2006). Resources for the optimal care of the injured patient. www.facs.org
Davenport, D. L., Holsapple, C. W., & Conigliaro, J. (2009, September–October). Assessing surgical quality using administrative and clinical data sets: A direct comparison of the University HealthSystem Consortium clinical database and the National Surgical Quality Improvement Program data set. American Journal of Medical Quality, 24(5), 395-402.
Dimick, J. B., et al. (2006). Who pays for poor surgical quality? Building a business case for quality improvement. Journal of the American College of Surgeons, 202, 933-937.
Hall, B. L., et al. (2009). Does surgical quality improve in the American College of Surgeons National Surgical Quality Improvement Program: An evaluation of all participating hospitals. Annals of Surgery, 250, 363-376.
Ingraham, A. M., et al. (2010). Association of Surgical Care Improvement Project infection-related process measure compliance with risk-adjusted outcomes: Implications for quality measurement. Journal of the American College of Surgeons, 211, 705-714.
Khuri, S. F., Daley, J., & Henderson, W. G. (2002). The comparative assessment and improvement of quality of surgical care in the Department of Veterans Affairs. Archives of Surgery, 137, 20-27.
Landrigan, C. P., Parry, G. J., et al. (2010, November 25). Temporal trends in rates of patient harm resulting from medical care. New England Journal of Medicine, 363(22), 2124-2134.
MacKenzie, E. J., et al. (2006, January 26). A national evaluation of the effect of trauma-center care on mortality. New England Journal of Medicine, 354, 366-378.
Sachdeva, A. K., & Blair, P. G. (2004). Educating surgery residents in patient safety. Surgical Clinics of North America, 84, 1669-1698.