Editor’s Notebook: The Measurement Challenge
Measurement is central to current discussions about the state of patient safety, as well as federal healthcare reform and efforts to move toward value-based purchasing. Using data to understand, drive, and evaluate improvement efforts has a long history. In fact, the patient safety movement was launched by a memorable data point: medical errors cause between 44,000 and 98,000 deaths each year in the United States (Kohn, Corrigan, & Donaldson, 2000). And the axiom “you can’t improve what you can’t measure” is a touchstone for quality improvement.
Current research and commentary reflect growing appreciation for the importance of getting measurement right. Measuring the outcomes of complex healthcare processes is especially difficult and, if done wrong, can lead to serious unintended consequences.
A recent report from the National Patient Safety Foundation (NPSF) includes a recommendation to “create a common set of safety metrics that reflect meaningful outcomes.” The report describes the challenge—essentially figuring out on a national scale the who, what, where, and why of measurement—and recommends four cogent and daunting tactics, including the creation of a portfolio of “patient safety and outcome metrics across the care continuum” (2015, p. 20).
At the Institute for Healthcare Improvement’s National Forum in December 2015, Don Berwick, co-chair of the task force that created the NPSF report, called for cutting in half all metrics currently being used and then cutting them in half again:
We need to tame measurement that has gone crazy. Far from showing us our way, these searchlights trained on us blind us. … I’m sure that if we focus on meaning, focus on learning, we can know what we need to know with 25% of the cost and burden of today’s measurement enterprise.
Medical journals are full of examples of measurement pitfalls. Commenting on research into the connection between do-not-resuscitate status and hospital mortality rates, Leora Horwitz (2016) says the right policy “will be based more on philosophy than on science” (p. 105) and warns that mortality rates currently oversimplify complex dynamics.
In an essay on WBUR’s Cognoscenti blog, pediatrician and cancer patient Marjorie Rosenthal (2016) describes how measurement gone wrong feels at the patient level. In “Harming Patient Satisfaction in the Process of Measuring It,” she recounts how reporting requirements led to awkward conversations with a social worker, undermining the very patient satisfaction the requirements were designed to improve. Although her experience says as much about lack of training as it does about data gathering, it illustrates how consequential the downstream effects of measurement can be.
Susan Carr, Editor
SCarr@blr.com
References
Horwitz, L. I. (2016). Implications of including do-not-resuscitate status in hospital mortality measures. JAMA Internal Medicine, 176(1), 105–106.
Kohn, L. T., Corrigan, J. M., & Donaldson, M. S. (Eds.). (2000). To err is human: Building a safer health system. Washington, DC: National Academy Press.
National Patient Safety Foundation. (2015). Free from harm: Accelerating patient safety improvement fifteen years after To Err Is Human. Retrieved from http://www.npsf.org/?page=freefromharm
Rosenthal, M. S. (2016, January 11). Harming patient satisfaction in the process of measuring it. Retrieved from http://cognoscenti.wbur.org/2016/01/11/patient-surveys-health-care-marjorie-s-rosenthal