Using Information From External Errors to Signal a “Clear and Present Danger”
Biases that make it difficult to learn from others’ mistakes
There are several attribution biases that lead to the normalization of errors and thwart our learning from mistakes, particularly the mistakes of others. Attribution biases describe the ways we explain, or try to find reasons for, our own behavior and the behavior of others. Unfortunately, these attributions do not always mirror reality.
First, we tend to attribute good outcomes to skill and bad outcomes to sheer bad luck—a bias called self-serving attribution (Montier, 2007). We have a relatively fragile sense of self-esteem and a tendency to protect our professional self-image (and the image of our workplace) by believing that the same errors we read about could not happen to us or in our own organization. We conclude that it was just terrible luck that led to the bad outcome in another organization, an event soon forgotten by all except the few who were most intimately involved.
Next, we tend to quickly attribute the behavior of others to their personal disposition and personality while overlooking the significant influence of external situational factors, a tendency called the fundamental attribution bias. In contrast, we tend to explain our own behavior in light of external situations, often undervaluing personal explanations; this asymmetry is known as the actor-observer bias. When we overestimate the role of personal factors and overlook the impact of external conditions in others’ behavior, it becomes difficult to learn from their mistakes because we chalk them up to internal, personal flaws that do not exist in us (Stangor, 2011). This tendency to ascribe culpability to individual flaws increases as the outcome becomes more severe—a bias called defensive attribution—making it especially hard to learn from fatal events.
Finally, we tend to be too optimistic and overconfident in our abilities and systems (Montier, 2007), particularly when assessing our vulnerability to potentially serious or fatal events. We seek confirmation of our expectation that the tragic errors we read about could not happen in our workplace, while avoiding any evidence of serious risk (Weick & Sutcliffe, 2001; Montier, 2007). We may even go through the motions of examining our abilities and systems to determine whether similar errors might happen in our organizations, but in the end we tend to overlook any evidence that suggests trouble (much like confirmation bias, in which we see what we expect to see on a medication label and fail to notice disconfirming evidence). We subconsciously reach the conclusions we want to draw when it comes to assessing whether our patients are safe (Montier, 2007).
Seeking outside knowledge
Experience has shown that a medication error reported in one organization is likely, given enough time, to occur in another. Much knowledge can be gained when organizations look outside themselves to learn from the experiences of others. Unfortunately, recommendations for improvement, often made by those investigating a devastating error, go unheeded by others who feel they do not apply to their organization. Still other organizations have committees that are working hard on tough issues, but with an exclusively internal focus; real knowledge about medication error prevention will not come from a committee that looks only inward.