The Normalization of Deviance: How Labs Drift into Failure
Sten Westgard, MS
September 2011
In 1996, Diane Vaughan coined the term "Normalization of Deviance" to describe how NASA and its contractors rationalized their way into the 1986 decision to launch the Space Shuttle Challenger. Decades later, this safety concept still applies - and it is very germane to many of our current QC practices in the laboratory.
The July 2011 Clinical Laboratory News had an excellent article, "The Slippery Slope of Errors," which discussed Diane Vaughan's 1996 book, The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA. We covered this topic back in 2003 on the website and in 2004 in the Nothing but the Truth about Quality manual. It's an important subject, and one worth re-emphasizing.
Normalization of Deviance: The concept revisited
A more recent book was published in 2011 by Sidney Dekker, an expert in patient safety, human error, and organizational resilience. This new book, Drift into Failure, gives a very succinct definition of the Normalization of Deviance:
"signals of potential danger... are acknowledged and then rationalized and normalized, leading to a continued use under apparently similar circumstances. This repeats itself until something goes wrong, revealing the gap between how risk was believed to be under control and its actual presence in the operation." [Sidney Dekker, Drift into Failure, page 106]
Basically, the production pressures of an organization can force an incremental drift away from safety. Each step is small, and during "good" (or lucky) times, operations can continue without bad consequences. But each step away from safety brings greater danger, until finally an error occurs and the organization finds itself far more vulnerable, harder hit, and less able to recover.
The important distinction is that this increasing vulnerability doesn't occur because of bad acts (neither the workers nor the managers intend to make things worse), nor does it occur as a truly rational choice. The organization doesn't make a specific announcement, "We're now going to trade safety in exchange for more efficiency." Instead, a slow creep occurs, where pressures to get things done motivate people to engage in workarounds, improvisations, and other acts to "get it done."
Normalization of Deviance: The pattern
In the new book, Dr. Dekker provides a useful step-by-step account of how this Normalization of Deviance takes place.
- "Beginning the construction of risk: a redundant system.... The belief that safety is assured and risk is under control. Redundancies, the presence of extraordinary competence, or the use of proven technology can all add to the impression that nothing will go wrong."
- "Signals of potential danger. Actual use or operation shows a deviation from what is expected. This creates some uncertainty, and can indicate a threat to safety, thus challenging the original construction of risk."
- "Official act acknowledging escalated risk. Evidance is shown to relevant people, a meeting may be called."
- "Review of the evidence. After the operation, discussions may ensue about who did what and how well things went."
- "Official act indicating the normalization of deviance: accepting risk. The escalated risk can get rationalized or normalized as in-family, as expected. It may be argued that redundancy was assured at multiple levels in the system. And, after all, the technology itself has undergone various stages of testing and revision before being fielded. All these factors contribute to a conclusion that any risk is duly assessed, and under control."
- "Continued operation. The technology will be used again...because nothing went wrong, and a review of the risks has revealed everything is under control."
Normalization of Deviance: The pattern in the laboratory
Perhaps the pattern already seems familiar. If not, let's put it into the QC context.
- We run controls. We assure quality by running those controls. And the instruments were validated when installed. And the instruments are very automated and advanced. And the instruments were cleared by the FDA. And analytical errors are very rare these days.
- A control is out 2 SD.
- We admit that the control is out. But we repeat the control. This time it falls in.
- We review the QC chart. Perhaps this is the only 2s violation in a while.
- We decide that the 2s violation was just a "false rejection." So we note in the action log that we've repeated the control, it fell "in," and we can go about our regular business.
- We keep running controls.
In this way, laboratories quite literally normalize deviance. Deviations are accepted as normal.
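How little does a "repeat that falls in" actually prove? Here is a minimal simulation - our own sketch, not from Dekker or the CLN article - assuming Gaussian analytical noise and an illustrative systematic shift of 1 SD:

```python
import random

# Sketch (illustrative assumptions only): how often does a repeated
# control fall back "in" (within 2 SD limits) even when a real
# systematic shift is still present?

random.seed(42)
N_RUNS = 100_000
SHIFT_SD = 1.0   # hypothetical systematic error, in SD units

repeats_in = 0
for _ in range(N_RUNS):
    # The repeated control is drawn with the shift still present.
    repeat = random.gauss(SHIFT_SD, 1.0)
    if abs(repeat) <= 2.0:
        repeats_in += 1

print(f"With a real {SHIFT_SD:.0f} SD shift present, the repeated control")
print(f"falls within 2 SD limits {100 * repeats_in / N_RUNS:.1f}% of the time.")
# ~84%: "we repeated it and it was in" is weak evidence that nothing is wrong.
```

In other words, even when a real error is present, the repeated control will usually fall back inside 2 SD limits - the very observation we then record in the action log as proof that everything is under control.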
As we've said before (and before, and before), the use of 2 SD control limits is a MAJOR PROBLEM in laboratories. This tradition (and it is a tradition, handed down from one generation of laboratory workers to the next, not a practice based on science) generates a lot of false rejections, provokes a lot of bad responses, and can corrupt and corrode the quality system of the laboratory. The "Cry Wolf" effect kicks in: we grow so accustomed to the outliers that we stop listening to them (and assume that every violation is a false rejection). Or we artificially widen our limits until they are so wide we can't detect real errors anymore.
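"A lot of false rejections" can be quantified. For an in-control Gaussian process, roughly 95.45% of results fall within 2 SD, so the chance that at least one of N controls violates 2 SD limits is 1 - 0.9545^N. A short calculation:

```python
from math import erf, sqrt

# Probability that a single in-control Gaussian result falls within +/- 2 SD
p_in = erf(2 / sqrt(2))          # ~0.9545

# With 2 SD limits, probability of at least one (false) rejection per run
for n in (1, 2, 3, 4):
    p_false_reject = 1 - p_in ** n
    print(f"N = {n} controls per run: {100 * p_false_reject:4.1f}% false rejections")
# N=1 -> ~4.6%, N=2 -> ~8.9%, N=3 -> ~13.0%, N=4 -> ~17.0%
```

At two controls per run, nearly one run in eleven gets flagged when nothing is actually wrong - plenty of wolves to stop crying about.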
[To see the extent and persistence of this problem, check out the article on The QC we really do from earlier this year.]
What do we do about it?
The patient safety experts and high reliability theorists don't have a lot of optimism when it comes to the Normalization of Deviance. Given the inherent production pressures in any organization, particularly resource-constrained organizations, there isn't an easy solution. Dr. Dekker notes somberly that there should be a solution, but that it may not be achievable in the typical organization:
"The solution to risk, if any, is to ensure that the organization continually reflects critically on and challenges its own definition of 'normal' operations, and finds ways to prioritize chronic safety concerns over acute production pressures. But how is this possible when the definition of 'bad news' is something that gets constantly renegotiated, as success with improvised procedures or imperfect technology accumulates? Such past success is taken as guarantee of future safety." [Dekker, ibid. page 108]
Thankfully, in the laboratory, we have a few options in our toolbox, plus we can improve simply by abandoning some of our antiquated tools. If we move away from 2 SD control limits, if we adopt a QC Design (Sigma-metric) approach, if we recognize that a "compliance" strategy is simply a race to the bottom, we can move away from deviance and onto a path toward excellence.
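The Sigma-metric at the heart of QC Design is one line of arithmetic: Sigma = (TEa - |bias|) / CV, with everything on the percent scale. A small sketch; the glucose numbers below are illustrative assumptions for the example, not recommendations:

```python
# Sigma-metric on the percent scale: Sigma = (TEa - |bias|) / CV

def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Allowable total error, bias, and CV all expressed in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical glucose example: TEa of 10%, observed bias 1%, CV 1.5%
sigma = sigma_metric(tea_pct=10.0, bias_pct=1.0, cv_pct=1.5)
print(f"Sigma = {sigma:.1f}")  # 6.0 on these assumed numbers
```

The higher the Sigma, the simpler (and less trigger-happy) the QC rules can be; the lower the Sigma, the more the method needs multirule QC and close monitoring - a design decision, not a tradition.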
The important thing to realize is that this behavior is "normal" in organizations, and if you find yourself with normalized deviance, it's not your individual fault. But once we recognize that we're doing the wrong QC wrong, we do need to start doing things right.