Basic Planning for Quality
QP-14: What's wrong with statistical quality control?
In the 50 years since we started using statistical QC in the laboratory, a large number of complaints have accumulated. Dr. Westgard sorts through the complaints to find solutions and an easier way.
Note: This material is covered in the Basic Planning for Quality manual. Updated coverage of these topics can be found in Assuring the Right Quality Right, as well as the Management and Design of Analytical Quality Systems online course.
There seems to be a lot of sentiment for changing current QC practices - getting rid of statistical QC and doing something different, whatever that might be. Here are some of the reasons that are given. As you'll see, many of these apparent problems or limitations can be overcome by applying the quality-planning process that has been described in this series of lessons. I hope this discussion will confirm the importance of applying the quality-planning process in your own work.
QC is not patient focused!
Today's quality control practices are indeed arbitrary and should properly be called "arbitrary control" rather than quality control. Laboratories need to take responsibility for improving analytical quality management, as discussed in QP 1: A Wake-up call for Quality Management. Improvements are needed because most laboratories base their QC practices on regulatory, accreditation, and professional recommendations (QP 3: Complying with Regulations, Standards, and Practice Guidelines), instead of selecting control rules and numbers of control measurements on the basis of the quality that is required for the test.
Patient focus requires an understanding of Total Quality Management (QP 2: Assuring Quality through Total Quality Management), recognition of the importance of defining the quality required by a test, and the need to implement a quality-planning process. No QC technique can be patient-focused until you define the quality needed and select the QC procedure that assures that quality is achieved by the methods in your laboratory. QP 5: Defining Quality Requirements discusses the current recommendations for quality requirements and identifies available sources of information to help you get started.
Defining the quality required for a test is the first step in the quality-planning process recommended here (QP 4: Devising a Practical Process). I have long advocated the need to plan the quality of laboratory testing processes. The general steps of a quality-planning process were first outlined in a book on Cost-Effective Quality Control that was published in 1986 [1]. Development of the chart of operating specifications in the early 90s made the planning process much more practical [2]. The use of normalized OPSpecs charts makes it possible for any laboratory to implement a manual quality-planning process that is quick and easy to perform.
Statistical QC procedures can be patient-focused if a laboratory defines the quality required for its patient services and selects appropriate methods and appropriate QC procedures. The problem is not the fault of statistical QC! The fault is the laboratory's lack of planning!
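To make the normalized OPSpecs idea concrete, here is a minimal sketch of how a method's operating point is normalized; the function name and the example figures (10% allowable total error, 2.0% CV, 1.0% bias) are hypothetical and chosen only for illustration.

```python
# Sketch: normalize a method's operating point for an OPSpecs chart.
# On a normalized chart, imprecision and inaccuracy are both expressed
# as a percentage of the allowable total error (TEa).

def normalized_operating_point(tea_pct, cv_pct, bias_pct):
    """Return (imprecision, inaccuracy) as percentages of TEa."""
    x = 100.0 * cv_pct / tea_pct          # observed CV as % of TEa
    y = 100.0 * abs(bias_pct) / tea_pct   # observed |bias| as % of TEa
    return x, y

# Hypothetical example: TEa = 10%, CV = 2.0%, bias = 1.0%
x, y = normalized_operating_point(10.0, 2.0, 1.0)
print(f"Operating point: {x:.0f}% (imprecision), {y:.0f}% (inaccuracy)")
# A candidate QC procedure is acceptable when this point falls below
# that procedure's line on the normalized OPSpecs chart.
```

Because the axes are normalized, one chart serves every test, which is what makes the manual planning process quick enough for routine use.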
QC doesn't consider improvements in instrument performance!
Later generations of instruments have better precision and better stability than those in the past. It's fair to question whether we should do QC the same old way on these new instrument systems. The answer, again, is to plan the QC procedure properly and take instrument performance into account. The new CLSI guidelines [3, document C24-A3] outline the steps for planning a QC procedure as follows (a rough sketch of the calculation behind these steps appears after the list):
- Define the quality requirement
- Determine method performance - imprecision and bias
- Identify candidate SQC strategies
- Predict QC performance
- Set goals for QC performance
- Select appropriate QC
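As a rough sketch of how steps 1, 2, and 6 connect, the following computes a sigma metric from the quality requirement and the observed method performance, then maps it to a candidate SQC design. The cutoffs in the mapping are common rules of thumb rather than values taken from the C24 document, and all of the example numbers are invented.

```python
# Sketch: combine the quality requirement (step 1) with method
# performance (step 2) to get a sigma metric, then use it to pick
# a candidate SQC design (step 6). Cutoffs are rough heuristics.

def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma = (TEa - |bias|) / CV, all in percent units."""
    return (tea_pct - abs(bias_pct)) / cv_pct

def candidate_sqc(sigma):
    """Map sigma to a candidate control rule and number of controls."""
    if sigma >= 6.0:
        return "1:3s single rule, N=2"
    if sigma >= 5.0:
        return "1:3s/2:2s/R:4s multirule, N=2"
    if sigma >= 4.0:
        return "1:3s/2:2s/R:4s/4:1s multirule, N=4"
    return "maximum multirule with higher N; consider improving the method"

# Hypothetical test: TEa = 10%, bias = 1.0%, CV = 1.5%
s = sigma_metric(10.0, 1.0, 1.5)
print(f"Sigma = {s:.1f} -> {candidate_sqc(s)}")
```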
The 2nd step considers improvements in instrument performance and should lead to simpler QC procedures for highly automated 4th and 5th generation analyzers.
The quality-planning process devised here in QP 4 follows the steps of the CLSI guidelines. The observed imprecision and inaccuracy of a method are accounted for in the 2nd step of the process and clearly impact the QC procedures that are selected. We documented this ten years ago when describing how we changed from multi-rule QC procedures to single-rule procedures with wide limits (the 1:3.5s rule) for 14 out of 18 tests on a multitest chemistry analyzer [4,5] (as discussed in QP 10: Automated Chemistry Applications). The strategy is to individualize the QC design for each test on an instrument, rather than use a single QC procedure for all tests. To do that, you need a quick and easy quality-planning process, such as the manual process that utilizes normalized OPSpecs charts (QP 8: Implementing a Manual Planning Process).
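The calculation that drives this kind of decision is the critical systematic error - the size of shift that must be detected before an unacceptable fraction of results exceeds the allowable error. A minimal sketch using the standard formula, with hypothetical performance figures:

```python
# Sketch: critical systematic error (dSEcrit), the shift a QC procedure
# must detect before more than 5% of results exceed the allowable
# total error: dSEcrit = (TEa - |bias|)/CV - 1.65, where 1.65 is the
# one-sided z-value corresponding to a 5% maximum defect rate.

def critical_systematic_error(tea_pct, bias_pct, cv_pct):
    return (tea_pct - abs(bias_pct)) / cv_pct - 1.65

# Hypothetical stable analyzer: TEa = 10%, bias = 0.5%, CV = 1.2%
d = critical_systematic_error(10.0, 0.5, 1.2)
print(f"Critical systematic error = {d:.2f} s.d.")  # -> 6.27 s.d.
# Large values (roughly > 3 s.d.) are easy to detect, so a wide-limit
# single rule such as 1:3.5s gives high error detection with very few
# false rejections; small values argue for multirule QC with more
# control measurements, or for improving the method itself.
```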
"One size fits all" QC is not appropriate!
True, you should select the control rules and number of control measurements that are appropriate for each test by implementing a QC planning process. You should establish run length and frequency of control analysis on the basis of the stability of the method and its susceptibility to problems. That will allow you to "size" or "individualize" the QC procedure for the quality required by your customers and for the performance observed for the methods in your laboratory. Follow the general CLSI C24-A3 guidelines or follow the more detailed planning process described in QP 8.
The applications shown here for automated chemistry tests (QP 10), blood gas tests (QP 11: Blood Gas Applications), immunoassays (QP 12: Immunoassay Applications), and coagulation tests (QP 13: Coagulation Applications) demonstrate the different "sizes" of QC that might be expected.
QC is not appropriate for unit devices!
There are both technical and business aspects to this argument. The technical argument is that unit devices can't be monitored by QC because each device is separate and different. Sampling one device doesn't assure that the next one is okay; therefore, it's argued, QC can't be used. However, all manufacturers claim that these unit devices are uniform, otherwise they couldn't sell them at all. If they are uniform, then QC can be applied to monitor the general stability and performance of the devices, as well as operator proficiency in using the devices.
The business argument is that different QC is needed because the personnel involved in POC testing often have little laboratory training and no experience with QC. The solution offered by CLSI in the proposed guideline on "Quality Management for Unit Use Testing" is to develop a "sources of errors" matrix and identify specific methods for controlling each potential error [6]. This is consistent with our formulation of a Total QC strategy as described in QP 7: Formulating TQC Strategies. However, it is always most efficient to monitor as many of the individual error sources as possible by statistical QC; therefore, our planning for a TQC strategy takes into account the error detection capability provided by statistical QC.
One important limitation of the CLSI approach is that the "method of control" for many of the individual error sources turns out to be operator training and competency, which is very difficult to evaluate and verify. Statistical QC may actually be the most quantitative way to monitor operator training and competency, as well as important operator variables in the testing process [7].
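When statistical QC is applied this way, the rule logic is simple enough to automate wherever control results are captured. Here is a minimal sketch of a few common multirule checks applied to a series of control results; the control mean and SD are assumed to be already established, and the data are invented.

```python
# Sketch: basic multirule checks (1:3s, 2:2s, R:4s) over a series of
# control results, e.g. to monitor unit-use devices and operator
# performance over time. The control mean and SD are assumed known.

def multirule_violations(results, mean, sd):
    z = [(x - mean) / sd for x in results]
    violations = []
    for i, zi in enumerate(z):
        if abs(zi) > 3:                                   # 1:3s rule
            violations.append((i, "1:3s"))
        if i > 0:
            if (z[i-1] > 2 and zi > 2) or (z[i-1] < -2 and zi < -2):
                violations.append((i, "2:2s"))            # 2:2s rule
            if abs(zi - z[i-1]) > 4:                      # R:4s rule
                violations.append((i, "R:4s"))
    return violations

# Hypothetical control data with mean 100, SD 2
data = [101, 99, 104.5, 105.1, 100, 93.8]
print(multirule_violations(data, 100.0, 2.0))
# -> [(3, '2:2s'), (5, '1:3s')]
```

(Strictly, the R:4s rule applies to control results within a single run; it is applied to consecutive results here only to keep the sketch short.)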
QC is too expensive!
Compared to what? Certainly there are techniques like instrument function checks and electronic QC that are less costly to perform, but are they actually less expensive? Function checks and electronic QC typically monitor only a few instrument variables and steps in the measurement process. What about the failure costs from errors in other steps that go undetected and impact negatively on patient treatment and outcome? We documented cost-savings from improved statistical QC designs for an automated chemistry analyzer (QP 10) by considering the laboratory failure-costs due to repeat runs alone. The cost-savings would be even greater if we were to consider the failure-costs due to improper patient treatment because of incorrect test results. But that more complicated assessment of outside costs isn't even necessary. There are sufficient savings within the laboratory to justify statistical QC.
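A back-of-the-envelope sketch of that within-laboratory arithmetic follows; every number in it (run volume, cost per repeated run, false-rejection rates) is hypothetical and serves only to show the shape of the comparison.

```python
# Sketch: annual laboratory failure-cost from false rejections alone,
# compared for two QC designs. All figures are hypothetical.

def annual_rerun_cost(runs_per_year, p_false_reject, cost_per_rerun):
    return runs_per_year * p_false_reject * cost_per_rerun

runs, cost = 5000, 50.0                        # hypothetical volume/cost
loose = annual_rerun_cost(runs, 0.09, cost)    # e.g. 1:2s rule with N=2
planned = annual_rerun_cost(runs, 0.01, cost)  # e.g. a well-planned design
print(f"1:2s-style design: ${loose:,.0f}/yr; planned design: ${planned:,.0f}/yr")
# The difference is a purely within-laboratory saving, before counting
# any costs of incorrect results reaching patient care.
```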
Electronic QC and related techniques may be cheap for the laboratory, but expensive for patient care. The detailed checking of individual error sources, as recommended by the CLSI unit use guidelines, is also likely to be costly because it is an inefficient technique, at least as compared to the efficiency of statistical QC for monitoring many steps in the total testing process [8].
QC takes too much time!
What time are we talking about here - the time to analyze controls, the time to interpret control results, or the time for dealing with out-of-control signals? The real concern should be the turnaround time for reporting test results, where the QC problem in many laboratories is due to "false rejections" that require repeat analysis of controls, analysis of new controls, and re-runs of patient samples. Again, proper planning of QC procedures is critical to minimize false rejections and to provide appropriate detection of medically important errors. When this is done, a rejection signal should lead to trouble-shooting that eliminates analytical problems, rather than wasteful re-work that yields no improvement [9]. Problem-solving is a good investment of time because, in the long run, there will be fewer problems, fewer out-of-control situations, more rapid and effective problem-solving, and fewer delays in reporting patient test results.
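The false-rejection burden of a QC design can be computed directly; here is a minimal sketch, assuming independent, Gaussian-distributed control results.

```python
# Sketch: probability of false rejection (Pfr) for simple single rules.
# With N control results per run and a rule that rejects when any
# result exceeds +/- z s.d.:  Pfr = 1 - P(one result within limits)^N
from math import erf, sqrt

def p_false_reject(z, n):
    p_in = erf(z / sqrt(2.0))    # P(|Z| <= z) for a standard normal
    return 1.0 - p_in ** n

for rule, z in [("1:2s", 2.0), ("1:3s", 3.0), ("1:3.5s", 3.5)]:
    print(f"{rule}, N=2 -> Pfr = {p_false_reject(z, 2):.2%}")
# 1:2s with N=2 falsely rejects about 9% of runs - nearly 1 in 11 -
# which is exactly the re-run waste described above; 1:3s cuts that
# to about 0.5%, and 1:3.5s to about 0.1%.
```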
The time that will be saved by careful selection of QC procedures will more than offset the time required to carry out the planning. With the quality-planning process recommended here (QP 8), it takes only a few minutes to do the planning. It will actually take more time to define the quality needed for the test and obtain the estimates of imprecision and inaccuracy for the method. The savings come only after the planning; therefore, you must make the initial investment in planning to achieve the savings in routine operation.
QC is too complicated!
There are at least two different issues here - one dealing with the difficulty of training personnel and the other with the difficulty of implementing QC properly. The solutions for both are new technologies - in the first case, Internet technology to support training, and in the second, computer technology to automate the QC process. Basic QC training is already available on http://www.westgard.org. These materials are also available in hardcopy format for laboratories that have limited access to the Internet [10].
The solution for implementation is to automate the whole QC process, from planning through implementation. An example of software that automates the QC selection process is provided by the EZ rules 3 program [11,12], which makes use of charts of operating specifications (OPSpecs charts) to show the relationship between the quality required, the imprecision and inaccuracy observed for the measurement procedure, and the error detection capabilities of different QC procedures.
It would be ideal, of course, to integrate the automatic QC selection function into the QC software in an instrument system, a QC workstation, or a laboratory information system. The laboratory could then specify the quality required for the test and the automated QC process would select the appropriate QC procedure, load and sample the control materials, acquire the necessary control data, interpret the control data, and release or reject patient test results. Rather than worrying about which rules to use, the laboratory's responsibility would be focused on the quality needed for the application of the tests.
QC is old!
I am beginning to take offense at the suggestion that "old" means "no longer useful." I think experience is beneficial and leads to dependability. Statistical QC was derived from industrial statistical process control, which was developed in the 1930s. It continues to be the cornerstone of industrial production worldwide because it's a dependable, proven technique. Statistical QC has likewise been a fundamental technique for improving the quality of test results in health-care laboratories. It takes time to become "tried and true," so we shouldn't discard a well-established technique without careful evaluation of whatever would replace it. What is the new technique that's going to replace statistical QC? Where's the documentation of its effectiveness? You shouldn't have to "trust me" or the manufacturer! Statistical QC gives you a dependable, proven, and independent technique for managing and controlling the quality of your work.
What's right with statistical QC?
There are lots of things that have been wrong with the way statistical QC has been implemented in laboratories in the past, but these things can be corrected. Here's what's right about statistical QC - it's still the best technique available for managing the analytical quality of laboratory tests! The biggest opportunity for improving QC systems in laboratories today is to plan statistical QC procedures carefully, implement them properly, and perform them correctly.
References
- Westgard JO, Barry PL. Cost-effective quality control: Managing the quality and productivity of analytical processes. AACC Press, Washington DC, 1986.
- Westgard JO. Charts of operational process specifications ("OPSpecs charts") for assessing the precision, accuracy, and quality control needed to satisfy proficiency testing performance criteria. Clin Chem 1992;38:1226-33.
- CLSI C24-A3. Statistical quality control for quantitative measurement procedures: Principles and definitions; Approved guideline - third edition. CLSI, Wayne, PA, 2006.
- Koch DD, Oryall JJ, Quam EF, Feldbruegge DH, Dowd DE, Barry PL, Westgard JO. Selection of medically useful quality-control procedures for individual tests done in a multitest analytical system. Clin Chem 1990;36:230-3.
- Westgard JO, Oryall JJ, Koch DD. Predicting effects of quality-control practices on the cost-effective operation of a stable, multitest analytical system. Clin Chem 1990;36:1760-4.
- NCCLS EP18-P. Quality management for unit use testing; Proposed guideline. NCCLS, Wayne, PA, 1999.
- Westgard JO. Taking care of point-of-care QC. Clin Lab News Viewpoint, August 1997.
- Westgard JO. Electronic QC and the total testing process.
- Hyltoft Petersen P, Ricos C, Stockl D, Libeer JC, Baadenhuijsen H, Fraser C, Thienpont L. Proposed guidelines for the internal quality control of analytical results in the medical laboratory. Eur J Clin Chem Clin Biochem. 1996;34:983-99.
- Westgard JO. Basic QC Practices. Westgard QC, Madison, WI, 1998.
- Westgard JO, Stein B, Westgard SA, Kennedy R. QC Validator 2.0: a computer program for automatic selection of statistical QC procedures for applications in healthcare laboratories. Comput Methods Programs Biomed 1997;53:175-86.
- Westgard JO, Stein B. Automated selection of statistical quality-control procedures to assure meeting clinical or analytical quality requirements. Clin Chem 1997;43:400-403.