Questions
FAQs about controls and out-of-controls
Frequently asked questions about control materials, new lots, expiration dates, recording results, etc. Also, some more discussion of the best way to apply and interpret the "Westgard Rules"
- How should I check out a new lot number of a control product?
- What should I do when control products with long expiration dates are not available (as is the case for many hematology control materials)?
- What should I do when a control value is outside the calculated control limits?
- Do I have to record every result for a control material?
- Which QC data should be excluded when calculating monthly statistics (mean, standard deviation, and coefficient of variation)?
- Which QC results should be included in updating your mean and SD statistics?
- What is N?
- Will multirule QC assure ideal QC performance?
- How exactly do you apply the 41s control rule?
- How exactly do you apply the R4s control rule?
- How exactly do you apply the 10x control rule?
- What control measurements can be used when applying "bias" rules?
- What do I do about patient results that were reported before a problem was detected with a control rule applied across runs?
- Is there a multirule QC procedure for computerized immunoassay QC which fulfills all regulations?
- What references describe how to apply and interpret multirule QC?
- What training is available to help me decide what QC to use?
How should I check out a new lot number of a control product?
It is best to assay the new lot in parallel with the old lot, collecting 20 measurements over a two to four week period. Calculate the new means and standard deviations (SD or smeas in the terminology on this website) for each of the tests, then compare the SDs with your earlier estimates obtained on similar control materials.
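As a concrete sketch, the mean, SD, and resulting control limits for one analyte on the new lot could be computed as below (the data values are hypothetical, for illustration only):

```python
import statistics

# Hypothetical results for one analyte on the new control lot,
# collected in parallel with the old lot over 2 to 4 weeks (n = 20).
new_lot = [4.1, 4.0, 4.2, 3.9, 4.1, 4.3, 4.0, 4.1, 3.8, 4.2,
           4.0, 4.1, 4.2, 3.9, 4.0, 4.1, 4.3, 4.0, 3.9, 4.1]

mean = statistics.mean(new_lot)
sd = statistics.stdev(new_lot)  # sample SD (n-1 denominator), i.e., smeas

# Control limits for a Levey-Jennings chart on the new lot:
limits_2s = (mean - 2 * sd, mean + 2 * sd)
limits_3s = (mean - 3 * sd, mean + 3 * sd)
print(f"mean={mean:.3f}  SD={sd:.4f}  2s limits={limits_2s}")
```

The SD obtained here is what should be compared with your earlier estimates on similar control materials.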
What should I do when control products with long expiration dates are not available (such as the case for many hematology control materials)?
A practical strategy is to (a) purchase similar control lots from the same manufacturer and estimate the mean for the new lot on the basis of a limited number of measurements (say 9 to 16), (b) use that estimate of the mean with your earlier estimate of the standard deviation from the previous lot of control material to calculate new control limits, (c) update the mean calculation with each additional 10 to 20 measurements that are accumulated, and (d) update your estimate of the standard deviation with each successive lot. With control materials having a 30 to 90 day expiration period, it is probably reasonable to average the standard deviations from two to four successive lots to provide a good estimate of method imprecision. If the numbers of measurements per lot are quite different, you can calculate a weighted average that takes into account the amount of data for each lot number.
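The weighted average described above is essentially a pooled SD that weights each lot by its degrees of freedom. A minimal sketch, using hypothetical SD and n values for three successive lots:

```python
import math

# Hypothetical (SD, n) pairs from three successive control lots.
lots = [(0.12, 18), (0.15, 25), (0.13, 12)]

# Pooled (variance-weighted) SD: each lot's variance is weighted
# by its degrees of freedom (n - 1), so lots with more data count more.
numerator = sum((n - 1) * sd ** 2 for sd, n in lots)
denominator = sum(n - 1 for sd, n in lots)
pooled_sd = math.sqrt(numerator / denominator)
print(f"pooled SD = {pooled_sd:.4f}")
```

The pooled SD then serves as the imprecision estimate for calculating control limits on the next lot.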
What should I do when a control value is outside the calculated control limits?
Assuming that the mean and standard deviation were determined from an adequate number of measurements and that the control limits have been properly calculated, the proper response to an out-of-control situation is to stop the testing process, reject the analytical run (do not report patient results), troubleshoot the process, correct the cause of the problem, document the corrective action, then restart the process and reanalyze the patient samples and controls.
Do I have to record every result for a control material?
For example, if I know the reason a control value is out of acceptable limits (e.g., there was a bubble at the bottom of the sample cuvette so there was a short sample), do I have to write that on a QC log or save the result via online data capture?
Yes, every control result should be recorded, particularly those that are related to problems with a method. These "bad" results should generally be excluded from calculations, but should be included in the database to develop a record of how often and what kind of problems occur.
Which QC data should be excluded when calculating monthly statistics (mean, standard deviation, and coefficient of variation)?
Should QC observations from a rejected run be excluded even if they are within 2SD (or 2s limits in the terminology of this website) of the mean? For example, a 13s violation occurs for Level I, but the QC observations on Levels II and III are within 2 SD. Do you still include the QC observations from Levels II and III when calculating the monthly statistics?
Remember that the control limits are supposed to describe the variation expected when performance is stable, i.e., when there are no problems occurring. Therefore, it is advisable to eliminate all control values from all runs that have been rejected, even if some of those control values are within 2 SD of the mean.
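In code, this exclusion amounts to filtering out every value from a rejected run before computing the monthly statistics. A sketch with a hypothetical QC log (run IDs, levels, and values are made up for illustration):

```python
import statistics

# Hypothetical QC log: each entry is (run_id, level, value, run_rejected).
qc_log = [
    (1, "I", 4.05, False), (1, "II", 8.10, False),
    (2, "I", 4.60, True),  (2, "II", 8.05, True),   # run 2 rejected (1-3s on Level I)
    (3, "I", 4.00, False), (3, "II", 7.95, False),
]

# Drop every value from a rejected run, even those within 2 SD of the mean:
# the Level II value from run 2 is excluded along with the Level I violation.
level1 = [v for (_, lvl, v, rejected) in qc_log if lvl == "I" and not rejected]
monthly_mean = statistics.mean(level1)
print(level1, monthly_mean)
```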
Which QC results should be included in updating your mean and SD statistics?
When you have results on two different control materials and one of them falls outside of 2s limits, then is repeated and falls within the 2s limits so the run is accepted, do you record both the original and the repeat values, or only the one that is in-control?
This question poses the other side of the issue discussed in the previous question. Again, the purpose is to characterize and correctly describe the variation expected in stable operation, which suggests that the control data from all runs that are judged to be in-control should be included and the control data from all runs judged to be out-of-control should be excluded. A complicating issue here is the apparent practice of repeating all results outside of 2s, which is discussed in more detail in QC - The Out-of-Control Problem.
What is N?
Is N the number of replicates of one control material (i.e., one level) or is N the number of levels of control materials (e.g., bi-level, tri-level, or quad-level materials)?
We consider N to be the total number of control measurements that are available for inspection when using common Levey-Jennings type QC charts or multirule type QC procedures, where it is possible to combine the measurements from different materials to accumulate a higher N (and higher error detection) for evaluating control status. These measurements may be replicates on one level or material, individual measurements on two or more materials, or replicate measurements on two or more materials. For example:

- If you assay a single material and make two measurements on that material, N is 2.
- If you assay two materials (as required by US CLIA regulations) and make single measurements on each, N is 2.
- If you assay two materials and make duplicate measurements on each, N is 4.
- If you assay three materials and make single measurements on each, N is 3.
- If you assay three materials and make duplicate measurements on each, N is 6.

With mean/range or cusum type QC procedures, where it is more difficult to combine the measurements from different control materials, N is more likely to be the number of replicates on an individual material.
Will multirule QC assure ideal QC performance?
Currently our laboratory applies the 12s warning rule and the 13s, 22s, R4s, 41s, and 10x rejection rules for all assays that are monitored with two levels of controls. For three levels of controls, we apply the 12s warning rule and the 13s, 22s (hematology only), 2of32s (other assays), R4s, and 12x rules for all assays. Will applying all these rules assure 90% error detection and less than 5% false rejections for all our assays?
No, different QC procedures (different control rules, different Ns) are needed for different tests because these tests may have different quality requirements and your methods may have different levels of precision and accuracy. The multirule procedures that you are using will certainly provide as good performance as is possible with low numbers of control measurements, such as Ns of 2 or 3, but you may not need all these rules if your methods have very good precision and accuracy.
How exactly do you apply the 41s rule?
If you have 4 consecutive values just a little bit over the 1 standard deviation limit, is the 41s rule violated?
Yes. The reason for defining decision criteria is to make sure everyone interprets control data exactly the same way. "Close" is hard to interpret exactly; therefore, a strict interpretation needs to be made. Remember, however, that you don't necessarily have to use a 41s rule if method performance fits well within the quality required for a test. It is also possible in some situations to use a 41s rule as a warning rule to initiate prospective action (such as preventive maintenance) rather than retrospective action (rejection of a run).
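A strict interpretation of the 41s rule is straightforward to encode. The sketch below (hypothetical values, mean of 10.0 and SD of 1.0) shows that four consecutive values even slightly beyond the same 1s limit trigger the rule:

```python
def violates_41s(values, mean, sd):
    """True if the last 4 consecutive values all exceed the same 1s limit."""
    if len(values) < 4:
        return False
    last4 = values[-4:]
    return (all(v > mean + sd for v in last4) or
            all(v < mean - sd for v in last4))

# Four values just barely over the +1s limit (11.0) still violate the rule:
assert violates_41s([10.0, 11.1, 11.05, 11.2, 11.01], mean=10.0, sd=1.0)
```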
How exactly do you apply the R4s rule?
Is there a difference on how the R4s rule is implemented manually as a "counting" range rule and via computer as a "calculation" range rule?
First, remember that the original paper recommended that the range rule be applied within a run in order to monitor random error. If used across runs, it is possible that systematic changes between runs will be picked up and misinterpreted as changes in random error. While some analysts have recommended using a range rule across runs, we still think it is best to restrict its use to within a run.
Second, concerning manual vs computer applications, manual applications are easier if the rule is used to "count" one control measurement exceeding a high or low 2s control limit and another exceeding the opposite 2s control limit, e.g., one measurement is out +2s and another is out -2s, therefore a range of at least 4s has been observed. In computer applications, it is possible to calculate the exact difference between the highest and lowest control values in a group, then determine whether that difference exceeds 4s, e.g., one measurement is +2.4s and another is -1.8s, for a difference or range of 4.2s. With Ns greater than 4 per run, it is advantageous to use an exact range rule and exact control limits in order to maintain a satisfactorily low level of false rejections; thus, a computerized application can provide a quantitative calculation that is particularly advantageous with higher N multirule QC procedures.
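The two forms of the range rule can be sketched as follows, using the z-score example from the answer above (+2.4s and -1.8s). Note that the counting form misses this violation while the exact calculation catches it:

```python
def r4s_counting(z_scores):
    """Manual 'counting' form: one value beyond +2s AND one beyond -2s."""
    return any(z > 2 for z in z_scores) and any(z < -2 for z in z_scores)

def r4s_exact(z_scores):
    """Computerized 'calculation' form: range of the group exceeds 4s."""
    return (max(z_scores) - min(z_scores)) > 4

within_run = [2.4, -1.8]         # z-scores for the controls in one run
print(r4s_counting(within_run))  # False: -1.8 does not exceed the -2s limit
print(r4s_exact(within_run))     # True: range = 2.4 - (-1.8) = 4.2s
```

Both functions are meant to be applied within a single run, per the original recommendation.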
How exactly do you apply the 10x rule?
If you have 10 consecutive values on the same side of the mean, but very close to the mean, is the 10x rule violated?
Yes. The reason for defining decision criteria is to make sure everyone interprets control data exactly the same way. "Close" is hard to interpret exactly; therefore, a strict interpretation needs to be made. Remember, however, that you don't necessarily have to use a 10x rule if method performance fits well within the quality required for a test. It is also possible in some situations to use a 10x rule as a warning rule to initiate prospective action (such as preventive maintenance) rather than retrospective action (rejection of a run).
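As with the 41s rule, a strict 10x check is easy to encode. The sketch below (hypothetical values) shows that ten consecutive values on the same side of the mean violate the rule no matter how close to the mean they are:

```python
def violates_10x(values, mean):
    """True if the last 10 consecutive values fall on the same side of the mean."""
    if len(values) < 10:
        return False
    last10 = values[-10:]
    return (all(v > mean for v in last10) or
            all(v < mean for v in last10))

# Ten values only slightly above the mean still violate the rule:
run = [10.01, 10.02, 10.01, 10.03, 10.02,
       10.01, 10.02, 10.01, 10.03, 10.02]
assert violates_10x(run, mean=10.0)
```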
What control measurements can be used when applying "bias rules"?
For monitoring systematic errors with "bias rules" such as 22s, 2of32s, 31s, 41s, 10x, and 12x, is it correct that a QC violation nullifies the application of these "bias rules" to control data originating before the run in which the violation has occurred?
When the number of control measurements in a run is too low to apply rules such as the 41s, 10x, or 12x (e.g., N=2, so you can't apply these rules within that run), it is possible to use control measurements from earlier runs to gain a sufficient number of measurements. If a systematic error has started and gone undetected, use of these measurements should increase the capability of detecting the problem. However, if a previous run has been rejected as out-of-control, then the control data in the out-of-control run (and any earlier runs) cannot be used because it does not represent the performance of the method after the problem has been fixed. In this situation, it is often best to start up with additional controls in the first run after an out-of-control situation in order to apply some of these additional control rules.
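One way to encode this restriction is to pool control data backwards from the current run and stop at the first rejected run, so that data from before a rejection is never used for the across-run bias rules. A sketch with hypothetical runs of z-scores:

```python
def values_for_bias_rules(runs):
    """Collect control z-scores from the most recent runs, stopping at the
    last rejected run: data from before a rejection must not be pooled."""
    pooled = []
    for run in reversed(runs):  # walk backwards from the newest run
        if run["rejected"]:
            break               # nothing before this run can be used
        pooled = run["z_scores"] + pooled  # keep chronological order
    return pooled

runs = [
    {"z_scores": [0.5, 1.2], "rejected": False},
    {"z_scores": [3.4, 0.9], "rejected": True},   # 1-3s violation, run rejected
    {"z_scores": [1.1, 1.3], "rejected": False},
    {"z_scores": [1.2, 1.4], "rejected": False},
]
print(values_for_bias_rules(runs))  # only the two runs after the rejection
```

With N=2 per run, two accepted runs after the rejection yield the four measurements needed to apply a 41s rule again.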
What do I do about patient results that were reported before a problem was detected with a control rule applied across runs?
For example, I'm using a 13s/22s/R4s/41s control procedure with 2 control measurements per run. Yesterday's run was in-control and I reported the patient results. Today, when I applied the 41s rule across the 2 control measurements from today's run and the 2 control measurements from yesterday's run, there is a 41s violation. What do I do with the patient results I already reported?
This is a tough question and here are our thoughts about it. The purpose of applying the 41s rule across runs is to detect problems as soon as possible. Most likely a small systematic shift or systematic error began yesterday and by today there are enough control data to detect the change, therefore, you are becoming aware of the problem as soon as you can. That's the good news. You now know there is a problem and can do something to correct it. If you weren't using the 41s rule, you still wouldn't know about the problem.
The bad news is that yesterday's patient results might not be as good as you thought. Most likely there is a small systematic error (because it is only being detected by the 41s rule and not by the 13s or 22s rules, which would be violated with larger errors). After you correct any problems with the testing process, you can reanalyze some of the samples from yesterday's run if they are still available, compare the old test values with the new values, and assess the size of the errors to determine whether they would affect the clinical use and interpretation of the test results. If the errors are large, you should reanalyze all the patient samples and issue corrected reports with the new values. Document that the previous run has been checked, document your findings, and document what you did in terms of notifying and/or correcting any test results.
If you can't reanalyze samples from the previous run, then the results stand as reported. The control information available at that time indicated that the run was to be accepted. You did what was right in reporting the results based on the available information. Remember that the quality of laboratory services also depends on the turnaround time for a test result, particularly when needed for the critical care of the patient. We need to provide the best information we can at the time it is requested.
Is there a multirule QC procedure for computerized immunoassay QC which fulfills all regulations?
There certainly are multirule QC procedures that can be implemented via computer and will fulfill regulatory requirements for statistical QC. Some detailed example applications for immunoassay QC have been provided on this website by Dr. Neill Carey, as well as some illustrative examples of higher N multirule procedures in answer to earlier questions about immunoassay QC. These web-materials also include references to papers in the literature that specifically deal with QC design for immunoassays.
What references describe how to apply and interpret multirule QC?
The original reference paper for multirule QC, which was published in the Clinical Chemistry journal in 1981, provides a detailed explanation of these rules and their interpretation. [It will be available soon for viewing on this website with the aid of the Adobe Acrobat reader (which can be downloaded if needed)]. There are also explanations in several texts. See the following references.
- Westgard JO, Barry PL, Hunt MR, Groth T. A multi-rule Shewhart chart for quality control in clinical chemistry. Clin Chem 1981;27:493-501.
- Westgard JO, Barry PL. Improving Quality Control by use of Multirule Control Procedures. Chapter 4 in Cost-Effective Quality Control: Managing the quality and productivity of analytical processes. AACC Press, Washington, DC, 1986, pp.92-117.
- Westgard JO, Klee GG. Quality Management. Chapter 16 in Fundamentals of Clinical Chemistry, 4th edition. Burtis C, ed., WB Saunders Company, Philadelphia, 1996, pp.211-223.
- Westgard JO, Klee GG. Quality Management. Chapter 17 in Textbook of Clinical Chemistry, 2nd edition. Burtis C, ed., WB Saunders Company, Philadelphia, 1994, pp.548-592.
- Cembrowski GS, Sullivan AM. Quality Control and Statistics. Chapter 4 in Clinical Chemistry: Principles, Procedures, Correlations, 3rd edition. Bishop ML, ed., Lippincott, Philadelphia, 1996, pp.61-96.
- Cembrowski GS, Carey RN. Quality Control Procedures. Chapter 4 in Laboratory Quality Management. ASCP Press, Chicago 1989, pp.59-79.
What training is available to help me decide what QC to use?
Can you recommend any video tapes and additional reference materials that discuss when to use single or multirule QC procedures? I understand we will need to define the quality required for each test, look at the precision and accuracy being achieved by our methods, then assess the probabilities for error detection and false rejection. I am not a statistician, so something easy to understand would be appreciated.
The ASCLS and Westgard QC have collaborated on two online courses: Basic QC Practices and the Multirule Minicourse ("Westgard Rules"). The first course is a comprehensive treatment of all aspects of quality control. The second minicourse is a quick but thorough explanation of what people often refer to as the "Westgard Rules".
The simplest quantitative approach for selecting appropriate single or multirule QC procedures is to use the OPSpecs Manual - Expanded Edition, which includes OPSpecs charts for Ns of 2 and 4 (appropriate for use with 2 levels of control materials) and Ns of 3 and 6 (appropriate for use with 3 levels of control materials).
There also are extensive training materials and some interactive tools available on this website. See the Archives section for a complete list of lessons on "QC Planning".
Finally, there is a complete Quality Control Planning for Healthcare Laboratories training CD, and you can even get ACCENT continuing education credits for completing it.