WESTGARD QC
Testing Equivalent Quality: A Better Way
An updated version of this essay appears in the Nothing but the Truth about Quality book.
CMS has created "equivalent QC", but the regulations really only allow "equivalent quality testing." This difference is important - and if "EQC" is here to stay, finding the equivalent quality in testing is the only way to find a real solution. Finding that quality can be done, and done simply, using Sigma metrics.
- What is "Equivalent Quality"?
- How do we measure "Unquality"?
- How do we define Good Quality?
- What's wrong with CMS's "Equivalent QC"?
- What's wrong with current thinking about Quality?
- Why not use Sigma Metrics to evaluate Quality?
- What could be simpler?
- Why not try it?
- References
February 2004.
The phrase “equivalent quality testing” appears in the CLIA Final Rule [1] and is causing a lot of confusion, even at CMS. Their interpretation is critical because they provide guidelines for laboratory inspectors, which are then imposed on others, like you and me and the laboratory. CMS’s interpretation is first seen in the State Operations Manual [2] where “equivalent quality testing” is converted to “equivalent QC procedures.”
I’ve been writing about CMS’s interpretation and recommendations to laboratories for evaluating and implementing equivalent QC procedures, which, in my opinion, lack any scientific rationale. See the following for more discussion of the shortcomings:
- “Equivalent QC Procedures”
- “Appropriate QC Procedures”
- “Lies, Damn Lies, and Equivalent QC”
- “More on Quality-Less Compliance”
In this discussion, I want to propose a solution to the problem. While I still disagree with placing the responsibility for equivalent QC on laboratories, instead of manufacturers, if laboratories are to do this, there is a much better way!
What’s “Equivalent Quality”?
How do you evaluate equivalent quality? It turns out to be rather simple. It might even make use of the evaluation data that is recommended in the original CMS guidelines, but that data needs to be analyzed in a different way to properly characterize the performance of the method in the laboratory.
To understand the problem and its solution, we must begin by understanding the meaning of the words “equivalent” and “quality.”
- Equivalent implies equal - according to one dictionary, “equal in force, amount, or value; like in signification or import; corresponding or virtually identical, especially in effect or function; or having equal power, might, or authority.”
- Quality is more difficult to define, but one well-accepted definition is that quality is the “totality of features and characteristics of a product or service that bear on its ability to satisfy given needs.” This definition comes from the American Society for Quality – the professionals in the field of quality. We might refine the last part of this definition – the phrase “to satisfy given needs” – to mean “to conform to the stated or implied requirements of users and customers.” That will make the definition more operative and easy to apply with existing recommendations on laboratory performance.
How do we measure “Unquality”?
The difficulty with quality is how to measure it! However, as laboratory scientists who regularly deal with characteristics such as accuracy and precision, we recognize that the measures of certain characteristics are related to the “lack of” the characteristics, e.g., inaccuracy and imprecision.
- Accuracy is determined by experiments that measure the inaccuracy, as described by the bias between a method and the correct or true values (ideally obtained from a reference quality method but also by comparison with existing field methods).
- Precision is determined by experiments that measure the imprecision, as described by the SD or CV of a method.
Likewise, we should understand that the measure of quality is unquality, i.e., the lack of conformance to requirements, which is universally described by defectives, defects, or defect rates.
Imprecision, inaccuracy, and unquality! They’re all determined by measuring the lack of agreement or the lack of conformance.
How do we define good quality?
The next step in making quality a quantitative characteristic is to define “good quality”, i.e., what is needed, required, or desired. We can speak of quality in quantitative terms only when we define how good something has to be. Whether the characteristic is turnaround time or accuracy, an essential step is to define the goal or requirement, e.g., an allowable time period of 60 minutes and an allowable error of 10 mg/dL.
- A test was reported in 47 minutes. Acceptable or defective?
- A test is correct within 12.4 mg/dL. Acceptable or defective?
To manage quality, to assess it, improve it, and ultimately assure that customer needs are satisfied, it is absolutely essential to define the requirement for good quality. Otherwise, quality is only a concept, does not have any practical meaning, and will not have any impact on operations and production.
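To make this concrete, here is a minimal sketch (in Python, using the hypothetical numbers from the examples above) showing that a result can only be judged acceptable or defective once the requirement has been defined:

```python
# Minimal sketch: a result is only "acceptable" or "defective" relative to
# a defined requirement for good quality.

def is_defective(observed_error, allowable_error):
    """Classify a single result against a stated requirement."""
    return observed_error > allowable_error

# Turnaround time: requirement of 60 minutes, result reported in 47 minutes.
print(is_defective(observed_error=47, allowable_error=60))    # False -> acceptable

# Analytical error: requirement of 10 mg/dL, result off by 12.4 mg/dL.
print(is_defective(observed_error=12.4, allowable_error=10))  # True -> defective
```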
What’s wrong with CMS’s "equivalent QC"?
Equivalent quality testing must have something to do with “testing that satisfies a defined requirement for quality.” To assess equivalency, the quality that is required must be defined. Equivalent to what? Equal to what?
For analytical performance, a quality requirement can be defined in terms of an “allowable total error,” such as specified in the CLIA proficiency testing criteria for acceptable performance. CLIA itself defines allowable total errors for some 80 or so tests; therefore, the CLIA regulations include minimum requirements for quality for many of the commonly performed laboratory tests.
For example, for measurements of interest in POC applications, which appears to be the main focus of the discussions in the recent news [3], CLIA defines the following allowable total errors:
- Blood gas pH should be correct within 0.04 pH Units
- Blood gas pCO2 should be correct within 5 mm Hg
- Sodium should be correct within 4 mmol/L
- Potassium should be correct within 0.5 mmol/L
- Glucose should be correct within 6 mg/dL or 10%, whichever is greater
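For illustration only, those criteria can be captured in a small lookup table, with the glucose “whichever is greater” rule written out explicitly (a sketch in Python; the values are as quoted above, and the CLIA tables remain the authoritative source):

```python
# CLIA allowable total errors (TEa) for the POC tests listed above.
CLIA_TEA = {
    "blood gas pH": 0.04,    # pH units
    "blood gas pCO2": 5.0,   # mm Hg
    "sodium": 4.0,           # mmol/L
    "potassium": 0.5,        # mmol/L
}

def glucose_tea(target_mg_dl):
    """Glucose: 6 mg/dL or 10% of the target value, whichever is greater."""
    return max(6.0, 0.10 * target_mg_dl)

print(f"{glucose_tea(50):.1f} mg/dL")   # 6.0 mg/dL  (the fixed limit dominates at low levels)
print(f"{glucose_tea(300):.1f} mg/dL")  # 30.0 mg/dL (the 10% limit dominates at high levels)
```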
One might think that CMS would make use of these PT criteria for defining equivalency, but they don’t. It’s probably too obvious and would take away the mystery in the regulations. Truth be told, either CMS doesn’t know what they mean by equivalency or by quality (option 1), or they do know but aren’t telling anyone (option 2), or they don’t know and are trying to make believe they do (option 3).
If you find that confusing, that’s also the nature of CMS’s options for equivalent QC procedures. The lack of clear thinking becomes obvious when you look at the “evaluation processes,” which allow a laboratory to reduce the frequency of testing external controls from daily to weekly to monthly (depending on the procedural controls included in the instrument). The evaluation process only requires the laboratory to run controls daily for 10 to 30 to 60 days (options 1, 2, and 3). The lack of any rejections during that period is taken to mean there are no defective test results, and by inference (and a great big leap of faith), that equivalent quality can be achieved by running only weekly or monthly controls.
But how can you draw any conclusion about defective results and equivalent quality if you have never defined how good a test should be? How could you detect medically important errors if you haven’t defined the requirement for good quality and designed the QC procedures to detect runs with bad quality? Can you assume that simply running controls will detect any and all errors? NOT TRUE! Running controls is not the same as assessing conformance to a defined quality requirement! That requires doing the right QC right, but the CMS recommended evaluation processes don’t!
What’s wrong with current thinking about quality?
Something is missing in current thinking about quality in healthcare today! Fundamental to the organization and operation of any business or service is the knowledge of the goals to be achieved and the requirements to be satisfied. For example, for test turnaround time, we may focus on providing 95% of our STAT testing results within one hour. For analytical quality, we may focus on providing 90% assurance that test results are correct within the CLIA allowable total errors. These goals will not be achieved without conscious efforts to plan the processes carefully, monitor performance objectively, and make improvements when necessary.
A major problem in healthcare today is that quality goals are seldom defined or even acknowledged, as evident in the CMS guidelines on equivalent quality testing and equivalent QC procedures. Many quality programs, policies, and procedures are put in place that make it look like quality is being assessed and assured, even though the goals for good quality are never defined. How can we manage quality if we don’t define how good it needs to be?
Healthcare organizations often attempt to get around this shortcoming by benchmarking performance against peer institutions. None of them have defined what good quality should be, only that they want to be as good as their competition. The result is that we often consider error rates up to 10% to be okay, 5% to be good, 1% to be excellent, and anything better than that to be fantastic.
Here’s where current thinking is wrong! On the Sigma scale, 6 sigma performance is considered the goal for world class quality and 3 sigma performance is considered unacceptable for routine operation and production. A 10% error rate corresponds to 2.8 sigma performance, 5% to 3.2 sigma, and 1% to 3.8 sigma. Instead of error rates of 10% to 1%, we need to aim for 0.1% to 0.01% to 0.001% (4.6 sigma, 5.3 sigma, and 5.9 sigma, respectively). We’re off by orders of magnitude in our current thinking about acceptable error rates because we haven’t defined objective quality goals and therefore aren’t able to assess our performance against any goals.
Why not use Sigma metrics to evaluate quality?
Six Sigma Quality Management [4] is slowly making inroads in healthcare organizations and offers a real hope for improving quality management thinking and processes. The reason is that Six Sigma focuses on defects, which in turn requires that goals for good quality be defined.
Six Sigma provides a universal methodology for measuring quality by counting the defects, determining the defect rate as “defects per million” or “DPM”, and then converting DPM to a sigma-metric (by use of standard tables available in any Six Sigma text). Benchmarking can be done using the sigma-metrics, which first account for quality goals and secondarily allow universal comparisons across processes, services, organizations, and industries.
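In place of the printed tables, the same conversion can be sketched with the inverse normal distribution and the conventional 1.5-sigma shift that the standard short-term tables assume (the shift is a Six Sigma convention, not something defined by CMS or CLIA):

```python
from statistics import NormalDist

def dpm_to_sigma(dpm):
    """Convert defects per million (DPM) to a short-term sigma-metric,
    using the 1.5-sigma shift built into the standard conversion tables."""
    return NormalDist().inv_cdf(1 - dpm / 1_000_000) + 1.5

# Error rates discussed above, expressed as defects per million:
for rate, dpm in [("10%", 100_000), ("5%", 50_000), ("1%", 10_000), ("0.1%", 1_000)]:
    print(f"{rate:>4} defect rate -> {dpm_to_sigma(dpm):.1f} sigma")
# Reproduces the figures cited earlier (2.8, ~3.2, 3.8, and 4.6 sigma)
# to within the rounding of published tables.
```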
When CMS says equivalent quality testing, are they asking for 3 sigma performance or 6 sigma performance? Will the CMS evaluation process allow laboratories to provide less than 3 sigma performance? No one knows, but it would be easy to find out! Just apply Six Sigma concepts to “equivalent quality testing” and evaluate method performance and the need for QC using sigma-metrics. The evaluation protocols would then focus on estimating the imprecision (SD or CV) and inaccuracy (bias) of the method, rather than assessing control status (which they can’t and don’t because of inadequate design). The protocols calling for 30- to 60-day evaluations would be sufficient to provide these estimates if control materials had known correct values or available peer values. It would even be okay to initially assume a bias of zero and just estimate the imprecision achieved within the laboratory.
These estimates of imprecision and inaccuracy would then be used to calculate the sigma metric for a method [Sigma = (TEa – bias)/SD, where TEa is the CLIA quality requirement for acceptable performance in proficiency testing and all terms are expressed in the same units (or in percentages, with the CV in place of the SD)]. For example, for blood pH, where TEa is 0.040, the observed method SD is 0.005, and the bias is 0.010, sigma would be 6.0 [(0.040-0.010)/0.005].
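As a worked sketch of that calculation (Python, using the blood pH numbers from the example; TEa, bias, and SD must all be in the same units):

```python
def sigma_metric(tea, bias, sd):
    """Sigma = (TEa - bias) / SD, with TEa, bias, and SD in the same units
    (or all expressed in percent, using the CV in place of the SD)."""
    return (tea - abs(bias)) / sd

# Blood gas pH example from the text: TEa = 0.040, bias = 0.010, SD = 0.005
print(f"{sigma_metric(tea=0.040, bias=0.010, sd=0.005):.1f}")  # 6.0
```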
The QC recommendation could then be related to the demonstrated performance of the method relative to a stated quality requirement for the test. At least 6 sigma performance should be required for all three options that allow reduction from daily to weekly or monthly QC. Any process with only 3 sigma performance needs controls at least every shift (and methods with less than 3 sigma performance should be replaced by better methods). Methods in-between might use daily controls.
What could be simpler?
If the goal is to provide equivalent quality testing, then CMS should modify the evaluation methodology to determine the sigma performance of the methods and relate the QC that is needed to the performance that is demonstrated. For instruments with procedural controls, only one testing protocol is needed to obtain 30 days routine control data on two different materials. For methods without procedural controls, the 60 day protocol could be used. The CLIA criteria for acceptable performance in proficiency testing would be used as the quality goals. Sigma could be calculated for a bias of zero to make things as simple as possible. Then the recommendation for frequency of running QC could be related to the sigma-metric determined on the basis of the minimum quality required by CLIA and the actual performance observed in the laboratory.
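A minimal sketch of that decision rule follows (Python; the cutoffs are the ones argued for above and are a proposal, not a CMS requirement):

```python
def recommended_qc_frequency(sigma):
    """Relate QC frequency to the demonstrated sigma-metric, following the
    proposal in the text (a sketch, not a CMS rule)."""
    if sigma >= 6.0:
        return "candidate for reduced QC (weekly or monthly external controls)"
    if sigma >= 3.0:
        return "external controls at least daily (every shift near 3 sigma)"
    return "less than 3 sigma: replace with a better method"

# Blood pH example, assuming zero bias for simplicity as suggested above:
sigma = (0.040 - 0.0) / 0.005   # TEa = 0.040, bias assumed 0, SD = 0.005
print(f"{sigma:.1f} -> {recommended_qc_frequency(sigma)}")
# 8.0 -> candidate for reduced QC (weekly or monthly external controls)
```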
Why not try it?
At the time of this writing, CMS is on record as saying they will evaluate data being collected in the field to assess the performance of certain instruments and methods. Let’s hope they also apply the Sigma calculations as part of their evaluation before letting the SOM guidance on equivalent QC procedures loose in the field.
If CMS doesn’t evaluate the use of Sigma metrics to assess equivalent quality, then laboratories must take responsibility to do this themselves before making any reductions in QC. Let’s see if the methods are really world class and good enough to assure the safety of our patients! Let’s make sure we have the data to prove our assumptions and the evidence to support our practices.
References
- US Centers for Medicare & Medicaid Services (CMS). Medicare, Medicaid, and CLIA Programs: Laboratory Requirements Relating to Quality Systems and Certain Personnel Qualifications; Final Rule. Fed Regist Jan 24, 2003;68(16):3640-3714. (Available at http://wwwn.cdc.gov/clia/pdf/CMS-2226-F.pdf)
- CMS State Operations Manual. (Final document will be available at http://www.cms.hhs.gov/clia/)
- Auxter-Parham S. CLIA interpretive guidelines debut on CMS web site: EQC options now applicable to broader spectrum of laboratory testing. Clin Lab News 2004;30:1, 8, 10.
- Westgard JO. Six Sigma Quality Design and Control: Desirable precision and requisite QC for laboratory measurement processes. Madison WI:Westgard QC, Inc., 2001.
James O. Westgard, PhD, is a professor of pathology and laboratory medicine at the University of Wisconsin Medical School, Madison. He also is president of Westgard QC, Inc., (Madison, Wis.) which provides tools, technology, and training for laboratory quality management.