Questions from the 2021 Quality in the Spotlight Masterclass
On July 12, 2021, the Westgards took part in the Quality in the Spotlight "Masterclass" on POC testing and quality. The virtual masterclass generated so many questions that we couldn't answer all of them in the time given. Here are a few questions we answered later.
Sten Westgard, MS
September 2021
The Antwerp conference, postponed from 2020, went virtual for 2021. This time it took the form of a shorter online Masterclass, focused primarily on Point-of-Care testing and quality. To that end, James O. Westgard and Sten Westgard took part, providing lectures and taking questions.
Here are a few questions that spilled over from the virtual conference and had to be answered offline. We're happy to share these with those who weren't able to attend the class. For all the answers, you can purchase access at https://qualityinthespotlight.com/masterclass-2021/
Question: How do you recommend handling QC monitoring for a POC analyzer that has no built-in QC software (Levey-Jennings chart recording and visualization) and for which there is usually no tailored commercial QC material?
When a POC device has few QC capabilities, you should place great emphasis on the _selection_ of the device. You need to choose a device of sufficiently high quality that it needs as little QC as possible. Then your job on the QC side will be simpler.
Selection of a POC device should also take these QC factors into account: the ability to generate LJ charts and whether or not commercial QC materials exist. Ideally, you should choose POC devices of high quality, with software for QC charting, and a ready supply of commercial QC options.
However, outside of the ideal world, there are many POC devices that (1) are of poor quality, (2) have no QC charting features, and (3) have no controls available. In fact, (2) and (3) are probably related to (1). The worse the device performs, the less likely the manufacturer is to want you to see that on an LJ chart.
If there are QC options, but no built-in QC software, that doesn't let you off the hook. Just because it's inconvenient to manually enter QC data into a charting program doesn't mean you don't have to do it. In decades past, there was NO LJ software at all, period. It was done completely by hand. Then software emerged, but all data had to be manually entered. We live in a wonderful era where many things are automated, so much so that we begin to assume everything should be seamless and easy. Entering data into a software program is hardly a crushing burden. If all that is preventing you from fulfilling your professional duty is typing on a keyboard, exercise those fingers, please.
When there is truly no commercial QC, however, this is a situation where "Repeat Patient QC" (RPT-QC) can be a useful technique. Remember, this is not simply repeating a patient sample and hoping the result is the same; there are defined limits on what that repeat should look like.
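As a rough illustration of the idea (not the published RPT-QC procedure), here is a minimal Python sketch that compares an original and repeated patient result against a limit derived from the method's analytical SD. The z value and the simple difference check are illustrative assumptions; actual RPT-QC limits must be derived and validated from your own method's documented imprecision.

```python
import math

def repeat_within_limit(original, repeat, analytical_sd, z=1.96):
    """Check whether a repeated patient result agrees with the original.

    The difference of two independent measurements has a standard deviation
    of sqrt(2) * analytical_sd, so the acceptance limit used here is
    z * sqrt(2) * analytical_sd. The z value (1.96 for ~95% coverage) is an
    illustrative assumption, not a published RPT-QC limit.
    """
    limit = z * math.sqrt(2) * analytical_sd
    difference = abs(repeat - original)
    return difference <= limit, difference, limit

# Hypothetical example: a glucose result of 5.6 mmol/L repeated as 5.9 mmol/L,
# with an assumed analytical SD of 0.15 mmol/L at that concentration.
ok, diff, limit = repeat_within_limit(5.6, 5.9, 0.15)
print(f"difference={diff:.2f}, limit={limit:.2f}, within limit={ok}")
```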
Question: With Sigma metrics, do we focus too much on a single value without taking into account the uncertainty of each component of the Sigma equation? In other words, should we include a confidence interval for the Sigma value?
We can put confidence intervals around everything: mean, SD, even individual patient results. If we were dedicated statisticians, we would do this. However, we don't often include all the confidence intervals that we could, partly because it tends to confuse. If we hand over a set of patient results and have tripled the number of values because we included the confidence interval (high, low) as well as the "result," we might find that the clinicians are a bit frustrated with us. Sometimes, a point estimate is what we desire.
The short answer to this question is, if you are worried about confidence intervals around something, collect more data. Worried about the confidence interval around your mean? Collect more data. Worried about the confidence interval around your standard deviation? Collect more data. Worried about the confidence interval around your Sigma-metric? Guess what the answer is… Collect more data.
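To make the "collect more data" point concrete, here is a small sketch using the standard chi-square confidence interval for a standard deviation. The SD of 1.0 and the n values are made-up numbers for illustration, and scipy is assumed to be available.

```python
from scipy import stats

def sd_confidence_interval(sd, n, confidence=0.95):
    """Chi-square confidence interval for a standard deviation
    estimated from n observations."""
    alpha = 1.0 - confidence
    df = n - 1
    lower = sd * (df / stats.chi2.ppf(1 - alpha / 2, df)) ** 0.5
    upper = sd * (df / stats.chi2.ppf(alpha / 2, df)) ** 0.5
    return lower, upper

# The same observed SD yields a tighter interval as n grows.
for n in (20, 40, 100):
    lo, hi = sd_confidence_interval(sd=1.0, n=n)
    print(f"n={n:>3}: 95% CI for SD = ({lo:.2f}, {hi:.2f})")
```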
David Burnett – of ISO 15189 chair fame – created a Sigma-metric confidence interval calculator. However, it has seen little use in the literature, nor have we seen demand for it in 20 years.
If you are worried about the reliability of a number, collect more data. If you don’t like the Sigma metric, remember that it is optional – ISO 15189 does not mandate you calculate it. If you would prefer to focus on more than one number, remember you can calculate as many Sigma metrics as you want – you can calculate a Sigma metric for every control level if you want.
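For example, calculating a Sigma metric for each control level is just a matter of applying Sigma = (TEa − |bias|) / CV to each level's observed bias and CV. The sketch below uses hypothetical numbers; the function name and the two-level dictionary are illustrative, not from any particular instrument.

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma metric on the percentage scale:
    Sigma = (TEa - |bias|) / CV."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical QC levels (bias and CV in %) -- replace with your own
# allowable total error and observed performance at each level.
levels = {
    "level 1": {"bias_pct": 1.0, "cv_pct": 2.0},
    "level 2": {"bias_pct": 1.5, "cv_pct": 2.5},
}
tea_pct = 10.0
for name, perf in levels.items():
    print(f"{name}: sigma = {sigma_metric(tea_pct, **perf):.1f}")
```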
Getting more confidence is a matter of determination and will – if you double the number of controls you are running, you can narrow your confidence interval. It just depends on how much of your wallet you are willing to commit to improving the confidence of any particular metric.
Question: Do you determine the Sigma metric of the method based on QC material (QCM) data only? If so, does it overestimate the Sigma metric of the method?
This is a more fundamental question than the Sigma metric. If you are concerned that your QCM is biased and providing incorrect estimates of performance, you are always free to get a better QCM. Indeed, if you think a QCM is significantly non-commutable, it's your professional responsibility to change to a better QCM. Selecting a commutable QCM has always been a best laboratory practice; Sigma metrics didn't change that.
If you think the Sigma metric is being overestimated by a QCM, then the daily QC is also being misrepresented. If you believe that the distortion a QCM introduces into daily QC is acceptable, but not acceptable for a Sigma metric, you have performed an interesting feat of mental gymnastics: your QCM is right for daily QC, but not for performance benchmarking.
Outside the ideal world, there is always some level of matrix effect and non-commutability present in today’s QCMs. We accept that a QCM is not perfect, but we use it for daily QC. If we accept that imperfection on a daily basis for QC charting, it’s not a great leap to accept that QCM for Sigma metrics too.
If you prefer, you can determine your imprecision through more commutable methods. The Sigma metric equation accepts imprecision from multiple sources. Most labs, however, run QCMs as a way to monitor imprecision.
Many more questions were posed and answered at the Masterclass. You can access those through the conference website.
We look forward to seeing you all at the 2022 Quality in the Spotlight Conference, March 21-22.