Quality Standards
11 out of 13 Alinity-i and Alinity-c assays fail to meet new measurement uncertainty requirements
Now that there are new measurement uncertainty goals, it's time to check whether any instruments can hit them. Using the performance data from a recent study of the Alinity-i and Alinity-c instruments from a US laboratory, we assess the acceptability of the measurement uncertainty (MU) of their methods.
11 of 13 assays on the Alinity-i and Alinity-c cannot hit desirable measurement uncertainty goals
Sten Westgard, MS
February 2025
A recent study looked at the performance of the Alinity-i and Alinity-c instruments at a US laboratory:
A-107 Sample Matrix Matters: A Precision Study of BioRad, TechnoPath, and Patient Pooled Samples Across 20 High Volume Chemistries and Immunoassays. K. Sobhani, A.K. Quizon, R. Masukawa, C. Hernandez, I. Peteros, E. Manimtim.
This is an unusual study, in that it measured imprecision three different ways: with TechnoPath controls, with BioRad controls, and with patient pooled samples. More impressive still is the number of samples measured: 1,400 results for each chemistry level, and 700 results for each immunoassay level. The patient pooled samples were run 1,400 times for the immunoassays, and 2,800 times for the chemistries. There's no shortage of data here! All of it was collected in a single week, however, so one small weakness is that the data might be too optimistic. As we'll see, if this is the optimistic performance, god help the long-term performance...
| Chemistry Analyte | Plasma Pool CV | Desirable u:result | Desirable u:rw | Verdict | Minimum u:result | Minimum u:rw | Verdict | EFLM min CV | Verdict |
|---|---|---|---|---|---|---|---|---|---|
| ALT | 8.33 | 4.65 | 2.325 | FAIL | 6.98 | 3.49 | FAIL | 8.6 | PASS |
| AST | 4.17 | 4.75 | 2.375 | FAIL | 7.13 | 3.565 | FAIL | 6.4 | PASS |
| Calcium, total | 1.52 | 0.91 | 0.455 | FAIL | 1.36 | 0.68 | FAIL | 1.4 | FAIL |
| Chloride | 0.5 | 0.49 | 0.245 | FAIL | 0.74 | 0.37 | FAIL | 0.8 | PASS |
| CO2, total | 3.0 | 2.1 | 1.05 | FAIL | 3.15 | 1.575 | FAIL | 3.0 | FAIL |
| Creatinine | 2.41 | 2.2 | 1.1 | FAIL | 3.3 | 1.65 | FAIL | 3.3 | PASS |
| Glucose | 0.7 | 2 | 1 | PASS | 3 | 1.5 | PASS | 3.4 | PASS |
| IgM | 1.57 | 2.95 | 1.475 | FAIL | 4.43 | 2.215 | PASS | 4.4 | PASS |
| Potassium | 1.16 | 1.96 | 0.98 | FAIL | 2.94 | 1.47 | PASS | 2.9 | PASS |
| Sodium | 0.59 | 0.27 | 0.135 | FAIL | 0.4 | 0.2 | FAIL | 0.4 | FAIL |
The short answer: 9 out of 10 chemistries on the Alinity-c cannot meet desirable uncertainty goals. 7 of 10 cannot meet the minimum uncertainty goals. The EFLM biological-variation-derived minimum CVs are far more forgiving: only 3 of 10 assays cannot meet those goals.
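The pass/fail logic behind the table is simple enough to script. Here is a minimal Python sketch (not from the study; the 50/50 budget split is inferred from the table, where each u:rw goal is half the corresponding u:result goal), checked against three of the chemistry rows above:

```python
# Sketch: reproduce the chemistry verdicts. In the table, each u:rw goal is
# half of the corresponding u:result goal (the traditional 50% allocation),
# and the observed plasma-pool CV is judged against that u:rw goal.

# (analyte, observed plasma-pool CV %, desirable u:result goal %, minimum u:result goal %)
ASSAYS = [
    ("ALT", 8.33, 4.65, 6.98),
    ("Glucose", 0.70, 2.00, 3.00),
    ("Sodium", 0.59, 0.27, 0.40),
]

def verdict(observed_cv, u_result_goal):
    """PASS if the observed CV fits within the u:rw half of the budget."""
    u_rw_goal = u_result_goal / 2
    return "PASS" if observed_cv <= u_rw_goal else "FAIL"

for name, cv, desirable, minimum in ASSAYS:
    print(f"{name}: desirable {verdict(cv, desirable)}, minimum {verdict(cv, minimum)}")
```

Running this reproduces the verdicts in those three rows: ALT and Sodium fail both goals, Glucose passes both.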
While the study covered 10 immunoassays, only 3 of them actually have measurement uncertainty performance requirements (as of 2/5/25). So our evaluation pool is a bit small.
| Analyte | Plasma Pool CV | Desirable u:result | Desirable u:rw | Desirable u:cal | Verdict | Minimum u:result | Minimum u:rw | Minimum u:cal | Verdict | EFLM min CV | Verdict | MAU des |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| hCG | 1.94 | 4.55 | 2.275 | 2.275 | PASS | 6.83 | 3.415 | 3.415 | PASS | | | |
| T4, Free | 3.24 | 2.8 | 1.4 | 1.4 | FAIL | 4.2 | 2.1 | 2.1 | FAIL | 3.6 | PASS | 4.8 |
| TSH | 2.33 | 2.89 | 1.445 | 1.445 | FAIL | 4.34 | 2.17 | 2.17 | FAIL | 13.4 | PASS | 17.9 |
Of the three immunoassay analytes, 2 of 3 fail to meet both the desirable and minimum measurement uncertainty specifications. The EFLM only provides goals for two of those analytes, and here the Alinity-i passes both.
Overall, 11 of 13 assays on the Alinity-i and Alinity-c cannot hit desirable goals, and 9 of 13 cannot hit minimum goals. Only 3 of 12 of these assays couldn't hit the EFLM minimum CV specifications.
Of course, this is not taking into account any bias that might be present. The study didn't measure any bias, but it's naive to believe there is zero bias present in the methods. If that were to be included in the measurement uncertainty, even if we pretend the bias is merely an additional variance, even fewer analytes would pass.
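To make that concrete, here is a hypothetical sketch of folding a bias into the uncertainty as an extra variance component. The bias value is invented for illustration; the study reported none.

```python
import math

# Hypothetical: treat bias as just another variance component,
# u_combined = sqrt(u_rw^2 + bias^2). The 1.0% bias below is invented
# for illustration; the study did not measure bias.

def combined_u(u_rw, bias):
    return math.sqrt(u_rw**2 + bias**2)

# Glucose passed the desirable u:rw goal of 1.0% with an observed CV of 0.7%.
# Add a modest (assumed) 1.0% bias and it no longer fits the budget:
print(round(combined_u(0.7, 1.0), 2))  # 1.22 > 1.0, so the PASS becomes a FAIL
```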
There may be reasons why so many analytes failed to meet measurement uncertainty specifications. Despite the fact that these are patient plasma pools, and thus should suffer no matrix effects, there could still be some issue in the preparation, preservation, or aliquoting of the samples.
The laboratory might not be operating the instrument correctly, or the instrument might have been malfunctioning the week that data was being collected.
Also, if the uncertainty of calibration and reference were smaller than the 50% of the u:result budget traditionally allocated to them, then more variance can be tolerated at the u:rw level. The study didn't include those numbers, and it's rare to see any study take those into account or report them.
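As a sketch of that budget arithmetic, mirroring the linear 50/50 split the tables use (the smaller u:cal value below is hypothetical, since the study did not report calibration or reference uncertainties):

```python
# Sketch of the budget arithmetic: whatever share of the u:result budget
# u:cal does not consume is left over for u:rw (linear split, as in the
# tables above). The u:cal values are hypothetical; the study did not
# report them.

def u_rw_allowed(u_result_goal, u_cal):
    return u_result_goal - u_cal

# ALT desirable u:result goal = 4.65%. The traditional 50% allocation gives
# u:cal = 2.325%, leaving 2.325% for u:rw; a smaller u:cal frees up room:
print(round(u_rw_allowed(4.65, 2.325), 3))  # 2.325 (traditional split)
print(round(u_rw_allowed(4.65, 1.0), 2))    # 3.65 (more tolerance for u:rw)
```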
What's far more likely is that the new MU goals are too demanding, unrealistically so, and their widespread application will result in unproductive stress and strife in laboratories around the world. Or, an even more likely outcome: these new MU goals will simply be ignored by most laboratories, much as most labs already ignore the bulk of the measurement uncertainty approach.
The performance measured with TechnoPath controls and BioRad controls was not the same as that of the patient pooled samples. If readers ask, we will extend this analysis to show how both sets of controls fare against the u:rw goals.