Patient Data as QC

One of the most important things we do as a laboratory is run Quality Control (QC) material on every assay we perform, to ensure that the assay is working correctly and that the test results are valid. The importance of QC cannot be overstated: it allows us to confidently report analyte values that support correct diagnostic, treatment, and treatment-monitoring decisions for every patient we serve. And yet every laboratorian knows that running QC is often problematic.

QC material is not true serum, plasma, urine, or CSF, and as such it frequently does not act like a patient sample. The ability of a synthetic or non-human-based material to behave like a human sample in a test system is referred to as commutability. As hard as manufacturers try to make their QC products commutable, problems still exist. Every tech knows that a shift in QC is not always reflected by a shift in patient sample results, and conversely a shift in patient sample results may not be mirrored by a shift in QC.

In some cases it may be possible to use patient samples as a kind of quality control. For high-volume tests, calculating a running mean of all patient data each month can give a good overview of your assay's performance. For example, if you run 5000 sodiums each month, the average of those 5000 will be consistent from month to month, as long as your analysis system is performing consistently. Very high or very low outliers will be smoothed out by the sheer volume of tests in the data set. Our average monthly sodium has run 139 or 140 mEq/L for the last year. If it were to run 142 or 137 one month, I would look at what happened in the system to shift 5000 sodiums enough to cause that difference.
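For readers who like to see the arithmetic, here is a minimal sketch of that kind of monthly check. It is illustrative only, not our laboratory's software; the placeholder data, the historical mean of 139.5 mEq/L, and the 1 mEq/L action limit are all assumptions chosen to mirror the sodium example above.

```python
# Sketch: compute a monthly patient mean and flag it if it drifts
# outside a historical range. Data and limits are illustrative.
from statistics import mean

def monthly_patient_mean(results):
    """results: list of numeric patient values for one analyte, one month."""
    return mean(results)

def flag_shift(current_mean, historical_mean, action_limit):
    """Return True if the monthly mean moved more than the action limit."""
    return abs(current_mean - historical_mean) > action_limit

# Example: ~5000 sodium results that have historically averaged 139-140 mEq/L
sodium_results = [139.0, 140.0, 141.0, 138.0] * 1250   # placeholder data
this_month = monthly_patient_mean(sodium_results)
if flag_shift(this_month, historical_mean=139.5, action_limit=1.0):
    print(f"Investigate: monthly sodium mean {this_month:.1f} mEq/L is off baseline")
```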

One test for which this approach has been useful is tacrolimus. At roughly 600 tacrolimus tests per month, our rolling patient mean had been stable for the previous six months at 7.9 ng/mL, with a range of 7.7 – 8.1 ng/mL. In September, the mean dropped to 6.8 ng/mL. The data were still distributed as they had been previously, with no increase in the number of values at the lower end, suggesting that the shift was systemic. A look back turned up a new calibrator and a recalibration. None of this had been reflected by a shift in the regular QC.
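The "still distributed as before" check can itself be done simply, for example by comparing deciles month over month. The sketch below is only an illustration of that idea, using simulated values rather than our actual tacrolimus data; the distributions and spread are assumptions.

```python
# Sketch: distinguish a systemic shift (the whole distribution slides,
# as with a recalibration) from a tail of low outliers, by comparing
# deciles between a baseline period and the current month.
import random
from statistics import quantiles

random.seed(0)
baseline_results = [random.gauss(7.9, 2.0) for _ in range(600)]  # prior months (simulated)
current_results  = [random.gauss(6.8, 2.0) for _ in range(600)]  # shifted month (simulated)

def decile_shifts(baseline, current):
    """Per-decile difference (current - baseline) in ng/mL."""
    return [c - b for b, c in zip(quantiles(baseline, n=10),
                                  quantiles(current, n=10))]

shifts = decile_shifts(baseline_results, current_results)
print(["%+.2f" % s for s in shifts])
# Roughly uniform negative shifts across all deciles point to a systemic
# cause; large shifts confined to the lowest deciles point to low outliers.
```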

Using patient data as QC will not work for low-volume tests, or for tests with a wide reportable range, because a small number of outliers will affect the mean too much. For instance, diabetics in crisis with massively elevated glucose can skew a glucose average even if you run a couple thousand per month. Also, this method takes a long-term view of the system; it will not pick up an assay failure on a given day. Despite those limitations, it can be very useful for spotting systemic issues that affect patient data rather than QC data.

 


-Patti Jones PhD, DABCC, FACB, is the Clinical Director of the Chemistry and Metabolic Disease Laboratories at Children’s Medical Center in Dallas, TX and a Professor of Pathology at University of Texas Southwestern Medical Center in Dallas.

 
