What does “harmony” mean to you? And how does it apply to lab testing?
One of the biggest problems in lab testing is that the same test run in two different labs will often give two different results unless the labs happen to be using the same equipment (and sometimes the results won't match even then!). This is a huge problem for doctors whose patients use different laboratories for their testing, or for patients who move across the country and need to continue following their lab results. A prime example of this dilemma is the current state of thyroid stimulating hormone (TSH) testing. The same CAP sample, when analyzed using different TSH assay methods, can yield results ranging anywhere from 2.66 to 8.84 mIU/L. Although CAP samples are not always commutable with patient samples, thyroid testing on patients shows this same lack of harmony.
This example underscores the need for harmonization. In our increasingly connected world, nearly everyone will soon be using the electronic medical record, and all of a patient's lab results will be in one place whether or not they were performed at the same laboratory. It will be paramount that results for any given test be comparable. Efforts to date have successfully harmonized several important analytes, including creatinine (IDMS-creatinine), cholesterol, and hemoglobin A1c. Efforts are ongoing to harmonize vitamin D assays against NIST standards. These harmonization efforts took a massive amount of coordination and work among the in vitro diagnostic industry, regulatory agencies, and laboratory and clinical societies.
Laboratory professionals have long recognized this problem and have sought to inform non-laboratorians of these realities at every opportunity. A lack of comparable test results can lead to patient safety issues, including misdiagnosis and/or inappropriate treatment. Recently, an international consortium recognized the need for harmonization of all lab results and began to work on the problem. Although this effort is just beginning and the road to general test harmonization is long, it is a road worth traveling.
-Patti Jones PhD, DABCC, FACB, is the Clinical Director of the Chemistry and Metabolic Disease Laboratories at Children’s Medical Center in Dallas, TX and a Professor of Pathology at University of Texas Southwestern Medical Center in Dallas.
More and more in this day and age, the laboratory is encouraged to reduce costs and streamline operations by using available resources in the most effective and efficient manner possible. One of the areas of the lab that is increasingly becoming a problem when it comes to cost reduction is the send out area. Since most labs can now perform the vast majority of their testing on automated chemistry and hematology analyzers, tests that must be performed at reference laboratories are increasingly esoteric, manual, and/or molecular diagnostic tests. And those tests are expensive.
As an example, my own lab sent out about 10 chromosomal microarray (CMA) tests in 2008; that number increased to 400 CMA tests in 2011 and is on track to reach 865 in 2013. At $1,400 each, the cost to the lab increased from $14,000 to about $1.2 million over that time period. And that's just one relatively inexpensive molecular diagnostic test. Some gene sequencing tests can run between $5,000 and $10,000 per test.
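The cost growth above is simple arithmetic; a quick sketch using the CMA volumes and the $1,400 per-test price from the example shows how fast a single send out test scales:

```python
# Yearly send out cost for one test, using the CMA volumes quoted above.
# Price and volumes come from the example; the 2013 figure is a projection.
PRICE_PER_CMA = 1400  # dollars per test

volumes = {2008: 10, 2011: 400, 2013: 865}

for year, n_tests in sorted(volumes.items()):
    cost = n_tests * PRICE_PER_CMA
    print(f"{year}: {n_tests} tests -> ${cost:,}")
# 2008: 10 tests -> $14,000
# 2011: 400 tests -> $560,000
# 2013: 865 tests -> $1,211,000
```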
Labs are trying a multitude of schemes to curb these send out test costs. One fairly effective method is to have a "gatekeeper" – a person or persons who must review and approve every send out test that costs over a pre-set amount. This method is probably one of the best for controlling send out costs, but it requires time and commitment on the part of the gatekeeper, as well as a willingness to interact with ordering physicians who may be less than happy that someone is questioning their orders.
Another method used for send out cost control is to include some indication of the cost of the test in the computer system, so that when the test is ordered, the ordering provider sees roughly what it costs. Some institutions implement this with a dollar sign system: for example, "$" may mean that a test costs under $50 and "$$$$$" may indicate a test costing over $5,000, with other levels in between.
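The dollar-sign display amounts to a simple lookup. In this sketch, only the under-$50 and over-$5,000 tiers come from the description above; the intermediate cut-points are illustrative assumptions:

```python
# Map a send out test's cost to a dollar-sign tier for display at order entry.
# Only the "$" (<$50) and "$$$$$" (>$5,000) tiers are from the text;
# the intermediate cut-points are illustrative assumptions.
TIERS = [(50, "$"), (250, "$$"), (1000, "$$$"), (5000, "$$$$")]

def cost_tier(cost: float) -> str:
    for limit, symbol in TIERS:
        if cost < limit:
            return symbol
    return "$$$$$"  # anything $5,000 and up

print(cost_tier(35))    # "$"
print(cost_tier(1400))  # "$$$$"
print(cost_tier(8000))  # "$$$$$"
```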
A third method is to have a lab “formulary.” Any test found in the formulary can be ordered with no problems. Tests that are not included in the formulary must be approved by the lab before being ordered and sent out.
Whatever method a laboratory uses, it is clear that some means of regulating the rising send out costs is going to be necessary for all labs. Until molecular diagnostic tests become automated and routine, they will continue to be expensive.
If we didn’t use reference intervals (RI), how would we know whether a person is “normal” or not? Or more accurately, how would we know whether a lab test result indicated health or disease? Reference intervals have been around as long as lab tests, and they help clinicians diagnose and monitor a patient’s disease state.
Most RI are developed using a specific patient population and should be used only with that population. However, some RI are “health-based,” such as those for cholesterol and vitamin D. These RI indicate how much of the analyte should be present in a healthy individual, not how much is present in your specific patient population. In general, health-based RI can be used in any population, as long as the analyte assays are commutable. Thus these types of RI are often more useful than population-based intervals.
But should we be using reference intervals at all? One problem with population-based RI is that, due to biological variability, any given individual’s values may span a range that covers only part of the population RI. For example, an individual’s creatinine may regularly run 0.6 – 0.9 mg/dL. Since the RI for creatinine in his population is 0.4 – 1.4 mg/dL, a value of 1.2 mg/dL would not be flagged as abnormal. However, 1.2 mg/dL may very well be an abnormal result for this individual. We need to consider using reference change values (RCV) in addition to RI.
Reference change values are calculated values used to assess the significance of the difference between two measurements. Essentially, the RCV is the difference that must be exceeded between two sequential results for the change to be significant. The calculation requires knowledge of the imprecision of the analyte assay (CV_A) and the within-subject biological variation (CV_I) of the analyte. The formula for calculating RCV is: RCV = 2^(1/2) × Z × (CV_A^2 + CV_I^2)^(1/2), where Z is the number of standard deviations corresponding to a given probability. Luckily, labs know the imprecision of their assays, and tables of biological variation are available.
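The formula is easy to compute directly. In this sketch, the CV values are illustrative assumptions (roughly typical figures for serum creatinine); a real lab would use its own assay imprecision and a published biological variation table:

```python
import math

def rcv_percent(cv_analytical: float, cv_biological: float, z: float = 1.96) -> float:
    """Reference change value (%), per RCV = 2^(1/2) * Z * (CV_A^2 + CV_I^2)^(1/2).

    cv_analytical: assay imprecision CV_A (%); cv_biological: within-subject
    biological variation CV_I (%); Z = 1.96 corresponds to p < 0.05 (two-sided).
    """
    return math.sqrt(2) * z * math.sqrt(cv_analytical**2 + cv_biological**2)

# Illustrative values for serum creatinine (assumed, not from the text):
# CV_A = 3.0%, CV_I = 5.95%
rcv = rcv_percent(3.0, 5.95)
print(f"RCV = {rcv:.1f}%")  # RCV = 18.5%

# A rise from 0.9 to 1.2 mg/dL is a 33% change, well above this RCV, so it is
# a significant change for that individual even though 1.2 mg/dL still falls
# inside the population reference interval.
```

Note that the RCV is specific to each analyte and each lab's assay: a tighter assay CV shrinks the RCV and makes smaller sequential changes detectable.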
It’s very likely that neither RI nor RCV by itself is adequate for interpreting analyte results. Using both may be a better alternative, especially using RCV for monitoring disease progression or therapeutic efficacy. Flagging sequential values that exceed the RCV—and reporting this change—should be considered.
A laboratory developed test (LDT) is any test that has been developed by an individual laboratory, often using instruments and/or reagents that have not been approved by the FDA for use as/in an in vitro diagnostic test. For example, measuring pH using a pH meter and pH calibrators from a scientific supply company is an LDT. So is performing a spun hematocrit, measuring acylcarnitines by tandem mass spectrometry, or performing newborn screening on dried blood spots. Even using an FDA-approved assay for samples or in a manner not specified by the manufacturer makes that assay an LDT. If you look around your lab, you may find that you’re performing an LDT without really thinking about it.
Who regulates these tests? The FDA regulates in vitro diagnostic testing, and LDTs fall under its purview. Until recently, the FDA has exercised “enforcement discretion,” essentially allowing CLIA regulations and CLIA oversight to ensure proper validation and monitoring of LDTs. CLIA regulation Subpart K, Section 493.1253 gives the specific parameters that must be validated for any non-FDA-approved assay. CLIA also regulates the proper use and control of LDTs, just like any other test performed in the laboratory. Is it necessary for LDTs to be regulated more stringently than this?
In June of 2010, the FDA announced its intention to take a more active role in LDT regulation, and it held a public meeting to discuss increased oversight. All laboratories that perform LDTs would do well to monitor developments in this newly announced enforcement of the FDA’s role and to keep abreast of regulatory changes affecting these tests.