Vitamin Deficiency or Acute Leukemia?

A 67-year-old patient with a history of uterine leiomyosarcoma and vitamin B12 deficiency presented with pancytopenia. The CBC showed:

  • WBC 4.1 K/µL
  • RBC *2.37 M/µL
  • Hgb *7.2 g/dL
  • MCV 91.1 fL
  • MCH 30.4 pg
  • MCHC 33.3 %
  • Platelets *25 K/µL

The peripheral blood differential count showed 3.5% bands, 68.5% neutrophils, 3.5% eosinophils, 11.5% lymphocytes, and 13.0% monocytes.

The bone marrow differential count showed 65.0% erythroid precursors (with 48.4% erythroblasts) and 7% myeloblasts.

Numerous erythroblasts were seen, and they often showed overlapping morphologic features with myeloblasts. Compared with myeloblasts, the erythroblasts had slightly coarser nuclear chromatin and often had deeply basophilic, vacuolated cytoplasm. Erythroid maturation was markedly megaloblastic/dysplastic and left shifted, with a marked preponderance of erythroblasts. Dysplastic forms, characterized by precursors with irregular nuclear borders along with occasional multinucleated forms and gigantoblasts, were present.

Cells counted as myeloblasts had a high N:C ratio, finer nuclear chromatin with occasional distinct nucleoli (one to two), and scant cytoplasm.

Bone marrow with erythroid hyperplasia
Megaloblastic erythroid precursors with binucleate forms

Discussion:

The current WHO classification subtypes acute erythroid leukemia into two categories based on the presence or absence of a significant myeloid component.

Erythroleukemia, or erythroid/myeloid leukemia (FAB M6a), is defined by erythroid precursors making up 50% or more of all nucleated bone marrow cells and myeloblasts making up 20% or more of the non-erythroid cells.

Pure erythroid leukemia (FAB M6b) consists of more than 80% immature cells of erythroid lineage, with no evidence of a significant myeloid component.
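To see how the reported counts fit the first definition, it helps to re-express the myeloblasts as a fraction of the non-erythroid cells. The short sketch below is purely illustrative (the function is my own, not from any standard tool) and simply applies that arithmetic to the counts given above.

```python
# Re-express myeloblasts as a percentage of the non-erythroid marrow cells,
# using the counts reported above (65% erythroid precursors, 7% myeloblasts,
# both as percentages of all nucleated cells). Illustrative sketch only.

def myeloblasts_of_nonerythroid(erythroid_pct: float, myeloblast_pct: float) -> float:
    non_erythroid_pct = 100.0 - erythroid_pct
    return 100.0 * myeloblast_pct / non_erythroid_pct

print(myeloblasts_of_nonerythroid(65.0, 7.0))
# 20.0 -> with erythroid precursors at 65% (>= 50%) and myeloblasts at 20% of
# the non-erythroid cells, the counts fit the erythroid/myeloid (M6a) category.
```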

The most common reactive process that can mimic acute erythroid leukemia is megaloblastic anemia caused by vitamin B12 or folate deficiency. Features associated with pernicious anemia include hemolysis with an increased mean corpuscular volume (MCV), hypersegmented neutrophils, leukopenia, thrombocytopenia, and increased LDH and urobilinogen. Bone marrow findings show a hypercellular marrow with marked erythroid hyperplasia. Other non-neoplastic conditions that can mimic acute erythroid leukemia include post-chemotherapy recovery, parvovirus infection, drug effect, heavy metal intoxication, and congenital dyserythropoiesis. A detailed clinical history, laboratory workup, peripheral blood and bone marrow examination, cytochemical and immunohistochemical stains, flow cytometry, and cytogenetic and molecular studies are required for the diagnosis of acute erythroid leukemia.

The oncologist was contacted and confirmed that B12 had been repleted before the bone marrow study was performed. The diagnosis of acute erythroid/myeloid leukemia was made only after confirming that the patient was not B12 deficient at the time of the study.


-Neerja Vajpayee, MD, is an Associate Professor of Pathology at SUNY Upstate Medical University, Syracuse, NY. She enjoys teaching hematology to residents, fellows and laboratory technologists.

Sample Stability and pO2: A Learning Opportunity

One of the interesting things about working in the field of laboratory medicine is that there are always opportunities to learn new things. Almost every call I get from colleagues outside the lab offers me and the lab team such an opportunity. And sometimes we are reminded of the reason we do the things we do, essentially re-learning it.

Case in point: an ICU physician contacted the lab, understandably concerned. He had been monitoring the pO2 in a patient using an I-Stat point-of-care analyzer. Values had been in the range of 50-70 mmHg, and he had been adjusting ventilation on the basis of those results. A blood gas sample was then sent to the main lab and analyzed on an ABL analyzer, which gave a result of 165 mmHg; a new sample drawn shortly thereafter gave 169 mmHg. Understandably, the physician wanted to know which analyzer was wrong and how he should be adjusting his patient’s ventilation.

We quickly investigated and confirmed an interesting fact that we hadn’t paid much attention to previously: a blood gas sample that contains any amount of air and is sent through the pneumatic tube system will give a falsely elevated pO2 result. We tested this by collecting blood gas samples, running them on both the I-Stat and the ABL, then sending them through the tube system and rerunning them on both instruments. On one set of samples we expressed all air from the syringe before tubing; on the other set we left air in the syringe. The pO2 values matched on the two instruments both before and after tubing. If no air was present, there was very little change in pO2 after tubing. But if any air was left in the collection device when it went through the tube system, the post-tubing values still matched on the two instruments, yet were more than double the original values.
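To make the size of the effect concrete, here is a minimal sketch of the kind of before/after comparison we ran. The numeric values and the 20% flagging threshold are hypothetical placeholders for illustration, not data from our validation.

```python
# Hypothetical sketch of the before/after-tubing pO2 comparison described above.
# The sample values and the 20% flag threshold are illustrative placeholders,
# not actual validation data.

def pct_change(pre: float, post: float) -> float:
    """Percent change in pO2 (mmHg) after pneumatic tube transport."""
    return 100.0 * (post - pre) / pre

def tubing_artifact_suspected(pre: float, post: float, threshold_pct: float = 20.0) -> bool:
    """Flag a sample whose pO2 rose more than threshold_pct after tubing."""
    return pct_change(pre, post) > threshold_pct

samples = {
    "air left in syringe": (70.0, 150.0),  # hypothetical pre/post pO2, mmHg
    "air expressed first": (70.0, 73.0),
}

for label, (pre, post) in samples.items():
    print(f"{label}: {pct_change(pre, post):.0f}% change, "
          f"{'suspect artifact' if tubing_artifact_suspected(pre, post) else 'acceptable'}")
```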

The collection process for blood gas samples in our institution has always specified that the collector should express any air from the sample before sending it to the lab through the tube system, and after this incident the reason for that step became clear. However, the staff collecting blood gases on the floors need to be periodically retrained in the collection procedure, and the lab staff need to be reminded that air in a blood gas syringe arriving through the tube station is a reason to reject the sample. We were reminded that education needs to be a continuous process. We also learned that when we discover the reason for a process, it’s a good idea to document that reason, both to explain the need and to help motivate people to follow it.

-Patti Jones PhD, DABCC, FACB, is the Clinical Director of the Chemistry and Metabolic Disease Laboratories at Children’s Medical Center in Dallas, TX and a Professor of Pathology at University of Texas Southwestern Medical Center in Dallas.

Ammonia and Hyperammonemia

Ammonia is a small molecule produced as a part of normal tissue metabolism. It is formed by the breakdown of nitrogen-containing compounds such as the amino groups of proteins and the nitrogenous bases of nucleic acids. In the tissues, this nitrogen is carried mainly in the form of amino acids, chiefly glutamine, which carries an extra nitrogen in its side-chain amide group. Normally, the body removes excess ammonia easily via the liver pathway known as the urea cycle. This short cyclical pathway packages two nitrogen atoms (one entering as free ammonia and one donated by aspartate) into a small, water-soluble urea molecule that is easily excreted in the urine. Without a functional urea cycle, however, the body has no other adequate mechanism for getting rid of the ammonia that is constantly being produced by metabolism.

Liver damage or disease can disrupt the urea cycle, causing blood ammonia levels to rise; this is the most common cause of elevated ammonia in adults. In pediatric patients, elevated ammonia is frequently the consequence of an inborn error of metabolism (IEM). Many IEMs, especially those affecting the urea cycle pathway, result in elevated blood ammonia levels. In IEM, ammonia concentrations may be well over 1000 µmol/L, whereas the normal range is generally about 30-50 µmol/L. Elevated blood ammonia concentrations are serious because ammonia is toxic to the brain: the higher the ammonia concentration and the longer it stays high, the more brain damage occurs.

Interestingly, the concentration of ammonia in the blood may not correlate well with the neurological symptoms that are seen. Usually, if the ammonia concentration is <100 µmol/L, the person will show no symptoms at all. Concentrations in the 100-500 µmol/L range are associated with a wide variety of symptoms, including loss of appetite, vomiting, ataxia, irritability, lethargy, combativeness, sleep disorders, delusions and hallucinations. These patients may present with an initial diagnosis of altered mental status, and if there is no reason to suspect an elevated ammonia, the symptoms may lead to drug or alcohol testing instead. When ammonia concentrations are >500 µmol/L, cerebral edema and coma may be seen, with cytotoxic changes in the brain. Ammonia concentrations in the 1000+ µmol/L range are extremely critical and are treated aggressively with dialysis to pull the ammonia out of the system. In particular, urea cycle defects require close monitoring of ammonia and glutamine concentrations, with an immediate response when they rise.
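As a rough summary of the ranges just described, here is a minimal sketch that maps an ammonia concentration onto those broad categories. The cutoffs come from the discussion above; the function itself is only an illustration, not a clinical decision rule.

```python
# Illustrative mapping of plasma ammonia (µmol/L) onto the broad categories
# discussed above. Educational sketch only, not a clinical decision rule.

def ammonia_category(nh3_umol_l: float) -> str:
    if nh3_umol_l < 100:
        return "usually asymptomatic"
    elif nh3_umol_l <= 500:
        return "variable symptoms (vomiting, ataxia, lethargy, altered mental status)"
    elif nh3_umol_l < 1000:
        return "risk of cerebral edema and coma"
    else:
        return "extremely critical; treated aggressively, e.g. with dialysis"

for value in (45, 250, 750, 1200):
    print(value, "µmol/L ->", ammonia_category(value))
```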

Laboratory testing for ammonia is often problematic because contamination can occur from a number of sources, including atmospheric ammonia, smoking, and poor venipuncture technique. In addition, if the sample is not centrifuged and analyzed promptly, ammonia continues to be formed by deamination of amino acids, and the concentration can increase by about 20% in the first hour and by up to 100% by 2 hours. Consequently, samples to be tested for ammonia should be placed on ice immediately after collection and transported to the lab for analysis as soon as possible. Many minimally elevated ammonia results are a consequence of poor sample handling. However, a truly elevated ammonia is a critical lab finding that should be addressed immediately.
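To put those in-vitro increases in perspective, here is a tiny worked example; the starting value is hypothetical, and the 20%/100% figures are the ones quoted above.

```python
# Hypothetical illustration of how a transport delay inflates a measured ammonia.
# Assume a true value of 40 µmol/L, within the roughly 30-50 µmol/L normal range.
true_nh3 = 40.0                      # µmol/L at the time of collection
after_1_hour = true_nh3 * 1.20       # ~20% in-vitro increase in the first hour
after_2_hours = true_nh3 * 2.00      # up to ~100% increase by 2 hours
print(after_1_hour, after_2_hours)   # 48.0 80.0 -> a normal sample now looks elevated
```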

-Patti Jones PhD, DABCC, FACB, is the Clinical Director of the Chemistry and Metabolic Disease Laboratories at Children’s Medical Center in Dallas, TX and a Professor of Pathology at University of Texas Southwestern Medical Center in Dallas.

Ketones

Most people who work in a clinical laboratory know a little about ketones, or ketone bodies. The two facts most people know are: 1) a urinalysis (UA) includes a semiquantitative ketone result, and 2) high ketones are seen in diabetic ketoacidosis. But what are ketones, where do they come from, and what are we measuring when we measure them?

In the laboratory medicine world, “ketones” refers specifically to acetoacetate (Acac), acetone and beta-hydroxybutyrate (BOHB). When the human body cannot utilize glucose, either because it is not present (fasting, starvation) or because it is present but cannot be used (lack of insulin to get glucose into the cells), the body instead breaks down fatty acids for energy. Fatty acids are mostly long chains of carbons with hydrogens attached, so one of the main products of fat breakdown is 2-carbon acetyl-CoA. When a person is burning lots of fat, as when glucose cannot be used, the production of acetyl-CoA exceeds the body’s ability to metabolize it via the Krebs cycle, and it ties up much of the coenzyme A (CoA) needed for other processes. Thus, the body combines two excess acetyl-CoA molecules into acetoacetate, freeing up the CoA. The more acetyl-CoA produced from fat breakdown, the more acetoacetate produced. From there, acetoacetate is either converted to BOHB enzymatically or degrades spontaneously to acetone. BOHB is a metabolic dead end: it simply continues to build up until the production of acetyl-CoA no longer exceeds the body’s capacity to use it. At that point, BOHB is converted back to acetoacetate and then to acetyl-CoA so the body can utilize it.

The most common form of ketoacidosis is probably diabetic ketoacidosis, in which blood glucose levels are high, but the glucose cannot get into the cells and be used, so fats are broken down for energy. At the height of a ketoacidosis, roughly 70% of the ketones in the body will be in the form of BOHB. This has implications for what we measure and for the monitoring of the treatment of ketoacidotic crises. UA dipstick ketones measure acetoacetate, and some will also detect acetone. None of the available UA methods measure BOHB. Thus, ketones measured in a UA will rise as ketoacidosis occurs, drop at the height of ketoacidosis as they are converted to BOHB, and then rise again as the condition is resolving and BOHB is converted back to Acac. A high Acac will occur both at the beginning and toward the end of the ketoacidosis, and Acac may actually be low at the height of a ketoacidotic crisis. BOHB on the other hand rises as the crisis evolves and drops as the crisis is resolved. The best test for following the resolution of a ketoacidotic crisis is repeat BOHB measurements.

BOHB is generally measured enzymatically on blood samples. The BOHB response lags glucose by about 3 hours: in diabetic ketoacidosis, the BOHB peak occurs about 3 hours after the glucose peak, while in a normal patient given a glucose load, BOHB reaches its lowest point about 3 hours after the glucose peak. During resolution of ketosis, BOHB decreases by about half every four hours, as long as no more ketones are being produced. The test for BOHB is most commonly performed quantitatively using a kit adapted to an open channel on a chemistry analyzer, and a point-of-care analyzer is also now available for BOHB.
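The “halves about every four hours” rule lends itself to a quick calculation when deciding how often to re-check BOHB. The sketch below is illustrative only (the starting concentration is a hypothetical value) and assumes ketone production has stopped.

```python
# Illustrative decay of BOHB during resolution of ketoacidosis, assuming no
# ongoing ketone production and a half-life of roughly 4 hours (as noted above).
# The starting concentration is a hypothetical example, not patient data.

def bohb_remaining(start_mmol_l: float, hours: float, half_life_h: float = 4.0) -> float:
    return start_mmol_l * 0.5 ** (hours / half_life_h)

start = 6.0  # mmol/L, hypothetical peak BOHB
for t in (0, 4, 8, 12):
    print(f"{t:>2} h: {bohb_remaining(start, t):.1f} mmol/L")
# roughly 6.0 -> 3.0 -> 1.5 -> 0.8 mmol/L
```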

Measuring ketones is most commonly used to monitor ketoacidosis, but ketone measurement can also be helpful in the differential diagnosis of some inborn errors of metabolism. For example, in fasting states ketones should be elevated; if they are not, it can be an indication of a disorder of fatty acid metabolism or of ketone metabolism itself. Additionally, in hyperammonemic states, the absence of ketones and acidosis points toward a urea cycle defect, while their presence suggests an organic acid disorder. Thus, measuring ketones has multiple uses in medicine.

-Patti Jones PhD, DABCC, FACB, is the Clinical Director of the Chemistry and Metabolic Disease Laboratories at Children’s Medical Center in Dallas, TX and a Professor of Pathology at University of Texas Southwestern Medical Center in Dallas.

Tandem Mass Spectrometry in the Clinical Lab

Tandem mass spectrometry (MS/MS) is a methodology with so much versatility that new uses and applications seem to arise daily. MS/MS began as strictly a research tool, but over the last 20+ years it has made its way firmly into the clinical laboratory. The transition really began roughly 20 years ago, when MS/MS assays were developed that allowed multiple intermediates of metabolism to be identified and quantified from a single punch of a dried blood spot. That development revolutionized newborn screening in the US over the following decade. Since then, more and more clinical uses for MS/MS have been recognized and developed.

Watching the growth of clinical MS/MS assays in hospital labs has been fascinating. As a reflection of this clinical emergence, journal articles featuring MS/MS have increased in number over the same period. For example, in 1998 the first MS/MS article appeared in the journal Clinical Biochemistry. Between 1998 and 2003, 0-2 MS/MS articles were published each year, but from 2004-2007 that number was in the teens. From 2008-2010, more than 25 articles each year involved MS/MS technology, and from 2011-2013 that number ranged from 35-55 MS/MS-containing articles per year.

Initially, clinical applications of MS/MS were limited to a few assays, including newborn screening, confirmatory testing for inborn errors of metabolism (IEM), and specific drugs, especially the immunosuppressants. MS/MS testing quickly grew beyond drugs and toxicology as the versatility of the technique became apparent. Assays began appearing for accurate measurement of vitamin D, thyroid hormones and steroid hormones, to name only a few. In addition, most MS/MS methods are sensitive enough that sample volume requirements are small, or the assay can be performed on dried blood spots. The ability to multiplex and measure multiple analytes in a single sample also added to the utility of the methodology: examples include 5 steroid hormones, 30+ analytes for IEM diagnosis, or 200+ analytes for toxicology screening, all analyzed in a single sample within a short period of time.

As common as MS/MS assays now are in clinical laboratories, they have, like early PCR technology, remained mostly manual tests run in specialized sections of the lab. They have required technical expertise and a love of hands-on, manual bench work. That is beginning to change with the advent of dedicated mass spectrometry instruments for bacterial identification (MALDI-TOF systems) and the entrance of mass spectrometry into the microbiology lab. These were the first clinical mass spectrometers developed for a single, dedicated purpose and intended to require minimal manual intervention, either in day-to-day operation or in maintenance and troubleshooting. This development clearly demonstrated that mass spectrometry can begin to approach the plug-and-play type of technology needed for fast-paced clinical labs. Developments like this will allow MS/MS to be integrated into more automated labs and ensure its future in the clinical laboratory.

-Patti Jones PhD, DABCC, FACB, is the Clinical Director of the Chemistry and Metabolic Disease Laboratories at Children’s Medical Center in Dallas, TX and a Professor of Pathology at University of Texas Southwestern Medical Center in Dallas.

Sweat Testing

August in Texas is a good time to write a blog post about sweat. In this case though, I’m going to specifically talk about testing collected sweat samples for chloride concentration. Sweat chloride concentrations are measured in people who are suspected of having Cystic Fibrosis (CF). Because CF has classically been considered a disease of childhood, sweat chloride testing is performed almost exclusively in pediatric institutions.

CF is a disease caused by mutations in the cystic fibrosis transmembrane conductance regulator (CFTR) gene. This large gene codes for a large transmembrane protein that acts as a chloride channel. More than 1500 mutations have been detected in the CFTR gene, not all of which are known to cause disease. Thus, even though the full gene has been sequenced, CF remains a diagnosis made by a combination of characteristic clinical features, a history of CF in a sibling, or a positive newborn screen, PLUS either identification of a disease-causing CFTR mutation or laboratory evidence of chloride channel malfunction, such as an elevated sweat chloride.

Collecting a sweat sample for testing is an interesting manual process. The first step involves stimulating the sweat glands to produce sweat. This is accomplished by a process called iontophoresis, in which a sweat-gland-stimulating compound called pilocarpine is driven into the skin using a small electrical current between a set of electrodes applied to the skin. After a 5-minute stimulation, the electrodes are removed, the skin is cleaned, and the sweat subsequently produced in that stimulated area is collected for the next 30 minutes. The collection is done either by absorption of the sweat into a piece of gauze or filter paper, or with a collection device that funnels the sweat into a small plastic tube as it is produced. The amount of sweat collected after 30 minutes is determined by weight if gauze or filter paper is used, and by volume if the tubing is used. In both cases there is a lower acceptable limit, below which the collection is insufficient (QNS, quantity not sufficient) and must be repeated. The process sounds simple; however, collecting a sufficient quantity of sweat can be problematic, and collecting too little may cause falsely elevated results.

After collection, the amount of chloride present in the sample is measured. In a normal sample, the chloride concentration is well below the measurement range of the usual chloride ion-selective electrodes found on chemistry or blood gas instruments. For this reason, the chloride concentration in a sweat sample is most commonly measured by a method called coulometric titration, in which a silver electrode placed in the sample gives off silver ions as current flows. The silver ions complex with the chloride and precipitate as silver chloride. This reaction continues until all the free chloride is gone, at which point a timer stops. Quantification is accomplished essentially by comparing the time necessary to complex all of the patient’s chloride with the time necessary to complex a known concentration of chloride in a standard. Calculations are performed using the time and the weight or volume of the sweat collected, among other parameters.
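In its simplest form, the quantification just described boils down to a ratio of titration times scaled by the standard’s known chloride concentration, with a correction for how much sweat was collected. The sketch below is only a simplified illustration of that idea; real chloridometer procedures include blank corrections and instrument-specific calibration factors, and the function and variable names here are my own.

```python
# Simplified illustration of time-ratio quantification in coulometric titration.
# Assumes equal aliquot volumes of patient sweat and standard are titrated and
# rolls any dilution of the collected sweat into a single factor. Real
# chloridometer procedures include blanks and instrument calibration factors.

def sweat_chloride_mmol_per_l(t_sample_s: float,
                              t_standard_s: float,
                              standard_mmol_per_l: float,
                              dilution_factor: float = 1.0) -> float:
    """Estimate chloride concentration from titration times (illustrative only)."""
    return standard_mmol_per_l * (t_sample_s / t_standard_s) * dilution_factor

# Hypothetical example: a sweat aliquot that titrates in 60% of the time needed
# for a 100 mmol/L standard, run without dilution.
print(sweat_chloride_mmol_per_l(30.0, 50.0, 100.0))  # -> 60.0 mmol/L
```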

The entire test is very manual. Collection of appropriate sweat samples requires training and practice. In general, the QNS rate – how often an adequate collection is not achieved – is carefully monitored by the lab, the CF clinic, and the CF Foundation, which accredits the clinic. In addition, measuring the chloride in the sweat with a chloridometer is not an automated process of placing the sample on an instrument and pushing a button. For these reasons, the CF Foundation recommends not performing sweat testing unless you perform a minimum number of tests per year in order to stay proficient. In this day and age of increasing automation, sweat chloride testing remains an anomalous, old-fashioned test requiring significant technologist time and expertise.

-Patti Jones PhD, DABCC, FACB, is the Clinical Director of the Chemistry and Metabolic Disease Laboratories at Children’s Medical Center in Dallas, TX and a Professor of Pathology at University of Texas Southwestern Medical Center in Dallas.

Markers of Inflammation

I thought today I’d discuss two of the more non-specific, questionably useful tests in the laboratory arsenal, C-reactive protein (CRP) and erythrocyte sedimentation rate (ESR), and their use as markers of inflammation. I’ve left out procalcitonin on purpose, since I’ve posted about that inflammatory marker previously, and I won’t discuss hs-CRP and its use in cardiovascular disease risk assessment.

CRP and ESR are referred to as inflammatory markers because both rise when inflammation is present. However, neither marker provides much information beyond the presence of inflammation. That leads to the questions: how are these tests useful, and why do we need both?

Good questions! First though, what exactly are these markers? Both increase with inflammation, infection and tissue destruction, but at different speeds and to different degrees. CRP is a protein and an acute phase reactant. It is produced by the liver and released in response to inflammatory cytokines, usually within hours of tissue injury, infection, or any other cause of inflammation. ESR, on the other hand, is not an analyte at all, but rather a measure of how readily the red cells settle out of a blood sample. This settling is affected by the fibrinogen and globulin concentrations in the blood as well as by the red cell concentration and how normal the red cells are. Thus, besides inflammation, things like anemia, polycythemia and sickle cell disease will also affect an ESR. ESR also increases in malignancies, especially paraproteinemias and other states with abnormal serum proteins, and in autoimmune diseases. ESR elevations are used to support the diagnosis of specific inflammatory diseases, such as systemic vasculitis and polymyalgia rheumatica. CRP is useful for monitoring patients after surgery, and since it rises rapidly in response to bacterial sepsis, it is often used to monitor response to antimicrobial therapy. Considering the differences between these two “markers,” it is perhaps not surprising that they do not correlate well when compared against each other. Nor is it surprising that the lab has been unable to retire either test.

The pattern of usage for these tests in my lab has shifted in the last several years. In 2007 we ran almost equal numbers of both tests, about 500 per month of each. Eighty percent of the time, both tests were ordered simultaneously; of those paired orders, 20 percent had one normal and one abnormal result, 50 percent were both abnormal, and 30 percent were both normal. Surprisingly, 50 percent of the time only a single, one-time CRP and ESR order was placed, even though these tests are probably most useful for trending response, and the majority of the time one or both results were abnormal. This year, in 2015, we are running about 1400 CRPs and 900 ESRs per month, and 70 percent of the ESRs we do run are still ordered simultaneously with a CRP. The same services tend to order both tests, with many of the orders coming from GI, the ED, Orthopedics, Rheumatology, Oncology or Infectious Diseases. CRP tends to be ordered more frequently by general hospitalists, intensivists and Cardiology. ESR orders are STAT 30 percent of the time, while CRP orders are STAT about 22 percent of the time.

Both of these analytes are markers for the presence of an inflammatory process. CRP seems to reflect bacterial or septic processes and response to therapy better than ESR does, probably because CRP is one of the liver’s acute phase proteins and reflects the liver’s response to injury. CRP also tends to respond more quickly than ESR, rising faster and then falling more rapidly. ESR, on the other hand, tends to reflect a more systemic response. With either analyte, a one-time order is a snapshot in time; often one of these markers is normal while the other is abnormal, which may explain why physicians tend to order both. Ordering either analyte as a one-time test will only tell you that inflammation is present, and the results must always be used in conjunction with other tests and clinical signs and symptoms to have any diagnostic value. Sequential CRP or ESR measurements allow for trending and help determine response to therapy, thus providing more useful information.

-Patti Jones PhD, DABCC, FACB, is the Clinical Director of the Chemistry and Metabolic Disease Laboratories at Children’s Medical Center in Dallas, TX and a Professor of Pathology at University of Texas Southwestern Medical Center in Dallas.