Chemistry Case Study: Unexplained Metabolic Acidosis

Case Workup

A 24-year-old woman at 34 weeks of gestation was transferred from an outside hospital. She had a history of nephrolithiasis and right-sided pyelonephritis, for which she had undergone stent placement 2 weeks earlier. She began experiencing severe pain and muscle spasms in her hip and was unable to move her leg because of the pain. She had a decreased appetite and recurrent vomiting. Her bilirubin and aminotransferases were elevated, and her blood gas analysis showed a bicarbonate of 9 mEq/L and a pH of 7.2 with 99% SpO2. Our clinical chemistry team was consulted about her low pH.

The patient’s laboratory workup is shown in the table below. We first ruled out common causes of metabolic acidosis, including lactic acidosis and diabetic ketoacidosis. Ingestion of toxic alcohols was ruled out on the basis of a normal osmolality and osmolar gap. Normal BUN, creatinine, and BUN-to-creatinine ratio ruled out renal failure.

Positive urinary ketones were noted, along with an elevated anion gap. Serum beta-hydroxybutyrate was therefore measured, and a result of 3.0 mmol/L (reference: <0.4 mmol/L) confirmed ketoacidosis. The patient had no history of diabetes and no recent alcohol consumption. Having excluded other causes, and considering her decreased appetite and recurrent vomiting, we concluded that the ketoacidosis was caused by “starvation.”

| Test | Result | Ref * | Test | Result | Ref * |
|---|---|---|---|---|---|
| Albumin | 2.0 | 3.5–5.0 g/dL | pH | 7.24 | 7.32–7.42 |
| ALK | 139 | 35–104 U/L | pCO2 (V) | 21 | 45–51 mmHg |
| ALT | 177 | 5–50 U/L | pO2 (V) | 46 | 25–40 mmHg |
| AST | 159 | 10–35 U/L | O2 Sat (V) | 72 | 40–70 % |
| Total Bili | 2.0 | 0.0–1.2 mg/dL | Glucose | 74 | 65–99 mg/dL |
| Direct Bili | 1.5 | 0.0–0.3 mg/dL | Urine ketones | 2+ | Negative |
| Lactic acid | 0.9 | 0.5–2.2 mmol/L | Urine protein | 2+ | Negative |
| Protein | 6.0 | 6.3–8.3 g/dL | Chloride | 104 | 98–112 mEq/L |
| Sodium | 138 | 135–148 mEq/L | CO2 | 9 | 24–31 mEq/L |
| Potassium | 4.6 | 3.5–5.0 mEq/L | Anion gap | 25 | 7–15 mEq/L |
| Creatinine | 0.6 | 0.5–0.9 mg/dL | eGFR | >90 | >90 mL/min/1.73 m² |
| BUN | 8 | 6–20 mg/dL | Osmolality | 286 | 275–295 mOsm/kg |

* Reference ranges are for normal adults, not for pregnant women.
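As a worked check of the numbers in the table, the anion gap and calculated osmolality can be computed with the standard bedside formulas. This is an illustrative sketch; the function names are ours, and the osmolality formula assumes glucose and BUN are reported in mg/dL.

```python
def anion_gap(na, cl, hco3):
    """Anion gap (mEq/L) = Na - (Cl + HCO3)."""
    return na - (cl + hco3)

def calculated_osmolality(na, glucose_mg_dl, bun_mg_dl):
    """Calculated serum osmolality (mOsm/kg) via the common formula
    2*Na + glucose/18 + BUN/2.8 (factors convert mg/dL to mmol/L)."""
    return 2 * na + glucose_mg_dl / 18 + bun_mg_dl / 2.8

# Values from the table above
ag = anion_gap(138, 104, 9)               # 25 mEq/L, elevated (ref 7-15)
calc_osm = calculated_osmolality(138, 74, 8)  # ~283 mOsm/kg
osm_gap = 286 - calc_osm                  # measured minus calculated, ~3
```

The anion gap of 25 reproduces the reported value, and the osmolar gap of roughly 3 mOsm/kg (well under the usual ~10 cutoff) is what allowed toxic alcohol ingestion to be excluded.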


With adequate glucose and sufficient insulin secretion, glucose is converted by glycolysis to pyruvate, which is converted into acetyl-CoA and then enters the citric acid cycle to release chemical energy in the form of ATP. When glucose availability becomes limited, fatty acids are used as an alternative fuel source to generate acetyl-CoA. Ketone bodies are generated in this process, and their accumulation results in metabolic acidosis. In healthy individuals, fasting is seldom suspected as the cause of metabolic acidosis, because sufficient insulin secretion prevents significant free fatty acid accumulation. However, under conditions of relatively large glucose requirements or physiologic stress, a 12- to 14-hour fast can lead to significant ketone body formation, resulting in overt ketoacidosis (1-3).

Ketoacidosis is most commonly seen in patients with diabetic ketoacidosis. Similar metabolic changes occur with poor dietary intake or prolonged fasting, and the resulting acidosis is referred to as “starvation ketoacidosis” (2). During pregnancy, especially late pregnancy, the risk of starvation ketoacidosis is increased because of reduced peripheral insulin sensitivity, enhanced lipolysis, and increased ketogenesis. In this setting, a short period of starvation can precipitate ketoacidosis (1-2, 4). Other reported cases of starvation ketoacidosis include patients on strict low-carbohydrate diets (5-6), young infants after fasting (7), and patients with prolonged fasting before surgery (3).

Although starvation ketoacidosis is rare, healthcare providers should be aware of this entity, especially in pregnant patients, because late recognition and delayed treatment are associated with a greater risk of impaired neurodevelopment and fetal loss (2). Moreover, given the current popularity of low-carbohydrate diets, starvation ketoacidosis should be considered when assessing a patient’s acid-base imbalance, in conjunction with their dietary habits.


  1. Frise CJ, Mackillop L, Joash K, Williamson C. Starvation ketoacidosis in pregnancy. Eur J Obstet Gynecol Reprod Biol. 2013;167(1):1-7.
  2. Sinha N, Venkatram S, Diaz-Fuentes G. Starvation ketoacidosis: a cause of severe anion gap metabolic acidosis in pregnancy. Case Rep Crit Care. 2014;2014:906283.
  3. Mostert M, Bonavia A. Starvation ketoacidosis as a cause of unexplained metabolic acidosis in the perioperative period. Am J Case Rep. 2016;17:755-758.
  4. Mahoney CA. Extreme gestational starvation ketoacidosis: case report and review of pathophysiology. Am J Kidney Dis. 1992;20(3):276-80.
  5. Shah P, Isley WL. Ketoacidosis during a low-carbohydrate diet. N Engl J Med. 2006;354(1):97-8.
  6. Chalasani S, Fischer J. South Beach Diet associated ketoacidosis: a case report. J Med Case Rep. 2008;2:45.
  7. Toth HL, Greenbaum LA. Severe acidosis caused by starvation and stress. Am J Kidney Dis. 2003;42(5):E16.



-Xin Yi, PhD, DABCC, FACB is a board-certified clinical chemist. She currently serves as the Co-director of Clinical Chemistry at Houston Methodist Hospital in Houston, TX and an Assistant Professor of Clinical Pathology and Laboratory Medicine at Weill Cornell Medical College.

Vitamin Deficiency or Acute Leukemia?

A 67-year-old patient with a history of uterine leiomyosarcoma presented with pancytopenia and a history of B12 deficiency. The CBC showed:

  • WBC 4.1 K/µL
  • RBC *2.37 M/µL
  • Hgb *7.2 g/dL
  • MCV 91.1 fL
  • MCH 30.4 pg
  • MCHC 33.3 %
  • Platelets *25 K/µL

The peripheral blood differential count showed 3.5% bands, 68.5% neutrophils, 3.5% eosinophils, 11.5% lymphocytes, and 13.0% monocytes.

The bone marrow differential count showed 65.0% erythroid precursors, including 48.4% erythroblasts, and 7% myeloblasts.

Numerous erythroblasts were seen, often with morphological features overlapping those of myeloblasts. Erythroblasts had slightly coarser nuclear chromatin than myeloblasts and often had deeply basophilic, vacuolated cytoplasm. Erythroid maturation was markedly megaloblastic/dysplastic and left-shifted, with a marked preponderance of erythroblasts. Dysplastic forms, characterized by precursors with irregular nuclear borders along with a few multinucleated forms and gigantoblasts, were present.

Cells counted as myeloblasts had a high N/C ratio, finer nuclear chromatin with one to two occasionally distinct nucleoli, and scant cytoplasm.

Bone marrow with erythroid hyperplasia
Megaloblastic erythroid precursors with binucleate forms


The current WHO classification subtypes acute erythroid leukemia into two categories based on the presence or absence of a significant myeloid component.

Erythroleukemia, or erythroid/myeloid leukemia (FAB subtype M6a), comprises more than 50% erythroid precursors among all nucleated cells of the bone marrow and more than 20% myeloblasts among non-erythroid cells.

Pure erythroid leukemia (FAB subtype M6b) comprises more than 80% immature cells of erythroid lineage with no evidence of a significant myeloid component.

The most common reactive process that can mimic acute erythroid leukemia is megaloblastic anemia caused by vitamin B12 or folate deficiency. Features associated with pernicious anemia include hemolysis with increased mean corpuscular volume (MCV), hypersegmented neutrophils, leukopenia, thrombocytopenia, increased LDH, and increased urobilinogen. Bone marrow findings show a hypercellular marrow with marked erythroid hyperplasia. Other non-neoplastic conditions mimicking acute erythroid leukemia are post-chemotherapy recovery, parvovirus infection, drug effect, heavy metal intoxication, and congenital dyserythropoiesis. A detailed clinical history, laboratory workup, peripheral blood and bone marrow examination, and cytochemical, immunohistochemical, flow cytometry, cytogenetic, and molecular studies are required for the diagnosis of acute erythroid leukemia.

The oncologist was contacted and confirmed that B12 had been repleted before the bone marrow study was performed. The diagnosis of acute erythroid/myeloid leukemia was made only after confirming that the patient was not B12 deficient at the time of the study.


-Neerja Vajpayee, MD, is an Associate Professor of Pathology at the SUNY Upstate Medical University, Syracuse, NY. She enjoys teaching hematology to residents, fellows and laboratory technologists.

Sample Stability and pO2: A Learning Opportunity

One of the interesting things about working in the field of laboratory medicine is that there are always opportunities for learning new things. Almost every call I get from colleagues outside the lab brings me and the lab team one of these opportunities. And sometimes we are reminded of the reasons we do the things we do, essentially re-learning them.

Case in point: an ICU physician contacted the lab, understandably concerned. He had been monitoring the pO2 in a patient using an i-STAT point-of-care analyzer. Values had been in the range of 50-70 mmHg, and he had been adjusting ventilation on the basis of those results. A blood gas sample sent to the main lab and analyzed on an ABL analyzer gave a result of 165 mmHg, and a new sample repeated shortly thereafter gave 169 mmHg. Understandably, the physician wanted to know which analyzer was wrong and how he should be adjusting his patient’s ventilation.

We quickly investigated and confirmed an interesting fact that we hadn’t paid much attention to previously: a blood gas sample sent through the tube system with any amount of air in it will give a falsely elevated pO2 result. We tested this by collecting blood gas samples, running them on both the i-STAT and the ABL, then sending them through the tube system and rerunning them on both instruments. We expressed all air from one set of samples before tubing and left air in the syringe in the other set. The pO2 values matched on both instruments, both before and after tubing. But if there was any air in the collection device when it went through the tube system, the post-tubing values, while still matching on the two instruments, were more than double the original values. If no air was present, there was very little change before and after tubing.

The collection process for blood gas samples in our institution has always specified that the collector should express any air from the sample before sending it to the lab through the tube system, and after this incident the reason for that step became clear. However, the staff collecting blood gases on the floors need periodic retraining in collection technique, and the lab staff need to be reminded that air in a blood gas syringe arriving through the tube station is a reason to reject the sample. We were reminded that education needs to be a continuous process. We also learned that when we discover the reason for a process step, it’s a good idea to document that reason, both to record the need and to help motivate people to follow it.

-Patti Jones PhD, DABCC, FACB, is the Clinical Director of the Chemistry and Metabolic Disease Laboratories at Children’s Medical Center in Dallas, TX and a Professor of Pathology at University of Texas Southwestern Medical Center in Dallas.

Ammonia and Hyperammonemia

Ammonia is a small molecule produced as part of normal tissue metabolism. It results from the breakdown of nitrogen-containing compounds, such as the amino groups in proteins and the nitrogenous bases in nucleic acids. In the tissues, this nitrogen is carried mainly in the form of amino acids, particularly glutamine, which carries an additional nitrogen in its amide group. Normally, the body removes excess ammonia easily via the liver pathway known as the urea cycle. This short cyclical pathway incorporates two nitrogen atoms into a small, water-soluble urea molecule that is easily excreted in the urine. Without a functional urea cycle, however, the body has no other adequate mechanism for getting rid of the ammonia constantly being produced by metabolism.

Liver damage or disease can disrupt the urea cycle, causing blood ammonia levels to rise; this is the most common cause of elevated ammonia in the adult population. In pediatric patients, elevated ammonia is frequently a consequence of an inborn error of metabolism (IEM). Many IEMs, especially those affecting the urea cycle, result in elevated blood ammonia levels. When an IEM is the cause, ammonia concentrations may be well over 1000 µmol/L, whereas the normal range is generally about 30-50 µmol/L. Elevated blood ammonia concentrations are serious because ammonia is toxic to the brain: the higher the ammonia concentration, and the longer it stays high, the more brain damage will occur.

Interestingly, the concentration of ammonia in the blood may not correlate with the neurological symptoms that are seen. Usually if the ammonia concentration is <100 µmol/L, the person will show no symptoms at all. Concentrations of ammonia in the 100 – 500 µmol/L range are associated with a wide variety of symptoms including: loss of appetite, vomiting, ataxia, irritability, lethargy, combativeness, sleep disorders, delusions and hallucinations. These patients may present with an initial diagnosis of altered mental status, and if there is no reason to suspect an elevated ammonia, the symptoms may lead to drug or alcohol testing. When ammonia concentrations are >500 µmol/L, cerebral edema and coma may be seen, with cytotoxic changes in the brain. Ammonia concentrations in the 1000+ µmol/L range are extremely critical and are treated aggressively with dialysis to pull the ammonia out of the system. In particular, urea cycle defects require close monitoring of ammonia and glutamine concentrations, with immediate response when they rise.
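The symptom tiers described above can be sketched as a simple threshold lookup. This is only an illustration of the ranges quoted in the text; the function name and the idea of encoding these cutoffs in software are ours, and the cutoffs are approximate clinical guideposts, not diagnostic rules.

```python
def ammonia_severity(ammonia_umol_per_l):
    """Map a blood ammonia concentration (µmol/L) to the approximate
    symptom tier described in the text. Illustrative only, not a
    diagnostic rule."""
    if ammonia_umol_per_l < 100:
        return "usually asymptomatic"
    elif ammonia_umol_per_l <= 500:
        return "variable symptoms: vomiting, ataxia, lethargy, altered mental status"
    elif ammonia_umol_per_l < 1000:
        return "risk of cerebral edema and coma"
    else:
        return "extremely critical; aggressive treatment such as dialysis"
```

A middleware rule of this shape could, for instance, flag a result of 600 µmol/L for an immediate critical call, while a result of 40 µmol/L passes without comment.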

Laboratory testing for ammonia is often problematic, as contamination can occur from a number of sources, including atmospheric ammonia, smoking, and poor venipuncture technique. In addition, if the sample is not centrifuged and analyzed promptly, ammonia is formed by continuing deamination of amino acids, and the concentration increases by about 20% in the first hour and up to 100% by 2 hours. Consequently, samples for ammonia testing should be placed on ice immediately after collection and transported to the lab for analysis as soon as possible. Many minimally elevated ammonia results are a consequence of poor sample handling. However, a truly elevated ammonia is a critical lab finding that should be addressed immediately.

-Patti Jones PhD, DABCC, FACB, is the Clinical Director of the Chemistry and Metabolic Disease Laboratories at Children’s Medical Center in Dallas, TX and a Professor of Pathology at University of Texas Southwestern Medical Center in Dallas.


Most people who work in a clinical laboratory know a little about ketones or ketone bodies. The two facts that most people know include: 1) when you perform a urinalysis (UA), it includes a semiquantitative ketone result, and 2) high ketones are seen in diabetic ketoacidosis. But what is a ketone, where do they come from, and what are we measuring when we measure ketones?

In the laboratory medicine world, “ketones” refers specifically to acetoacetate (Acac), acetone, and beta-hydroxybutyrate (BOHB). When the human body cannot utilize glucose, either because it is not present (fasting, starvation) or because it is present but cannot be used (lack of insulin to move glucose into the cells), the body instead breaks down fatty acids for energy. Fatty acids are mostly long chains of carbons with hydrogens attached, so one of the main products of fat breakdown is 2-carbon acetyl-CoA. When a person is burning lots of fat, as when glucose cannot be used, the production of acetyl-CoA exceeds the body’s ability to metabolize it via the Krebs cycle, and it ties up coenzyme A (CoA) needed for other processes. The body therefore combines two excess acetyl-CoA molecules into one acetoacetate, freeing up the CoA. The more acetyl-CoA produced from fat breakdown, the more acetoacetate produced. From there, acetoacetate is either converted enzymatically to BOHB or degrades spontaneously to acetone. BOHB is a dead end: it simply continues to build up until the production of acetyl-CoA no longer exceeds the body’s capacity to use it. At that point, BOHB is converted back to acetoacetate and then to acetyl-CoA so the body can utilize it.

The most common form of ketoacidosis is probably diabetic ketoacidosis, in which blood glucose levels are high, but the glucose cannot get into the cells and be used, so fats are broken down for energy. At the height of a ketoacidosis, roughly 70% of the ketones in the body will be in the form of BOHB. This has implications for what we measure and for the monitoring of the treatment of ketoacidotic crises. UA dipstick ketones measure acetoacetate, and some will also detect acetone. None of the available UA methods measure BOHB. Thus, ketones measured in a UA will rise as ketoacidosis occurs, drop at the height of ketoacidosis as they are converted to BOHB, and then rise again as the condition is resolving and BOHB is converted back to Acac. A high Acac will occur both at the beginning and toward the end of the ketoacidosis, and Acac may actually be low at the height of a ketoacidotic crisis. BOHB on the other hand rises as the crisis evolves and drops as the crisis is resolved. The best test for following the resolution of a ketoacidotic crisis is repeat BOHB measurements.

BOHB is generally measured enzymatically on blood samples. The BOHB response is maximal about 3 hours after glucose peaks: in diabetic ketoacidosis, the peak BOHB occurs about 3 hours after the glucose peak, and in a normal patient given a glucose load, BOHB is lowest about 3 hours after the glucose peak. During resolution of ketosis, BOHB decreases by half about every four hours, as long as no more ketones are being produced. The test for BOHB is most commonly performed quantitatively using a kit adapted to the open channel of a chemistry analyzer; a point-of-care analyzer is also now available for BOHB.
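The roughly four-hour half-life quoted above implies simple first-order decay, which makes expected BOHB trends easy to sketch. This is a back-of-the-envelope illustration under the stated assumption of no ongoing ketone production; the function name and example values are ours.

```python
def bohb_remaining(initial_mmol_per_l, hours, half_life_hours=4.0):
    """Expected BOHB concentration (mmol/L) after `hours` of resolving
    ketosis, assuming first-order decay with a ~4-hour half-life and
    no new ketone production."""
    return initial_mmol_per_l * 0.5 ** (hours / half_life_hours)

# A BOHB of 6.0 mmol/L would be expected to fall to about 3.0 mmol/L
# in 4 hours and about 1.5 mmol/L in 8 hours.
```

A repeat BOHB that fails to fall along this kind of curve suggests that ketone production is continuing and the crisis is not yet resolving.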

Measuring ketones is most commonly used to monitor ketoacidosis, but it can also help in the differential diagnosis of some inborn errors of metabolism. For example, in fasting states ketones should be elevated; if they are not, that can indicate a disorder of fatty acid metabolism or of ketone metabolism itself. Additionally, in hyperammonemic states, the absence of ketones and acidosis points toward a urea cycle defect, whereas their presence suggests an organic acid disorder. Thus measuring ketones has multiple uses in medicine.

-Patti Jones PhD, DABCC, FACB, is the Clinical Director of the Chemistry and Metabolic Disease Laboratories at Children’s Medical Center in Dallas, TX and a Professor of Pathology at University of Texas Southwestern Medical Center in Dallas.

Sweat Testing

August in Texas is a good time to write a blog post about sweat. In this case though, I’m going to specifically talk about testing collected sweat samples for chloride concentration. Sweat chloride concentrations are measured in people who are suspected of having Cystic Fibrosis (CF). Because CF has classically been considered a disease of childhood, sweat chloride testing is performed almost exclusively in pediatric institutions.

CF is caused by mutations in the cystic fibrosis transmembrane conductance regulator (CFTR) gene, a large gene coding for a large transmembrane protein that acts as a chloride channel. More than 1500 mutations have been detected in the CFTR gene, not all of which are known to cause disease. Thus, even though the full gene has been sequenced, CF remains a diagnosis made by a combination of characteristic clinical features, a history of CF in a sibling, or a positive newborn screen, PLUS identification of a disease-causing mutation in the gene or protein, or laboratory evidence of chloride channel malfunction, such as an elevated sweat chloride.

Collecting a sweat sample for testing is an interesting manual process. The first step is stimulating the sweat glands to produce sweat. This is accomplished by iontophoresis, in which a sweat-gland-stimulating compound called pilocarpine is driven into the skin using a small electrical current between a set of electrodes applied to the skin. After a 5-minute stimulation, the electrodes are removed, the skin is cleaned, and the sweat subsequently produced in the stimulated area is collected for the next 30 minutes. The collection is either by absorption of the sweat into a piece of gauze or filter paper, or by a collection device that funnels the sweat into a small plastic tube as it is produced. The amount of sweat collected after 30 minutes is determined by weight if gauze or filter paper is used, and by volume if the tubing is used. In either case there is a lower acceptable limit, below which the sweat collection is insufficient (QNS) and must be repeated. The process sounds simple; however, collecting a sufficient quantity of sweat can be problematic, and collecting too little may cause falsely elevated results.

After collection, the amount of chloride in the sample is measured. In a normal sample, the amount of chloride present is well below the measurement range of the usual chloride ion-selective electrodes found in chemistry or blood gas instruments. For this reason, the chloride concentration in sweat is most commonly measured by coulometric titration, in which a silver electrode placed in the sample gives off silver ions as current flows. The silver ions complex with the chloride and precipitate as silver chloride. The reaction continues until all the free chloride is consumed, at which point a timer stops. Quantification is accomplished essentially by comparing the time needed to complex all of the patient’s chloride with the time needed to complex a known concentration of chloride in a standard. Calculations use this time and the weight or volume of the sweat collected, among other parameters.
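The physics behind the timer is Faraday’s law: at constant current, each coulomb of charge releases a fixed number of silver ions, and each silver ion removes one chloride ion. The sketch below is a simplified illustration of that relationship only; real chloridometers calibrate against a standard and correct for blanks and diluent volumes, and the function name and example numbers are ours.

```python
FARADAY = 96485  # coulombs per mole of electrons

def sweat_chloride_mmol_per_l(current_ma, titration_time_s, sweat_volume_ul):
    """Chloride concentration from titration time at constant current.

    Each electron releases one Ag+ ion, and each Ag+ binds one Cl-,
    so moles of Cl- = (I * t) / F. Simplified: no blank-time or
    dilution corrections.
    """
    coulombs = (current_ma / 1000.0) * titration_time_s
    mol_cl = coulombs / FARADAY
    liters = sweat_volume_ul / 1e6
    return (mol_cl * 1000.0) / liters  # mmol / L
```

For example, a 60-second titration at 10 mA on 100 µL of sweat corresponds to about 62 mmol/L of chloride, which is why comparing the patient’s titration time against a standard of known concentration is enough to quantify the result.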

The entire test is very manual. Collection of appropriate sweat samples requires training and practice. In general, the QNS rate – how often an adequate collection is not achieved – is carefully monitored by the lab, the CF clinic, and the CF Foundation, which accredits the clinic. In addition, measuring the chloride in sweat by chloridometer is not an automated matter of placing the sample on an instrument and pushing a button. For these reasons, the CFF recommends not performing sweat testing unless you perform a minimum number per year in order to stay proficient. In this age of increasing automation, sweat chloride testing remains an anomalous, old-fashioned test requiring significant technologist time and expertise.

-Patti Jones PhD, DABCC, FACB, is the Clinical Director of the Chemistry and Metabolic Disease Laboratories at Children’s Medical Center in Dallas, TX and a Professor of Pathology at University of Texas Southwestern Medical Center in Dallas.

Markers of Inflammation

I thought today I’d do a little discussion related to two of the more non-specific, questionably useful tests that we have in the laboratory test arsenal, C-reactive protein (CRP) and erythrocyte sedimentation rate (ESR), and their use as markers of inflammation. I’ve left out procalcitonin on purpose since I’ve posted about that inflammatory marker previously. And I won’t discuss hs-CRP and its use in cardiovascular disease risk assessment.

CRP and ESR are referred to as inflammatory markers because both rise when inflammation is present. However, neither marker provides much information beyond the presence of inflammation. That leads to the questions: how are these tests useful, and why do we need both?

Good questions! First though, what exactly are these markers? Both increase in inflammation, infection, and tissue destruction, but at different speeds and to different degrees. CRP is a protein and an acute phase reactant; it is produced by the liver and released in response to inflammatory cytokines, usually within hours of tissue injury, infection, or any other cause of inflammation. ESR, on the other hand, isn’t an analyte at all, but rather a measure of how readily the red cells settle out of a blood sample. This settling is affected by the fibrinogen and globulin concentrations in the blood, as well as by the red cell concentration and how normal the red cells are. Thus, besides inflammation, things like anemia, polycythemia, and sickle cell disease also affect an ESR. ESR also increases in malignancies, especially paraproteinemias and other states with abnormal serum proteins, and in autoimmune diseases. ESR elevations are used to support the diagnosis of specific inflammatory diseases, such as systemic vasculitis and polymyalgia rheumatica. CRP is useful for monitoring patients after surgery, and because it rises rapidly in response to bacterial sepsis, it is often used to monitor response to antimicrobial therapy. Considering the differences between these two “markers,” it is perhaps not surprising that they correlate poorly with each other, nor that the lab has been unable to retire either test.

The pattern of usage for these tests in my lab has shifted over the last several years. In 2007 we ran almost equal numbers of both tests, about 500 per month of each. Eighty percent of the time, both tests were ordered simultaneously; of those, 20 percent had one normal and one abnormal result, 50 percent were both abnormal, and 30 percent were both normal. Surprisingly, 50 percent of the time only a single CRP and ESR were ordered, even though these tests are probably more useful for trending response, and the majority of the time one or both results were abnormal. This year (2015) we are running about 1400 CRPs per month and 900 ESRs per month, and 70 percent of the ESRs we run are still ordered simultaneously with a CRP. The same services tend to order both tests, with many of the orders coming from GI, the ED, Orthopedics, Rheumatology, Oncology, or Infectious Diseases; CRP tends to be ordered more frequently by general hospitalists, intensivists, and Cardiology. ESR orders are STAT 30 percent of the time, while CRP orders are STAT about 22 percent of the time.

Both of these analytes mark the presence of an inflammatory process. CRP seems to reflect bacterial or septic processes and response to therapy better than ESR does, probably because CRP is one of the liver’s acute phase proteins and reflects the liver’s response to injury. CRP also tends to respond more quickly than ESR, rising faster and then falling more rapidly. ESR, on the other hand, tends to reflect a more systemic response. With either analyte, a one-time order is a snapshot in time; often one marker is normal while the other is abnormal, which may explain why physicians tend to order both. A one-time order of either analyte will only tell you that inflammation is present, and the results must always be used in conjunction with other tests and clinical signs and symptoms to have any diagnostic value. Sequential CRP or ESR measurements allow trending and help determine response to therapy, providing more useful information.

-Patti Jones PhD, DABCC, FACB, is the Clinical Director of the Chemistry and Metabolic Disease Laboratories at Children’s Medical Center in Dallas, TX and a Professor of Pathology at University of Texas Southwestern Medical Center in Dallas.

Serum Protein Electrophoresis in Children

Although most of the testing performed and the methodologies used in a clinical laboratory serving a pediatric institution are very similar to those found in adult laboratories, a few differences stand out, including devising ways to deal with small sample volumes and test menus that differ from those of adult laboratories. One such menu difference is the lack of serum protein electrophoresis (SPEP) testing in pediatric labs.

SPEPs are performed mainly to help diagnose and then monitor the treatment of multiple myeloma. They do this by detecting, identifying (by reflex immunofixation electrophoresis, IFE), and quantifying monoclonal gammopathies. Children don’t get multiple myeloma: after 20 years of signing out SPEP results at the county hospital next door, the youngest person I have seen with a diagnosis of multiple myeloma was 24 years old. Thus, SPEPs are generally neither ordered on children nor performed in pediatric labs.

Recently, however, I learned that although children don’t get multiple myeloma, they do in fact get monoclonal gammopathies. An SPEP ordered on a 7-month-old patient in my institution came back with a very clear biclonal gammopathy, identified by IFE as an IgG kappa and an IgA kappa. This child has no bone marrow evidence of abnormality, although she does have a B-cell deficiency along with plasma cell infiltrates in the liver and duodenum.

A little searching determined that monoclonal spikes on SPEPs in children are apparently not at all unusual. A study published in 2014 (1) looked at 695 children who had SPEPs performed; 11% of them had a monoclonal gammopathy, although none had multiple myeloma. The most common associated diagnosis was ataxia-telangiectasia (22%), with a wide range of other diagnoses also found, including immunodeficiencies, autoimmune diseases, various hematological disorders, and a few solid organ malignancies.

Thus it appears that monoclonal gammopathies do occur in children and have an entirely different meaning than they do in adults. In addition, monoclonal gammopathies in children currently have no clear diagnostic utility. Perhaps that is the real reason we don’t routinely perform SPEPs in the pediatric population.

  1. Karafin MS, Humphrey RL, Detrick B. Evaluation of monoclonal and oligoclonal gammopathies in a pediatric population in a major urban center. Am J Clin Pathol. 2014;141:482-487.

-Patti Jones PhD, DABCC, FACB, is the Clinical Director of the Chemistry and Metabolic Disease Laboratories at Children’s Medical Center in Dallas, TX and a Professor of Pathology at University of Texas Southwestern Medical Center in Dallas.

Bad Press

Have you heard the expression “there’s no such thing as bad press”? This saying assumes that getting your name out there is what matters, whether it was something good or something bad that put you forward. I think there is some truth to it, because people’s memories tend to be short: they will remember a name but not necessarily the context for that name. This probably explains why crooked politicians, even when it’s known that they’re crooked, continue to be elected.

Thinking about this saying from a lab perspective, it means that even when a mistake is made, you may be able to capitalize on it to make contacts outside the lab, to effectively put your name out there. Even if it is a significant mistake, or is a situation you had no control over, using it as an opportunity to introduce lab personnel and lab concepts to the greater medical community is a good thing. Okay, it may take them a bit to forget the bad incident, but they will remember you and now have a lab contact for other lab-related issues.

I was considering this saying in the context of notifying physicians of a recall on a reagent we had been using for about 3 months. Luckily, I have a good rapport with the majority of the physicians involved, but even when fielding negative phone calls from those who do not know me, I used the event as an opportunity, an introduction, and an offer of lab help with any other issues they may be having.

As a field, the laboratory tends not to blow its own horn much outside of lab and pathology circles. Because that’s true, we need to learn to grab opportunities when they arise, even if they arise from less-than-ideal situations. It’s also a chance to suggest that having a laboratory professional sit on various committees may prevent future issues. Being this “forward” sometimes places people firmly outside their comfort zone, but in this day and age of decreasing test reimbursement, and decreasing money in the medical and laboratory fields overall, being an integral part of the healthcare team is more important than ever.

So, the next time you have to notify other healthcare professionals that reported test results may be less than accurate, try considering it an opportunity to create new contacts and build relationships across the medical field. Acting quickly on every opportunity to become a well-recognized and needed part of healthcare is the best way to keep our profession alive and flourishing.

-Patti Jones PhD, DABCC, FACB, is the Clinical Director of the Chemistry and Metabolic Disease Laboratories at Children’s Medical Center in Dallas, TX and a Professor of Pathology at University of Texas Southwestern Medical Center in Dallas.

Biomarkers of Renal Disease

When most laboratory professionals think of tests for renal disease, we think of creatinine and blood urea nitrogen (BUN). These two tests have been considered renal function tests for many years (creatinine for over 100 years), and yet neither is a very good marker of early damage to the kidneys.

Creatinine is a biomarker that really needs individual rather than population-based reference intervals. Each person has a range of creatinine values that is “normal” for them, and that individual range is generally much narrower than the population reference interval. Because the population reference interval for creatinine is fairly broad, a person can lose 25 – 50% of their renal function before their creatinine rises out of the “normal” range. Thus creatinine does not detect early renal damage. BUN is also not a good marker of early acute renal damage. BUN concentrations rise when the kidneys sustain damage, but the rise is not specific to kidney injury; other causes, such as starvation and increased protein breakdown or intake, can elevate BUN as well. In general, BUN and creatinine provide the most useful information in conjunction with each other, and for trending once significant damage to the kidneys has already occurred.
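One simple way the two markers are combined is the classic BUN-to-creatinine ratio; a minimal sketch follows, with the caveat that the conventional interpretive cutoffs (roughly 10–20:1 as typical, above about 20:1 suggesting a prerenal cause) vary by source and assume both analytes are reported in mg/dL:

```python
def bun_creatinine_ratio(bun_mg_dl: float, creatinine_mg_dl: float) -> float:
    """Return the BUN:creatinine ratio; both inputs in mg/dL."""
    if creatinine_mg_dl <= 0:
        raise ValueError("creatinine must be positive")
    return bun_mg_dl / creatinine_mg_dl

# Illustrative values only: BUN 8 mg/dL with creatinine 0.6 mg/dL
# gives a ratio of about 13:1, inside the conventional 10-20:1 range.
ratio = bun_creatinine_ratio(8, 0.6)
```

The ratio itself is trivial arithmetic; the clinical value lies in trending it alongside the absolute values, as described above.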

For these reasons, there is a continuous search for better markers of renal damage, especially markers that indicate early renal damage, when perhaps something can be done to reverse it. Cystatin C is another biomarker that is increasingly being used to assess renal damage, and it was originally hoped that it might outperform creatinine in detecting early damage. Cystatin C is a small protein that is freely filtered by the glomerulus and does not share many of creatinine’s drawbacks, such as creatinine’s dependence on muscle mass. Unfortunately, although many studies have been done, cystatin C has not been shown to be better than creatinine at indicating early renal damage, and it is considered a renal biomarker with uses similar to those of creatinine and BUN.
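For context, the eGFR that laboratories report alongside creatinine comes from an estimating equation rather than a direct measurement. A minimal sketch using the 2021 race-free CKD-EPI creatinine equation is below; the coefficients are taken from the published equation, but any real implementation should be verified against a validated source before use:

```python
def egfr_ckd_epi_2021(scr_mg_dl: float, age_years: float, female: bool) -> float:
    """Estimated GFR (mL/min/1.73 m^2) from serum creatinine in mg/dL,
    per the 2021 race-free CKD-EPI creatinine equation. Sketch only."""
    kappa = 0.7 if female else 0.9     # sex-specific creatinine "knee"
    alpha = -0.241 if female else -0.302
    ratio = scr_mg_dl / kappa
    egfr = (142
            * min(ratio, 1.0) ** alpha       # applies below the knee
            * max(ratio, 1.0) ** -1.200      # applies above the knee
            * 0.9938 ** age_years)           # age decay factor
    if female:
        egfr *= 1.012
    return egfr

# A young adult female with creatinine 0.6 mg/dL lands well above 90,
# which many labs would simply report as ">90".
print(round(egfr_ckd_epi_2021(0.6, 24, female=True)))
```

Because the equation is piecewise around the sex-specific knee (kappa), small creatinine changes within a healthy individual's narrow personal range barely move the eGFR, which echoes the point above about creatinine's insensitivity to early damage.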

Currently, protein or albumin in the urine is probably the earliest indicator of renal damage available in the US. Very small amounts of albumin in the urine, historically called microalbuminuria, are among the earliest indicators of renal disease.

The new biomarker for early acute renal damage that is gaining the most widespread acceptance is neutrophil gelatinase-associated lipocalin (NGAL). NGAL is being extensively studied and has been shown to detect early acute kidney injury (AKI). In addition, NGAL levels have been reported to correlate with the extent and severity of renal damage. Already in clinical use in Europe, tests for this biomarker are currently working their way through FDA review in the US.

Other new biomarkers for AKI that are being studied, but are not progressing toward general use as quickly as NGAL, include kidney injury molecule 1 (KIM-1), β-trace protein, liver-type fatty acid-binding protein (L-FABP), and interleukin-18. These are all proteins that appear to be up-regulated in response to AKI. Studies are ongoing to determine which of these biomarkers may be useful for detecting early AKI and for differentiating between types and causes of AKI. In addition, for all of these new kidney biomarkers, studies are needed on their efficacy in guiding clinical decisions about treatment options and outcomes. It will be interesting to see whether any of them become clinically useful tests for the detection of early acute renal damage.

-Patti Jones PhD, DABCC, FACB, is the Clinical Director of the Chemistry and Metabolic Disease Laboratories at Children’s Medical Center in Dallas, TX and a Professor of Pathology at University of Texas Southwestern Medical Center in Dallas.