Biomarker Testing for Cancer Patients: Barriers and Solutions, Part 2

As you may recall, last month I shared common barriers to biomarker testing for cancer patients in the community and began to dive into a few solutions I have seen implemented to overcome those barriers, specifically solutions that may help with the high cost and long turnaround times of biomarker testing. This month I would like to discuss issues with tissue, starting with quantity.

Here are the top 10 barriers that I’ve seen to biomarker testing in the community:

  1. High cost of testing.
  2. Long turnaround time for results.
  3. Limited tissue quantity.
  4. Preanalytical issues with tissue.
  5. Low biomarker testing rates.
  6. Lack of standardization in biomarker testing.
  7. Siloed disciplines.
  8. Low reimbursement.
  9. Lengthy complex reports.
  10. Lack of education on guidelines.

Sample quantity and quality are both important when considering biomarker testing. If we don’t have enough material, we cannot perform the test (quantity not sufficient, or QNS). If the quality is poor, we cannot trust the results. The old adage of garbage in, garbage out holds true for biomarker testing just as it does for all other lab tests.

I’ll start with sample quantity this month and cover quality issues next month. The issue here is that a variety of biopsy types are performed on patients depending on the location and size of a suspicious mass. Historically, we only needed enough material for the pathologist to make a diagnosis. Now we often need enough material for both diagnosis and biomarker testing. Some tumor types, such as breast and ovarian cancers, produce enough material in easily accessible locations that tissue quantity is rarely an issue; with other tumor types, such as lung and pancreatic cancers, tissue quantity is often a problem. These specimens must be handled with care to ensure no recovered tissue is lost.

The first step in addressing tissue insufficiency is knowing where you are starting. Do you have a problem with your quantity not sufficient (QNS) rate? If you don’t know how many of your cases are insufficient for biomarker testing, you can’t determine whether you have an issue. If your testing is performed at a reference laboratory, you can request your QNS rate from the lab. They may also be able to provide the national QNS rate so you can benchmark yourself against your peers. It is important that the rate be accurate: if blocks are never sent to the reference lab because the pathologist has determined the block to be exhausted (no tissue is left), then the QNS rate provided by the reference lab will be artificially low.
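If you track these counts yourself, the correction is simple arithmetic. Here is a minimal sketch in Python; the function name and the example counts are hypothetical, not from any particular lab's data.

```python
# A minimal sketch of a corrected QNS-rate calculation. The counts below are
# hypothetical; substitute your own case numbers.

def qns_rate(qns_at_reference_lab, tested_at_reference_lab, exhausted_never_sent):
    """Return the QNS rate as a percentage, counting blocks that were
    exhausted before send-out as insufficient cases."""
    insufficient = qns_at_reference_lab + exhausted_never_sent
    total = tested_at_reference_lab + exhausted_never_sent
    return 100 * insufficient / total

# Example: the reference lab reports 12 QNS out of 200 cases sent (6%),
# but 8 additional blocks were exhausted and never sent.
print(f"{qns_rate(12, 200, 8):.1f}%")  # 9.6% -- the truer picture
```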

It is important to agree upon what counts as QNS. We consider a specimen QNS if we cannot perform biomarker testing on the block; others may consider the block QNS only if there wasn’t sufficient material for diagnosis. We have to ensure there is enough tumor content in the tissue to proceed with biomarker testing: in our case, 10% of the nucleated cells (not volume) must be tumor, as determined by pathology review of an H&E slide. Even with enough tumor, we can still end up with a QNS block due to low DNA and RNA yield. So we need sufficient tumor and sufficient tissue.
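As a rough illustration of that two-part rule, here is a minimal sketch. The 10% tumor threshold is the one described above; the nucleic acid yield cutoffs are hypothetical placeholders, since the appropriate values depend on your validated assay.

```python
# A minimal sketch of a two-part adequacy rule: sufficient tumor AND
# sufficient tissue. The yield thresholds here are hypothetical placeholders.

def is_adequate(tumor_nucleated_cell_pct, dna_yield_ng, rna_yield_ng,
                min_tumor_pct=10.0, min_dna_ng=10.0, min_rna_ng=10.0):
    """Adequate only if tumor content and both nucleic acid yields pass."""
    return (tumor_nucleated_cell_pct >= min_tumor_pct
            and dna_yield_ng >= min_dna_ng
            and rna_yield_ng >= min_rna_ng)

# Enough tumor (25% of nucleated cells) but too little RNA -> still QNS.
print(is_adequate(25.0, dna_yield_ng=50.0, rna_yield_ng=4.0))  # False
```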

Here is a brief overview of solutions I have seen work to address the limited tissue that can lead to high QNS rates:

  • Education. The person collecting the biopsy needs to understand how much material is needed. Remember, we have moved the goalposts: sufficient material for diagnosis was enough in the past, but now we need more material to perform biomarker testing. Educating the team on why we need more material helps ensure sufficient material is collected.
  • ROSE. Rapid onsite evaluation (ROSE) by a pathologist in the procedure room to determine sufficiency has been shown to decrease the repeat biopsy rate [1]. The pathologist can ensure the biopsy is being collected in a tumor-rich region and help ensure areas of necrosis are avoided.
  • Embedding cores separately. We often get core needle biopsies on lung cancer specimens and prefer 3-5 cores. It is best practice to embed the cores in separate blocks. I have also seen labs embed no more than 2 cores in one block. Either approach allows one block to be conserved for diagnosis and another to be used for biomarker testing.
  • Visual cue for limited tissue. Someone far more creative than I am developed a process in histology where, in cases of limited tissue, the tissue was embedded in a red cassette. The cassette color was a visual cue to everyone handling the block that the tissue was limited and care should be taken when facing the block. This has evolved over time to a red bead embedded beside the tissue. Any visual cue with an associated procedure can help ensure tissue is conserved in cases where it matters.
  • Limited IHC stains. The primary reason a biopsy is performed is diagnosis. It is recommended that as few IHC stains as possible be used to make the diagnosis. This conserves tissue for biomarker testing.
  • Unstained slides. Cutting 15-20 unstained slides is considered best practice in tumor types such as lung where biomarker testing will be performed within 30 days. Long-term storage of unstained slides is not recommended.
  • Fewer trips to the microtome. Every time the block is put back on the microtome it must be refaced, which wastes tissue. This can be prevented by thinking ahead and cutting everything you know will be needed while the block is already on the microtome.

References

  1. Collins BT, Murad FM, Wang JF, Bernadt CT. Rapid on-site evaluation for endoscopic ultrasound-guided fine-needle biopsy of the pancreas decreases the incidence of repeat biopsy procedures. Cancer Cytopathol. 2013;121:518-24.

-Tabetha Sundin, PhD, HCLD (ABB), MB (ASCP)CM,  has over 10 years of laboratory experience in clinical molecular diagnostics including oncology, genetics, and infectious diseases. She is the Scientific Director of Molecular Diagnostics and Serology at Sentara Healthcare. Dr. Sundin holds appointments as Adjunct Associate Professor at Old Dominion University and Assistant Professor at Eastern Virginia Medical School and is involved with numerous efforts to support the molecular diagnostics field. 

Biomarker Testing for Cancer Patients: Barriers and Solutions, Part One

We are seeing an unprecedented number of new targeted cancer therapies that are tied to diagnostic tests. Drug companies are heavily invested in ensuring the right patients get the right therapy, because doing so benefits both pharma companies and patients. Patients get a very specific therapy that is likely to improve their survival and quality of life. By targeting only patient populations that are likely to respond based on the biology of their tumor, pharma companies can show improvements over existing therapies, which supports their request for FDA approval.

With every pharma company tying its drug to specific rare biomarkers, broad molecular profiling such as NGS becomes more important than ever. We will never find the needle in the haystack if we don’t examine the entire stack. However, most cancer patient care occurs in the community, where NGS testing is not usually offered locally. There are specific barriers to biomarker identification in the community setting. I will take the next few months to discuss specific barriers and how a lab might overcome these obstacles in order to increase patient access to precision medicine. Just as no barrier is identical between institutions, no solution will be one-size-fits-all. Feel free to cherry-pick and modify solutions that you feel would address your local issues. Remember: don’t let perfect be the enemy of the good. Small incremental improvements are impactful and generally require fewer resources than trying to revamp your entire process.

Here are the top 10 barriers that I’ve seen to biomarker testing in the community:

  1. High cost of testing.
  2. Long turnaround time for results.
  3. Limited tissue quantity.
  4. Preanalytical issues with tissue.
  5. Low biomarker testing rates.
  6. Lack of standardization in biomarker testing.
  7. Siloed disciplines.
  8. Low reimbursement.
  9. Lengthy complex reports.
  10. Lack of education on guidelines.

This month I will address the first two barriers that I commonly see with respect to biomarker testing: molecular testing is expensive, and turnaround time is often long. This is especially true for technologies such as NGS. There are a few solutions to the high cost and long turnaround time of molecular testing that I’ve seen work well.

Solutions to costly molecular testing such as NGS:

  1. Insource NGS testing.
  2. Continue to send out testing, but renegotiate your contracts with reference laboratories to ensure pricing is as low as possible.

Let’s dig into the decision to insource NGS versus continuing to outsource testing. It’s easy for me to say insource the test and describe the benefits of doing so, but if your volume is low and you don’t have the facility or expertise, this solution is not likely to work for you. There is a new platform coming to market that claims to make it easier to insource NGS without extensive molecular expertise; however, the company will need to provide data to support that claim. If they do show they can provide NGS testing with less expertise, this could be a game changer for community labs looking to insource NGS testing.

The benefits of insourcing testing include decreased cost of providing biomarker testing, decreased turnaround time, and local provider input into the test menu. Some of the things we considered when deciding to insource NGS were the cost to perform NGS testing versus sending it out, the volume of specimens to be tested, the expertise required, facility requirements, ease of workflow, whether available panels met our clinician and guideline needs, and whether a comprehensive analysis pipeline was available from the vendor. We found a solution that fit our needs in all of these buckets.

After determining that insourcing NGS was the right thing to do for our health system, we had to secure funding for the project. We prepared a business case using reference laboratory cost avoidance. Here is an example business case for an NGS project (a worked calculation follows the list):

  • Imagine that you currently send out 200 NGS tests per year for the same panel.
  • This reference lab NGS panel costs $3,500 per sample.
  • You calculate that by insourcing the testing you can perform the test for $600 per sample (fully loaded with tech time, repeat rate, control cost, validation cost, QA cost, and overhead).
  • This would save the health system $580,000 per year [($3,500 − $600) × 200 tests].
  • Assume the instrumentation required to perform the test in house costs $300,000.

Even in the first year, the project could save the health system $280,000 ($580,000 − $300,000). Subsequent years would be even more favorable. Showing a favorable return on investment (usually within a 5-year period) makes it easy for the C-suite to approve insourcing the project.
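To make the arithmetic explicit, here is the same business case as a short calculation, using only the example figures from the bullets above.

```python
# A minimal sketch of the cost-avoidance math above. All figures are the
# example numbers from the bullets, not real pricing.

send_out_cost = 3500          # reference lab price per sample ($)
in_house_cost = 600           # fully loaded in-house cost per sample ($)
annual_volume = 200           # NGS tests per year
instrument_capital = 300_000  # one-time instrumentation cost ($)

annual_savings = (send_out_cost - in_house_cost) * annual_volume
print(f"Annual savings: ${annual_savings:,}")                          # $580,000
print(f"First-year net: ${annual_savings - instrument_capital:,}")     # $280,000
print(f"Payback period: {instrument_capital / annual_savings:.2f} yr") # 0.52 yr
```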

Obviously, money is not the only deciding factor when insourcing testing. I have to be able to perform a test more cheaply, faster, and at least as well as the reference laboratory, if not better, or I will not insource it.

There are a variety of reasons you may not want to insource NGS testing. You may not have the expertise, facility, or volume for insourcing to make sense. Are you stuck paying whatever your reference lab is charging because you can’t insource the test? No.

If you have not negotiated the pricing and billing structure with your molecular pathology reference lab recently, it may be time to take a look around. Here are a few things to consider when seeking better pricing on send-out testing:

  • Renegotiate. You can try to renegotiate with your current reference lab to decrease your contracted price.
  • Shop around. The molecular pathology lab market is growing. With competition comes better pricing.
  • Increase volume. You could try to standardize which lab your physicians are using to increase the volume to your reference lab. Most reference lab contracts are negotiated based on volume. So if you can increase the volume, it is likely that you can decrease the price you’re paying.
  • Direct billing. It is worth addressing who is billing the patient (and who has the highest risk of being stuck with the bill if the testing is not covered). Many molecular pathology labs now directly bill the patient (as long as the patient was not an inpatient within the last 14 days). You may want to explore this option when negotiating contracts.
  • Insurance coverage. You should also consider whether the test offered by the lab is approved for coverage by your most common payers.
  • Out-of-pocket costs. Many labs now cap patients’ out-of-pocket costs at reasonable maximums. This ensures your patients are not stuck with large bills.

Whether you decide to insource or continue to outsource NGS testing, there are options that could decrease the cost and turnaround time for biomarker testing.

-Tabetha Sundin, PhD, HCLD (ABB), MB (ASCP)CM,  has over 10 years of laboratory experience in clinical molecular diagnostics including oncology, genetics, and infectious diseases. She is the Scientific Director of Molecular Diagnostics and Serology at Sentara Healthcare. Dr. Sundin holds appointments as Adjunct Associate Professor at Old Dominion University and Assistant Professor at Eastern Virginia Medical School and is involved with numerous efforts to support the molecular diagnostics field. 

Pieces of PCR Products

Molecular diagnostic tests come in many forms, but one of the simplest assays is a fragment-based assay. The principle of such an assay is to perform polymerase chain reaction (PCR) on a segment of DNA; if there is a mutation, the PCR fragments will differ in size. Notably, this method is good for detecting mutations that insert or delete multiple nucleotides. It is not suitable for single base pair changes or small insertions/deletions.

The fragment size is analyzed by labeling the PCR products with a fluorescent dye and then running them through a Sanger capillary sequencer. The fragments will be separated based on size and ideally give clean peaks with low background (Figure 1).

Figure 1. Sample fragment analysis plot (x-axis is time, y-axis is fluorescence intensity) with smaller fragments coming off earlier (more to the left on x-axis). Red peaks represent the molecular size ladder for calibration. Other colors represent fragments labeled with other fluorophores. The ladder also helps you ensure that fragments of different lengths are coming off of the analyzer at similar levels.

One common application of this assay type is detecting FLT3 internal tandem duplications (ITD). FLT3, Fms-related tyrosine kinase 3, is a tyrosine kinase growth factor receptor for the FLT3 ligand and regulates hematopoiesis. Mutations in FLT3 are found in about one-third of acute myeloid leukemia (AML) cases and confer a worse prognosis. FLT3 mutations lead to ligand-independent activation either by disrupting the auto-inhibitory loop of the juxtamembrane domain through an ITD mutation or by an activating point mutation in the tyrosine kinase domain (TKD) (Figure 2).

Figure 2. Mechanisms of FLT3 activating mutations: internal tandem duplication (ITD) in the juxtamembrane domain or activating point mutations in the tyrosine kinase domain (TKD).

The type of FLT3 mutation also matters, as there are tyrosine kinase inhibitors (TKIs) being investigated for use in FLT3+ cases. Type I inhibitors bind FLT3 in the active conformation, either in the ATP binding pocket or at the activation loop; these inhibitors are useful for both ITD and TKD mutations. Type II inhibitors, however, bind inactive FLT3 near the ATP binding domain, so they affect ITD but not TKD mutations.

As the site of ITDs is consistently in exons 14 and 15 of FLT3, primers flanking this region were designed to detect any mutations in this area (Figure 3). Because artifacts arising from the PCR process can create false positive peaks, a green primer labels PCR products from one direction and a blue primer labels PCR fragments from the other direction, enhancing specificity (Figure 4). A wild type (WT) sequence will thus be 327bp in either direction.

Figure 3. Depicted is a representation of the FLT3 JM region and the activating loop of the kinase domain. Green and blue dots with black arrows represent the relative positions of primers that target the JM region for ITD and yellow dots with black arrows represent the relative positions of the primers that target TKD mutations in the activating loop of the kinase domain. The yellow box has vertical black lines that represent the position of the wild-type EcoRV restriction digest sites. Image adapted from InVivoScribe.
Figure 4. A FLT3-ITD positive case is shown on top, with a longer segment present in both the green and blue channels confirming a larger PCR product. This mutation is present in only a minority of cells, representing the aberrant AML population. Image adapted from InVivoScribe.
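To make the logic concrete, here is a minimal sketch of how ITD calling from the peak sizes might look, based on the 327bp wild-type product and the dual-dye confirmation described above. The peak format, tolerance, and function are illustrative, not the vendor's actual software.

```python
# A minimal sketch of ITD calling from fragment-analysis peaks, assuming the
# 327 bp wild-type product described above. Peaks are (size_bp, dye) tuples;
# real software also checks peak height and instrument-specific size error.

WT_SIZE = 327
SIZE_TOLERANCE = 2  # bp of sizing error allowed

def call_itd(peaks):
    """Return sizes of putative ITD peaks: longer than wild type and seen
    in BOTH dye channels (forward green, reverse blue) to exclude artifacts."""
    green = {round(s) for s, dye in peaks if dye == "green" and s > WT_SIZE + SIZE_TOLERANCE}
    blue = {round(s) for s, dye in peaks if dye == "blue" and s > WT_SIZE + SIZE_TOLERANCE}
    return sorted(green & blue)

peaks = [(327, "green"), (327, "blue"), (369, "green"), (369, "blue")]
print(call_itd(peaks))  # [369] -> a 42 bp ITD alongside the wild-type allele
```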

As mentioned previously, fragment analysis is not suited to detecting point mutations such as those found in the TKD. However, the FLT3 assay has overcome this issue. Investigators determined that the TKD point mutation at codon D835 disrupts a recognition site for the restriction endonuclease EcoRV (Figure 3). Customized primers again produce a unique PCR fragment (149bp long), which when digested with EcoRV produces a 79bp fragment in wild type FLT3. If a FLT3-TKD mutation is present, EcoRV will not cleave the fragment at this location, but another EcoRV cleavage site (right side of the yellow box) will create a 127bp fragment (Figure 5). Without this second cleavage site, an enzyme failure could be misinterpreted as a mutation; seeing the 127bp fragment confirms that EcoRV was active but able to cut at only one site, which is what a true TKD mutation produces.
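Here is a minimal sketch of that digest interpretation, using the fragment sizes given above (149bp undigested, 79bp wild type, 127bp mutant). The sizing tolerance and reporting strings are hypothetical.

```python
# A minimal sketch of interpreting the EcoRV digest described above, using
# the stated fragment sizes: 149 bp undigested amplicon, 79 bp wild type,
# 127 bp with a D835 TKD mutation. The 2 bp sizing tolerance is illustrative.

def interpret_tkd_digest(fragment_sizes_bp):
    def seen(size):
        return any(abs(s - size) <= 2 for s in fragment_sizes_bp)
    if seen(149):
        return "Digest failure: undigested 149 bp product present -- repeat"
    if seen(127):
        return "FLT3-TKD mutation detected (D835 EcoRV site lost)"
    if seen(79):
        return "No TKD mutation detected (wild type)"
    return "No interpretable fragments -- QNS or assay failure"

print(interpret_tkd_digest([79]))       # wild type
print(interpret_tkd_digest([79, 127]))  # heterozygous TKD mutation
```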

Figure 5. Panels representing PCR fragments that are undigested by EcoRV (top), digested with a TKD mutation present (middle), and digested with no TKD mutation detected (bottom). Image adapted from InVivoScribe.

References

  1. Daver N, Schlenk RF, Russell NH, Levis MJ. Targeting FLT3 mutations in AML: review of current knowledge and evidence. Leukemia. 2019;33:299-312.
  2. InVivoScribe. https://invivoscribe.com/products/companion-diagnostics-cdx/. Accessed December 8, 2019.
  3. Pawar R, Bali OPS, Malhotra BK, Lamba G. Recent advances and novel agents for FLT3 mutated acute myeloid leukemia. Stem Cell Invest. 2014; 1(3). doi: 10.3978/j.issn.2306-9759.2014.03.03

-Jeff SoRelle, MD is a Chief Resident of Pathology at the University of Texas Southwestern Medical Center in Dallas, TX. His clinical research interests include understanding how the lab intersects with transgender healthcare and improving genetic variant interpretation.

The X-games of PCR

This is not your mom’s PCR: the new kids on the block are making PCR extremely fast. PCR (polymerase chain reaction) technology won a Nobel Prize for allowing molecular research to advance much more rapidly (for an interesting read on the quirky laureate who gave up science to go surfing, see Kary Mullis’s Wikipedia page). It has become the workhorse of most molecular diagnostic assays, usually in the form of real-time PCR, and is used for a variety of purposes: detecting bacteria and viruses, identity testing for forensics and bone marrow engraftment, cancer mutation analysis, and even the sequencing-by-synthesis chemistry Illumina uses for massively parallel sequencing.

This technique is still limited by requiring highly trained technologists to perform DNA extraction, by time-consuming processing, and by the run time of real-time PCR itself. Overall, the process takes about 5-8 hours. While this is much faster than in the past, it would be unacceptable at the point of care (POC).

But why would DNA testing need to be POC? The term sounds like an oxymoron in a field where many results have a 2-month turnaround time. Yet there are circumstances where rapid molecular testing would impact patient care. For instance, a doctor testing a patient in the office for a sexually transmitted infection would want to know whether the patient has gonorrhea/chlamydia in order to prescribe the proper antibiotics. Similarly, POC molecular testing could be applied in a bioterrorism incident to test samples for an infectious agent, or it could benefit low-resource areas internationally, where HIV testing could be used to manage antiretroviral therapy in patients many miles from a laboratory.

For PCR to be useful in the POC setting, it would have to provide a result within 10-15 minutes and be performed as a waived test. Two recent examples that demonstrate how this is possible were highlighted at recent American Association for Clinical Chemistry conferences, which I just got back from: Extreme PCR [1] and Laser PCR [2].

Extreme PCR refers to a technique of rapidly cycling the temperature of PCR reactions. The reaction occurs in a thin slide that evenly distributes the reagents and temperature and is clear, permitting easy reading of fluorescence measurements (Figure 1). DNA polymerase and the primers that amplify the target DNA are added at much higher concentrations than normal (20x).

Figure 1. Thin reaction chamber for ultra-fast PCR.

This flies in the face of traditional PCR chemistry dogma, under which specificity would plummet and normal DNA could be amplified instead of target DNA, creating a false positive. However, let’s think about what actually happens in a non-specific reaction. Primers are designed to match one region of DNA that is unique within the whole genome. But the genome is so large that some other segment may look very similar, differing in just 1 or 2 of the ~20 base pairs a primer matches. A primer can bind this alternate region, but less efficiently, so the binding is weaker and takes more time to occur.

Therefore, by speeding up the cycling time to just a few seconds, only the most specific interactions can take place and non-specific binding is offset (Figure 2)!

Figure 2. Fluorescence from a dye that fluoresces when bound to double-stranded DNA, increasing here within seconds (high points represent when the reaction cools and dsDNA anneals; low points represent heating to denaturation temperatures).

Laser PCR does not report the use of increased reagent concentrations like Extreme PCR (it may be proprietary), but it boasts a very innovative method to quickly heat and cool PCR reactions. GNA Biosciences uses gold nanoparticles with many DNA adapters attached (watch the video below for a great visual explanation!).

These adapters are short sequences of DNA that bring the target DNA and primers together to amplify the target DNA sequence. Then as the name implies, a laser zaps the gold beads and heats them up in a very localized area that releases the DNA strands. The released DNA binds another gold particle, replicates, rinses, and repeats. The laser energy thus heats the gold in a small area that allows for quick heating and cooling within a matter of seconds.

These new PCR methods are very interesting and could have a big impact on how molecular pathology advances are brought to the patient. On a scientific note, I hope you found them as fascinating as I did!

References

  1. Myrick JT, Pryor RJ, Palais RA, Ison SJ, Sanford L, Dwight ZL, et al. Integrated extreme real-time PCR and high-speed melting analysis in 52 to 87 seconds. Clin Chem 2019;65:263–71.
  2. CLN Stat. A Celebration of Innovation. AACC’s first disruptive technology award to recognize three breakthrough diagnostics. https://www.aacc.org/publications/cln/cln-stat/2018/july/10/a-celebration-of-innovation
  3. Makrigiorgos GM. Extreme PCR meets high-speed melting: a step closer to molecular diagnostics “while you wait.” Clin Chem. 2019.

-Jeff SoRelle, MD is a Chief Resident of Pathology at the University of Texas Southwestern Medical Center in Dallas, TX. His clinical research interests include understanding how the lab intersects with transgender healthcare and improving genetic variant interpretation.

Next Generation Sequencing: Types of Variants

We have reviewed the next generation sequencing wet bench process, data review, and troubleshooting from start to finish. I’d like to take a more in-depth look at the types of variants that can be detected by the targeted amplicon NGS panels our lab performs: single nucleotide variants, multi-allelic variants, multi-nucleotide variants, insertions (including duplications), deletions, and complex indels. In our lab, we review every significant variant and variant of unknown significance in IGV to confirm the call was made correctly by the variant caller, given the difficult nature of some of these variants. I have included screenshots of the IGV windows for each of these variant types to show what we see when we review.

Single Nucleotide Variants (SNV)

The most common (and straightforward) type of variant is a single nucleotide variant – one base pair is changed to another, such as KRAS c.35G>A, p.G12D (shown below in reverse):

Multi-allelic Variants

A multi-allelic variant has more than one change at a single base pair (see below – NRAS c.35G>A, p.G12D, and c.35G>C, p.G12A – shown in reverse). This may be the rarest type of variant – in our lab, we have seen it in only a handful of cases over the last four years. It could indicate several clones, or different variants occurring over a period of time.

Multi-nucleotide Variants (MNV)

Multi-nucleotide variants are variants that involve more than one adjacent nucleotide. A common example is BRAF p.V600K (see below – in reverse), which can occur in melanoma. Two adjacent nucleotides are changed on the same allele. These variants demonstrate one advantage NGS has over dideoxy (Sanger) sequencing. In dideoxy sequencing, we can see the two base pair change, but we cannot be certain the changes occur on the same allele. This is an important distinction: if they occurred on the same allele, they probably occurred at the same time, whereas if they are on different alleles, they were probably two separate events. It also matters for nomenclature – if they are on the same allele, the change is listed as one event, as shown below (c.1798_1799delGTinsAA, p.V600K), as opposed to two separate mutations (c.1798G>A, p.V600M and c.1799T>A, p.V600E). As you can see in the IGV window below, both changes happen on one strand; the sketch that follows shows how phasing drives the naming.
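Here is a minimal sketch of that naming rule, using the BRAF coordinates above. Real HGVS normalization has many more cases; this only illustrates how phasing of two adjacent substitutions changes the name.

```python
# A minimal sketch of why phasing changes nomenclature: two adjacent
# substitutions seen on the SAME reads merge into one delins event; on
# different alleles they remain two separate variants.

def name_adjacent_snvs(pos1, ref1, alt1, pos2, ref2, alt2, same_allele):
    if same_allele and pos2 == pos1 + 1:
        return [f"c.{pos1}_{pos2}del{ref1}{ref2}ins{alt1}{alt2}"]
    return [f"c.{pos1}{ref1}>{alt1}", f"c.{pos2}{ref2}>{alt2}"]

print(name_adjacent_snvs(1798, "G", "A", 1799, "T", "A", same_allele=True))
# ['c.1798_1799delGTinsAA'] -> p.V600K, one event
print(name_adjacent_snvs(1798, "G", "A", 1799, "T", "A", same_allele=False))
# ['c.1798G>A', 'c.1799T>A'] -> p.V600M and p.V600E, two events
```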

Insertions/Duplications

Insertions are an addition of nucleotides to the original sequence. Duplications are a specific type of insertion where a region of the gene is copied and inserted right after the original copy. These can be in-frame or frameshift. If the inserted length is a multiple of three base pairs, the insertion shifts the original sequence down, but the downstream amino acids are not affected, so the reading frame stays the same. If it is not a multiple of three, the frame changes, altering all of the downstream amino acids – a frameshift. A common example of a frameshift insertion is the 4bp insertion in NPM1 (c.863_864insCTTG, p.W288fs) that occurs in AML. In IGV, insertions are displayed as a purple hash that shows the inserted sequence when you hover over it.
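The frame rule reduces to simple modular arithmetic, as this minimal sketch shows; the function is illustrative only.

```python
# A minimal sketch of the frame rule described above: an indel whose length
# is a multiple of three preserves the reading frame; otherwise it shifts it.

def frame_effect(indel_len_bp):
    return "in-frame" if indel_len_bp % 3 == 0 else "frameshift"

print(frame_effect(4))  # frameshift -- e.g., the 4 bp NPM1 insertion (p.W288fs)
print(frame_effect(3))  # in-frame -- one whole codon inserted
print(frame_effect(6))  # in-frame -- two codons inserted
```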

Deletions

Deletions, on the other hand, occur when base pairs are removed from the sequence. These can also be in-frame or frameshift. An example is the 52bp deletion (c.1099_1150del, p.L367fs) found in the CALR gene in cases of primary myelofibrosis or essential thrombocythemia; 52 is not a multiple of three, so the deletion causes a frameshift.

Complex Indels

Lastly, NGS can detect complex indels. These, again, are a type of variant that we could not fully resolve using dideoxy sequencing: we would be able to detect the changes, but not whether they occurred on the same strand, which would indicate they happened at the same time. The first example is a deletion followed by a single nucleotide change – since both occur on the same strand, they most likely occurred together, so they are called as one complex deletion/insertion event (KIT c.1253_1256delACGAinsC, p.Y418_D419delinsS). First the ACGA was deleted, then a C was inserted.

The last example involves multiple nucleotide changes all in the same vicinity (IGV is in reverse for this specimen as well). Using HGVS nomenclature, as in all the previous examples, this would be named RUNX1 c.327_332delCAAGACinsTGGGGT, p.K110_T111delinsGV.


-Sharleen Rapp, BS, MB (ASCP)CM is a Molecular Diagnostics Coordinator in the Molecular Diagnostics Laboratory at Nebraska Medicine. 

Genetic Results: Set in Stone or Written in Sand?

This month, I’m switching gears to another interest of mine: Molecular Pathology. I am currently in fellowship for Molecular Genetic Pathology which exposes me to unique, thought-provoking cases.  

Advances in genomic sequencing have allowed multiple genes to be analyzed in a single laboratory test. These so-called gene panels have increased diagnostic yield compared to serial gene sequencing in syndromic and non-syndromic diseases with multiple genetic etiologies. However, interpretation of genetic information is complicated and evolving, which has led to wide variation in how results are reported. A genetic test result can be positive (pathogenic or likely pathogenic), negative (benign or likely benign), or uncertain (variant of uncertain significance, VUS). A VUS may simply be part of what makes each individual unique; there just isn’t enough evidence to say whether it is pathogenic or benign. Many results come back this way, which can be frustrating for patients to hear and for genetic counselors and clinicians to explain.

Initial approaches to excluding benign variants – sequencing 100 “normal people” to determine the frequency of common variants in the population – were fraught with bias. The “normal population” initially consisted mostly of individuals of white European descent. Not surprisingly, the lack of genetic diversity in control populations led to errors in interpretation.

Fortunately, there are now several publicly available databases that help determine whether gene variants are damaging. The first important piece comes from population sequencing efforts. These projects performed whole exome sequencing of hundreds or thousands of individuals to establish variant frequencies in a more genetically diverse population. If a variant occurs in a normal, healthy population at a frequency >1%, then it likely doesn’t cause a severe congenital disease, since such a disease would prevent the variant from being passed on.
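As a minimal sketch of that rule of thumb: compare each variant's highest population allele frequency to a 1% cutoff. The dictionary below is illustrative (one entry uses the EFHC1 example discussed later in this post; the other is a hypothetical rare variant); in practice the frequencies come from gnomAD or a similar database, ideally from an ancestry-matched subpopulation.

```python
# A minimal sketch of the >1% population-frequency rule of thumb. Variant
# names and frequencies below are illustrative, not a clinical resource.

COMMON_AF_CUTOFF = 0.01  # 1%

population_af = {
    "EFHC1 c.662G>A": 0.088,    # 8.8% in the 1000 Genomes GWD subpopulation
    "GENE2 c.100A>T": 0.00001,  # hypothetical, extremely rare variant
}

for variant, af in population_af.items():
    status = ("likely benign (common polymorphism)"
              if af > COMMON_AF_CUTOFF
              else "rare -- needs further classification")
    print(f"{variant}: AF={af:.5f} -> {status}")
```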

The Exome Aggregation Consortium (ExAC) [1], which has been rolled into the larger gnomAD (Genome Aggregation Database), now contains sequencing information on more than 120,000 individuals (Figure 1). The smaller ESP (Exome Sequencing Project), a project of the NHLBI division of the NIH, sequenced patients with various cardiovascular and pulmonary diseases.

Figure 1. Number and percent of various ethnicities present in 4 major population sequencing projects.

While there is ethnic diversity present in this database, the 1000 Genomes Project [2] furthered these efforts by sampling 26 ethnically and geographically distinct subpopulations from around the world (Figure 2).

Figure 2. Geographic map of populations sequenced by the 1000 Genomes Project.

With these databases, we can effectively rule out rare polymorphisms as benign when they are found in several healthy individuals, especially when found in the homozygous state in a healthy individual. Previously, it was common for a person from an ethnic minority to carry variants not seen in predominantly European cohorts, and in many cases this led to uncertain test results.

One way to deal with these VUSs is for a lab to periodically review its test results in light of new knowledge. The CAP has a checklist item [3] that requires a lab to have a policy about reassessing variants and the actions taken. However, this item doesn’t require a lab to communicate the results to a physician and doesn’t specify how often to reanalyze variants. Before last year, there weren’t any studies indicating how often variant reanalysis should occur. Variant reanalysis had only been studied in the limited context of whole exome sequencing for rare diseases, to improve diagnostic yield [4]. That work did not address frequent VUSs or determine how often they are downgraded to benign or upgraded to pathogenic.

One example of how reclassification can occur is the case of a young African American boy with epilepsy who, in 2014, received a genomic test covering a panel of genes known to be involved in epilepsy. Two heterozygous VUSs were reported in EFHC1 (c.229C>A, p.P77T and c.662G>A, p.R221H), a gene that causes an autosomal dominant epilepsy syndrome when one allele is damaged. These variants could later be reclassified as benign by looking at population databases: the ExAC database showed an allele frequency of 2.5% in African Americans, and the 1000 Genomes database showed an 8.8% frequency in the GWD subpopulation (Gambian in Western Division).

This case demonstrates the importance of reanalyzing genetic test results as medical knowledge continues to evolve. Recent studies looking at reclassification rates in epilepsy [5] and inherited cancer syndromes [6] have been published in JAMA journals and demonstrate that reclassification of variants is common. It is thus important for laboratories to periodically review previously reported variants to provide optimal quality results and patient care. I will elaborate on this further in the next blog post.

References:

  1. Lek M, Karczewski KJ, Minikel EV, et al. Analysis of protein-coding genetic variation in 60,706 humans. Nature. 2016;536:285-291.
  2. The 1000 Genomes Project Consortium, Auton A, Brooks LD, et al. A global reference for human genetic variation. Nature. 2015;526:68-74.
  3. Sequence Variants – Interpretation and Reporting, MOL.36155. 2015 College of American Pathologists (CAP) Laboratory Accreditation Program Checklist.
  4. Costain G, Jobling R, Walker S. Periodic reanalysis of whole-genome sequencing data enhances the diagnostic advantage over standard clinical genetic testing. Eur J Hum Genet. 2018.
  5. SoRelle JA, Thodeson DM, Arnold S, Gotway G, Park JY. Clinical Utility of Reinterpreting Previously Reported Genomic Epilepsy Test Results for Pediatric Patients. JAMA Pediatr. 2018 Nov 5:e182302.  
  6. Mersch J, Brown N, Pirzadeh-Miller S, Mundt E, Cox HC, Brown K, Aston M, Esterling L, Manley S, Ross T. Prevalence of Variant Reclassification Following Hereditary Cancer Genetic Testing. JAMA. 2018;320(12):1266-1274.

-Jeff SoRelle, MD is a Molecular Genetic Pathology fellow at the University of Texas Southwestern Medical Center in Dallas, TX. His clinical research interests include understanding how the lab intersects with transgender healthcare and advancing quality in molecular diagnostics.

This work was produced with the guidance and support of:

Dr. Jason Park, MD, PhD, Associate Professor of Pathology, UT Southwestern Medical Center

Dr. Drew Thodeson, MD, Child Neurologist and Pediatric Epileptologist

Evaluating and Analyzing Next Generation Sequencing Specimen Results

Welcome back – in my previous blog we discussed how a run is evaluated on the Ion Torrent instrument. This quarter’s blog will review the individual specimen results from that run.

First off, we look at how many reads per specimen were sequenced and how those reads performed over the targeted areas. For the AmpliSeq Cancer Hotspot Panel v2 that we run, a total of 207 amplicons are created and sequenced. To assess the depth of coverage over these amplicons, we need to think about the biology of the tumor cells and the limit of detection of the assay. We feel confident that we can detect a 5% variant allele frequency for single nucleotide changes and a 10% variant allele frequency for insertions or deletions. To be confident that we are not missing variants, we require the specimen to have a tumor percentage greater than 20%. This is because, for a given tumor, it can be assumed that any mutation will be heterozygous – only one of the two alleles will carry the variant. This automatically halves the possible allele frequency from any given tissue: if a colon specimen submitted for testing has a tumor percentage of 40%, any variant can be assumed to have a variant allele frequency of no more than 20%. For the same reason, we also require the sequenced amplicons to have at least 500x coverage – each must be sequenced at least 500 times so that a 5% mutation will appear in about 25 reads, enough to be confident it is a real change as opposed to background noise. The arithmetic is sketched below.
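This minimal sketch just restates the numbers from the paragraph above; the functions are illustrative, not part of any vendor pipeline.

```python
# A minimal sketch of the coverage arithmetic above: expected variant reads
# at a given depth, and the maximum VAF implied by tumor content for a
# heterozygous somatic variant.

def expected_variant_reads(depth, vaf):
    return depth * vaf

def max_heterozygous_vaf(tumor_pct):
    # A heterozygous variant sits on one of two alleles, so its VAF is at
    # most half the tumor fraction.
    return tumor_pct / 2

print(expected_variant_reads(500, 0.05))  # 25.0 reads supporting a 5% variant
print(max_heterozygous_vaf(40.0))         # 20.0% max VAF at 40% tumor
print(max_heterozygous_vaf(20.0))         # 10.0% -- near the 5-10% assay LOD,
                                          # hence the 20% minimum tumor rule
```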

Next, we look at the On Target percentage and Uniformity percentage (over 95% is expected for each). The On Target value tells us what fraction of the reads actually map to the 207 amplicons in the panel. Uniformity tells us how evenly the reads are distributed over the 207 amplicons – were they all equally represented, or did a subset have more coverage than the others? This information can actually lead to further testing: if a subset of amplicons has more coverage than the rest, and they are all from one gene, this may indicate gene amplification. In these cases, the clinician is alerted and additional testing can confirm the amplification.
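As an illustration, here is a minimal sketch of both metrics. Exact definitions vary by platform; the uniformity convention used here (percentage of amplicons with at least 20% of mean depth) is one common choice, so check your instrument's documentation rather than treating this as the Ion Torrent formula.

```python
# A minimal sketch of the two run metrics described above. The uniformity
# convention here (amplicons with >= 20% of mean depth) is one common
# choice; your platform's definition may differ.

def on_target_pct(reads_on_target, total_reads):
    return 100 * reads_on_target / total_reads

def uniformity_pct(amplicon_depths):
    mean_depth = sum(amplicon_depths) / len(amplicon_depths)
    ok = sum(1 for d in amplicon_depths if d >= 0.2 * mean_depth)
    return 100 * ok / len(amplicon_depths)

depths = [900, 1100, 1000, 950, 100]  # one dropout amplicon
print(f"Uniformity: {uniformity_pct(depths):.1f}%")          # 80.0% -- fails the 95% rule
print(f"On target: {on_target_pct(980_000, 1_000_000):.1f}%")  # 98.0%
```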

All of this coverage information is provided by one of the “plugins” we run after the basecalling and alignment are finished:

The most useful (and interesting!) information comes from the variant calling plugin. This plugin compares the specimen sequences with the reference sequence and reports the differences – the “variants.” Many of the variants detected are single nucleotide polymorphisms (variants found in greater than 1% of the population). They can also be known artifacts of the sequencing itself. These are all analyzed and categorized during validation of the assay and can then be filtered out when analyzing clinical data. After filtering out the known SNPs and artifacts, the somatic changes can be evaluated. Generally, the panel will detect 15-20 variants, but after filtering only 1-4 will be somatic changes. Each detected change is reviewed using a program called IGV, shown below. We compare the sequence to confirm that what the plugin reports looks correct in the actual reads from the sequencer. See the screenshots below of a subset of variants called, then filtered, and analyzed in IGV. While the plugin is exceptionally good at variant calling, no program is perfect, and visualizing the data is still necessary to confirm nothing else is going on in the sequenced region. The FASTQ file from the run is also processed through a secondary software package to compare results, and the variants for each specimen are assessed for variant allele frequency, coverage, and quality in both packages.

VariantCaller Output

Filtered Calls: white cells indicate SNPs; blue cells indicate possible somatic calls

IGV Output for KRAS and STK11 calls:


Lastly, the results are brought into yet another software package for reporting. This software allows the pathologists to assign significance to the variants. It also pulls in any treatment information linked to the variants and lets the pathologist pick applicable clinical trials in order to assist the clinician as much as possible. In future blogs we will look at cases like this to see interesting findings in oncology cases.


-Sharleen Rapp, BS, MB (ASCP)CM is a Molecular Diagnostics Coordinator in the Molecular Diagnostics Laboratory at Nebraska Medicine.