An Introduction to Laboratory Regulations – Part II (Testing Complexity)

Last month we reviewed the different federal regulatory agencies responsible for establishing laboratory testing guidelines, and gave a brief overview of the role each department plays. This month we’ll attempt to demystify testing complexity (waived, non-waived, PPM) and explain why testing classification matters. Still to come, we’ll review the optional accreditations available to labs, and how accreditation differs from certification.

For all in vitro diagnostic tests, the FDA is responsible for categorizing each test based on its complexity during the pre-market approval process. From least to most complex, the categories are waived, moderate complexity, and high complexity. This matters because with each jump in test category, the CLIA rules associated with performing testing change – as does the permit designation required to perform testing. This includes QC requirements, validation testing, and the personnel requirements that define who can perform testing in the first place.

Waived tests are considered easy to use, with little to no chance that the test result will provide wrong information or cause harm if it is done incorrectly. This includes over-the-counter tests such as home use urine pregnancy kits, where if the sample is applied incorrectly or in insufficient volume there will simply be no result obtained at all. Many Point of Care tests fall under this category, with testing performed in a wide variety of locations including physician offices, urgent care clinics, imaging centers and nursing homes. Locations performing waived testing only are still required to obtain an appropriate CLIA Certificate of Waiver. (See the reference links at the end for a list of all FDA approved CLIA-Waived tests.)

For waived testing, laboratories must follow the manufacturer’s instructions for testing, including the stated FDA-approved intended use, without any deviation. If the procedure is modified, or the test is used with specimens not approved by the FDA, the complexity classification of the test changes from waived to high complexity. A common situation where this occurs is with fingerstick whole blood glucometers. Most point of care glucose meters on the market today are not FDA approved for use with critically ill patients. Using these waived meters for patients deemed “critically ill” under your local institution’s designation changes the complexity of testing from waived to high for this population of patients, as it would be considered “off-label use” – meaning you are using the test/instrument outside its FDA-approved forms of use.

Another caveat to be mindful of is your local state regulations. Certain states (NY, especially) have very strict rules regarding testing complexity designation. In NY, all tests performed within the same designated laboratory space carry the same testing complexity designation. This means that if you have a moderate complexity CBC analyzer in the same room where you perform your waived urine pregnancy tests, both are now considered moderate complexity. Even though you’re following the manufacturer’s instructions for the pregnancy kit, using only approved specimen types, and the kit is on the FDA-approved CLIA-waived list, that test is now moderate complexity simply because it is in the same room as higher complexity tests. That same pregnancy kit is considered waived when kept separate in the emergency department, but becomes moderate complexity (or higher) when used in the central laboratory.

Nonwaived tests refer to both moderate and high complexity testing. After the FDA has approved a marketing submission, it categorizes the test under CLIA using a scorecard that grades complexity on 7 different criteria. All phases of testing (preanalytic, analytic, and postanalytic) are evaluated in these steps:

  1. Knowledge – low scores require minimal scientific and technical knowledge to perform the test, and knowledge needed can be easily obtained through on-the-job instruction.
  2. Training & Experience – low scores require minimal training and limited experience to perform the test.
  3. Reagents & Materials Preparation – low scores have stable and reliable reagents, and require no special handling, precautions, or storage conditions. They typically come prepackaged, premeasured, and ready for use; whereas high scores may include manual steps such as volumetric measurements and/or reconstitution.
  4. Characteristics of Operational Steps – low scores have automatically executed steps (such as dispensing specific volumes of sample/reagent, temperature monitoring, or timing of steps); high scores require close monitoring or control, precise temperatures or timing, accurate pipetting or extensive calculations.
  5. Calibration, Quality Control, and Proficiency Testing Materials – low scores have all required reagents, controls and PT material commercially available and products are stable.
  6. Test System Troubleshooting & Equipment Maintenance – low scores have automatic troubleshooting or self-correction of errors (failed internal QC will automatically repeat), or require minimal judgement. Equipment maintenance will be performed by the manufacturer or is minimal and easily performed, whereas high scores require decision-making and direct intervention to resolve most issues, or maintenance tasks that require special skills and abilities.
  7. Interpretation & Judgement – low scores require minimal interpretation and judgement for resolution of problems or determination of test results.

Each criterion is scored from 1 (lowest complexity) to 3 (highest), giving a possible total of 7–21. Tests with a total score of ≤12 are categorized as moderate complexity; tests with final scores >12 are categorized as high complexity.
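The scorecard arithmetic can be sketched in a few lines of code (the ≤12/>12 cutoffs come from CLIA; the criterion names below are abbreviated from the list above):

```python
# Each of the 7 CLIA scorecard criteria is scored 1 (low), 2, or 3 (high),
# for a possible total of 7-21. Totals of 12 or less are moderate complexity;
# totals above 12 are high complexity.
CRITERIA = [
    "Knowledge",
    "Training & Experience",
    "Reagents & Materials Preparation",
    "Characteristics of Operational Steps",
    "Calibration, QC, and PT Materials",
    "Troubleshooting & Maintenance",
    "Interpretation & Judgement",
]

def categorize(scores):
    """Return (total, CLIA category) for seven criterion scores of 1-3."""
    if len(scores) != len(CRITERIA) or any(s not in (1, 2, 3) for s in scores):
        raise ValueError("expected seven scores of 1, 2, or 3")
    total = sum(scores)
    return total, ("moderate complexity" if total <= 12 else "high complexity")

total, category = categorize([1, 2, 1, 2, 2, 2, 2])
print(total, category)  # 12 moderate complexity
```

A test scoring even one extra point on a single criterion in this example would tip over the cutoff into high complexity, which is why small changes to a procedure can shift its categorization.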

PPM: Within the category of nonwaived tests is a subcategory referred to as Provider Performed Microscopy (PPM). These are tests performed directly by a clinician during a patient visit, using a microscope limited to bright-field or phase-contrast microscopy. Given the nature of the samples obtained, testing must be performed immediately at the time of collection, as delays could compromise the accuracy of test results. Because controls are typically not commercially available for these tests, testing is restricted to clinicians, as knowledge and judgment are required to confirm testing accuracy and correlation with the clinical presentation.

Tests allowed under a PPM certificate are mostly related to OB/GYN procedures, with a full list available through CMS here:

https://www.cms.gov/Regulations-and-Guidance/Legislation/CLIA/Downloads/ppmplist.pdf

So why does it matter?

The next time you receive a request to add a new test at your laboratory, you’ll be armed with a fairly long list of the requirements that come with that test based on its complexity. Coming up next month we’ll discuss the difference between laboratory certification and accreditation, along with the benefits of obtaining accreditation for your lab.

References

  1. Electronic Code of Federal Regulations: https://www.ecfr.gov/cgi-bin/text-idx?SID=1248e3189da5e5f936e55315402bc38b&node=pt42.5.493&rgn=div5
  2. CLIA-Waived Analytes: https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfClia/analyteswaived.cfm
  3. CLIA Complexity Database: https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfCLIA/Search.cfm?sAN=0
  4. FDA Approved Devices: https://www.accessdata.fda.gov/scripts/cdrh/devicesatfda/index.cfm


-Kyle Nevins, MS, MLS(ASCP)CM is one of ASCP’s 2018 Top 5 in the 40 Under Forty recognition program. She has worked in the medical laboratory profession for over 18 years. In her current position, she transitions between performing laboratory audits across the entire Northwell Health System on Long Island, NY, consulting for at-risk laboratories outside of Northwell Health, bringing laboratories up to regulatory standards, and acting as supervisor and mentor in labs with management gaps.

The Disaster Risk Assessment

There are multiple types of risk assessments required when managing a laboratory safety program. OSHA’s Bloodborne and Airborne pathogens standards require assessing the risk of employees’ exposure to particular lab hazards. Risk assessments can be used to determine whether or not to add an emergency eyewash station, and all lab chemicals need to be assessed for the hazards they pose. These are just some assessments that are needed, and there are particular steps to take when performing them. But what about the lab emergency management plan? Should the lab perform a risk assessment for that? The answer is yes, although the terminology used may be different. To prepare a disaster readiness plan for the lab, the risk assessment that is needed is known as a Hazard Vulnerability Analysis (HVA).

The Centers for Medicare & Medicaid Services (CMS) requires that all healthcare facilities use an “all-hazards” approach when considering emergency preparedness and planning. While some laboratories may be included with the facility-wide disaster plan, the lab should absolutely have its own plan with specific instructions that apply directly to the department. That means the lab should also consider an all-hazards approach.

It may seem daunting to try to consider every possible disaster that could occur in the department, but that is not exactly what the CMS directive dictates. An all-hazards approach means that emergency plans should be scalable and flexible so that they can be used for many types of disasters. The plan should focus on the lab’s ability to continue to offer services, especially those deemed critical, as a disaster situation unfolds.

The first step in creating the plan is the risk assessment: the Hazard Vulnerability Analysis. The HVA can be a table that lists all of the potential types of disaster: natural, man-made, facility-specific, etc. List as many as you can think of, and be sure to include specific disasters that may be particular to your locale (earthquakes, blizzards, etc.). Rate each disaster type by probability, severity of impact, and the lab’s level of readiness to respond. Using that data, you can calculate a relative risk percentage for each emergency type.
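As a minimal sketch of that calculation – the 0–3 rating scales and the weighting below are illustrative assumptions (model HVA tools such as the widely used Kaiser Permanente template follow a similar shape), not a prescribed formula:

```python
# Hypothetical HVA scoring sketch. Each hazard is rated 0-3 for probability
# and for severity of impact; "readiness_gap" is rated 0 (fully prepared)
# to 3 (unprepared) and worsens the effective severity. Relative risk is
# expressed as a percentage of the worst possible score.
def relative_risk(probability, impact, readiness_gap, scale_max=3):
    for value in (probability, impact, readiness_gap):
        if not 0 <= value <= scale_max:
            raise ValueError("ratings must fall between 0 and scale_max")
    # Effective severity blends raw impact with how unprepared the lab is.
    severity = (impact + readiness_gap) / (2 * scale_max)
    return round(100 * (probability / scale_max) * severity, 1)

print(relative_risk(probability=2, impact=3, readiness_gap=2))  # blizzard: 55.6
print(relative_risk(probability=1, impact=3, readiness_gap=3))  # earthquake: 33.3
```

Ranking hazards by this percentage makes it easy to see where planning and drills should be focused first, and re-scoring each year captures changes in probability or preparedness.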

One other requirement imposed by CMS is that facilities must include emerging infectious diseases as one potential hazard class. With the emergence in recent years of diseases like Ebola, Zika, and certain influenza strains, it is important to consider how an outbreak would affect lab operations and staffing. The risk level of infectious diseases may vary as incidents and outbreaks occur in particular geographic regions or if pandemics arise.

The HVA should be reviewed and updated as necessary each year. Things change that can affect what is on your HVA list. The addition of a nearby airport might make you consider adding an airline disaster to the HVA. Weather patterns can change as well. In 2011, a surprise earthquake in Virginia prompted state facilities to revisit their HVA lists of possible emergency situations. And even if the actual list of disasters does not change, the likelihood of a particular incident occurring may.

If your lab or facility has not yet performed the HVA risk assessment, there is no need to panic. There are several model HVA tools available online that can be used. As with any risk assessment, be sure to keep documentation readily available, review it each year, and make sure staff are trained not only on the HVA process, but on how to use the emergency management plan as well. A great amount of work can go into preparing for a disaster, and training and drills for your staff will help facilitate a smoother activation of the plan when a real emergency occurs.

Dan Scungio, MT(ASCP), SLS, CQA (ASQ) has over 25 years’ experience as a certified medical technologist. Today he is the Laboratory Safety Officer for Sentara Healthcare, a system of seven hospitals and over 20 laboratories and draw sites in the Tidewater area of Virginia. He is also known as Dan the Lab Safety Man, a lab safety consultant, educator, and trainer.

Tips for Performing Internal Lab Audits

In previous blog posts we discussed some hints and tips for how to survive when your lab is being inspected. Today we get to flip things around and let you be the inspector. Whether it’s an internal audit of your own laboratory, or an external inspection of a peer laboratory, we’ll discuss some ways to help keep you on track to cover the most important aspects of the overall testing process in a limited amount of time.

For external audit preparation, the CAP has a wonderful training program that all volunteer inspectors are required to take prior to participating in an inspection. Even for labs that are not CAP accredited, the CAP website has helpful information that is free and open to all: https://www.cap.org/laboratory-improvement/accreditation/inspector-training. CLSI document QMS15-A (Assessments: Laboratory Internal Audit Program; Approved Guideline) is another great resource to use when planning your audit.

The primary role of an auditor is to review policies, processes, and procedures to identify any inconsistencies (does your SOP match the manufacturer’s recommendations, and are staff following the SOP as written?). Audits should focus on collecting objective evidence and facts, rather than subjective opinions – for example, staff failing to document required weekly maintenance tasks, as opposed to an auditor simply not liking the particular form the tasks are documented on.

Define the Objective of the Audit

Laboratory leadership should be involved in the planning process to help define the scope and expected goal of performing the audit. This can range from an overall assessment of general laboratory quality and safety, to a more directed and focused audit on either a single department, instrument/test, or test process (specimen collection, physician notification of critical values, etc). The format for the audit findings should also be discussed – will the site require a formal, written report outlining all observations detected, or will a simple informal summation discussion be sufficient?

Draft a Schedule for the Audit

Once the scope of the audit is defined, a tentative schedule should be created so all staff involved in the audit process are aware and available to participate. If the audit will encompass multiple departments and all phases of testing (pre-analytic, analytic, post-analytic), it may be necessary to split the audit up over multiple days, or to recruit multiple auditors. The frequency of audits will depend on the perceived risk to quality based on previous findings or complaints received, but at a minimum should be completed annually.

Prepare for the Audit

Reach out to the local management team of the site being audited for help in gathering the information you’ll need to prepare. This can include a testing activity menu, a list of new instrumentation or new test validation studies, an employee roster if personnel and competency records will be reviewed, and copies of previous audit/inspection results to check for corrective action implementation and sustainability. Review the information provided, and use it as a guide for where your efforts should be focused, based on highest risk.

Utilize a Patient Tracer

Ask the site to pull all related records and reports for a particular patient sample by choosing a date and specifying any particular characteristics of the specimen you want to follow (such as the age of the patient, sex, or abnormal/critical results). Asking the sites to prepare a patient tracer ahead of time reduces the time spent waiting and digging for specific files or log sheets, as they are already organized and ready when you walk in for the audit. Tracers should adhere to the defined scope/objective of the audit, and will help you follow the path of a specimen through the entire process, from the pre-analytical through the analytical and post-analytical phases.

Pre-analytical: Include any specimen collection instructions or a printout/photocopy from the test directory for each test requested. This information should be compared to the information within the applicable SOPs to ensure they match and are both current and accurate. Physician orders can be included to confirm that the correct test was ordered and performed based on what was requested by the clinician.

Analytical: Copies of the related SOPs for the test being reviewed should be included. Ensure the SOPs have all required elements, including a current, valid signature of approval from the medical director. Instrument QC and maintenance logs for the day of testing, calibration records, and patient correlation studies should also be reviewed, along with any reagent lot-to-lot validation performed. When available, copies of the actual instrument printouts should be included to check for accuracy in result transcription. Training and competency records for the staff who performed any handling or testing of the specimens in question may also be reviewed.

Post-analytical: Check for supervisory review of patient log sheets and QC records, along with appropriate corrective actions documented as applicable. Review the patient results in the same format that is seen by the physician: confirm reference ranges and units of measure are accurate, interpretive notes are valid and appropriate, test methodology is stated when applicable, abnormal values are flagged, and confirm result transcription accuracy from the original instrument printout. Proficiency testing results should be reviewed for any unsuccessful events to confirm sustainability of corrective actions.

Conduct the Audit

Perform an objective review of the documents provided, along with any affiliated records and logs based on the scope of the audit (temperature logs, reagent inventory records, decontamination records, etc.). As with an official inspection, be transparent with the staff as issues are identified so they have an opportunity to clarify any confusion or locate additional records that may be missing or incomplete. Document any discrepancies or possible issues noted, as well as any good lab practices observed that should be celebrated. When logging your findings, be specific and provide as much detail as possible so the staff can quickly identify what was found and make the needed corrections (SOP numbers, dates, instrument serial numbers, etc.).

In addition to reviewing documentation, perform a direct observation of the staff doing specific tasks. Are they following the steps outlined in their procedures, or are deviations noted? Rather than conducting a formal interview, ask the staff to explain what they are doing, or why they are performing certain steps in a particular order. Again, the audit is not meant to be punitive or to ‘catch someone in the act’, but rather to help identify areas for improvement or clarification so that testing processes can be improved and standardized among all staff members. Asking open-ended questions will provide more information than directed ones – for example, “Show me how you would access testing instructions if your computer network was down” as opposed to “Where are the paper versions of your SOPs?”

Prepare an Audit Report

The audit findings should be summarized for the site based on the format agreed upon during the initial planning stage (written report, verbal discussion). Whenever possible, similar findings should be grouped together so the location can identify systemic problems that need to be addressed on a more global level (expired reagents found in multiple departments, staff failing to utilize appropriate PPE in multiple departments, etc). Depending on the number and severity of the issues identified, sites may prefer to have the observations grouped by department as well for easy assignment of follow-up action items to the department leaders. Issues should also be ranked by risk severity so that the site knows where to focus their improvement efforts first: 1) Patient care and employee safety issues; 2) Regulatory compliance gaps; 3) Recommendations for improved overall good laboratory practice.

Implement Corrective Actions

Any issues identified during the audit should be assigned to a specific person for follow-up, along with an anticipated date of completion. Perform a proper root cause analysis to identify why the issue happened, and then decide how to correct it and prevent it from happening again. Depending on the scope of the audit, the audit team members may be involved with these tasks, or this may fall to the sole responsibility of the management team being inspected.

Evaluate the Effectiveness of the Audit

The utility of the audits will depend greatly on the commitment of laboratory leadership to both implement, and sustain, effective corrective actions based on the quality gaps identified. This can be assessed by the overall level of compliance with the regulations being checked, and comparing the results of this audit to previous and subsequent ones to hopefully show a downward trend in potential citations detected. The audit team should obtain feedback on the audit process to assess the inspected lab’s overall satisfaction with the program, the amount of support offered to the inspected laboratory, effectiveness of communication between the teams, and any potential areas for improvement in the process.

Performing internal audits is a great way to meet regulatory, accreditation, and customer requirements. It allows you an opportunity to identify non-conformances and risks that can affect both quality, and patient/employee safety. By performing regularly scheduled internal audits, not only will staff members become more experienced and better prepared for the official external inspections from regulatory and accrediting agencies, but the laboratory will move from a culture of reactive, corrective actions to that of a proactive model of continual improvements.



Proficiency Testing (PT) Part 3: Quality Indicators

Last month we discussed the rules associated with evaluating your PT results, and how to investigate any unsuccessful surveys. In the last of this 3-part series, we’ll review ways to utilize your PT reports to check for shifts, trends, and bias in your patient values. Your PT results can reveal developing problems and allow you to correct them before they become failures or begin to affect patient care. Before declaring a failure a ‘random error’, be sure that it truly is one.

Accuracy & Systematic Errors

Accuracy describes how close your measured value is to the reference value – did you obtain the correct result? This will be affected by systematic errors, such as using expired or degraded reagents, changes in lot numbers or calibration values, or instruments with analytical lamps or lasers near the end of their use life. Systematic errors are reproducible inaccuracies that occur in the same direction; all results will be falsely low or all results will be falsely high. If systematic errors are present, all results will show similar deviations from the true value. Bias is a measure of how far off your results are from their true intended value.

Precision and Random Errors

Precision, on the other hand, refers to the overall agreement of results upon replicate testing – will you get the same value if you repeat the test? Precision is affected by random errors, such as incomplete aspiration of a sample or reagent due to fibrin clots or air bubbles, operator variability in pipetting technique, or temperature fluctuations. Random errors are statistical fluctuations in the measured data due to the limitations of the assay in use. These errors occur in either direction from the mean, unlike systematic errors, which all fall on the same side. Imprecision can be measured and monitored by evaluating the standard deviation (SD) and coefficient of variation (CV) for an assay.
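Both metrics fall out of a set of replicate results with a few lines of arithmetic (the glucose replicate values below are hypothetical):

```python
import statistics

def imprecision(replicates):
    """Return (mean, SD, CV%) for a set of replicate results."""
    mean = statistics.mean(replicates)
    sd = statistics.stdev(replicates)  # sample standard deviation
    cv = 100 * sd / mean               # coefficient of variation, as a percent
    return mean, sd, cv

# Hypothetical glucose replicates (mg/dL): tight agreement yields a low CV
mean, sd, cv = imprecision([98, 100, 102, 99, 101])
print(f"mean={mean:.1f}  SD={sd:.2f}  CV={cv:.2f}%")
```

A widening SD or CV over time for the same control material is an early signal of growing imprecision, even while individual results still pass.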


Let’s look at some example PT results from CAP, and see what hints these reports reveal to us.

  • Albumin: Although all results passed and were graded as ‘acceptable’, there are still issues that should be looked into. For the last 3 surveys in a row, the plot shows that nearly all samples have fallen on the same (right) side of the mean. When comparing the % relative distance from the first survey to the most recent one, you can see that the values are trending worse and getting closer to unacceptable if the pattern continues. Additionally, be mindful of the standard deviation index (SDI) value reported. This is a measure of your bias – how far your values are from the mean. Your Quality System Manual (QSM) should define the values that trigger an investigation, but as a general rule, anything >±2.0 indicates a potential issue. (https://unityweb.qcnet.com/Documentation/Help/UnityWeb/399.htm)
  • Alkaline Phosphatase: Again all results passed, but 3/5 samples have SDI values >±2.0. The first survey had all values to the right of the mean, the second survey was a nice tight even mix of +/- bias, and now with the most recent survey all values are appearing to the left of the mean. If this shift coincides with a change in lot number, a calibration may be necessary to get results back on target to help lower the SDI values.
  • GGT: Although only 1 sample was graded as unacceptable, all of the results for this recent survey were at risk of being failures due to how close they were to the upper limit of acceptability. Results like this should be very carefully evaluated to ensure that there is no impact on patient care. Provided the sample stability has not been exceeded, all 5 samples should be repeated. If the repeat values are closer to the target mean, you will need to identify what went wrong on the day the samples were originally tested. If the repeat values are still grossly far from their intended target, a full patient lookback would need to be performed from the time the samples were originally tested until the day they were repeated, as there is a systemic problem that has now continued for weeks or longer.  
  • Vancomycin: Similar to the albumin example above, these results show a trend occurring between the first survey and the most recent; however unlike albumin these are moving in the correct direction. Values are getting closer to the target mean, and SDI values are decreasing, suggesting that any corrective actions implemented after the last survey were successful.
  • Lithium: This shows a good example of what you hope all of your quantitative proficiency results will look like. There is a nice distribution of results on both sides of the mean, and SDI values are all relatively low. Values such as these allow you to have complete confidence in the accuracy of your patient results.
  • MCH: Focus on sample #2, with an SDI of -1.9. The other samples within this survey all appear fine, but it looks as though there truly was a random error with sample #2. When we look at the affiliated analytes, we see a similar issue with the RBC count of sample #2, which coincides with our decreased MCH (a reminder for our non-hematology readers: MCH = (Hgb x 10)/RBC). For any calculated values, be sure to evaluate all parameters together as well as individually, as a common sense check that your results are appropriate and truly make sense.
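The two checks used throughout the examples above reduce to simple formulas; as a sketch (all numeric values below are hypothetical, not taken from the surveys discussed):

```python
def sdi(result, peer_mean, peer_sd):
    """Standard deviation index: how many peer-group SDs a result sits from the peer mean."""
    return (result - peer_mean) / peer_sd

def mch(hgb_g_dl, rbc_millions_per_ul):
    """MCH in pg, from Hgb (g/dL) and RBC (10^6/uL): MCH = (Hgb x 10) / RBC."""
    return (hgb_g_dl * 10) / rbc_millions_per_ul

# Hypothetical PT sample: result 4.6 g/dL vs peer mean 4.2, SD 0.2
print(round(sdi(4.6, 4.2, 0.2), 1))  # 2.0 -> at the >±2.0 investigation threshold
# Cross-check a calculated MCH: Hgb 15.0 g/dL, RBC 5.0 x 10^6/uL
print(mch(15.0, 5.0))                # 30.0 pg
```

Running the MCH formula against the reported Hgb and RBC for a flagged sample is a quick way to confirm whether a calculated analyte failed on its own or simply inherited an error from one of its inputs.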

It is important to have a robust quality assurance program that outlines what to monitor, key decision points for when to take action, and guidance on what those actions should include. Your proficiency testing results can provide a wealth of useful information for evaluating the overall quality of the laboratory, and help provide confidence in the patient values being reported out as well.


Working with Generation Z: How Other Generations Can Adapt

This generation is very new to the workforce. In fact, most of them have not yet held a job, as they are all eighteen or younger at the time of this writing. However, it is important to know how to adapt to this generation as they begin to enter the workforce, and many people already communicate with this generation daily on a personal level.

This generation has experienced a tremendous amount of uncertainty in their early lives. From the economic downturn of the late 2000s to school and concert shootings, this generation cares about security. This security is important on both a physical and a professional level; they want to make sure that they have professional stability. They care about making a difference, but not to the extent of Generation Y, the Millennial Generation.

There is some concern about this generation’s ability to connect with people on a long-term social level, mainly due to technological and social media advances. However, they do have a preference for face-to-face communication, so even if they do not come to the workplace with that skill, they can learn and adapt. Additionally, they are competitive and good multitaskers. They also have an entrepreneurial and independent spirit; they want to be in charge of their own projects and start their own companies. They are also looking into ways to get their education that do not involve higher education and student debt. They are an imaginative generation with intellectual curiosity.

Generation Z is the most diverse and open-minded generation yet, which means they bring a plethora of ideas, backgrounds, concepts, and experiences. Leaders can utilize this diverse base to foster diversity of thought, practice, and skills at their organizations. Including this generation as interns and entry-level workers is a good way to begin mentoring them while learning from everything they bring to the organizational table.


-Lotte Mulder earned her Master’s of Education from the Harvard Graduate School of Education in 2013, where she focused on Leadership and Group Development. She’s currently working toward a PhD in Organizational Leadership. At ASCP, Lotte designs and facilitates the ASCP Leadership Institute, an online leadership certificate program. She has also built ASCP’s first patient ambassador program, called Patient Champions, which leverages patient stories as they relate to the value of the lab.

Proficiency Testing (PT) Part 2: Investigating Failures

Last month we discussed the rules and requirements for how to properly perform proficiency testing (PT) within your laboratory. In part 2 of this 3-part series we’ll review the rules associated with evaluating your results, and how to investigate any unsuccessful surveys. Still to come in part 3 we will look into how to utilize your PT results to monitor for trends and shifts in your values.

The rules:

  • Performance Review: Laboratories must initiate and document a review of their PT performance evaluations within 2 weeks of notification that results are available. This includes a review of both graded and non-graded/educational analytes and events.

Key things to note: Even though educational samples are not formally graded, you should still verify the accuracy of your results, with appropriate follow-up for any failures. CAP specifically requires you to evaluate these educational challenges as well. Whether the sample is graded or not does not change the fact that you had an incorrect result.

  • Unsatisfactory Performance: For any unsatisfactory results, you are required to perform a root cause analysis to determine why (see below for guidance). This also includes any clerical errors – you need to evaluate your process and find ways to prevent these simple errors from happening again. If they are happening with PT samples, it is possible they are happening with patient samples as well.
  • Cessation of Patient Testing: Unsatisfactory events indicate there was a problem with that particular survey, whereas unsuccessful events indicate a pattern of unsatisfactory events/samples and a larger underlying problem. If a pattern of poor performance is detected, you may be asked by your local state department of health to cease all testing for a particular analyte.

Key things to note: This also applies to clerical errors. Even if there was no technical problem with the accuracy of your results, failure to submit results on time or clerical errors made while submitting can also have severe impacts on your ability to continue offering that test.

  • Remedial Action: If you’ve been notified by your PT provider or state DOH to cease testing, there are extensive steps that must be completed to prove that the problem was correctly identified and corrected. You must also identify where samples will be referred to for tests you are unable to perform in-house.

Key things to note: If testing has been removed from your laboratory, you will be required to demonstrate successful performance in 2 consecutive PT survey events for the analyte(s) in question before being granted permission to resume patient testing. This can cause significant delays and financial impact for your organization.
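To make the distinction between unsatisfactory and unsuccessful performance concrete, here is a minimal Python sketch of the scoring logic. It assumes the common CLIA criterion of at least 80% acceptable challenges per testing event for most analytes, and treats failing 2 consecutive (or 2 of 3 consecutive) events as unsuccessful; the actual criteria vary by analyte, so always defer to your PT provider's and CLIA's published rules.

```python
# Hypothetical sketch of PT event scoring. The 80% passing score and the
# "2 consecutive or 2 of 3 events" pattern are common CLIA criteria for
# many analytes, but thresholds differ by analyte -- verify against your
# PT provider's grading criteria before relying on this logic.

def event_score(results):
    """results: list of booleans, True = challenge graded acceptable."""
    return 100.0 * sum(results) / len(results)

def is_unsatisfactory(results, passing=80.0):
    """A single event is unsatisfactory if its score falls below passing."""
    return event_score(results) < passing

def is_unsuccessful(event_history):
    """event_history: list of per-event result lists, oldest first.
    Flags an 'unsuccessful' pattern: 2 consecutive unsatisfactory
    events, or 2 unsatisfactory events out of any 3 consecutive."""
    flags = [is_unsatisfactory(ev) for ev in event_history]
    for i in range(len(flags)):
        window = flags[i:i + 3]
        if len(window) >= 2 and sum(window[:2]) == 2:
            return True  # two consecutive failures
        if len(window) == 3 and sum(window) >= 2:
            return True  # two failures within three consecutive events
    return False
```

A single unsatisfactory event triggers a root cause analysis; only the repeated pattern flagged by `is_unsuccessful` puts continued patient testing at risk.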

Root Cause Analysis: Investigate to determine who, what, why, when, and how the event occurred. Be sure to evaluate all phases of testing to ensure you identify all potential causes.

  • Pre-Examination:
    • Human Resources – evaluate the training and competency records for staff involved in the handling and testing of samples.
    • Facilities – reagent inventory control & storage temperatures, equipment maintenance and function checks
    • Standard Operating Procedures (SOPs) – staff compliance with written policies, bench excerpts are current and valid, document version control up to date
    • Specimen – test requisition/order entry (was the correct test code ordered/performed?), labeling (were aliquot/pour-off tubes properly labeled?), transport (were appropriate temperature requirements maintained until testing was performed?), quality (was there visible deterioration of the sample prior to testing, or were cracked/damaged tubes received?), quantity (was the original sample spilled or leaking, causing an incomplete aspiration of sample by your instrument?)
  • Examination:
    • Method Validations – were instruments current with calibration requirements, was any bias noted during instrument correlation studies, were values being reported within the verified AMR
    • Environmental Controls – were temperatures/humidity within tolerance limits; for light-sensitive analytes (bilirubin), was there excessive exposure of the samples to light prior to testing; were excessive vibrations occurring that may have affected results (nearby construction or a running centrifuge on a shared work bench)
    • Quality Control – did QC pass on the day of testing, were any QC trends or shifts noted that month
    • Analytical Records (worksheets) – were sample results transcribed correctly between the analyzer and worksheet, between the worksheet and LIS
    • Instrument Errors – were any corrective actions or problems noted for the days before, during, or immediately after testing of PT occurred
    • Testing Delay, Testing Errors – were samples prepared and not tested immediately leaving them exposed to light or air which may affect results (blood gas samples), any errors or problems noted during testing that may have caused a delay or affected accuracy of results
  • Post-Examination:
    • Data & Results Review – check for clerical errors, was data transmitted correctly from the instrument into the LIS, was data entered correctly on your PT provider's entry submission forms
    • Verification of Transmission – did your results correctly upload to the PT provider website, was there an error or failure with submission
    • Review of LIS – are your autoverification rules set up correctly, is the autoverification validation current with no known issues
    • Patient Impact – perhaps the most important step to take when reviewing PT failures: you need to determine what impact the failure had on your patient results. Depending upon the identified root cause and how far your values were from the intended response, there can be a severe impact on the patient values tested at the same time as the PT samples.

Involve your medical director to determine whether the discrepancy in results is clinically significant. Perform a patient look-back to review patient values for the failed analyte during the time period in question. Evaluate the bias that was present; if it is deemed clinically significant, corrected patient reports will need to be issued along with a letter from the medical director explaining why. If the discrepancy is determined not to be clinically significant, document this in writing and keep it on record with your complete investigation response.
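The look-back described above can be sketched in code. This is a hypothetical illustration only: the percent-bias calculation, the lab-defined allowable-bias threshold, and the data layout are all assumptions, and the final call on clinical significance and corrected reports always rests with your medical director.

```python
# Hypothetical patient look-back sketch after a PT failure. Compares the
# reported PT result against the intended (target) response and, if the
# bias exceeds a lab-defined allowable limit, produces bias-corrected
# estimates of affected patient results for medical director review.
# All names and thresholds here are illustrative, not prescribed.

def percent_bias(reported, target):
    """Percent bias of the reported PT result versus the target value."""
    return 100.0 * (reported - target) / target

def lookback(patient_results, reported, target, allowable_bias_pct):
    """patient_results: list of (patient_id, value) tuples from the
    affected testing period. Returns corrected estimates for review
    when the bias exceeds the allowable limit, else None (document
    the finding as not clinically significant)."""
    bias = percent_bias(reported, target)
    if abs(bias) <= allowable_bias_pct:
        return None
    # First-pass correction: remove the observed proportional bias.
    return [(pid, value / (1 + bias / 100.0)) for pid, value in patient_results]
```

For example, a PT glucose reported at 110 mg/dL against a 100 mg/dL target is a +10% bias; with a 5% allowable limit, patient results from that period would be flagged and bias-adjusted for review, while a 15% limit would let them stand.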

Corrective Actions/Preventative Actions – use the following set of questions to help guide you in ensuring that the problem identified during your root cause analysis will not occur again:

  • What changes to policies, procedures, and/or processes will you implement to ensure there will not be a repeat of this problem?
  • Do any processes need to be simplified or standardized?
  • Is additional training or competency assessment needed? If so, identify specific team members to be trained, and who will be accountable for performing and documenting this training.
  • Is additional supervisory oversight needed for a particular area or step?
  • Are current staffing levels adequate to handle testing volumes?
  • Would revision or additional verification of the LIS rules address or prevent this problem?
  • How can the communication between laboratory, nursing, and medical staff be improved to reduce errors in the future?

Continuous Process Improvement – after identifying the true root cause(s) for the failure and implementing corrective/preventative actions, you need to evaluate the effectiveness of those improvements. Have they been sustained? Are they working to correct the original problem? Have you created new problems by changing the previous process?

  • Quality Management Meetings – if necessary, increase the frequency of these meetings during the evaluation period for timely feedback to management and staff
  • Implement internal audits and quality indicators to check for potential issues
  • Assess the specimen transport conditions to ensure they meet test requirements
  • Evaluate and monitor your turnaround time metrics to track problem specimens and impact of testing delays
  • If stability issues are identified, increase the frequency of QC or calibration as necessary

Performing a thorough root cause analysis for any failures will allow you to implement appropriate corrective actions that will address the true issues. Having a robust quality management program will help ensure these issues are identified and corrected in a timely manner, and reduce the potential for the dreaded Cessation of Patient Testing letter from your local DOH.

Coming up in the final installment of this series on PT testing, we’ll review all of the quality indicators and data that can be found in your PT evaluation reports to help ensure you’re on track for accurate patient values.

-Kyle Nevins, MS, MLS(ASCP)CM is one of ASCP’s 2018 Top 5 in the 40 Under Forty recognition program. She has worked in the medical laboratory profession for over 18 years. In her current position, she transitions between performing laboratory audits across the entire Northwell Health System on Long Island, NY, consulting for at-risk laboratories outside of Northwell Health, bringing laboratories up to regulatory standards, and acting as supervisor and mentor in labs with management gaps.

Working with Generation Y: How Other Generations Can Adapt

Generation Y is coming and they are coming in strong! It is fast becoming the world’s largest working generation and their impact on the workforce will become even clearer in the next few years. These digital natives find communication natural in whatever shape or form it comes. They prefer texting and instant messaging, but also appreciate face-to-face meetings and hand-written notes. They use social media for both personal and professional purposes and consider it essential to know how and where to access information. Instant gratification has become one of this generation’s key values, because they grew up with the world’s information at their fingertips. They value professional development and feedback, and they are at work to learn and grow.

When working with a Millennial, the first step is to show them that you respect them and what they bring to the table. This generation has received more negative attention than other generations, but they have a tremendous amount to offer to the workplace (as do all the other generations). They value collaboration and learning opportunities, so they are typically quick to adjust when given constructive feedback. Because of their collaborative approach, they value inclusion and use social media to bring people together. They are well versed in finding information and can typically solve smaller technological issues without any help.

This generation is focused on having their work mean something, on having a purpose that is larger than simply getting a paycheck. They dislike long emails and voicemails and anything that is a waste of paper. They appreciate flexibility and sending documents electronically. They experienced high academic pressure, so they are comfortable working in a fast-paced environment. They are comfortable multitasking and handling multiple projects simultaneously.

Millennials who work in larger organizations are on the brink of entering leadership positions, though many self-starters have had to learn leadership skills along the way. Because this generation values collaboration, its leaders tend to encourage group work and acknowledge people for trying. They dislike cynicism and people who are afraid of or unwilling to learn new technology, as they are a generally very positive generation.

When working with Millennials, note that they respond well to a participative work environment, so ask for their input and suggestions. Be open about processes and systems, and share information freely. Provide them with plenty of feedback to help them learn and grow. Millennials respond well to a faster-paced work environment, so do not try to slow them down. They dislike formality and stiffness, so allow flexibility whenever possible. For example, invite them to provide input for their own goals and do not hover over them. Give them multiple things to work on simultaneously so that they can move from project to project as their energy shifts. This generation is crucial to bringing your organization to the next level, so mentor them and help them grow and develop, and you will get their dedication, passion, collaboration, and positivity in return.

-Lotte Mulder earned her Master’s of Education from the Harvard Graduate School of Education in 2013, where she focused on Leadership and Group Development. She’s currently working toward a PhD in Organizational Leadership. At ASCP, Lotte designs and facilitates the ASCP Leadership Institute, an online leadership certificate program. She has also built ASCP’s first patient ambassador program, called Patient Champions, which leverages patient stories as they relate to the value of the lab.


What’s the purpose? That’s the question that most Gen Ys, more commonly known as Millennials, ask of their job. Why am I here? Can I make a difference in the world if I keep doing what I am doing?

The Baby Boomers worked because they felt an obligation to put in a hard day’s work whether they liked doing what they were doing or not. It was a job. The Generation Xers introduced a focus on work-life balance, which was not the case for the Baby Boomer. The Boomers never heard of the concept of “work-life balance” until their children, the Gen Xers, made it a job requirement and reality.

As for the Millennials, they need to really believe in their job and what they are doing. Millennials ask questions that the Boomers and Gen Xers wouldn’t think of asking. This is often misinterpreted as being lazy or looking for the easy way out. This is not the case. The Millennials took the best of their predecessors. Most Millennials have a good work ethic and they definitely look for balance. However, they’re also searching for a purpose.

My favorite story of a Millennial is centered on the importance of taking lunch at work. This topic surfaced from a Roundtable Discussion with laboratory professionals in October 2018, at the ASCP Annual Meeting in Baltimore. The actual topic for this Roundtable Discussion was “diversity.” However, that quickly changed when the nine people at the Roundtable focused on generational differences. This roundtable was rich in generational diversity: the table was composed of Boomers, Gen Xers, and Millennials. Boomers stated that they found it both necessary and easy to work through lunch. Why? Because they pride themselves on their incredible work ethic. The Boomers praised themselves for being better than “most Millennials,” who often don’t and won’t work through lunch. Instead of that mindset, perhaps the better approach would be to ask, “What can we learn from Millennials in the workplace?” That answer is “purpose and balance.”

-Catherine Stakenas, MA, is the Senior Director of Organizational Leadership and Development and Performance Management at ASCP. She is certified in the use and interpretation of 28 self-assessment instruments and has designed and taught masters and doctoral level students.