In 1986, my 18-year-old sister was in a rollover car accident near Breckenridge, Colorado. She was airlifted to a trauma center in Denver with a closed head injury and a severely fractured left femur. After three days on a ventilator in a medically induced coma, she awoke with no permanent brain injury. Our frightened family was overjoyed. She remained in traction in the ICU to stabilize her femur until we could get her to surgery the following day.
As both the “nurse in the family” and a then-practicing critical care nurse, I was permitted to stay with her around the clock in the ICU, which in those less patient- and family-centric times was the exception rather than the rule it is today. On day four I noted that her non-injured right leg was cold and mottled, and that the pedal pulse was nearly undetectable. I immediately informed the nurse, who then notified one of the many residents on her case. Within an hour they ordered a chest film and a venogram to check for a thrombosis. Several hours later the resident popped in with the “good news” that there was no blood clot. Forty-eight hours later my sister died of a massive pulmonary embolus.
After several months of supporting my grieving parents, who were convinced that “something went wrong,” I finally requested copies of her medical record from the hospital. After laying out the medical record across my living room floor and reading every scrap of information, I finally saw one of the things that went wrong and that led to the chain of events resulting in her death. The venogram performed two days before she died was done on the wrong leg. It was done on the injured leg, rather than on the non-injured leg that I had reported and shown to the nurse.
Years later, as I navigated my professional career towards quality and patient safety, I couldn’t understand why more emphasis wasn’t being placed on diagnostic error detection and reduction. While we had traditional peer review processes and a mortality and morbidity committee in place to discuss these sensitive and complex issues, I suspected the phenomenon was far more prevalent than what was being reported at my hospital. I was hopeful that detection, reporting and process improvement of diagnostic errors would dramatically improve with the broader adoption of the EMR.
Thirty-three years after my sister’s death, and nearly ten years after the Meaningful Use initiative to implement EMRs across the US, I’m grateful that we have new standards of care to reduce and prevent thrombosis in hospitalized patients. However, I still don’t see measurable improvements in diagnostic error. In fact, I have yet to see a single hospital where this is even an organizational priority for improvement. Yet the problem is significant.
The Prevalence of Diagnostic Error
According to most research, estimates of the diagnostic error rate range from 10% to 20%. However, a landmark treatise on the subject, Improving Diagnosis in Health Care, published by the National Academies of Sciences, Engineering, and Medicine in 2015, revealed that prevalence statistics vary significantly depending on the method used to collect the information:
- Autopsy results, which are considered to be the gold standard for diagnostic error detection, show that diagnostic errors contribute to approximately 10% of patient deaths.
- Medical records review suggests diagnostic errors account for 6-17% of hospital adverse events.
- Formal secondary reviews conducted under controlled conditions on complex cases in specialty areas of medicine show that the prevalence of diagnostic error may reach as high as 50%.
- While difficult to quantify in ambulatory settings, a conservative estimate found that at least 5% of patients receiving outpatient care experience a diagnostic error at least once a year.
Despite the wide range of statistics above, one thing that nearly all researchers agree on is that the phenomenon is vastly under-detected, under-reported and poorly understood.
Detecting Diagnostic Error
Given the prevalence of diagnostic error, it’s surprising that the issue has not yet taken center stage in one or more of the many national quality and patient safety reporting initiatives. There are a number of reasons for this. One major obstacle is the lack of effective measurement and the difficulty of detecting these errors in clinical practice. For instance, in 2012 the Office of Inspector General conducted a study of 785 hospitalized Medicare beneficiaries using five different approaches and identified a 13% incidence of serious adverse events. However, not a single episode was categorized as diagnostic error.
As a former corporate quality professional, I know firsthand that detecting diagnostic error is often like looking for a needle in a haystack. Years ago, when I was asked to quantify the number of quality management reviews conducted during the year in our 325-bed acute care hospital, I found an astonishing total of 3,278. Here are some other significant findings:
- Of the 3,278 reviews, 423 (13%) were referred to medical peer review.
- We categorized these second-level reviews into those that resulted in documented opportunities for improvement (OFIs). Of the 423 cases sent to peer review, only 58 resulted in formal OFIs.
- Of the 423 cases, only 27 resulted in actionable process improvement projects or formal corrective action plans to help individual providers improve their practice. The remaining OFIs were tucked away in dormant reports and charts for the quality and utilization committee, never to be acted on in any meaningful way.
What is noteworthy is that of the 27 cases resulting in corrective actions that year, only five of these reviews appeared to be directly related to delayed or incorrect diagnosis with resulting inappropriate treatment. The other 22 were attributed to technical and surgical care, accompanied by two incidents of professional misconduct. Could it be that our diagnostic error rate was really less than 1%?
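The drop-off in this review "funnel" is simple arithmetic. A short Python sketch makes it explicit; the counts are taken from the findings above, and the helper function is purely illustrative:

```python
# Counts from the hospital review "funnel" described in the text.
total_reviews = 3278        # quality management reviews that year
peer_reviews = 423          # referred to medical peer review
formal_ofis = 58            # resulted in documented OFIs
corrective_actions = 27     # actionable improvement projects
diagnostic_related = 5      # tied to delayed or incorrect diagnosis

def pct(part, whole):
    """Percentage of `part` relative to `whole`, rounded to one decimal."""
    return round(100 * part / whole, 1)

print(pct(peer_reviews, total_reviews))        # ~12.9% sent to peer review
print(pct(formal_ofis, peer_reviews))          # ~13.7% of those yielded OFIs
print(pct(diagnostic_related, total_reviews))  # ~0.2% apparent diagnostic error rate
```

At roughly 0.2% of all reviews, the apparent diagnostic error rate sits far below the 10-20% prevalence reported in the literature, which is exactly the gap the next paragraph addresses.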
Looking back, I now believe that this low incidence of diagnostic error had more to do with failure to detect and under-reporting than with actual performance. This may have been because we were not looking for it, or perhaps we were not confronting it in our peer review process due to cultural and social norms in our organization. Regardless of the reasons, I believe we had an organizational challenge that is sadly unrecognized and shared by many other hospitals and healthcare systems across the US and even globally.
Most organizations today deploy one or more methods for detection and reporting of serious adverse events. Some are automated in nature, such as the AHRQ Patient Safety Indicators or the Utah/Missouri Adverse Event Classification. Others, such as the Global Trigger Tool (GTT), developed by the Institute for Healthcare Improvement and supported by Medisolv with our GTT software product, require first- and second-level chart reviews by nursing, pharmacy and medical professionals (see Box 1). While the GTT has extremely high levels of sensitivity and specificity for detecting adverse events, it was not designed to detect and evaluate medical error.
A medical error must be distinguished from an adverse event, which is “an injury caused by medical management rather than by the underlying disease or condition of the patient.” An adverse event results in harm to the patient. In contrast, a medical error is “the failure to complete a planned action as intended or the use of a wrong plan to achieve the aim” (definition by the Institute of Medicine in 2000). Not all medical errors lead to adverse events. So, while adverse event surveillance and reporting remains important, it may uncover only a small fraction of medical error that is occurring in our hospitals and health care systems.
Box 1. The Global Trigger Tool powered by Medisolv, Inc.
Medisolv’s Global Trigger Tool (GTT) software randomly samples 20 cases a month for chart review and graphically displays harm events over time to help organizations understand their culture of patient safety reporting and spot trends in providers or departments that may require performance improvement.
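The monthly sampling step in Box 1 can be sketched in a few lines of Python. This is an illustrative sketch only: the chart IDs, function name and default sample size are hypothetical and are not Medisolv's actual implementation.

```python
import random

def sample_monthly_charts(discharged_chart_ids, n=20, seed=None):
    """Randomly select chart IDs for first-level trigger review.

    Hypothetical helper: draws `n` charts without replacement from the
    month's discharges, or returns them all if fewer than `n` exist.
    """
    rng = random.Random(seed)
    chart_ids = list(discharged_chart_ids)
    if len(chart_ids) <= n:
        return chart_ids
    return rng.sample(chart_ids, n)

# e.g., 500 discharges this month -> 20 charts queued for nurse review
monthly_sample = sample_monthly_charts(range(1, 501), n=20, seed=42)
print(len(monthly_sample))  # 20
```

Random sampling is what lets a small, fixed review burden (20 charts a month) yield an unbiased estimate of harm rates over time, rather than reviewing only the cases that happen to be reported.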
The 2000 Institute of Medicine (IOM) report To Err is Human: Building a Safer Health System distinguished four types of medical error:
- Diagnostic: Error or delay in diagnosis; failure to employ indicated tests; use of outmoded tests or therapy; failure to act on results of monitoring or testing
- Treatment: Error in performance of an operation, procedure, or test; error in administering the treatment; error in the dose or method of using a drug; avoidable delay in treatment or in responding to an abnormal test; inappropriate (not indicated) care
- Preventive: Failure to provide prophylactic treatment; inadequate monitoring or follow-up of treatment
- Other: Failure of communication; equipment failure; other system failure
As reflected in my own experience, traditional quality management and peer review activities, along with adverse event and risk management reporting systems, are more amenable to capturing treatment errors resulting in patient harm, but they do little to shed light on diagnostic error.
The Complexity of Diagnostic Error
So why does diagnostic error seem like a blind spot for most healthcare organizations? The answer, in part, is due to the complexity of error detection and measurement. Reasons for the complexity include:
- No adopted definition of diagnostic error
- Wide spectrum in outcomes resulting from diagnostic error
- The diagnostic process is complex and iterative
Diagnostic Error Definition
Definitionally, diagnostic error is problematic. While there are dozens of conceptual models that define diagnostic error and categorize error types, no single definition has been broadly adopted by US healthcare organizations. Some definitions are very broad, such as the one proposed by Schiff et al in 2009, which defines diagnostic error as: “any mistake or failure in the diagnostic process leading to a misdiagnosis, a missed diagnosis, or a delayed diagnosis”. And while other published definitions may be more granular and specific, nearly all traditional definitions of diagnostic error contain the components of delayed, missed or wrong diagnosis. Yet there is often disagreement among experts in the field about the precise meanings of these three concepts.
Outcomes Resulting from Diagnostic Error
There are multiple quality and patient safety frameworks that can be used to organize the various outcomes resulting from diagnostic error. Outcomes of diagnostic error are typically categorized into aspects of delayed treatment, missed treatment, inappropriate or unnecessary treatment and wrong treatment. Each outcome can be further classified for degree of patient harm (also referred to as significance), ranging from no harm to catastrophic harm, including death. Typical categories for patient harm may include minor and major temporary harm, minor and major permanent harm and death.
In addition to patient harm, diagnostic error can also result in system harm. Even when no-harm and low-harm errors occur, they can contribute to increased cost, wasted resources and decreased efficiency, and may even damage the reputation of the provider organization through dissatisfied and often distraught consumers. It is no surprise that diagnostic errors are the leading type of medical malpractice claim and represent the highest proportion of total malpractice settlements.
The Diagnostic Process
Finally, before embarking on an initiative to begin measuring, monitoring and managing diagnostic error, the complexity of the diagnostic process must also be considered. Nearly all medical treatments ordered and executed in the care of patients are grounded in the diagnostic skill of medical providers, which is largely supported by the accuracy and timeliness of laboratory tests, imaging studies and the correct interpretation of human responses to illness.
The mission to provide the right treatment, to the right patient, at the right time hinges on the successful collaboration and exchange of information among healthcare professionals, including primary care and specialty physicians, radiologists, pathologists, nurses, pharmacists, technologists, therapists, social workers, case managers and many others. Hugely complex health information technology (HIT) systems and organizational workflow processes then tie these healthcare professionals together. When breakdowns occur, finding the root causes of error, whether provider-, process- or technology-based, can be highly complex and time consuming, not to mention expensive.
Root Causes of Diagnostic Error
Root cause analysis of diagnostic error has an interesting history. The science of medicine originally learned from its failures through the post-mortem examination, which was common in the late 19th century and the first half of the 20th. Today the post-mortem exam is rare, with fewer than 1% of all deaths examined by a pathologist to determine the cause of death. Due to economic and cultural constraints, autopsy is no longer a common vehicle for detecting diagnostic error. We now use less invasive means of discovery, including chart review by quality and risk management professionals, peer review by colleagues and, more recently, Lean and Six Sigma quality improvement methodologies, to name a few. All of these approaches remain retrospective in nature and are not easily amenable to proactive monitoring and surveillance.
Once finally discovered, diagnostic error is examined, dissected and classified into categories of fault, such as no-fault errors, system-related errors and cognitive errors. As with the definition of diagnostic error, there is no uniformly adopted set of cause-of-error categories with which to unite hospitals and healthcare systems in large-scale learning. However, the emergence of Patient Safety Organizations and the AHRQ Common Formats for safety reporting promises to bring some degree of uniformity. Standardized error types and root cause categories are useful because they enable shared learning across organizations and guide us toward innovative best practices at an accelerated pace.
The literature currently shows that diagnostic errors made by both physicians and advanced practice nurses stem from a wide variety of causes, including but not limited to (listed in no particular order):
- Inadequate collaboration and communication among care teams
- Limited patient and family engagement in the diagnostic process
- Failures in healthcare system processes to render accurate or timely results
- Errors and omissions in supporting health information technology and equipment
- Cognitive errors and insufficient training in critical and systems thinking
- Lack of meaningful feedback to clinicians about diagnostic performance
- Organizational culture that discourages transparency and disclosure of errors
While limited research has been conducted on the exact prevalence of the above root causes as they relate to diagnostic error, an abundance of research and professional attention has been given to each of these key aspects of care. An awareness and understanding of these root causes is important for quality and safety professionals when designing strategies to detect and report diagnostic error, and it provides important clues about where to position our surveillance, measurement and improvement efforts.
Looking back 33 years, I can’t help but wonder which of the many root causes contributed to my sister’s catastrophic outcome. I often wonder whether the providers who cared for her received any feedback about the breakdown in the diagnostic process. I also wonder how many other patients and families suffered as a result of these types of errors. While I may never know the answers to my questions, today I’m more confident that we are beginning to understand the complexity and the prevalence of this problem. While there are significant challenges in the detection, measurement and reporting of this phenomenon, we are beginning to pull the thread on this complex tapestry. My wish for healthcare organizations, as well as for my many friends and colleagues in active practice today, is that they go into the future with eyes wide open to more effectively address this important aspect of patient safety.
Be sure to stay tuned for a future post where we’ll review strategies to measure, monitor and improve diagnostic error.