Abstract
BACKGROUND: An estimated 10% of Americans experience a diagnostic error annually, yet little is known about pediatric diagnostic errors. Physician reporting is a promising method for identifying diagnostic errors. However, our pediatric hospital medicine (PHM) division had only 1 diagnostic-related safety report in the preceding 4 years. We aimed to improve attending physician reporting of suspected diagnostic errors from 0 to 2 per 100 PHM patient admissions within 6 months.
METHODS: Our improvement team used the Model for Improvement, targeting the PHM service. To promote a safe reporting culture, we used the term diagnostic learning opportunity (DLO) rather than diagnostic error, defined as a “potential opportunity to make a better or more timely diagnosis.” We developed an electronic reporting form and encouraged its use through reminders, scheduled reflection time, and monthly progress reports. The outcome measure, the number of DLO reports per 100 patient admissions, was tracked on an annotated control chart to assess the effect of our interventions over time. We evaluated DLOs using a formal 2-reviewer process.
RESULTS: Over the course of 13 weeks, the number of reports filed increased from 0 to 1.6 per 100 patient admissions, meeting criteria for special cause variation, and this increase was subsequently sustained. On formal review, most reported events (66%) were true diagnostic errors, and these errors were typically multifactorial.
CONCLUSIONS: We used quality improvement methodology, focusing on psychological safety, to increase physician reporting of DLOs. This growing data set has generated nuanced learnings that will guide future improvement work.
- DEER — diagnostic error evaluation and research
- DLO — diagnostic learning opportunity
- PHM — pediatric hospital medicine
- QI — quality improvement
Diagnostic errors are a national priority in patient safety research, yet relatively little is known about diagnostic errors in the pediatric population.1–4 In 2015, the National Academies of Sciences, Engineering, and Medicine released a report highlighting the gaps in measuring, evaluating, and preventing diagnostic errors and their associated harm.5 In previous studies of predominantly adult patient populations, researchers estimate the diagnostic error rate is 5% to 15% annually.6–8 However, few large studies of diagnostic errors are focused on pediatric patients. In 1 retrospective chart review study of pediatric admissions at a community hospital, the authors found a diagnostic error rate of ∼5%.9 As for specific diagnoses associated with errors in pediatrics, a multisite survey of academic and community pediatricians identified misdiagnosis of viral illnesses as bacterial infections, medication side effects, and psychiatric disorders as the most frequent diagnostic errors in pediatrics.10 Additionally, in a project used to assess missed opportunities for diagnosis in ambulatory pediatric settings, the authors found low screening rates for adolescent depression and inadequate follow-up for elevated blood pressures and abnormal laboratory values.11
Physician reporting was highlighted in the National Academies of Sciences, Engineering, and Medicine report as a promising strategy for identifying and learning from diagnostic errors.5 However, incident reporting systems are known to be underused, with previous studies revealing that as few as 2% of reports are filed by physicians.12 Physician engagement improves when reportable events are clearly defined and the process of reporting is simple, nonpunitive, systems oriented, and confidential.13 Although physician reporting does not capture all errors or adverse events, diagnostic errors are difficult and time intensive to capture by using chart review alone.14 Two previous studies in adult patient populations revealed that physician reporting promotes nuanced learning about diagnostic errors and results in identification of opportunities for systems improvements.15,16 However, less is known about which factors effectively engage busy physicians in reporting potential diagnostic events, particularly in pediatric settings.
A review of safety event reports for the pediatric hospital medicine (PHM) division at our institution revealed only 1 diagnostic error-related report in the previous 4 years. Therefore, we chartered a quality improvement (QI) team to increase physician reporting of suspected diagnostic errors with the global aim of increasing our understanding, at a division and systems level, of the patterns and drivers of diagnostic errors in our patients. Our specific aim was to increase attending physician reporting of suspected diagnostic errors from 0 to 2 per 100 patient admissions for PHM patients at our main campus over the course of 6 months.
Methods
Context
Cincinnati Children’s Hospital Medical Center is a quaternary care hospital with 2 inpatient clinical sites and ∼8000 annual admissions to the hospital medicine service. The hospital medicine service includes 8 clinical teams across 2 sites and consists of general pediatrics, complex care, and surgical comanagement services. For the purpose of this study, we targeted our interventions on the 4 general pediatrics teams and 1 complex care team at the main hospital campus. We did not restrict reporting from our other hospital medicine clinical service lines over the course of the study, but events reported outside of our target population were not included in our analysis.
With this study, we aimed to improve the quality of care locally; therefore, it was not subject to review by the Cincinnati Children’s Hospital Medical Center Institutional Review Board.
Planning the Interventions
Our multidisciplinary QI team included 4 hospital medicine physicians, 2 pediatric residents, 2 nurses, a clinical research coordinator, and a parent. Although our objective was to increase reporting by attending physicians, we felt there was significant benefit to including stakeholders with diverse perspectives on the diagnostic process. Additionally, our improvement team met regularly with hospital medicine division leadership and our institution’s chief patient safety officer, who helped ensure that safety concerns were directed to the proper event reporting system and that efforts were aligned with the organizational vision of reducing harm.
Our improvement team used knowledge of our system, informal discussion among hospital medicine providers, and review of the literature on diagnostic errors and event reporting to generate a key driver diagram (Fig 1).
Key driver diagram that describes the drivers of increased physician reporting of suspected diagnostic errors.
Improvement Activities
Our improvement activities, organized around our key drivers, are described below. Each intervention was tested and implemented on all target clinical teams unless otherwise noted. Details of our interventions are provided in Table 1.
Description and Timing of Interventions
Operational Definition
Diagnostic errors are a broad and diverse group of medical errors that have multiple definitions in the literature and can be difficult to distinguish from the natural progression of disease. We sought to use terminology and a definition that aligned with the long-term goals of this project, which are to learn from suspected diagnostic errors, identify opportunities to improve the diagnostic process, and reduce patient harm. We opted to use the term diagnostic learning opportunity (DLO) rather than suspected diagnostic error and, after eliciting provider feedback, ultimately adopted the following definition: “During the patient’s current illness, either prior to or during admission, there was a missed or potential opportunity to make a better or more timely diagnosis.” This definition was adapted from the conceptualization of diagnostic errors as missed opportunities proposed by Singh et al.17,18 We tested and refined this definition via in-person discussions with ∼10 on-service providers over a 2-week period to ensure the definition was easy to understand and apply to individual patients. We then presented the DLO definition at a division-wide meeting to gather more widespread feedback. The phrase “potential opportunity” was used to absolve individual providers of the responsibility to adjudicate what constitutes a diagnostic error. Additionally, we opted to include any suspected diagnostic errors observed, including those that occurred before admission, to maximize our ability to capture and learn from diagnostic errors within our patient population.
Standardized Reporting Process
When assessing our institutional safety event reporting system (RL Solutions, Toronto, ON), we noted that diagnosis-related events were not a discrete category but were combined with nutrition- and therapy-related errors. Furthermore, the system provides limited guidance on what constitutes a reportable diagnostic event. All safety event reports are available for review by division leadership for follow-up, which may deter reporting from providers who view diagnostic errors as individual cognitive failures. We sought to create a simple electronic reporting tool that would yield adequate information for event review and would maintain the confidentiality of reported events within our improvement team. We drafted a paper data collection form to test desired data fields and refined it on the basis of provider feedback. Ultimately, we created an electronic reporting form using Research Electronic Data Capture (REDCap), a secure Web application for building and managing surveys and databases, which was available through an institutional Clinical and Translational Science Award Program (Supplemental Information).19 We later created a dedicated internal Web page that directs providers to the reporting form.
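The fields of the reporting form itself are provided in the Supplemental Information. For illustration only, a minimal REDCap-style data dictionary for a DLO report might resemble the sketch below; the field names, labels, and response options are hypothetical and are not the actual instrument.

```python
# Hypothetical sketch of a DLO report form structure, mirroring a REDCap-style
# data dictionary. Field names and choices are illustrative only; the actual
# instrument is provided in the Supplemental Information.
DLO_FORM_FIELDS = [
    {"field_name": "patient_mrn", "field_type": "text",
     "field_label": "Patient medical record number (kept confidential to the project team)"},
    {"field_name": "event_date", "field_type": "text", "validation": "date_mdy",
     "field_label": "Date the DLO was recognized"},
    {"field_name": "event_setting", "field_type": "radio",
     "field_label": "Where did the potential opportunity occur?",
     "choices": "1, Prior to admission | 2, During admission"},
    {"field_name": "event_description", "field_type": "notes",
     "field_label": "Briefly describe the missed or potential opportunity "
                    "to make a better or more timely diagnosis"},
    {"field_name": "reporter_role", "field_type": "radio",
     "field_label": "Reporter role",
     "choices": "1, Attending | 2, Fellow | 3, Resident | 4, Other"},
]
```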
Physician Awareness and Engagement
We used multiple methods for reminding providers and engaging them in the reporting process. Promotional posters were placed in the physician workroom as a visual reminder. E-mail reminders that included the definition of a DLO, project goals and updates, and a hyperlink to the reporting form were sent to providers both before and immediately after their service time. Clinical service time can be busy, and we observed that an e-mail to providers the week after they were on service (when they had fewer competing priorities) encouraged reporting. We later scheduled twice-weekly diagnostic reflection time on the calendars of on-service physicians, with the goal of encouraging identification of DLOs throughout the week and as another opportunity to access the hyperlink to the reporting form.
Psychological Safety
Psychological safety is the belief that the team environment is safe for interpersonal risk-taking and can be defined as “being able to show and employ self without fear of negative consequences of self-image, status, or career.”20 Diagnostic errors are sometimes thought of as individual cognitive errors despite a growing body of evidence that, like other types of medical errors, they are usually multifactorial and the result of a combination of environmental, system, and cognitive factors.21–23
Early in the course of this study, our team identified the importance of cultivating a safe reporting culture. As previously stated, we reframed suspected diagnostic errors as learning opportunities to emphasize the goals of this project and to avoid language that could be perceived as threatening. Additionally, when we introduced the terminology and definition of a DLO at our division-wide meeting, we highlighted several stories from our division leaders who described their own experience making a diagnostic error and what they learned from that event.
We also sought to normalize the discussion of DLOs among our providers. As part of engagement efforts, we had a project team member visit the attending physician workroom weekly throughout the initial study period to address any questions and to prompt discussion of potential DLOs. Additionally, the project team leader provided monthly updates at our hospital medicine division meetings on DLO reporting trends, questions and concerns related to confidentiality of the reporting system, and the development of the DLO evaluation process. Division meetings were also an opportunity to share diagnostic learnings such as presenting symptoms and diagnoses captured through our new reporting system, and more recently, a case presentation and discussion. In addition to increasing project awareness, these interventions had the added benefit of generating regular conversation about diagnostic events within our division.
Systematic Evaluation and Feedback Process
Although our institution has a robust evaluation process for safety events, the relative paucity of diagnosis-related safety reporting meant there was little precedent for evaluating diagnostic errors. Creating a systematic approach to evaluate these reports was key to the global aim of our project and also reassured stakeholders that reported events were not used to evaluate individual performance but rather to generate diagnosis- and systems-focused learning for our division.
From a search of the literature, we identified several previously used and validated tools to evaluate diagnostic errors. There were no established tools specific to pediatrics, but we ultimately chose the Safer Dx Instrument and the diagnostic error evaluation and research (DEER) taxonomy.24 The Safer Dx Instrument is a 12-item Likert-scale survey that screens for the presence of a diagnostic error. The DEER taxonomy breaks the diagnostic process down into several broad steps (access to care, history, physical examination, laboratory and radiologic testing, consults or referrals, assessment, and follow-up) and is used to classify at which point in the diagnostic process errors occurred. We had 4 reviewers test both the Safer Dx Instrument and the DEER taxonomy on 2 DLO reports to assess feasibility and identify significant discrepancies in the application of the tools. From this discussion, we generated a review packet with instructions as well as annotated versions of the Safer Dx Instrument and DEER taxonomy to help standardize the application of the tool among reviewers (Supplemental Information). This review packet was revised over time as our team gained experience with reviewing these events.
We created a structured approach to evaluation with 2 independent reviewers assigned to each DLO report. The reviewers first used the Safer Dx Instrument and, if a report was identified as a diagnostic error, reviewers used the DEER taxonomy to localize the error(s) within the diagnostic process. The review team included hospital medicine physicians, pediatric residents, and a PHM nurse practitioner. The 2 reviewers independently completed the evaluation process and then met to discuss their results and adjudicate any disagreements. When disagreements about the presence of a diagnostic error could not be adjudicated via discussion, the DLO report was assigned to a third reviewer to make the final determination.
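As a concrete illustration of this workflow, the sketch below encodes only the adjudication logic (agreement, then discussion, then third reviewer); the Safer Dx screening and DEER classification themselves are human judgments, and all class and function names in the code are hypothetical rather than part of our review process.

```python
# Minimal sketch of the 2-reviewer adjudication flow, assuming each review is
# recorded as a simple data object. All names are hypothetical.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Review:
    is_diagnostic_error: bool  # conclusion from the Safer Dx Instrument
    deer_categories: List[str] = field(default_factory=list)  # DEER steps implicated


def adjudicate(reviewer_a: Review, reviewer_b: Review,
               consensus_after_discussion: Optional[Review] = None,
               third_reviewer: Optional[Review] = None) -> Review:
    """Return the final determination for a single DLO report."""
    if reviewer_a.is_diagnostic_error == reviewer_b.is_diagnostic_error:
        # Independent reviews agree on whether a diagnostic error occurred;
        # implicated DEER categories are pooled (union shown for illustration).
        merged = sorted(set(reviewer_a.deer_categories) | set(reviewer_b.deer_categories))
        return Review(reviewer_a.is_diagnostic_error, merged)
    if consensus_after_discussion is not None:
        # Disagreement resolved when the 2 reviewers met to discuss results.
        return consensus_after_discussion
    # Otherwise, a third reviewer makes the final determination.
    if third_reviewer is None:
        raise ValueError("Unresolved disagreement requires a third reviewer")
    return third_reviewer
```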
Study of the Interventions
Outcome Measure
Our primary outcome was the number of DLO reports filed each week. As noted above, DLOs were defined as follows: “During the patient’s current illness, either prior to or during admission, there was a missed or potential opportunity to make a better or more timely diagnosis.” The number of DLOs was tracked weekly to allow for real-time feedback on the effect of our interventions. If >1 report was filed for the same patient, the duplicate report was not included in our outcomes data. Given the seasonal variability in pediatric hospital admissions, we normalized the weekly count, yielding a primary measure of the number of DLO reports filed per 100 patient admissions. Weekly admissions data for our target population were obtained from a structured query of electronic health record data, which was verified against administrative billing data to ensure the validity and accuracy of our data source.
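For clarity, the brief sketch below illustrates how such a weekly measure could be computed from a list of filed reports and admission counts; it is an illustrative example with assumed variable names and input format, not the analysis code used in this study.

```python
# Illustrative sketch (not the actual analysis code) of the outcome measure:
# duplicate reports for the same patient are dropped, and the weekly count is
# normalized to reports per 100 admissions.
def weekly_dlo_rate(reports, weekly_admissions):
    """
    reports: list of (week, patient_id) tuples, one per filed DLO report
    weekly_admissions: dict mapping week -> number of PHM admissions that week
    returns: dict mapping week -> DLO reports per 100 patient admissions
    """
    unique_reports = set(reports)  # drops duplicate reports for the same patient
    rates = {}
    for week, n_admissions in weekly_admissions.items():
        n_reports = sum(1 for w, _ in unique_reports if w == week)
        rates[week] = 100 * n_reports / n_admissions
    return rates


# Example: 3 reports (1 duplicate) in a week with 150 admissions -> ~1.3 per 100
print(weekly_dlo_rate([("week 1", "patient A"), ("week 1", "patient A"), ("week 1", "patient B")],
                      {"week 1": 150}))
```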
Secondary Outcomes
To assess the accuracy of this process for identifying true diagnostic errors, we also calculated the proportion of reported cases that contained a diagnostic error as judged by completion of the Safer Dx Instrument. Additionally, we used the DEER taxonomy to identify frequent failure points within the diagnostic process.
Analysis
Our primary outcome measure, that is, the number of DLO reports filed weekly per 100 patient admissions, was evaluated by using an annotated statistical process control u-chart, and established rules were employed to detect special cause variation.25,26 Our secondary outcomes were assessed as reported events were reviewed.
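The u-chart arithmetic underlying this analysis is standard: the center line is the total number of reports divided by the total number of admission units (in hundreds of admissions), and each week’s control limits widen or narrow with that week’s admission volume. The sketch below shows this generic calculation only; it is not the charting software we used, and the special cause rules in the cited references would be applied on top of these limits.

```python
# Generic u-chart center line and 3-sigma limits for counts per 100 admissions.
# This is standard statistical process control arithmetic, shown for illustration.
import math


def u_chart(weekly_counts, weekly_admissions):
    """
    weekly_counts: number of DLO reports filed each week
    weekly_admissions: number of PHM admissions each week
    returns: (center line, list of (u_i, LCL_i, UCL_i)) in reports per 100 admissions
    """
    subgroup_sizes = [a / 100.0 for a in weekly_admissions]  # size in units of 100 admissions
    u_bar = sum(weekly_counts) / sum(subgroup_sizes)         # center line
    points = []
    for c, n_i in zip(weekly_counts, subgroup_sizes):
        sigma = math.sqrt(u_bar / n_i)                       # within-subgroup standard error
        points.append((c / n_i, max(0.0, u_bar - 3 * sigma), u_bar + 3 * sigma))
    return u_bar, points


# Toy example: 4 weeks with varying admission volumes
center, points = u_chart([0, 2, 3, 1], [140, 155, 160, 150])
print(round(center, 2))
```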
Results
Over the duration of the project, we saw a significant increase in DLOs submitted by attending pediatric hospitalists. Over the course of the first 13 weeks, criteria for special cause variation were met, and the mean number of reports filed increased from 0 to 1.6 DLO reports per 100 patient admissions (Fig 2). Weekly reporting rates varied, ranging from 0 to 6 reports filed in a given week, but this variation did not exceed statistical process control limits. The improved reporting rate was sustained over an additional 18 weeks.
DLO reporting statistical process control u-chart in which the primary outcome of number of DLO reports submitted per 100 patient admissions is displayed weekly with annotation of improvement activities. Along the x-axis, the weekly number of patient admissions (n) is also included. HM, hospital medicine.
In all, 70 DLO reports have been filed, representing 67 unique events. Formal review has been completed for all 67 events, and 44 (66%) were found to be true diagnostic errors on the basis of the results of the Safer Dx Instrument. An additional 14 events were submitted from our ancillary clinical site and our surgical comanagement teams and were not included in this analysis.
The DEER taxonomy, which was used to localize errors within the diagnostic process, revealed that the vast majority of identified diagnostic errors were multifactorial, with an average of 4 diagnostic process errors per event. We also observed that errors frequently occurred across multiple clinical contexts and episodes of care. Errors in the assessment step of the diagnostic process were most frequent, but errors also occurred frequently in the data-gathering steps of the diagnostic process, including the history and physical examination (Table 2). These events involved a wide range of diagnoses, including both common and rare diseases (Table 3). Additionally, events that did not meet the threshold for diagnostic error often contained excellent learning points, including important diagnostic considerations and systems factors that can complicate the diagnostic process. For example, an incorrect laboratory test was ordered because of the presence of multiple similarly named tests in our system that have some overlap in diagnostic utility. Although this error did not affect the final diagnosis or treatment in the reported case, we recognize that it could have negative consequences for future patients and is analogous to a diagnostic near-miss event.
DLO Reports Determined To Be Diagnostic Errors
Final Diagnoses Associated With Identified Diagnostic Errors
Discussion
By adopting a clear operational definition for DLOs, building a simple, confidential electronic reporting process, and employing consistent efforts to promote psychological safety and encourage reporting, we observed a significant and sustained increase in reporting of suspected diagnostic errors over the course of 7 months. Furthermore, the majority of reported events (66%) were true diagnostic errors, highlighting that physician reporting is an effective method of identifying this type of error.
There was wide variation in weekly reporting rates, which we theorize was driven by 2 distinct factors: the variable number of reportable events each week and the engagement of individual providers, who are generally on service for 1 week at a time. Given our early success in increasing physician reporting rates, in addition to this weekly variation, it is challenging to clearly associate our sustained increase in reporting levels with specific interventions. Testing the operational definition of DLOs, although useful in the initial fine-tuning of our project, is unlikely to explain the sustained change in our system. Our quantitative data revealing some weeks with no DLOs and our qualitative data (eg, physician colleagues sharing barriers to reporting such as competing priorities while on service or not knowing how to access the reporting form) together suggest that both a clear operational definition and interventions targeting physician awareness and engagement are needed for sustained change.
In a survey study, researchers found that most pediatricians can readily recall a recent diagnostic error.10 However, within our institution, these events were not being captured by our traditional safety event reporting system. We targeted several known barriers to clinician reporting of safety events, including establishing a clear definition for reportable diagnostic events, creating a simple reporting form, being transparent about who could access these reports and how they were being evaluated, and providing group feedback to clinicians on diagnostic learnings at division meetings. In addition to addressing known barriers to physician incident reporting, cultivating psychological safety was perceived to be critical to the success of this project. To promote psychological safety, our improvement team put particular thought into the framing of the language used, that of opportunity rather than error, and provided consistent messaging that our goal was to learn from these events at a division and systems level. Having a project team member readily available to provide guidance early in the course of this work helped make discussion of DLOs a routine part of clinician practice. The monthly updates at our regular division meeting have become more interactive and focused on diagnostic learnings. Most recently, we presented a reported DLO case and had an engaging discussion of the opportunities for improvement throughout the case, which included both cognitive biases and systems limitations. We hope that continued efforts to engage providers in the learning process will reinforce the value of reporting DLOs. Although measuring psychological safety and cultural change is challenging, the 14 DLOs reported from outside our targeted hospital medicine teams, without any dedicated interventions, suggest that this process was well accepted among our providers.
A central challenge of this work is the lack of a gold standard to determine the true number of diagnostic errors that occurred over the study period. We chose, and nearly met, a goal of 2 events per 100 patient admissions on the basis of the lower end of estimates of diagnostic error incidence in adults as well as a single retrospective chart review of pediatric admissions with an error rate of 5%. Our aim was to capture approximately half of this number, acknowledging that we would not detect all events using physician reporting alone. Although we have demonstrated that physician reporting is an effective method for identifying diagnostic errors, it likely needs to be paired with other detection methods to fully capture, and hence learn from, diagnostic errors.
Electronically triggered chart review, conducted by using methods such as the Institute for Healthcare Improvement’s Global Trigger Tool, is a well-established mechanism for detecting medical errors and adverse events.27 Although electronic triggers for diagnostic error have been tested in both adult and pediatric populations, high-yield triggers are challenging to identify because diagnostic errors are rarely discrete events like medication errors in which the administration of naloxone in a hospitalized patient triggers further review for an opioid overdose.14,28 In a cross-sectional study in a PICU, researchers used chart review and 4 high-risk scenarios or triggers to identify 214 potential diagnostic errors.28 After additional evaluation conducted by using the Safer Dx Instrument, 24 diagnostic errors were confirmed. These errors were not identified through safety reports or other means, again highlighting the need to use multiple detection methods for diagnostic errors.
Limitations
Our study has several important limitations that may affect the generalizability of our interventions and results. Our institution and hospital medicine division have a history of successful improvement projects targeting many different aspects of clinical care, including increasing recognition and discussion of diagnostic uncertainty. This previous diagnostic-related work may have facilitated the acceptance of discussing and reporting diagnostic errors among our providers.
Because our project targeted 1 provider group within a single division of a large institution, we elected to make a separate reporting system for these events rather than using the established system. This gave our team control over the reporting process, increasing our ability to test frequent small changes and adapt the process as needed. Although this approach was beneficial in the early stages of this work, it is less ideal for long-term sustainability and spread. Additionally, it is important that suspected diagnostic events that result in significant patient harm, including disability and mortality, be reviewed at the institutional level. Although these events were not previously being captured by our institution’s reporting system, we ensured that appropriate people within the institution were aware of them. Ultimately, our goal will be to reintegrate DLO reporting into the institutional system in a way that leverages what we have learned about engaging physicians in this type of event reporting and allows best practices for analyzing and learning from these events at the systems level.
Next Steps
We continue to track DLO reporting rates over time so that we can readily identify changes in reporting trends that may require intervention. Although we have made significant progress in recognizing and evaluating suspected diagnostic errors, we are continuing to work on how we share the learnings generated through this reporting system with providers. Ultimately, our goal is to translate these learnings into systems changes that decrease the incidence of diagnostic errors and their associated harm.
Conclusions
Using QI methodology, we were able to successfully increase physician reporting of suspected diagnostic errors or DLOs. Evaluation of these events, although time intensive, is feasible and has shown that physician reporting is an effective means of both identifying diagnostic errors and generating early learnings about diagnostic errors in the PHM population.
Footnotes
- Accepted June 15, 2020.
- Address correspondence to Trisha Marshall, MD, Division of Hospital Medicine, Cincinnati Children’s Hospital Medical Center, 3333 Burnet Ave, MLC 9016, Cincinnati, OH 45229. E-mail: trisha.marshall@cchmc.org
FINANCIAL DISCLOSURE: Dr Brady is supported by the Agency for Healthcare Research and Quality under award K08HS23827. The content is solely the responsibility of the authors and does not necessarily represent the official views of the Agency for Healthcare Research and Quality. The project described was supported by the National Center for Advancing Translational Sciences of the National Institutes of Health under award 5UL1TR001425-04. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The authors have indicated they have no other financial relationships relevant to this article to disclose.
FUNDING: No external funding.
POTENTIAL CONFLICT OF INTEREST: The authors have indicated they have no potential conflicts of interest to disclose.
References
- Copyright © 2021 by the American Academy of Pediatrics