Improving Adherence to Otitis Media Guidelines With Clinical Decision Support and Physician Feedback
OBJECTIVE: To assess the effects of electronic health record–based clinical decision support (CDS) and physician performance feedback on adherence to guidelines for acute otitis media (AOM) and otitis media with effusion (OME).
METHODS: We conducted a factorial-design cluster randomized trial with primary care practices (n = 24) as the unit of randomization and visits as the unit of analysis. Between December 2007 and September 2010, data were collected from 139 305 otitis media visits made by 55 779 children aged 2 months to 12 years. When activated, the CDS system provided guideline-based recommendations individualized to the patient’s history and presentation. Monthly physician feedback reported adherence to guideline-based care, changes over time, and comparisons to others in the practice and network.
RESULTS: Comprehensive care (all recommended guidelines were adhered to) was accomplished for 15% of AOM and 5% of OME visits during the baseline period. The increase from baseline to intervention periods in adherence to guidelines was larger for CDS compared with non-CDS visits for comprehensive care, pain treatment, adequate diagnostic evaluation for OME, and amoxicillin as first-line therapy for AOM. Although performance feedback was associated with improved antibiotic prescribing for AOM and pain treatment, the joint effects of CDS and feedback on guideline adherence were not additive. There was marked variation in use of the CDS system, ranging from 5% to 45% of visits across practices.
CONCLUSIONS: Clinical decision support and performance feedback are both effective strategies for improving adherence to otitis media guidelines. However, combining the 2 interventions is no better than either delivered alone.
Key words: otitis media; electronic health record; clinical decision support; physician performance feedback

Abbreviations:
- AOM — acute otitis media
- CDS — clinical decision support
- EHR — electronic health record
- OM — otitis media
- OME — otitis media with effusion
What’s Known on This Subject:
Expectations are high that electronic health record–based clinical decision support and performance feedback will improve adherence to guidelines by delivering relevant and actionable information to clinicians. Few studies have evaluated these assertions or examined the combined effects of decision support and feedback.
What This Study Adds:
Clinical decision support customized to a patient’s history and presentation and performance feedback are both effective for improving adherence to guidelines for otitis media. However, the combination of the 2 interventions is no better than either delivered alone.
Otitis media (OM) is a hallmark disease of pediatrics, with 80% of cases occurring among children.1 It is the third most common reason for a pediatric visit, behind preventive care and acute upper respiratory infections,2 and the most common condition pediatricians refer to specialists.3 Primary care4 and subspecialist physicians5 have difficulty diagnosing OM, which contributes to variability in incidence rates6 and overuse of antibiotics.7 OM is the most common reason for prescribing antibiotics in the United States8 and is responsible for approximately half the antibiotic prescriptions for children aged 3 to 60 months.9 Despite nationally promulgated guidelines10,11 that recommend judicious use of antimicrobial agents, physicians have been reluctant to alter their rates of prescribing antibiotics for OM.12
The proliferation of electronic health records (EHRs) in hospitals and physicians’ offices and inclusion of clinical decision support (CDS) as a criterion for meaningful use incentives under the Health Information Technology for Economic and Clinical Health Act13 have heightened expectations that EHR-based CDS and performance feedback will improve adherence to guidelines. This is because EHRs have the ability to deliver relevant, patient-specific, and actionable information to clinicians.14 Demonstrating the full benefit of CDS interventions in a complex health care environment will take rigorous evaluation methods, such as multipractice cluster randomized trials.15
There has been only 1 published study of the impact of an OM CDS tool.16 The study was done at a single academic teaching site that facilitated access to a generic set of guideline recommendations. Given the paucity of efforts to apply CDS to OM care, the national burden of this disease, the variability of care, and the large opportunity for improved antimicrobial management, OM is a natural target for a robust CDS intervention that provides patient-specific information to reduce diagnostic uncertainty and treatment variability.
The approach of summarizing clinical practice over a specified interval has been extensively used to deliver performance information to clinicians. Nonetheless, there is a need to determine the best strategies for delivering feedback to maximize its effects. Previous research has shown that feedback is most effective when rates of adherence to practice guidelines are low,17,18 the information is directly useful for care,19 and practitioners are motivated to change.20 Little is known regarding the effects of physician performance feedback that is generated from the rich clinical data located in EHRs. Moreover, no previous studies have examined the combined and potentially additive effects of performance feedback in combination with prospectively delivered CDS individualized to a patient’s history and presentation.
We postulated that an EHR-based intervention that provided patient-specific recommendations in real-time and physician feedback that summarized adherence to guideline-based care would improve the diagnosis and treatment of OM compared with either intervention delivered alone. To test this hypothesis, we conducted a factorial-design cluster randomized trial in a primary care practice–based research network to evaluate the independent and joint effects of an EHR-based CDS system and performance feedback on adherence to OM guidelines.
The study was done in the Pediatric Research Consortium, a practice-based research network that includes all practices in the Children’s Hospital of Philadelphia’s primary care network located in southeastern Pennsylvania, southern New Jersey, and Delaware. Participation in the study was voluntary. Members of the study team visited practices to explain the purpose of the project, to discuss the expectations of participants, and to increase awareness of the newly created, local OM guidelines. Of 27 eligible practices, 24 agreed to participate. The study protocol was approved by the Children’s Hospital of Philadelphia’s Institutional Review Board, which waived the requirement for informed consent from patients and physicians.
Local Adaptation of National Guidelines
Published guidelines are not always accompanied by strong evidence or clearly defined diagnostic or therapeutic choices. Thus, we formed a panel of primary care physicians, otolaryngologists, and outcomes researchers who reviewed the nationally recognized OM guidelines,10,11 adapted them according to the local context, and modified them using the Guidelines Element Markup methodology21,22 such that local recommendations could be programmed as CDS. When choice or ambiguity existed in certain guideline recommendations, we relied on our expert panel to apply local consensus to make recommendations deliverable unambiguously via CDS. Clinical judgment was not eliminated, however, because use of the tool was optional and not programmed as a hard stop within the EHR.
Several adaptations to the guidelines were made. First, we required pain management only in cases of moderate or severe pain because the guidelines did not specify a severity threshold. Second, for patients with non–type I penicillin allergy, the guidelines list cephalosporins as the preferred option, but our panel recommended including macrolides as well. Third, for severe disease in penicillin-allergic patients, guidelines recommend only ceftriaxone, but our experts felt that oral agents were an acceptable alternative. Fourth, local consensus endorsed the use of a narrower-spectrum antibiotic, amoxicillin, instead of amoxicillin-clavulanic acid as first line even in severe disease. This recommendation was based on local practice and resistance patterns and was discussed, along with all the recommendations, with clinicians during the training sessions we held with each practice before the launch of the CDS system. Finally, after extensive consultation with our expert panel and review of aggregate local data as well as individual chart review, we determined that the overprescribing of antibiotics, not the overuse of watchful waiting, was the overwhelming problem in our setting. Our panel unanimously agreed that decision support should be directed at encouraging watchful waiting.
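The locally adapted antibiotic recommendations described above amount to a small decision rule. The sketch below is illustrative only: the class, field names, and allergy categories are assumptions made for this example, not the study's actual CDS implementation.

```python
from dataclasses import dataclass

@dataclass
class AomPresentation:
    severe: bool             # severe AOM (eg, high fever, severe otalgia)
    penicillin_allergy: str  # "none", "non_type_1", or "type_1"

def recommend_antibiotic(p: AomPresentation) -> list:
    """Return preferred options per the local adaptation sketched above."""
    if p.penicillin_allergy == "none":
        # Local consensus: narrow-spectrum amoxicillin first line,
        # even in severe disease.
        return ["high-dose amoxicillin"]
    if p.penicillin_allergy == "non_type_1":
        # National guideline lists cephalosporins; the local panel
        # recommended including macrolides as well.
        return ["cephalosporin", "macrolide"]
    # Type I (anaphylactic) penicillin allergy
    if p.severe:
        # Guidelines recommend ceftriaxone only; the local panel
        # accepted oral agents as an alternative.
        return ["ceftriaxone", "oral alternative"]
    return ["macrolide"]
```

Encoding the rules this way is what made them "deliverable unambiguously via CDS": every branch returns a concrete option list rather than leaving a choice open.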
Cluster Randomized Factorial Design
A cluster randomized factorial design was used for this study (Fig 1). The unit of randomization was the practice, which minimized contamination between study groups because physicians in the same practice often comanage patients. The 4 groups were CDS with feedback (n = 8 practices), CDS only (n = 8 practices), feedback only (n = 4 practices), and usual care (n = 4 practices). This design facilitated estimation of the independent and joint effects of CDS and physician feedback on adherence to OM guidelines.
The study was done in 3 phases (Fig 1). In Phase 1 (baseline period lasting 12 months from December 2007 to November 2008), no practices were exposed to either intervention. In Phase 2 (CDS only period lasting 11 months from December 2008 to October 2009), 16 practices were exposed to the CDS intervention and 8 others served as non-CDS controls. In Phase 3 (CDS and feedback period lasting 10 months from November 2009 to August 2010), 8 of the CDS practices were exposed to feedback, 8 other CDS practices were not exposed to feedback, 4 non-CDS practices were exposed to feedback, and 4 non-CDS practices were not exposed to feedback (“usual care”), which yielded the 4-group factorial design.
Two practices were lost to follow-up: 1 practice randomized to CDS and feedback left the hospital system in month 13 of the study period, and 1 practice randomized to the CDS-only group asked to leave the study, also in month 13.
Ample evidence indicates that the implementation of EHR-based decision support should be integrated as seamlessly as possible into existing workflow processes.23,24 CDS systems that provide personalized recommendations are more effective than those that offer generic information.25,26 Building on these specifications, we designed, prototyped, pilot tested, and then implemented the study’s clinical decision support system to bring OM guidelines to clinicians at the point of care with minimal effort. The CDS system was programmed using a web service that was not part of the EHR. It appeared as part of the EpicCare EHR visit navigator and gathered data from and returned information through the EHR. (Details of the design of the CDS system are available from the authors on request.)
The CDS system had 3 components that used a full range of CDS strategies including data entry tools; cognitive aids that gather, organize, and display information; and workflow aids such as facilitated order entry to expedite clinical care (shown in Supplemental Appendix 1). The first appeared passively for all visits with an ear-related problem; it provided a visual display of previous OM encounters including office visits, audiograms, subspecialist referrals, and past antibiotic history regardless of indication. These data were grouped into episodes of related events as described below, and results were overlaid on the display. Although the CDS system customized guideline-based recommendations to the patient according to past history and current presentation, we also provided hyperlinks to the complete locally adapted clinical practice guidelines for acute OM (AOM) and OM with effusion (OME).
Clinicians actively chose to use the second component, an OM-specific data-gathering tool for recording the history of present illness and the physical examination.
The final component, also triggered by the clinician, returned guideline-based recommendations for treatment using data collected by the first 2 components; generated patient-specific orders for diagnosis, treatment of pain, antibiotics or watchful waiting as appropriate, and referral; wrote a progress note; and provided patient-specific discharge instructions.
We extensively validated all aspects of the CDS system by randomly selecting 100 EHR records for review by 2 clinicians, who iteratively compared the computer algorithms to implicit clinical review. We focused on each element of the decision support and revised algorithms to address inconsistencies. These validation studies continued until all clinically meaningful errors were addressed.
Identification of OM Visits and Episodes of Care
OM visits were identified using structured (ie, diagnostic codes) and unstructured (ie, free text) data from the EHR. Telephone encounters were excluded because the CDS tool would not be available during these conversations. Extensive programming was required to search EHR text for combinations of words reflecting diagnosis, physical examination, and history, because a search of structured information alone (eg, diagnostic codes) proved inadequate for capturing all visits accurately and completely.
For some adherence to guideline metrics, we needed to determine when the first visit for an OM episode began. To do this, we developed OM episode of care algorithms as follows. A visit was identified as possibly related to OM if the EHR data for that visit contained a coded diagnosis of OM or an ear-related problem, a reason for visit that contained an ear- or hearing-related term, or a prescription for an antibiotic commonly used to treat OM. Eligible visits were further characterized as related to AOM or OME based on International Classification of Diseases, Ninth Revision codes, descriptions of the reason for visit, and specific ear examination findings documented by the clinician (eg, presence of middle ear effusion). Visits with a documented bilateral normal ear examination (ie, no findings of inflammation or effusion) were excluded from additional consideration. Visits categorized as AOM or OME were extended into episodes by including all visits of the appropriate type (AOM vs OME) meeting the inclusion criteria and falling within a defined interval (45 days for OME; 14 days for AOM) of another included visit.
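The episode-grouping step described above can be sketched as follows. The window constants mirror the stated intervals (14 days for AOM, 45 days for OME), but the function name and data shapes are assumptions for illustration, not the study's actual grouping software.

```python
from datetime import date, timedelta

# Type-specific windows from the episode definition: visits of the same
# type falling within the window of another included visit join its episode.
EPISODE_WINDOW = {"AOM": timedelta(days=14), "OME": timedelta(days=45)}

def group_into_episodes(visit_dates, om_type):
    """Chain same-type visit dates into episodes of care.

    A visit extends the current episode when it falls within the
    type-specific window of the episode's most recent visit; otherwise
    it starts a new episode.
    """
    window = EPISODE_WINDOW[om_type]
    episodes = []
    for d in sorted(visit_dates):
        if episodes and d - episodes[-1][-1] <= window:
            episodes[-1].append(d)   # within window: same episode
        else:
            episodes.append([d])     # outside window: new episode begins
    return episodes
```

For example, AOM visits on January 1 and January 10 (9 days apart) form one episode, while a visit on February 15 (36 days later) starts a second episode; under the 45-day OME window all three would be one episode.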
We validated visit selection and episode of care algorithms using the same approach as for the CDS.
Beginning in month 24 of the study period, we provided individualized feedback on a monthly basis to physicians in 12 of the practices (Fig 1). The reports showed physicians their adherence to OM guidelines for the month, change over time, and comparisons with others in their practice and the full network. A member of the project study team delivered reports to each practice’s office manager, who distributed them to practicing physicians. (An example of one of the feedback reports is shown in Supplemental Appendix 2.)
We examined adherence to guideline recommendations for OM pain management, diagnostic documentation, and medication management (Table 1). All measures operationalized the local adaptation to national guidelines. Assessment of the effects of the CDS system was done for all metrics shown in Table 1 except 2: pain assessed and avoidance of antihistamines and decongestants, which were achieved for nearly 100% of all OM visits. A subset of these indicators (pain treatment, amoxicillin as first-line therapy for AOM, and high-dose amoxicillin for AOM) were evaluated to determine the effects of feedback. Our local panel of OM experts selected these quality indicators because they were judged most likely to change in response to feedback.
The analysis followed the “as randomized” (intention-to-treat) principle, in which each site and each clinician within a site was assumed to use the intervention arm to which the site was assigned, regardless of actual use. Consistent with sound practice for a cluster randomized design, the unit of analysis for randomization was the cluster (ie, practice). Correlation of visits within practices was addressed using both mixed effects and marginal models. We chose the latter for presentation of results because β estimates and P values were nearly identical between the 2 approaches, and marginal models make fewer assumptions about the distribution of random effects and the correlation structure of the model.27 Two-level mixed effects models, clustering on physicians and practices, in many cases would not converge, and when they did, neither parameter nor SE estimates differed substantively from marginal models that clustered visits within practices. We also attempted confirmatory analyses using other algorithms (most notably mixed-effects models with practice as a random intercept and time as a random slope). Fitting mixed-effects models correctly requires careful attention to the choice of algorithm, because commonly used simplistic methods (eg, penalized quasi-likelihood [PQL]) are well known to introduce bias. We therefore attempted adaptive quadrature, a computer-intensive method, but experienced convergence problems in some cases, perhaps in part due to the small sample sizes at the individual physician level. Having tested these alternative approaches, we believe our analysis is both sound and robust.
To estimate the contrasts of interest, the analysis implemented a marginal model with a logit link function and computed robust SE estimates for patient visits nested within practices using the survey algorithms implemented in Stata v11.1 and SAS v9.2. These algorithms are equivalent to generalized estimating equations with a working independence correlation structure but permit the analysis of large data sets. The models included 3 periods (before intervention [months 1–12], CDS-only intervention phase [months 13–23], and CDS plus feedback intervention phase [months 24–33]) and 4 treatment groups, and adjusted for patient age, gender, presence of a comorbidity that increases the risk of OM (ie, immunodeficiencies, developmental disorders, blindness, hearing loss, cleft lip/palate), and number of diagnosis codes as a measure of visit-level clinical complexity. For presentation, we computed predictive margins, which estimate the probability of the outcome standardized to the characteristics of patients in the sample and thus reflect adjusted probabilities and their differences. Our measure of intervention effect contrasted the absolute percentage change between 2 time periods for the intervention group versus the control group (in other words, a standardized difference-in-difference estimate) and used 0.05 as the critical value.
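The difference-in-difference intervention effect described above reduces to a simple contrast of adjusted probabilities. The sketch below uses made-up numbers, not study data, to show the computation.

```python
def did_effect(p_int_before, p_int_after, p_ctl_before, p_ctl_after):
    """Difference-in-difference estimate on adjusted probabilities:
    the change in the intervention group minus the change in the
    control group over the same 2 time periods."""
    return (p_int_after - p_int_before) - (p_ctl_after - p_ctl_before)

# Hypothetical example: adherence rises from 15% to 30% in the
# intervention group and from 14% to 18% in the control group,
# giving an absolute intervention effect of 0.15 - 0.04 = 0.11.
effect = did_effect(0.15, 0.30, 0.14, 0.18)
```

In the study, each probability would be a predictive margin from the marginal logit model (ie, standardized to the sample's patient characteristics) rather than a raw proportion.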
There were 1 222 283 visits made to the 24 study practices during the 33-month study period (Fig 2). To form the study sample, we excluded visits (1) made with pediatric residents (to capture practice patterns reflective of attending physicians), (2) in which a patient complaint or physician diagnosis suggested OM but the physical examination indicated that the OM was resolved, and (3) in which otitis externa co-occurred. Among the resulting 139 305 visits, 64% were for AOM, 17% for OME, and 6% for both AOM and OME. The rest (13%) were unclassified because they had nonspecific diagnosis codes and nonspecific ear examination information.
The 139 305 OM visits were distributed across the 4 groups as follows: CDS and feedback, 45 843 visits (range 3292–8128 per practice); CDS only, 38 215 visits (range 1955–8053 per practice); feedback only, 32 214 visits (range 4600–15 363 per practice); and usual care, 23 033 visits (range 3186–7362 per practice).
Baseline practice and visit characteristics of the study sample are shown in Table 2. On average, practices employed 7 physicians (range 6–9 across study groups) and managed 182 OM visits per month. The most notable difference across study groups is the substantially higher number of visits per practice in the feedback-only group, which was due to inclusion of a practice with nearly double the number of OM visits (n = 15 363) as the other practices.
There was marked variation for several of the adherence to guideline measures across the study groups during the baseline period (Table 3). The proportion of visits in which all guideline adherence metrics were achieved (“comprehensive care”), given ≥3 opportunities, was also variable, ranging across study groups from 10% to 21% for AOM and 2% to 9% for OME.
Across both follow-up periods (months 13–33), the average practice-level use of the CDS tool was 17% (range 5%–45% across practices) of eligible visits.
We analyzed the effects of the CDS intervention by contrasting rates of adherence to guidelines during the CDS-only versus baseline periods (Fig 1). Among CDS practices, rates of adherence to guidelines rose for 7 of the 10 measures evaluated, whereas the only metric that significantly increased for the non-CDS practices was pain treated (Table 4). The difference-in-difference intervention effects, contrasting the change in the CDS group to the change in the non-CDS group, are shown in the fifth column of Table 4. Compared with the non-CDS group, the CDS group had a significant increase in adherence to guidelines between the 2 time periods for comprehensive care (AOM and OME), pain treated, amoxicillin as first-line therapy for AOM, and OME adequate diagnostic evaluation.
In the feedback period of the study (Fig 1), we provided individualized physician feedback reports to half the practices, equally balanced between CDS and non-CDS groups. The contrasts of interest were the effects during time 3 (feedback period) versus time 2 (CDS only period; Table 5). There was no difference in adherence to guideline indicators for the CDS only versus usual care groups. The feedback only group had significantly greater increases in guideline adherence than both the usual care and CDS only groups. The CDS and feedback group had smaller increases in adherence to guidelines than the feedback only group (Table 5).
This article describes the results from a cluster-randomized trial of the effects of CDS and physician performance feedback on adherence to guidelines for OM care. We developed a CDS system for this study that was prospectively applied at the point of care, seamlessly integrated into EHR workflow, provided patient-specific recommendations, and generated a progress note and patient instructions. Physician feedback summarized the past month’s adherence to key guideline recommendations and contrasted an individual physician’s performance with others in the practice and across the full primary care network. Our findings suggest that both types of interventions can improve adherence to guidelines. However, our hypothesis that the combination of CDS and feedback would be superior to either alone was not supported. The effects of feedback were larger than those for the CDS system, and feedback was more beneficial in practices that had not been exposed to CDS than those that had been using it. Other studies have found that when feedback is coupled with other interventions, its effect is reduced.17,18
There was no effect of CDS on overuse of antibiotics for either AOM or OME. Similar to other studies, our findings indicate that CDS systems can encourage physicians to take or alter an action, such as changing the selection of an antibiotic or modifying choice of therapy,28,29 but have much weaker effects on withholding treatment, especially in the context of strong clinician beliefs29 or if withholding treatment is perceived as decreasing productivity.30
Because the feedback period lasted just 10 months, this study does not tell us how long a program of provider feedback can influence physician behavior change, what happens when feedback is removed, or how long the feedback must persist to achieve optimal effect. Future work should examine the effects of feedback over longer periods to measure lags after onset and attenuation over time. We are also unable to explain the mechanisms for the large feedback effect, and it is possible that feedback could be less effective for improving adherence to guidelines for the metrics that were not included in the physician feedback reports.
The unit of randomization for this study was the physician practice, a design often required to avoid the potential of contamination of the intervention effects if randomization had been done at the levels of physician, patient, or visit. The downside of practice-level randomization, however, was large design effects (data not shown) arising from high interpractice variation in quality at baseline and change over time and the reduced power of even this large study among 24 practices to detect statistically significant intervention effects. One approach for reducing interpractice variation in quality is to standardize care, which minimizes unnecessary variation. Once this has been accomplished, new interventions such as CDS systems might produce more uniform effects across practices and thus be more readily shown to be effective. This combination of quality improvement coupled with research that occurs within a community of physicians and patients organized to advance patient health is the bedrock of learning health systems.31–33
Previous research has found that the promulgation of national guidelines for AOM and OME did not have substantive impacts on quality.34 One of the concerning findings from this study is the overall low adherence to guidelines for OM across most of the metrics. Comprehensive care (ie, all recommended guidelines for a visit were adhered to) was accomplished for only 15% of visits with AOM and 5% with OME during the baseline period of observation. Another finding of note is that clinicians chose watchful waiting for only 6% of eligible AOM visits. It appears that, at least in our sample, the tremendous public emphasis on judicious use of antimicrobial therapy to reduce the risk of drug-resistant microbes35 has had little impact on physician prescribing.
In summary, using the EHR to provide real-time decision support and physician performance feedback are both effective strategies for improving adherence to guidelines for otitis media care. In a direct comparison of the two interventions, feedback produced stronger effects on guideline adherence than the CDS system, and the two interventions did not have additive effects. Modest utilization of the CDS tool in this study may have attenuated its potential effects. Future work will be done to characterize physician- and practice-level adoption of the CDS tool. Barriers to physician adoption of CDS systems must be carefully addressed as these systems are disseminated to encourage meaningful use of health information technology.
- Accepted December 4, 2012.
- Address correspondence to Christopher B Forrest, MD, PhD, Children’s Hospital of Philadelphia, 34th St and Civic Center Blvd, Philadelphia, PA 19104. E-mail:
Dr Forrest led the design, conduct, and analysis of the study and was the primary author for the manuscript. Dr Fiks provided conceptual input into all aspects of the study, wrote sections of the manuscript, and approved the final version. Dr Bailey led the programming team that developed the otitis media episode of care grouping software, provided conceptual input into all aspects of the study, critically edited the manuscript, and approved the final version. Dr Localio conducted the statistical analyses for the study, wrote sections of the manuscript, and approved the final version. Dr Grundmeier designed and led the development of the decision support system, wrote sections of the manuscript, provided input into all aspects of the study, critically edited the manuscript, and approved the final version. Mr Richards provided substantive intellectual input into the design and analysis of the guideline adherence metrics, helped to program the episode-of-care software and clinical decision support system, critically edited the manuscript, and approved the final version. Mr Karavite created the user interface for the clinical decision support system, provided intellectual input into the conduct of the study and data analysis, critically edited the manuscript, and approved the final version. Dr Elden co-led development of the local translation of the otitis media guidelines, provided intellectual input into all aspects of the study, critically edited the manuscript, and approved the final version. Dr Alessandrini co-led the conduct of the study, recruited practices into the study, co-led the development of guideline adherence metrics, wrote sections of the manuscript, and approved the final version.
FINANCIAL DISCLOSURE: Drs Grundmeier, Fiks, and Bailey are coinventors of the otitis media clinical decision support software used in this intervention. They have received no income for this technology and no licensing agreement exists. All results were verified by a statistician (Dr Localio) who has no conflict of interests for this project. The other authors have indicated they have no financial relationships relevant to this article to disclose.
FUNDING: This study was supported by grant R18HS017042 from the Agency for Healthcare Research and Quality.
- Interagency Task Force on Antimicrobial Resistance. A Public Health Action Plan to Combat Antimicrobial Resistance. Washington, DC: Health and Human Services/Office of the Assistant Secretary for Preparedness and Response; 2011
- Copyright © 2013 by the American Academy of Pediatrics