Strategies for Improving Vaccine Delivery: A Cluster-Randomized Trial

Linda Y. Fu, Kathleen Zook, Janet A. Gingold, Catherine W. Gillespie, Christine Briccetti, Denice Cora-Bramble, Jill G. Joseph, Rachel Haimowitz and Rachel Y. Moon
Pediatrics June 2016, 137 (6) e20154603; DOI: https://doi.org/10.1542/peds.2015-4603
Linda Y. Fu: Goldberg Center for Community Pediatric Health and Center for Translational Science, Children’s National Health System, Washington, District of Columbia; The George Washington University School of Medicine, Washington, District of Columbia
Kathleen Zook: Goldberg Center for Community Pediatric Health, Children’s National Health System, Washington, District of Columbia; SciMetrika, LLC, Silver Spring, Maryland
Janet A. Gingold: Goldberg Center for Community Pediatric Health, Children’s National Health System, Washington, District of Columbia
Catherine W. Gillespie: Center for Translational Science, Children’s National Health System, Washington, District of Columbia; The George Washington University School of Medicine, Washington, District of Columbia; AARP Public Policy Institute, Washington, District of Columbia
Christine Briccetti: Goldberg Center for Community Pediatric Health, Children’s National Health System, Washington, District of Columbia; The George Washington University School of Medicine, Washington, District of Columbia
Denice Cora-Bramble: Goldberg Center for Community Pediatric Health and Center for Translational Science, Children’s National Health System, Washington, District of Columbia; The George Washington University School of Medicine, Washington, District of Columbia
Jill G. Joseph: Betty Irene Moore School of Nursing, University of California Davis, Sacramento, California
Rachel Haimowitz: The George Washington University School of Medicine, Washington, District of Columbia
Rachel Y. Moon: Goldberg Center for Community Pediatric Health and Center for Translational Science, Children’s National Health System, Washington, District of Columbia; The George Washington University School of Medicine, Washington, District of Columbia; Division of General Pediatrics, University of Virginia School of Medicine, Charlottesville, Virginia

Abstract

OBJECTIVE: New emphasis on and requirements for demonstrating health care quality have increased the need for evidence-based methods to disseminate practice guidelines. With regard to impact on pediatric immunization coverage, we aimed to compare a financial incentive program (pay-for-performance [P4P]) and a virtual quality improvement technical support (QITS) learning collaborative.

METHODS: This single-blinded (to outcomes assessor), cluster-randomized trial was conducted among unaffiliated pediatric practices across the United States from June 2013 to June 2014. Practices received either the P4P or QITS intervention. All practices received a Vaccinator Toolkit. P4P practices participated in a tiered financial incentives program for immunization coverage improvement. QITS practices participated in a virtual learning collaborative. Primary outcome was percentage of all needed vaccines received (PANVR). We also assessed immunization up-to-date (UTD) status.

RESULTS: Data were analyzed from 3147 patient records from 32 practices. Practices in the study arms reported similar QI activities (∼6 to 7 activities). We found no difference in PANVR between P4P and QITS (mean ± SE, 90.7% ± 1.1% vs 86.1% ± 1.3%, P = .46). Likewise, there was no difference in odds of being UTD between study arms (adjusted odds ratio 1.02, 95% confidence interval 0.68 to 1.52, P = .93). In within-group analysis, patients in both arms experienced nonsignificant increases in PANVR. Similarly, the change in adjusted odds of UTD over time was modest and nonsignificant for P4P but reached significance in the QITS arm (adjusted odds ratio 1.28, 95% confidence interval 1.02 to 1.60, P = .03).

CONCLUSIONS: Participation in either a financial incentives program or a virtual learning collaborative led to self-reported improvements in immunization practices but minimal change in objectively measured immunization coverage.

  • Abbreviations:
    AAP —
    American Academy of Pediatrics
    ACIP —
    Advisory Committee on Immunization Practices
    aOR —
    adjusted odds ratio
    CI —
    confidence interval
    CIzQIDS —
    Comparison of Immunization Quality Improvement Dissemination Strategies
    ICC —
    intracluster correlation
    P4P —
    pay for performance
    PANVR —
    percentage of all needed vaccines received
    QI —
    quality improvement
    QITS —
    quality improvement technical support
    UTD —
    up-to-date
  • What’s Known on This Subject:

    There is a paucity of evidence about the effectiveness of strategies to disseminate immunization delivery practice guidelines. No studies to date have compared 2 popular dissemination strategies, virtual learning collaboratives and financial incentives, for their impact on immunization coverage.

    What This Study Adds:

    Standard approaches to implementing both a remote financial incentives program and a virtual learning collaborative may result in improved immunization delivery practices but may not have a high impact on immunization coverage.

    New requirements for demonstrating health care quality have increased the need for evidence-based methods to disseminate practice guidelines. Since 2000, the American Board of Pediatrics has included participation in quality improvement (QI) projects as a requisite for maintenance of certification.1 However, methods used in QI programs can vary in adherence to accepted QI principles,2 and evidence for their impact on patient outcomes is sparse and conflicting.3–5 Furthermore, web-based QI learning programs are increasingly common,6 although evidence of impact of virtual or remote learning activities on patient outcomes is particularly weak.7–9

    Another approach to improving practitioner adherence to evidence-based guidelines is via financial incentives. Accountable care organizations with provider reimbursement partially determined by quality measurements are currently operating in all 50 states, with 23.5 million covered lives.10 Despite the growth of incentives-based health care models,11 a 2013 Medicare report showed wide variability among accountable care organization quality scores.12,13 Furthermore, prospective trials supporting the effectiveness of pay for performance (P4P) programs for improving patient outcomes are few and methodologically limited.14,15

    The Advisory Committee on Immunization Practices (ACIP) has provided immunization guidelines since 1964.16 However, only a few randomized trials have tested the effect of dissemination methods on provider adherence to immunization guidelines. Five trials of financial incentives17–19 and 1 trial of remote provider education20 yielded minimal to no improvement in immunization coverage. To date, we have found no published comparative effectiveness study of financial incentives and remote provider QI education. Furthermore, recent changes in the health care environment (mandated insurance coverage of immunizations,21 electronic medical records,22 immunization registries,23 electronic clinical support tools,22 and changing parental attitudes about immunization24) suggest the need for evidence about dissemination strategies in the contemporary practice environment. The Comparison of Immunization Quality Improvement Dissemination Strategies (CIzQIDS) study was a cluster-randomized, comparative effectiveness trial conducted among geographically dispersed practices comparing a P4P program and a virtual QI learning collaborative for impact on pediatric immunization coverage.

    Methods

    This study was reviewed and approved by the Children’s National Medical Center and American Academy of Pediatrics (AAP) institutional review boards. The CIzQIDS trial was undertaken and reported in accordance with Consolidated Standards of Reporting Trials (CONSORT) guidelines25 and is registered at clinicaltrials.gov (NCT02432430).

    Study Design and Participants

    This study was designed as a single-blinded (to outcomes assessor), cluster-randomized, parallel trial. Recruitment occurred from January to June 2013 before the start of the intervention. To gain a diverse participant sample, study advertisements were placed in 4 pediatric journals and emailed to members of 2 pediatric professional associations and 53 primary care, physician, and immunization networks. Advertisements directed individuals to an online eligibility survey.26 Of 300 complete and partial surveys, 42 unique respondents met eligibility criteria (English-speaking primary care pediatricians in practices immunizing at least five 3- to 18-month-olds per week, with daytime internet access, estimated practice immunization coverage of <86% for 3- to 18-month-olds, and the ability to form an improvement team consisting of a provider, clinical support staff, and administrator) and completed informed consent. Of the 42 practices, 7 were excluded after study staff’s random review of 50 medical records per site revealed coverage ≥86%. (The baseline coverage criterion excluded participants with minimal room for improvement.) The remaining 35 practices were block-randomized (via the REDCap data system, which conceals allocation sequence to investigators26) by 2 factors: percent of patients fully up-to-date for age (<75% vs ≥75%) and practice type (federally qualified health center, academic, or other) into 2 interventional study arms: quality improvement technical support (QITS) and P4P.

    Outcome Measures

    Our primary outcome was percentage of all needed vaccines received (PANVR) during the 12 months preceding assessment. To calculate PANVR, we divided the number of valid doses received by the number of doses that should have been received (ie, denominator included all doses whose maximum recommended age or interval boundary occurred within 12 months preceding the assessment). We also assessed a binary variable representing immunization up-to-date (UTD) status. Patients were considered UTD if they had received all recommended doses for age.
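The two outcome definitions above can be sketched in code. This is an illustrative simplification, assuming doses are represented as simple counts and labels; it is not the study's actual extraction software, and the function names are hypothetical.

```python
def panvr(doses_received: int, doses_needed: int) -> float:
    """Percentage of all needed vaccines received (PANVR): valid doses
    received divided by doses whose maximum recommended age or interval
    boundary fell within the 12 months preceding assessment."""
    if doses_needed == 0:
        # No doses were due in the window; treat as fully covered.
        return 100.0
    return 100.0 * doses_received / doses_needed

def is_utd(received: list[str], recommended_for_age: list[str]) -> bool:
    """Up-to-date (UTD): the patient has received every dose
    recommended for his or her age."""
    return set(recommended_for_age) <= set(received)

# Illustrative patient: 9 of 10 due doses received on time, one dose missing.
print(panvr(9, 10))                                      # 90.0
print(is_utd(["HepB1", "DTaP1"], ["HepB1", "DTaP1", "Hib1"]))  # False
```

PANVR is a per-patient proportion (later modeled with a logit link), while UTD is the binary flag used in the odds-ratio analyses.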

    Blinded study staff extracted immunization, encounter, and demographic data from 50 medical records per practice at baseline (March 25, 2013) and postintervention (June 25, 2014) by either site visit or remote electronic record access. Each practice’s patients were stratified into 4 age categories whose boundaries coincided with the minimum and maximum recommended ages for routine receipt of vaccinations (Table 1).27 Patients in each stratum were randomly sorted for sample selection, and larger age strata were intentionally oversampled (3 to 4 months, n = 10; 5 to 6 months, n = 10; 7 to 15 months, n = 16; 16 to 18 months, n = 14). A medical record was excluded if the child had moved or gone elsewhere, had no contact with the practice in ≥12 months, or had <2 encounters. Excluded records were replaced by the next patient record in the corresponding age stratum. At practices in which the total eligible patient population aged 3 to 18 months was <50, all eligible records of patients 3 to 18 months were included. Data were extracted using the Comprehensive Clinical Assessment Software Application for immunizations.19

    TABLE 1

    Vaccine Doses Needed by Age According to Study Definition

    Patient vaccination status was determined according to the 2011 ACIP routine and catch-up schedules28 for vaccines included in the Healthy People 2020 goals29: hepatitis B, diphtheria, tetanus toxoids, pertussis, Haemophilus influenzae type b, pneumococcal conjugate, inactivated poliovirus, measles, mumps, rubella, and varicella (Table 1). As per ACIP, doses were considered valid if given within 4 days of the minimum recommended ages or intervals.30 Patients with a documented valid vaccination contraindication were excluded from all analyses. Those who had received any vaccinations elsewhere were excluded from PANVR analyses.

    To assess practice demographic information and track changes in immunization practices and attitudes, participants completed online surveys at baseline and monthly throughout the intervention and postintervention.

    Interventions

    The intervention period spanned June 25, 2013, to June 24, 2014. At baseline, all participants received a Vaccinator Toolkit to support implementation of immunization best practices,31 including examples and worksheets to aid in application of the model for improvement through plan-do-study-act cycles to implement changes based on local needs.32 Practices in the P4P arm additionally participated in a financial incentive program designed to reward improvement in immunization coverage from baseline to postintervention both within individual practices and over the entire study arm. Each P4P practice was eligible to receive $500 if the percentage of patients UTD in their individual practice increased 5 to <10 percentage points and $1000 if it increased ≥10 percentage points. They were eligible to receive an additional $1000 if >90% of practices in the P4P arm achieved immunization coverage of >85%. No restrictions were placed on how practices allocated incentives.
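The tiered payout rule for the P4P arm can be written as a small function. The dollar amounts and thresholds are taken from the description above; the functions themselves are illustrative sketches, not part of the study protocol.

```python
def practice_payout(pt_change: float) -> int:
    """Per-practice award for the change in percentage of patients UTD
    (in percentage points, baseline to postintervention)."""
    if pt_change >= 10:
        return 1000
    if pt_change >= 5:
        return 500
    return 0

def arm_bonus(arm_coverage: list[float]) -> int:
    """Additional $1000 per practice if >90% of P4P practices
    achieved immunization coverage of >85%."""
    share_above = sum(c > 85 for c in arm_coverage) / len(arm_coverage)
    return 1000 if share_above > 0.90 else 0

print(practice_payout(7.2))   # 500
print(practice_payout(12.0))  # 1000
```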

    Practices in the QITS study arm participated in a year-long virtual learning collaborative consisting of 6 expert-led, Web-based learning sessions (covering QI science, the immunization schedule, systemic vaccine delivery barriers and solutions, parental vaccine hesitancy, and sustaining improvement), monthly coaching conference calls, and a continuously updated compendium of Web-based resources.33 Each month, QITS practices extracted data (vaccination status, vaccination missed opportunities, and use of patient vaccination reminders) from 10 to 20 patient records. The QI coach used this information to create individual practice and aggregate run charts for performance feedback. QITS participants could earn American Board of Pediatrics Maintenance of Certification Part 4 (QI) credit by meeting participation requirements. Details about the Vaccinator Toolkit and QITS intervention are located on the AAP Quality Improvement Innovation Networks Web site.33

    Statistical Analysis

    We developed sample size estimates for difference in postintervention PANVR between arms using contemporaneous national early-childhood immunization rates (mean ± SD, 70% ± 5.4%)34 to make assumptions about the expected effect size. We did not find previous reports of intracluster correlation (ICC) by practice to include in the power calculation and conservatively assumed an ICC of 0.20. Setting the 2-tailed type 1 error at 5%, we determined that we would have >90% power to detect a difference equivalent to 0.5 SD units (ie, 2.7 percentage points) between arms if we enrolled 20 practices per arm and extracted data for 50 patients per practice. In actuality, the observed SD for the proportion of expected doses received was larger than anticipated, the ICC in our analytic sample was only 0.07, and we enrolled 16 practices per study arm. Under actual trial conditions, the observed effect size was 0.2 SD units.
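The arithmetic behind this design can be reproduced with the standard cluster-randomization design effect, 1 + (m - 1) * ICC. The sketch below uses a normal approximation with hardcoded z-values rather than the authors' exact software, so the numbers are approximate.

```python
import math

# Two-sided alpha = .05 and power = .90 (z-values hardcoded).
z_alpha, z_beta = 1.960, 1.282
d = 0.5            # target effect size in SD units
icc, m = 0.20, 50  # assumed intracluster correlation; patients per practice

# Unclustered n per arm for a two-sample comparison (normal approximation).
n_simple = 2 * ((z_alpha + z_beta) / d) ** 2     # ~84 patients per arm

# The design effect inflates the requirement under clustering.
deff = 1 + (m - 1) * icc                         # 10.8
n_clustered = n_simple * deff                    # ~908 patients per arm
practices_per_arm = math.ceil(n_clustered / m)   # ~19 practices per arm

print(round(n_simple), round(deff, 1), practices_per_arm)  # 84 10.8 19
```

With 20 practices of 50 patients per arm, the effective sample size comfortably exceeds this requirement, consistent with the stated >90% power.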

    Analyses were performed in Stata v13.1.35 Descriptive statistics were used to summarize characteristics of the study population. To account for minor variations in the age distribution of patients sampled from each practice, all practice-level outcome estimates were age-standardized according to the 2011 age distribution of American children.36 All patient-level multivariable analyses accounted for patient clustering within practices by including practice as a random intercept. To model PANVR, a proportional response variable, we used generalized estimating equations (xtgee) with a logit link and the binomial family. To model UTD, we used random-effects logistic regression (xtlogit) and reported adjusted odds ratios (aORs) and 95% confidence intervals (CIs). All patient-level models were adjusted for patient age stratum and insurance status (Medicaid or other). To account for imbalances across groups despite randomization, each of the 2 models of patient-level postintervention outcomes (PANVR and UTD) were also adjusted for baseline practice average PANVR or percentage UTD, respectively.
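Direct age standardization, as used for the practice-level estimates above, weights each age stratum's rate by that stratum's share of a standard population. A minimal sketch; the weights and rates below are hypothetical placeholders, not the actual 2011 US age distribution or any practice's data.

```python
# Direct age standardization: a weighted average of stratum-specific
# rates, with weights from the standard population's age distribution.
strata = ["3-4 mo", "5-6 mo", "7-15 mo", "16-18 mo"]
std_weights = [0.12, 0.12, 0.56, 0.20]   # illustrative; must sum to 1
practice_utd = [0.80, 0.75, 0.70, 0.65]  # hypothetical stratum-specific UTD

age_standardized = sum(w * r for w, r in zip(std_weights, practice_utd))
print(round(age_standardized, 3))  # 0.708
```

Standardizing this way removes minor differences in the sampled age mix before comparing practices.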

    Results

    Thirty-five practices were randomized to P4P (n = 17) or QITS (n = 18) (Fig. 1). One practice in the QITS arm dropped out before the intervention because of unexpected staffing turnover. Data from that practice were excluded from all analyses. During the intervention, 1 practice randomized to QITS dropped out because meeting times were inconvenient, and 1 practice in P4P dropped out because the site project leader left the practice. Data from those 2 practices were included in intent-to-treat but not per-protocol analyses. In this article, we exclusively report per-protocol results, since both dropouts occurred within the first quarter of the intervention period; intent-to-treat findings were virtually identical. Per-protocol analyses consisted of data from 3147 patient records (1576 assessed at baseline and 1571 postintervention) from 32 practices (16 P4P and 16 QITS).

    FIGURE 1

    Consolidated Standards of Reporting Trials (CONSORT) flow diagram.

    Baseline Characteristics

    Practices in the treatment arms were similar with respect to measured baseline characteristics, with few exceptions (Table 2). Although randomization balanced arms in terms of the number of practices with <75% of patients UTD, the baseline age-standardized proportion of patients UTD in P4P practices was borderline significantly higher than in QITS practices (mean ± SE, 73.9% ± 7.6% vs 64.2% ± 19.8%, P = .05). Also, patients sampled from P4P practices at baseline had statistically significantly, although not clinically meaningfully, fewer health care encounters per month than patients in QITS practices (1.07 ± 0.02 vs 1.14 ± 0.02, P = .04) (Table 3).

    TABLE 2

    Baseline Practice-Level Characteristics

    TABLE 3

    Baseline and Postintervention Patient-Level Characteristics

    Quality Improvement Activities

    Practices in the 2 study arms reported similar QI activities, with an average of 6 to 7 new immunization improvement strategies initiated during the intervention period (Table 4). The most common changes made were more consistently checking immunization status at point of service and more consistently administering immunizations at sick encounters. Half or more of the practices in both study arms reported distributing new immunization educational materials to patients, eliminating unwarranted per-visit injection limits, increasing use of immunization registries, and having more staff training. Compared with practices in the P4P arm, practices in the QITS arm completed more plan-do-study-act cycles (median 5 [interquartile range 3 to 6.5] vs 2 [1.5 to 3.5]; P = .01), and a greater number of QITS practices implemented new systems to prompt providers about immunizations at point of service (56.3% vs 6.3%, P < .01).

    TABLE 4

    Self-Reported Activities During Intervention Period

    Comparisons Between Treatment Arms

    In unadjusted analyses of practice-level data, the age-standardized PANVR during the intervention period in the P4P arm was significantly higher than in the QITS arm (mean ± SE, 92.3% ± 1.1% vs 86.9% ± 2.0%, P = .01) (Table 5). However, using patient-level data and adjusting for clustering within practice, patient age stratum, Medicaid insurance status, and baseline practice average PANVR, there was no significant difference between groups (90.7% ± 1.1% for P4P vs 86.1% ± 1.3% for QITS, P = .46).

    TABLE 5

    PANVR.

    In unadjusted analyses of practice-level data, the percentage of patients UTD in practices in the P4P arm trended higher than in the QITS arm (mean ± SE, 75.9% ± 10.2% vs 67.1% ± 16.7%, P = .06). However, on patient-level analysis, after adjustment, there was no difference in the odds of being UTD between study arms (aOR 1.02, 95% CI 0.68–1.52, P = .93).

    Changes Within Treatment Arms From Baseline to Postintervention

    At the practice level for P4P, we detected an improvement in the unadjusted PANVR over time from 89.3% ± 1.0% (mean ± SE) during the 12 months preceding the baseline assessment to 92.3% ± 1.1% during the 12-month intervention period (P = .05) (Table 5). However, among QITS practices, PANVR did not increase significantly over time (86.0% ± 1.7% vs 86.9% ± 2.0%, P = .72). Results were largely similar in the fully adjusted patient-level models. For patients in the P4P arm, there was a nonsignificant increase in PANVR, from 88.2% ± 1.3% to 90.7% ± 1.1% (P = .12). For patients in the QITS arm, PANVR did not significantly increase from baseline to postintervention (84.5% ± 1.4% vs 86.1% ± 1.3%, P = .30).

    At the practice level, the unadjusted percentage of patients UTD did not improve significantly over time in the P4P arm (mean ± SE, 73.9% ± 7.6% to 75.9% ± 10.2%, P = .54) or in the QITS arm (64.2% ± 19.8% to 67.1% ± 16.7%, P = .65). Similarly, results from models predicting the adjusted odds of a patient being UTD revealed nonsignificant or only modest improvement over time in both study arms (P4P: aOR 1.16, 95% CI 0.92–1.46, P = .21; QITS: aOR 1.28, 95% CI 1.02–1.60, P = .03).

    In P4P, 7 practices increased in age-standardized percent UTD by 5 to <10 percentage points and 1 practice by ≥10 percentage points. In QITS, 3 practices increased by 5 to <10 percentage points and 4 practices by ≥10 percentage points.

    Other Factors

    We sought to determine whether factors besides group allocation contributed to immunization improvement by comparing high performers (15 practices that improved percentage UTD by >5% from baseline to postintervention) to low performers (17 practices with ≤5% improvement), regardless of treatment allocation. High and low performers did not differ in demographic characteristics or most self-reported activities; however, high performers tended to have lower baseline percent UTD than low performers (64.1% ± 19.2% vs 73.5% ± 10.2%, P = .09) and were less likely to report using data to give clinicians feedback (27% vs 71%, P = .007).

    Discussion

    Participation in both a financial incentive program and a virtual QI learning collaborative led to modest improvement in immunization outcomes, as evidenced by temporal gains in mean PANVR of 3.0% among practices exposed to P4P and 28% increased odds of UTD among patients exposed to QITS (a nonsignificant 2.9% increase in practice percentage UTD). On direct comparison, the P4P and QITS interventions induced similar QI activities and outcomes.

    We found considerable variability among practices in change in immunization coverage over time, with a nonsignificant trend toward an association between lower baseline coverage and greater gains in percent UTD. This supports the intuitive notion that it is easier to improve outcomes when there is more room for improvement. Furthermore, it suggests that local contextual factors, such as organizational constraints, may pose significant barriers to greater improvement after “low-hanging fruit” have already been picked.37–40 Perhaps future guideline dissemination projects should place greater emphasis on helping practices first nurture an environment that is conducive to transformation.41

    To our knowledge, this is the first comparative effectiveness trial comparing financial incentives to a virtual learning collaborative for impact on immunization coverage. A meta-analysis of 3 trials to improve adult immunization coverage via provider incentives found no effect compared with the control.17 A trial examining immunization coverage among children with Medicaid insurance found no effect of an intervention that combined provider incentives with performance feedback.18 A trial among inner-city pediatric practices found improvements with incentives but attributed much of the change to improved documentation rather than increased vaccine delivery.19 A trial examining the effect of a virtual immunization learning collaborative found an increase in practices’ self-reported immunization coverage by 4.9% over time, but improvement was not different from that of the control.20 The smaller measured increase in percent UTD among our QITS practices may be attributed in part to our use of random medical record selection and blinded data extraction.

    Given the modest effects on immunization rates of both P4P and virtual QI learning collaboratives, the question arises as to whether use of either is worthwhile for improving immunization delivery. This question is timely as several medical boards reconsider their QI participation requirements and as demand grows for value-based care in which providers are financially rewarded for positive patient health outcomes.11,42–44 For individual patients who receive additional protection from vaccine-preventable diseases as a result of their health care practitioners’ stricter adherence to immunization guidelines, the benefit is clear.45 Furthermore, in communities where baseline vaccination rates are low, the benefit to the local population from even incremental improvement in herd immunity is potentially important.46–48 However, benefits must be weighed against the costs in time and expense for both the sponsors of and participants in financial incentive programs and learning collaboratives.

    Our study had several strengths, including rigorous study design and exacting assessments of immunization status that incorporated both routine and catch-up schedules. Furthermore, participating practices were unaffiliated and of varying size, structure, and location, allowing for generalizability to many practice settings. In addition, our baseline percentage UTD (69.1%) was similar to national coverage rates (71.6%),49 suggesting that our sample reflects conditions nationwide. Finally, we were able to collect postintervention data from all practices that began the intervention (ie, no loss to follow-up). Our primary limitation was that because of financial and sample size constraints, we were unable to test various intensities of each intervention including a no-treatment control. Nonetheless, the 5 previous trials of financial incentives used widely varying levels of incentives and also found generally negative results.17–19 As for our virtual learning collaborative, the 12-month duration was longer than most QI projects, and the format was consistent with other projects coordinated by the AAP Quality Improvement Innovation Networks. Future studies using factorial study designs to compare varying levels of exposure to each intervention or studies that include a third arm that combines aspects of P4P and QITS could prove illuminating, but recruitment of enough participants for adequate power might be challenging.

    Conclusions

    Participation in either a financial incentives program or a virtual QI learning collaborative led to self-reported improvements in immunization practices but minimal change in objectively measured immunization coverage. Our findings suggest that immunization guideline dissemination is a complex process requiring further investigation into factors impacting success.

    Acknowledgments

    The authors thank the AAP Quality Improvement Innovation Networks, a subcontractor on this study, and especially Elizabeth Rice-Conboy for assistance with recruitment and implementation of the QITS intervention. The authors gratefully acknowledge all participants in the CIzQIDS study.

    Footnotes

      • Accepted March 11, 2016.
    • Address correspondence to Linda Fu, MD, MS, Goldberg Center for Community Pediatric Health, Children’s National Health System, 111 Michigan Ave, NW, Washington, DC 20010. E-mail: lfu{at}childrensnational.org
    • This trial has been registered at www.clinicaltrials.gov (identifier NCT02432430).

    • FINANCIAL DISCLOSURE: Drs Fu, Gingold, Gillespie, Briccetti, and Moon and Ms Zook received salary support from Pfizer, Inc to conduct this study. Drs Fu and Cora-Bramble have served on advisory boards for Pfizer. Dr Joseph and Ms Haimowitz have no financial relationships relevant to this article to disclose.

    • FUNDING: Supported in part by Pfizer, Inc (Investigator Initiated Research grant award WS2163043). Financial incentives for the pay-for-performance intervention were supported by Children’s National Health System. Neither Pfizer nor Children’s National Health System had any role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; or preparation, review, and approval of the manuscript.

    • POTENTIAL CONFLICT OF INTEREST: The authors have indicated they have no potential conflicts of interest to disclose.

    Copyright © 2016 by the American Academy of Pediatrics