June 2014, Volume 133 / Issue 6

Practice-Tailored Facilitation to Improve Pediatric Preventive Care Delivery: A Randomized Trial

  1. Sharon B. Meropol, MD, PhD,a,b,c
  2. Nicholas K. Schiltz, PhD,b,c
  3. Abdus Sattar, PhD,b
  4. Kurt C. Stange, MD, PhD,b,d,e,f
  5. Ann H. Nevar, MPA,c
  6. Christina Davey, MPH,a,c
  7. Gerald A. Ferretti, DDS, MS, MPH,a,g
  8. Diana E. Howell, MS,a,c
  9. Robyn Strosaker, MD,a
  10. Pamela Vavrek, RN,c
  11. Samantha Bader, MPH,c
  12. Mary C. Ruhe, RN, MPH,d and
  13. Leona Cuttler, MD,a,c,h
  1. Departments of aPediatrics,
  2. bEpidemiology and Biostatistics,
  3. dFamily Medicine and Community Health,
  4. eOncology,
  5. fSociology,
  6. gPediatric Dentistry, and
  7. hBioethics, Case Western Reserve University, Cleveland, Ohio; and
  8. cThe Center for Child Health and Policy, Rainbow Babies and Children's Hospital, Cleveland, Ohio
  • Deceased.


OBJECTIVE: Evolving primary care models require methods to help practices achieve quality standards. This study assessed the effectiveness of a Practice-Tailored Facilitation Intervention for improving delivery of 3 pediatric preventive services.

METHODS: In this cluster-randomized trial, a practice facilitator implemented practice-tailored rapid-cycle feedback/change strategies for improving obesity screening/counseling, lead screening, and dental fluoride varnish application. Thirty practices were randomized to Early or Late Intervention, and outcomes assessed for 16 419 well-child visits. A multidisciplinary team characterized facilitation processes by using comparative case study methods.

RESULTS: Baseline performance was as follows: for Obesity, 3.5% successful performance in Early and 6.3% in Late practices, P = .74; for Lead, 62.2% and 77.8% success, respectively, P = .11; and for Fluoride, <0.1% success for all practices. Four months after randomization, performance rose in Early practices to 82.8% for Obesity, 86.3% for Lead, and 89.1% for Fluoride, all P < .001 for improvement compared with Late practices’ control time. During the full 6-month intervention, care improved versus baseline in all practices: for Obesity, to 86.5% in Early and 88.9% in Late practices; for Lead, to 87.5% in Early and 94.5% in Late practices; and for Fluoride, to 78.9% in Early and 81.9% in Late practices, all P < .001 compared with baseline. Improvements were sustained 2 months after the intervention. Successful facilitation involved multidisciplinary support, rapid-cycle problem-solving feedback, and ongoing relationship-building, allowing the facilitation approach and intensity to be individualized to 3 levels of practice need.

CONCLUSIONS: The Practice-Tailored Facilitation Intervention can lead to substantial, simultaneous, and sustained improvements in 3 domains and holds promise as a broad-based method to advance pediatric preventive care.

  • child
  • quality improvement
  • obesity
  • lead poisoning
  • dental caries
  • Abbreviations:
    CHEC-UPPP — Child Health Excellence Center: a University-Practice-Public Partnership
    CI — confidence interval
    EMR — electronic medical record
    PTFI — Practice-Tailored Facilitation Intervention
  • What’s Known on This Subject:

    Children receive only half of recommended health care; disadvantaged children have higher risk of unmet needs. Practice coaching combined with quality improvement using rapid-cycle feedback has potential to help practices meet quality standards and improve pediatric health care delivery.

    What This Study Adds:

    The Practice-tailored Facilitation Intervention led to large and sustained improvements in preventive service delivery, including substantial numbers of disadvantaged children, and in multiple simultaneous health care domains. Practice-tailored facilitation holds promise as a method to advance pediatric preventive care delivery.

    Preventive services for children are crucially important to foster optimal health and developmental potential1–3; however, children receive only about half of recommended health care.4 Children from economically disadvantaged backgrounds have even higher risk of unmet needs.5–9 Improving the capacity of primary care practices to meet national standards based on evidence-based quality metrics is central to emerging health care models, but significant hurdles exist.10–17 System change is challenging; small group practices, the most common care setting,13,18–20 traditionally lack the systems, resources, and quality improvement experience needed to achieve broad-based, sustainable practice improvements. Practices differ in organization, leadership, and preferences, so 1 approach does not fit all,21,22 and addressing 1 component at a time can be inefficient and counterproductive.23–25 Practice change facilitation implements interventions tailored to the needs of individual practices and, especially when combined with academic detailing and rapid-cycle feedback, has been shown to improve delivery of primary care–based preventive services.26,27 However, experience with pediatric services is more limited,24,26,28–34 and insights into how effective facilitation actually works are also limited.34–36

    We implemented and evaluated a 6-month Practice-Tailored Facilitation Intervention (PTFI), called CHEC-UPPP (Child Health Excellence Center: a University-Practice-Public Partnership), that combines practice coaching with rapid-cycle feedback/change to improve delivery of recommended pediatric preventive services simultaneously in 3 domains of public health importance: obesity detection and counseling, lead screening, and fluoride varnish application to prevent dental decay, targeting diverse practice settings.11,37–41 We also sought to characterize the process by which the facilitation approach can be successfully adjusted to meet diverse practices’ learning styles and levels of need.


    Methods

    A cluster-randomized trial was conducted in which practices were randomized to initial or delayed intervention with the PTFI, and outcomes were measured at the level of the patient. Observational field note and interview data supplemented quantitative data for comparative case studies of the facilitation process across practices.

    Practice Eligibility and Recruitment

    Practices were recruited from 2 practice-based research networks: the Rainbow Research Network and the Research Association of Practices,42–44 pediatric and family medicine practice-based research networks, respectively, supported by the Cleveland Clinical and Translational Science Collaborative and the Case Comprehensive Cancer Center. Locally accessible practices were eligible for participation if they had at least 15% of patients ≤10 years of age and at least 20% of pediatric patients covered by Medicaid insurance, and agreed to provide at least 2 of 3 targeted services and participate in educational meetings and chart reviews. Eligible practices were sequentially approached from November 1, 2010, to July 12, 2011, by letters followed by calls to practice lead physicians, until target practice enrollment was reached.

    The protocol was approved by the institutional review board. Practices received fluoride varnish, and pediatricians were eligible for American Board of Pediatrics Maintenance of Certification credit.

    Educational Sessions and Randomization

    After baseline data collection, practice sites received 2 standardized 30-minute educational sessions, delivered by a coinvestigator (LC, SBM, RS).29,32 Sessions included information on the prevalence and health implications of obesity, lead exposure, and dental decay, and recommendations for preventive services, based on standards established by government and/or professional bodies.11,37–41,45

    After the educational sessions, practices were randomized 1:1 to either Early-Phase or Late-Phase (control) PTFI from February 24, 2011, to August 31, 2011, by using covariate adaptive randomization,46 stratifying on practice size (≤2 or ≥3 clinicians) and percentage of children with Medicaid (20% to 40% or >40%) to balance practice-level covariates. To avoid contamination, practice sites that were administratively linked with shared leadership were randomized to the same intervention group. Because treating each site as an independent cluster may underestimate variance, for sensitivity analysis we repeated analyses by using the 19 administrative groups as the cluster variable in place of the 30 individual practice sites.47,48 Early-Phase practices began the 6-month PTFI immediately after randomization. Late-Phase practices began the PTFI after a 4-month lag (control time) during which program staff had no direct practice contact.
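
    The stratified allocation described above can be illustrated with a minimal minimization routine. This is a hypothetical Python sketch, not the study's allocation code: the practice names and factor levels are invented, and the deterministic tie-break stands in for the random element that covariate adaptive schemes usually add.

```python
from collections import defaultdict

def minimization_assign(practices):
    """Assign each practice to 'Early' or 'Late' by minimization:
    choose the arm that minimizes total imbalance across the two
    stratification factors (practice size, Medicaid percentage).
    Ties go to the currently smaller arm -- a simplification; real
    implementations usually break ties with a weighted coin flip."""
    counts = defaultdict(int)        # (factor, level, arm) -> n assigned
    arm_sizes = {"Early": 0, "Late": 0}
    assignments = {}
    for name, size_level, medicaid_level in practices:
        imbalance = {
            arm: counts[("size", size_level, arm)]
                 + counts[("medicaid", medicaid_level, arm)]
            for arm in ("Early", "Late")
        }
        if imbalance["Early"] != imbalance["Late"]:
            arm = min(imbalance, key=imbalance.get)
        else:
            arm = "Early" if arm_sizes["Early"] <= arm_sizes["Late"] else "Late"
        counts[("size", size_level, arm)] += 1
        counts[("medicaid", medicaid_level, arm)] += 1
        arm_sizes[arm] += 1
        assignments[name] = arm
    return assignments

# Hypothetical practices: (name, size stratum, Medicaid stratum).
practices = [
    ("A", "small", "20-40"), ("B", "large", ">40"),
    ("C", "small", ">40"),   ("D", "large", "20-40"),
    ("E", "small", "20-40"), ("F", "large", ">40"),
]
arms = minimization_assign(practices)
```

    Sequentially assigning the 6 hypothetical practices this way keeps both arms balanced overall and within each stratification factor.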


    Intervention

    The intervention was based on principles of practice-tailored facilitation13,26,30,34 and rapid-cycle change.13,18,24,31,49–52 Practice facilitation recognizes that imposed mandates for change are not likely to be sustained, and that incorporation of evidence-based practice requires tailoring an intervention based on individual practice needs.21,24,26 Facilitators develop ongoing relationships with practices to perform audits, feedback, training, and system redesign.

    A practice facilitator was recruited based on public health, primary care, and coaching experience. She had no previous experience with study practices and was introduced at their first educational session. The facilitator received training based on published literature, provided by a coinvestigator with facilitator training experience (MCR).24,49 Study coordination was performed by a separate individual.

    At the beginning of the 6-month intervention, the practice facilitator conducted 1 to 2 days of observation to understand practice dynamics and establish relationships with staff. The facilitator then initiated the PTFI with each practice with a group meeting, reviewing baseline performance data with clinicians and staff. Guided by the facilitator, each practice set expectations and short-term goals designed to incrementally improve delivery of all 3 preventive services simultaneously, and planned steps to achieve them. Toolkits were provided, including materials useful for meeting goals (eg, BMI wheels, parent handouts, fluoride varnish kits).

    The practice facilitator visited each practice approximately weekly during the 6-month PTFI; mean visit length was 1 hour (interquartile range 40–70 minutes). At each visit, the facilitator delivered rapid-cycle feedback, tailored to specific practice needs. She (1) reviewed a small convenience sample of ∼5 to 10 charts for well-child visits conducted the previous week and documented whether targeted services were performed; (2) plotted each week’s results on “run charts”; and (3) “huddled” briefly with available practice members to review run charts, assess what had worked during the previous week, brainstorm solutions for further improvement, and select new tools/procedures to implement during the coming week. For example, some practices needed training in BMI calculation/interpretation, some rearranged processes/documentation, and most learned fluoride varnish application. Updated run charts were posted in prominent locations, and the facilitator held structured feedback meetings with all practice members to review results of large-scale data collection after 6 full months of PTFI, and at 2 months after the PTFI program ended. The facilitator did not provide other support during the 2-month follow-up period of sustainability assessment.


    Outcome Measures

    Quality measure outcomes were measured at the individual well-child visit level (Table 1). For obesity, the outcome was appropriate screening and counseling at well-child visits for children 2 to <18 years of age, defined based on Healthcare Effectiveness Data and Information Set (HEDIS) and professional organizations’11,37 recommendations to include BMI calculated, documented, and plotted on a growth chart and, for overweight or obese children (BMI ≥85th percentile), documented counseling for nutrition and physical activity. For lead, the outcome was screening at 12 and 24 months of age per Ohio Department of Health guidelines.38,39 For practices using universal lead testing, this was defined as lead blood tests ordered within 3 months of the 12-month well-child visit, and within 3 months of the 24-month visit. For practices that performed selective testing, appropriate screening was defined as documented blood lead tests ordered within 3 months for children within high-risk zip codes, with Medicaid insurance, or with risk factors based on an Ohio Department of Health questionnaire. For fluoride, the primary outcome was varnish application during well visits for children 12 to 35 months of age, based on Ohio Department of Health guidelines and Ohio Medicaid reimbursement policy, unless the chart documented that the child had no teeth, had received varnish within the previous 6 months, or had a parent decline.41,45
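
    As an illustration of how such a visit-level measure can be operationalized, the sketch below encodes the obesity measure's logic in Python. All field names are hypothetical; in the study these elements were abstracted from charts by reviewers, not computed programmatically.

```python
def obesity_measure_met(visit):
    """Visit-level obesity quality measure per the definition above:
    BMI calculated, documented, and plotted for ages 2 to <18 years,
    plus documented nutrition and physical activity counseling when
    the BMI percentile is >= 85.  Returns None when the visit is not
    eligible for the measure.  Field names are illustrative only."""
    if not (2 <= visit["age_years"] < 18):
        return None  # outside the 2 to <18 y eligibility window
    if not (visit["bmi_documented"] and visit["bmi_plotted"]):
        return False  # screening step not completed
    if visit["bmi_percentile"] >= 85:
        # Overweight/obese: counseling in both domains is required.
        return visit["nutrition_counseling"] and visit["activity_counseling"]
    return True

# A hypothetical visit record: screened, overweight, only partly counseled.
example = {"age_years": 6, "bmi_documented": True, "bmi_plotted": True,
           "bmi_percentile": 90, "nutrition_counseling": True,
           "activity_counseling": False}
```

    Separating "not eligible" (None) from "eligible but failed" (False) mirrors how such measures report a denominator as well as a numerator.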

    TABLE 1

    Quality Measures

    Data Collection

    Outcome data were collected by a separate evaluation team from January 19, 2011, to October 25, 2012, by chart reviews, at baseline before randomization and at 2-month intervals after randomization, over a total of 8 and 12 months for Early/Late practices, respectively.

    Practice records were used to identify well-child visits for the appropriate age groups in the 2 months preceding each collection date; charts were reviewed in reverse chronological order until all were analyzed or a maximum of 100 charts were reviewed, whichever occurred first.

    The facilitator kept detailed field notes of practice observations and interactions to guide her intervention and inform the qualitative analysis.

    Sample Size

    We estimated that 30 practices would participate, with 11 to 40 eligible well-child visits per practice per day. Using a 2-sided α of 0.05, assuming a change in the Late-Phase group of up to 10%, and an intraclass correlation coefficient of 0.05,31,48 we would have 80% power to detect a 10% difference in performance rates between intervention groups with 52 charts screened per practice per outcome per data collection period.
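
    A conventional way to reproduce this kind of calculation is to size a two-proportion comparison and then inflate it by the design effect for clustering, 1 + (m − 1) × ICC. The Python sketch below is a generic illustration with hypothetical proportions; it does not reproduce the study's exact assumptions or its figure of 52 charts per practice.

```python
from math import ceil
from statistics import NormalDist

def charts_needed(p1, p2, icc, cluster_size, alpha=0.05, power=0.80):
    """Per-arm sample size to detect p1 vs p2 with a two-sided test,
    inflated by the design effect 1 + (m - 1) * ICC for clustering.
    Normal-approximation formula; proportions here are hypothetical."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96
    z_b = NormalDist().inv_cdf(power)           # ~0.84
    n = ((z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))) / (p1 - p2) ** 2
    deff = 1 + (cluster_size - 1) * icc
    return ceil(n * deff)
```

    For example, detecting 10% versus 20% with 52 observations per cluster requires `charts_needed(0.1, 0.2, 0.05, 52)` observations per arm, roughly 3.5 times the unclustered requirement at ICC = 0.05.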

    Statistical Analyses

    Analyses used Stata SE 12.1 (Stata Corp, College Station, TX), setting the type I error at 0.05, without correction for multiple comparisons. For univariate analysis, Student’s t test was used for continuous variables and χ2 or Fisher’s exact test for categorical variables.53

    The unit of analysis was the well-child visit for each of the 3 study domains. The following a priori comparisons were performed for each preventive service outcome. (1) The relative effect of the PTFI versus control on targeted service delivery was assessed by comparing the change in the proportion of eligible children receiving each service at well-child visits during the first 4 months after randomization, comparing Early-Phase practices (PTFI time) with Late-Phase practices (control time). (2) The full effect of the 6-month PTFI was assessed for all practices by comparing service delivery after 6 months of the PTFI versus at baseline. (3) To assess whether the PTFI had a consistent effect throughout the study, improvement in service delivery during the 6-month PTFI was compared for Early versus Late practices. (4) Short-term sustainability of the PTFI effect was assessed by comparing service delivery after 6 months of the PTFI to delivery 2 months after the end of the PTFI (follow-up time) for all practices.

    For multivariable analysis, using Stata’s xtlogit function with mixed effects polynomial logistic regression, service delivery for each outcome was modeled on intervention group over time, adjusting for clustering by practice and assessing model fit by using Akaike’s Information Criterion.47,48,54–56 Covariates were included if they were significantly different between Early- and Late-Phase groups and if they changed the odds ratio point estimate by ≥10%. For sensitivity analysis, all analyses were repeated by using the 19 administrative practice groups in place of the 30 practices as the cluster unit.
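
    The fixed-effects core of such a model, a logistic regression with linear and quadratic time terms, can be sketched in pure Python. This is an illustration only: the study's analysis used Stata's xtlogit and additionally included a practice-level random intercept, which this sketch omits, and the synthetic data and coefficients are invented.

```python
import math
import random

def fit_logistic(rows, lr=0.5, iters=2000):
    """Fit P(y=1) = sigmoid(b0 + b1*t + b2*t^2) by gradient ascent on
    the log-likelihood.  Sketch of the fixed-effects part only."""
    b = [0.0, 0.0, 0.0]
    n = len(rows)
    for _ in range(iters):
        grad = [0.0, 0.0, 0.0]
        for t, y in rows:
            x = (1.0, t, t * t)
            p = 1 / (1 + math.exp(-sum(bi * xi for bi, xi in zip(b, x))))
            for j in range(3):
                grad[j] += (y - p) * x[j]
        b = [bi + lr * g / n for bi, g in zip(b, grad)]
    return b

def predict(b, t):
    """Predicted probability of successful service delivery at time t."""
    return 1 / (1 + math.exp(-(b[0] + b[1] * t + b[2] * t * t)))

# Synthetic visit-level data: success probability rises over the
# intervention period (t scaled to [0, 1]); coefficients are invented.
random.seed(0)
rows = [(t, 1 if random.random() < 1 / (1 + math.exp(-(-2 + 4 * t))) else 0)
        for t in [i / 10 for i in range(11)] for _ in range(100)]
b = fit_logistic(rows)
```

    Fitted this way, the model recovers the rising trajectory: the predicted success probability at the end of the intervention is far above the baseline prediction.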

    Comparative Case Study Analyses

    Using data sources shown in Supplemental Appendix Table A1, qualitative analyses were conducted by a multidisciplinary study team (facilitator, study coordinator, data managers) with expertise in nursing, pediatrics, public health, epidemiology, sociology, and psychology. An outside analyst with no previous experience with the study provided external perspective and challenged the team to make their findings explicit. Team members with minimal involvement in the initial analysis served as auditors who challenged and asked for corroborating data for themes.24 This involvement of multiple data sources, disciplinary perspectives, and analytic frames represents methodological triangulation, a source of rigor and trustworthiness in qualitative analysis.57

    The facilitator used multiple data sources to draft a 1-page descriptive case study of each practice’s intervention experience, including behavioral dynamics, communication and leadership styles, work ethics, barriers to change, and her perception of what motivated the practice to reach outcome goals. Each team member reviewed the draft and offered impressions of each practice. Through this iterative process, consensus was reached on emergent key descriptive traits for each practice.

    The outside analyst created a separate list of descriptive traits and behaviors, noting how the facilitator tailored each practice’s intervention, then met with the team to compare lists of practice traits, habits, and behaviors. The analyst's independent description ultimately matched the study team’s portrayal, helping to explain the intensity of the facilitator’s tailoring process. An evolving definition of facilitation intensity was created to include other, not easily measured components (eg, time spent by the facilitator/study staff outside the practice responding to requests, amount of education/reminders needed, degree of tailoring needed).

    Cross-case analyses involved study team members individually identifying themes in practice descriptive data related to the facilitation process and outcome, by using a combination of editing35 and immersion/crystallization approaches.52 Emergent themes were discussed, challenged, and refined during multiple meetings, and cross-cutting themes were identified.

    We assessed whether there was a relationship between the level of facilitation intensity needed and practices’ improvement over time, by using the mixed effects logistic regression model for each service outcome.


    Results

    Participant Flow

    Practice recruitment and flow through the study are shown in Fig 1. Forty practices were sent invitations; 5 practices could not be reached. The remaining 35 practices were screened further for eligibility; 2 were not eligible and 2 declined to schedule education meetings. The remaining 31 practices were randomized, 16 to Early-Phase PTFI and 15 to Late PTFI. One Late-Phase practice withdrew before PTFI data collection. Data were collected for 16 419 well-child visits.

    FIGURE 1

    Flow diagram. aMore visits were included for Late-Phase practices because they had 4 months of control time before starting the intervention. FTE, full-time equivalent.

    Baseline Data

    Table 2 shows baseline data comparing Early- with Late-Phase practices; randomization effectively achieved balance between the intervention groups. There were no differences in number of clinicians or staff, clinician experience or gender, percentage of Medicaid visits, practice type or location, or use of electronic medical records (EMRs). Half of the practices included at least 40% of children covered by Medicaid. There were also no baseline differences between Early- and Late-Phase practices in the proportion of children receiving appropriate obesity screening/counseling: 3.5% (95% confidence interval [CI] 2.2%–5.8%) for Early practices versus 6.3% (3.9%–9.9%) for Late practices; lead screening: 62.2% (47.8%–74.7%) for Early practices versus 77.8% (64.7%–87.1%) for Late practices; or fluoride varnish application (only 1 Late-Phase practice applied fluoride varnish at baseline).

    TABLE 2

    Practice Characteristics: Comparing Early- and Late-Phase Practices


    Table 3 and Figs 2, 3, and 4 show the changes in the delivery of the 3 preventive services, adjusted for clustering by practice, over the course of multiple time points during the study. There were significant and simultaneous improvements in all 3 services (obesity, lead, fluoride) comparing Early-Phase PTFI time relative to Late-Phase control time at 4 months, and similar large improvements comparing all 3 outcomes at the end of the 6-month PTFI time versus baseline for all practices. Improvement was more dramatic for the obesity and fluoride outcomes, which had very low baseline rates, than for lead screening.

    TABLE 3

    Delivery of Targeted Services: Adjusted Results From the Mixed Effects Polynomial Logistic Regression Analyses

    FIGURE 2

    Obesity: detection and counseling performed.

    FIGURE 3

    Lead: appropriate screening performed.

    FIGURE 4

    Fluoride: varnish applied.

    Obesity Screening and Counseling

    During the first 4 months after randomization, obesity screening and counseling fell initially, then rose slightly in Late-Phase practices (control time) to 12.2% successful performance (95% CI 8.2%–17.8%), but rose substantially in Early-Phase practices (PTFI time) to 82.8% successful performance (76.1%–87.9%), P < .001, comparing the change for Early- versus Late-Phase practices (Table 3, Fig 2). The full 6-month PTFI was associated with large improvements in obesity screening/counseling in all practices; obesity screening/counseling rose to 86.5% (80.9%–90.7%) in Early-Phase and 88.9% (83.7%–92.5%) in Late-Phase practices after 6 months of intervention, both P < .001 compared with baseline. The rate of improvement, in percent change over the 6-month PTFI, was the same in Early- and Late-Phase practices (P = .14). For all practices, the improvement was sustained during the 2 months after the end of the PTFI (follow-up time); P = .663 and P = .192 for Early- and Late-Phase practices, respectively.

    Lead Screening at 12 and 24 Months of Age

    During the first 4 months after randomization, lead screening decreased in Late practices (control time) to 70.9% successful performance (56.8%–81.9%), but rose substantially in Early-Phase practices (PTFI time) to 86.3% successful performance (77.4%–92.0%), P < .001, comparing the change for Early versus Late practices (Table 3, Fig 3). The full 6-month PTFI was associated with improvement in lead screening in all practices; lead screening rose to 87.5% (79.2%–92.7%) in Early-Phase practices and to 94.5% (89.7%–97.1%) in Late-Phase practices, both P < .001 compared with baseline. The rate of improvement of percent change over the 6-month PTFI was the same in Early- and Late-Phase practices (P = .62). For all practices, the improvement was sustained during the 2 months after the end of the PTFI, with Late-Phase practices showing further improvement during this time.

    Fluoride Varnish Application

    During the first 4 months after randomization, fluoride varnish application decreased very slightly in Late-Phase practices (control time) (P = .93) but rose substantially in Early-Phase practices (PTFI time) to 89.1% successful performance (82.8%–93.3%), P < .001 (Table 3, Fig 4). The full 6-month PTFI was associated with a large improvement in fluoride varnish application in all practices; application rose to 78.9% (68.8%–86.4%) in Early-Phase and to 81.9% (72.9%–88.3%) in Late-Phase practices by 6 months, both P < .001 compared with baseline. The rate of percent improvement over the 6-month PTFI was the same in Early and Late practices (P = .65). Late-Phase practices sustained this improvement during the 2 months after the end of the PTFI (P = .57), whereas Early-Phase practices showed further improvement (P = .01).

    Specific practice characteristics were not significantly associated with differences in service delivery. Results were unchanged in analyses using the 19 administrative practice groups in place of the 30 practices as the cluster unit.

    Tailoring the Facilitation Intervention

    A key foundation for practice change was ongoing relationship-building with practices, coupled with the facilitator’s sensitive tailoring of the intervention to each practice’s level of need (Fig 5). Use of available resources, an iterative learning process, and practices’ growing motivation and trust, fueled by early successes, were critical stepping stones to successful practice change.

    FIGURE 5

    Building blocks for reaching high success rates with practice change.

    Qualitative analyses suggested 3 main practice profiles, reflecting low, medium, or high levels of need and corresponding to the intensity of facilitation required to foster change. Examples across this spectrum are included in the Supplemental Appendix. It was not possible to determine the facilitation intensity required by considering 1 or 2 factors alone (Fig 6 A and B). The combined effects of different practice characteristics and barriers needed to be considered together, along with the facilitator’s growing familiarity with the practice over time, to tailor the intervention intensity.

    FIGURE 6

    A, Practice characteristics by facilitation intensity. B, Practice decision style by facilitation intensity.

    For example, one might expect a large busy practice to require intensive intervention, but other characteristics may offset this (eg, a strong leader or practice-improvement process well-established at baseline). Likewise, a small, less-busy practice may require more intensive intervention under certain circumstances (eg, undergoing transition to an EMR).

    There were performance differences between practices needing “low” versus “medium” versus “high” facilitation intensity for all 3 outcomes, with a “dose effect” throughout the study period; “medium-intensity” practices performed better than “low-intensity” practices, and “high-intensity” practices had the best performance, at baseline, after the full 6-month intervention, and after the sustainability period, 2 months after intervention end (Supplemental Appendix Table A2).

    The intervention process involved repeated trial and error, helping practices brainstorm and pilot process changes. Although the educational and formal feedback sessions were designed for practice meetings, many practices found large meetings disruptive; between larger meetings they preferred brief group “huddles” and one-on-one facilitator interactions to review weekly run charts and fine-tune changes. Meeting with providers independently was sometimes necessary; this could involve finding personal motivational hooks or instrumental solutions, and/or troubleshooting roles and processes. Often this involved demonstrating lags in peer-comparison performance, and/or reframing goals.

    Preventive service documentation needed to be thorough and explicit, while also nonburdensome and tailored to practices’ needs, preferences, and sometimes evolving medical record systems. Certain services, such as behavioral counseling, may be underrepresented in the medical record; however, even if part of the measured improvement reflects increased documentation, we believe this represents a positive effect of the intervention, because documentation is an important aspect of service delivery.


    Discussion

    The CHEC-UPPP randomized controlled trial and multimethod comparative case study process assessment evaluated the effectiveness of practice facilitation targeted to practices’ specific needs, including sensitive, supportive feedback, problem solving, and rapid-cycle change. Several previous studies evaluated practice coaching to improve pediatric preventive service delivery in community practices, including rapid-cycle feedback,31,58–65 academic detailing, and learning collaboratives; some targeted single services and used a higher facilitator-to-practice ratio.31,66,67 Previous studies have not described practice-specific characteristics associated with improvement.34,61,65

    Our PTFI program led to large improvements in all 3 services: obesity detection/counseling, lead screening, and fluoride application. Most improvements were broad-based with no difference in improvement across practices with different characteristics. Addressing multiple improvements simultaneously using rapid-cycle feedback can efficiently provide an overall higher treatment “dose.”

    For Late-Phase control practices, all 3 services showed a slight decrease from baseline to the 2-month time point; although statistically significant, these decreases were of much smaller clinical significance than the same practices’ subsequent improvements, and than Early-Phase practices’ improvements, during intervention time.

    The finding that facilitation intensity needs were associated with practice performance was not surprising, as practices were characterized retrospectively after completing the intervention. Future research can explore whether practices’ needs can be predicted; services could potentially be further targeted to practices with higher needs.

    The key component of our facilitation process was the longitudinal evolving relationship between the facilitator and the practices and its interaction with (1) a diverse team with complementary skills; (2) repeated cycles of outcome assessment, feedback, and problem solving; and (3) a sensitive facilitator who could gain rapport with diverse members of each practice’s culture, judge their needs, and tailor the nature and intensity of the intervention accordingly.

    This program’s findings are consistent with Solberg’s68 framework for practice improvement, incorporating both “hard” systems changes and “soft” changes in organizational culture, and with Bodenheimer et al’s69 findings that the most important predictors of care improvements are strong leadership and organizational structures that value quality. Some practices had competing priorities that influenced their “leverage points” and relative “readiness-to-change,” but all eventually achieved improvement.

    By first engaging medical and administrative leaders, and then reaching out to all practice members, we recognized that practices function like complex adaptive systems,70–74 while also establishing that high-quality care delivery depends on each individual’s contributions.68,75–77 The rapid improvements by most practices are evidence that the intervention had great salience to providers and staff, that they placed high value on delivering quality care, and that they were ready to take advantage of the opportunity to change, using the facilitator as a catalyst. Although classic practice-based quality improvement depends on formal group meetings to perform plan-do-study-act cycles, we found that the facilitator’s flexibility in accommodating many practices’ preference for brief informal “huddles” and one-on-one interactions was key for maintaining buy-in and motivation; clinicians and staff learned to take responsibility both individually and as a group in plan-do-study-act–led improvement.51,68 Neither our quantitative nor our qualitative analyses could discern individual practice characteristics that predicted success, probably because the individualized facilitation approach helped to overcome individual practice barriers.

    This article builds on previous work to show how effective facilitation is supported by the facilitator’s evolving relationship-based understanding of each practice’s culture, linked to scientific evidence, goal setting, problem-solving, flexibility, feedback, a diverse supportive team, and space for learning and reflection, creating a sometimes messy and iterative process of practice improvement.24,49,52,78,79

    This study’s strengths include our randomized design using a control group with a lagged intervention, multimethod comparative case study process assessment, research team with previous facilitation experience, number and diversity of practices, simultaneous inclusion of 3 unrelated outcomes, strong longitudinal practice relationships, and large improvements in all outcomes across all practices. 43 Facilitation responsibilities were handled separately from research data collection, supporting the potential for dissemination beyond the research setting.

    Our study has several limitations. First, the PTFI may not be equally effective with different service targets, in different settings, or with facilitators who have different training and skills. The history of strong collaborative relationships between our academic medical center and regional practices may have contributed to the program’s success. Second, we did not assess sustainability beyond 2 months after the PTFI ended. It may be unrealistic to expect that practices can sustain successes indefinitely and initiate new improvements on their own. Instead, practice facilitation may work best as part of a redesigned primary care model, with biweekly or monthly facilitator visits to reinforce earlier successes and introduce new goals. Improvement was apparent in all practices within 2 to 4 months, suggesting that shorter intervention cycles could enhance efficiency. This model could provide long-term sustainability, perhaps supported by future pay-for-performance incentives. Third, it is difficult to perform cost-effectiveness analyses for preventive services 30; although we can estimate the cost of PTFI services, it is harder to precisely define future cost savings due to improved preventive care. 13,14

    Future research can explore wider dissemination, addressing broader use in other contexts and longer-term sustainability. Using this study as a template, we are currently scaling up the PTFI program to include additional practices and outcomes.

    In conclusion, these results suggest that the PTFI holds promise as a method to advance meaningful, broad-based, generalizable, and sustainable improvements in pediatric preventive health care delivery.


    The authors dedicate this manuscript to the memory of Dr Leona Cuttler; we are grateful for her inspiration and leadership.

    We are indebted to the participating practices and to Lauren Birmingham, MA, for her work with data management.


      • Accepted March 14, 2014.
    • Address correspondence to Sharon Meropol, MD, PhD, Rainbow Babies and Children’s Hospital, 11100 Euclid Ave, Mailstop 6019, Cleveland, OH 44106. E-mail: sharon.meropol{at}
    • Dr Meropol helped to design and supervise the intervention, evaluation, and data collection, participated in the intervention, planned and carried out the analysis, drafted the initial manuscript, reviewed and revised the manuscript, and approved the final manuscript as submitted; Dr Schiltz assisted with the evaluation, analysis plan, and analysis, and reviewed and revised the manuscript; Dr Sattar assisted with and supervised the analysis, and reviewed and revised the manuscript, and approved the final manuscript as submitted; Dr Stange advised and assisted in study design, designing and implementing the intervention and evaluation, reviewed the analysis, reviewed and revised the manuscript, and approved the final manuscript as submitted; Ms Nevar assisted with study design and supervision, assisted with design of the evaluation and intervention, reviewed the analysis, and reviewed and approved the final manuscript as submitted; Ms Davey assisted with data collection instrument and design of the intervention and qualitative analysis, participated in data collection and evaluation, carried out the intervention, assisted with the qualitative analysis, and reviewed and approved the final manuscript as submitted; Dr Ferretti helped to conceptualize and design the study, reviewed the results, and reviewed and approved the final manuscript as submitted; Ms Howell assisted with data collection instrument design, assisted with carrying out the intervention, participated in the intervention and evaluation, and reviewed and approved the final manuscript as submitted; Dr Strosaker assisted with study, intervention and evaluation design, participated in the intervention, assisted with study supervision, and reviewed and approved the final manuscript as submitted; Ms Vavrek assisted with design of the data collection instruments and qualitative analysis, data collection, intervention, and evaluation, coordinated and supervised the intervention and data collection, 
assisted with data review and qualitative analysis, and reviewed and approved the final manuscript as submitted; Ms Bader assisted with critical review of the qualitative data from the intervention, and reviewed and approved the final manuscript as submitted; Ms Ruhe advised and assisted in study design, advised and assisted with intervention implementation and evaluation, advised on and reviewed the qualitative analysis, reviewed and revised the manuscript, and reviewed and approved the final manuscript as submitted; and Dr Cuttler conceptualized, designed, and supervised the study, designed the intervention and data collection, helped to design, supervised, and reviewed the analysis, and reviewed and revised the manuscript.

    • This trial has been registered at ClinicalTrials.gov (identifier NCT01739166).

    • FINANCIAL DISCLOSURE: The authors have indicated they have no financial relationships relevant to this article to disclose.

    • FUNDING: This work was supported by a grant from the Medicaid Technical Assistance and Policy Program from the Ohio Department of Job and Family Services (Dr Cuttler) and by The Center for Child Health and Policy at Rainbow Babies and Children’s Hospital. A portion of Dr Stange's time is additionally supported by a Clinical Research Professorship from the American Cancer Society. This publication was made possible by the Clinical and Translational Science Collaborative of Cleveland, UL1TR000439 from the National Center for Advancing Translational Sciences component of the National Institutes of Health (NIH), and NIH Roadmap for Medical Research. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the NIH. Funded by the National Institutes of Health (NIH).

    • POTENTIAL CONFLICT OF INTEREST: The authors have indicated they have no potential conflicts of interest to disclose.