Improving Recruitment and Retention Rates in a Randomized Controlled Trial
High recruitment and retention rates in randomized controlled trials are essential to ensure validity and broad generalizability. We used quality improvement methods, including run charts and intervention cycles, to achieve and sustain high recruitment and retention rates during the Hospital-To-Home Outcomes randomized controlled trial. This study is examining the effects of a single nurse-led home health care visit after discharge for an acute pediatric hospitalization. A total of 1500 participants were enrolled during the 15-month study period. For study recruitment, we assessed the percentage of patients who enrolled in the study among those randomly selected to approach (goal ≥50%) and the percentage of patients who refused to enroll among those randomly selected to approach (goal ≤30%). For intervention completion, we examined the percentage of patients who completed the home visit intervention among those randomized to receive the intervention (goal ≥95%). Follow-up rates were tracked as the percentage of patients who completed the 14-day follow-up telephone survey (goal ≥95%). The study goals for 2 of the 4 metrics were met and sustained, with statistically significant improvements over time in 3 metrics. The median enrollment rate increased from 50% to 59%, and the median refusal rate decreased from 37% to 32%. The median intervention completion rate remained unchanged at 88%. The 14-day follow-up completion median rate increased from 94% to 96%. These results indicate that quality improvement methods can be used within the scope of a large research study to achieve and sustain high recruitment and retention rates.
- CCHMC — Cincinnati Children’s Hospital Medical Center
- H2O — Hospital-To-Home Outcomes
- RCT — randomized controlled trial
Randomized controlled trials (RCTs) are considered the gold standard of study designs.1 Proper, effective randomization with adequate sample size ensures validity by balancing both measured and unmeasured confounders in the intervention and control groups. High-quality RCTs include appropriate randomization, blinding, and a full accounting of all patients (ie, follow-up for outcome assessment).2 For the intervention arm of an RCT, high intervention completion rates are important to maximize the ability to assess the effect of the intervention.
Previous studies have described methods to ensure adherence to the intervention, including run-in periods or choosing not to enroll subjects whose previous clinical adherence has been low.3 Assessment of outcomes in 100% of enrollees is seen as the only acceptable rate,4 although this rate is not realistic in studies in which the outcomes are determined after a follow-up period. Furthermore, many RCTs have less-than-desirable enrollment rates, raising issues with generalizability.5 Although randomization can help protect validity in studies with low enrollment rates, high enrollment rates in RCTs enhance the generalizability of the intervention and the study results.
The H2O (Hospital-To-Home Outcomes) study was designed to assess the effectiveness of a one-time, nurse-led, transitional home visit in an acute care pediatric population after hospital discharge.6 The effectiveness of the nurse visit was studied through a single-center RCT. Quality improvement methods were used to maximize study enrollment and intervention completion rates, as well as to obtain follow-up outcome measures on all patients. The objective of the present article was to describe the methods used during the RCT to achieve and sustain high rates of study enrollment, intervention completion, and completion of a 14-day follow-up telephone call used to collect outcome measures.
Cincinnati Children’s Hospital Medical Center (CCHMC) is a 629-bed, urban, academic pediatric hospital and predominant pediatric inpatient facility with an 8-county service area. There are ∼10 000 annual admissions across the inpatient service lines included in the H2O study (General Hospital Medicine [which included patients admitted to the general hospital medicine service, community pediatrics, and adolescent medicine], as well as Neurology, Neurosurgery, and Complex Care Hospital Medicine). Patients were eligible for the study if they were <18 years old, resided with an English-speaking caregiver, lived in the home nursing service area (a 4-county subset of CCHMC’s primary service area), and were not eligible for traditional home nursing visits (eg, patients who receive traditional nurse visits for medication infusions).6 Patients whose caregivers had twice previously refused enrollment in the study were excluded from subsequent attempts to enroll. Our study was approved by the CCHMC institutional review board.
Research assistants recruited patients most weekdays (with the exception of holidays) and ∼3 weekend days per month. Patients were approached for recruitment when they had a clinical prediction of discharge7 within the next 48 hours. This time frame was chosen to ensure that patients received the home visit, if randomized to that group, as close to discharge as possible. To ensure patients had equal opportunity to be approached for the study, each morning eligible patients were randomly selected to be approached for enrollment. The number of patients randomly selected depended on research assistant staffing and ranged from 8 to 16 patients per day. Parents of eligible patients were approached for consent (and assent of the patient if the patient age was ≥8 years).
Those who consented completed a baseline in-hospital survey facilitated by a research assistant. Patients were subsequently randomized to receive either standard of care or a 1-time nurse home visit. For patients randomized to the intervention, a home care nurse would meet the family in the hospital to schedule the visit. In a minority of instances in which the nurse was unable to meet with the family before discharge, the nurses called the family to schedule the home visit. According to the study protocol, the home visit was to be completed within 96 hours of hospital discharge. Parents completed the 14-day follow-up telephone survey to report on postdischarge outcomes. We considered the 14-day follow-up call to be successful if it occurred within the allotted time frame beginning at day 14 after hospital discharge and extending through day 23 (to allow for 7 business days to complete the call). Additional information regarding the intervention, procedures, and measures is detailed in the published protocol.6
Before the trial, the study team completed an intervention development phase to test and improve the nurse home visit.6 The goal of this phase was to align the intervention with parental needs as well as to optimize feasibility. The pretrial improvement phase also allowed the study team to develop and refine methods for reaching families for feedback.
During the trial, the multidisciplinary study team met weekly to review progress toward goals and to develop strategies to improve recruitment, intervention completion, and retention goal rates. This team included the 2 physician principal investigators, a physician co-investigator, the project manager, and 2 research assistants. The larger study team, which included physicians, home care nurses, and statistical team members, met biweekly to discuss recruitment progress and challenges. A parent of a previously hospitalized child, who was a member of the study team, also provided pivotal feedback. Our parent participant was referred to the study team by 1 of the co-investigators; the lead research assistant reached out to the parent to determine if she was interested in joining the study team. The parent participant had 2 children who had previously been hospitalized; 1 of the children had a 1-time hospitalization, and the other had more frequent hospitalizations with our Neurosciences team. Our parent participant attended the biweekly study team meetings, which provided orientation to the study, and also provided ad hoc feedback to the research assistants on an as-needed basis throughout the trial. This feedback was critical when the research assistants were initially practicing the consenting process but was also helpful in refining scheduling and contact methods for the home visit and follow-up telephone call.8
Four main metrics were tracked: 2 recruitment metrics, an intervention completion metric, and a follow-up metric. For study recruitment, we assessed the percentage of patients who enrolled from those randomly selected to approach (goal ≥50%) and the percentage of patients who refused from those randomly selected to approach (goal ≤30%). For intervention completion, we examined the percentage of patients who completed a home visit among those randomized to receive a home visit (goal ≥95%). Follow-up rates were tracked as the percentage of patients who completed the 14-day follow-up telephone survey (goal ≥95%) within the specified window.
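Each of the 4 metrics reduces to a simple proportion checked against a directional goal. As a rough illustration (this is not analysis code from the study, and the counts in the usage lines are hypothetical), the check can be expressed as:

```python
def goal_met(numerator, denominator, goal, at_most=False):
    """Return (rate, met) for one study metric.

    at_most=False: goal is a floor (rate must be >= goal), as for
    enrollment, intervention completion, and follow-up.
    at_most=True: goal is a ceiling (rate must be <= goal), as for refusals.
    """
    rate = numerator / denominator
    met = rate <= goal if at_most else rate >= goal
    return rate, met

# Hypothetical weekly counts; the goals are those stated in the article.
print(goal_met(8, 14, 0.50))        # enrolled of randomly selected, goal >= 50%
print(goal_met(4, 14, 0.30, True))  # refused of randomly selected, goal <= 30%
print(goal_met(6, 7, 0.95))         # home visits completed, goal >= 95%
print(goal_met(13, 14, 0.95))       # 14-day follow-up calls, goal >= 95%
```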
Quality Improvement Methods
The Model for Improvement,9 including rapid-cycle testing of interventions, was used to improve outcomes in real-time. Outcomes were tracked on weekly run charts and reviewed at weekly team meetings to assess progress toward goals. Run charts display data over time to help detect special causes of variation, and statistically significant shifts in the median occur with ≥7 consecutive points above or below the established median.10 The study team also reviewed an enrollment algorithm weekly that detailed the patient population, including reasons for ineligibility, reasons patients were not approached, and reasons patients did not enroll. Intervention and control data were examined in aggregate to preserve blinding of the investigators.
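The shift rule described above (≥7 consecutive points on one side of the established median) can be sketched in a few lines. This is an illustrative sketch, not software used in the study; skipping points that fall exactly on the median follows common run-chart conventions:

```python
def median_shift(points, median, run_length=7):
    """Detect a run-chart shift: >= run_length consecutive points all
    above or all below the established median. Points exactly on the
    median neither extend nor break a run."""
    run = 0
    side = 0  # +1 = above median, -1 = below median, 0 = no run yet
    for p in points:
        if p == median:
            continue  # on-median points are skipped
        s = 1 if p > median else -1
        run = run + 1 if s == side else 1
        side = s
        if run >= run_length:
            return True
    return False
```

For example, 7 straight weekly rates above the baseline median signal a shift, whereas rates that alternate around the median do not.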
During the weekly review of outcome measures, the team discussed failures as well as possible interventions that could mitigate them. Failures discussed included: reasons patients did not enroll (eg, refused or unable to complete consent process with parent or guardian), reasons that the home visits were not completed on patients randomized to receive them, and reasons the 14-day follow-up telephone calls were not completed. If a suggested intervention seemed applicable and feasible, the intervention was tested during the course of the following week, and outcomes of the test were discussed at the next weekly meeting. If nurses or research assistants believed that the intervention improved interactions with families, the intervention was adopted. Successful interventions improved the study metrics as shown by a median shift on the run chart. Larger intervention decisions were brought to the whole team to be discussed during the biweekly meetings. Successful interventions were adopted into practice by the research team.
Over the 15-month study period, there were 3934 eligible patients, of whom 2777 (70.5%) were randomly selected to approach. We enrolled 1500 patients (54% of those randomly selected). Of those eligible but not approached, 42% were eligible on days when no research assistants were present to approach them (weekends and holidays), and 36% were not randomly selected to approach (Fig 1). The main reason patients were not randomly selected was that only 1 research assistant recruited on weekends; the rest were missed on days of high inpatient census. The remainder were discharged 36 to 48 hours before their anticipated discharge time and thus were not eligible for random selection. Of those randomly selected but not enrolled, 34% refused. Primary refusal reasons included not wanting the home visit (either because parents preferred not to have a stranger in their home or were too busy to schedule one), parents not being interested in research, or parents believing there was too much going on to complete a research study in the hospital. The remainder were discharged before they were approached, could not consent because no legal guardian was present in the hospital, or could not complete the consent process with the research assistant during the patient’s stay. Fourteen patients completed consent and enrollment but were not randomized because they were determined to be ineligible before randomization. Patients who were randomized to the study were similar to patients who did not enroll in the study and those who refused (Supplemental Table 1).
Improving Study Enrollment/Decreasing Refusals
For the first 7 months of the study, the median enrollment rate was 50% (Fig 2A), and the median refusal rate was 37% (Fig 2B). During this time, we had 2 patient recruiters depart and 2 replacement research assistants hired and trained. At the weekly meetings, the research assistants and co-investigators discussed recruitment challenges and successes and shared ideas to improve recruitment. Because the primary refusal reason was that parents did not desire a home visit, 1 research assistant developed an intervention to test a new consenting approach. In the new approach, the research assistant highlighted the potential benefits of a home visit early in the consent process. After the test was qualitatively successful, the other research assistants implemented the change in their consenting process. After that change, the median enrollment rate increased to 56%, reaching our a priori goal, and the median refusal rate decreased to 32%; both outcomes met the criteria for statistical significance. These rates were maintained through the end of the study period.
Improving Intervention Adherence
The team monitored the intervention completion rate through review of the weekly run charts during the early months of the study but did not regularly convene with the nursing team responsible for the scheduling and completion of the home visits. Because the completion rate remained below goal for 4 months, a second weekly telephone call was added to include the project manager, the 2 patient recruiting staff members, a co-investigator, 2 nurse managers, and 5 nurses responsible for the scheduling and tracking of the home visits. After these calls began, a number of interventions were tested to improve intervention adherence. The tested interventions included the following: (1) real-time notification of visit failures to the research assistant who enrolled the patient for rapid learning; (2) standard discussion points about the visit for the nurse to communicate effectively with families; (3) texting the families with their visit time; and (4) asking families to add the date and time of the visit into their telephone calendar. None of those interventions had an impact on the completion rate. The median rate of completed home visits remained unchanged at 88% throughout the course of the study (Fig 3). This median represents an average of 1.5 incomplete visits per week. Some of these incomplete visits (20 [20.8%] of 96) occurred because the patient was determined to be ineligible for the home visit after recruitment and randomization (primarily due to referral for traditional home visits instead). Most incomplete visits were due to patients canceling and not rescheduling or not being home for the home visit.
Decreasing Loss to Follow-up
The rate for the 14-day follow-up telephone call completion increased from a median of 94% at the beginning of the study to a median of 96% by the end of the study period (Fig 4), with many weeks having 100% of calls completed. Research assistant turnover of those completing the telephone call also occurred. Early on, at the suggestion of the parent on our study team, we began texting families the day they were eligible for the follow-up telephone survey to confirm the best time for them to complete the call. This intervention not only helped increase the rate of call completion, but it also allowed for the best possible utilization of research assistant time because they knew the family would be available when they called. Parents were sent text messages and called up to 7 times. These contacts occurred during daytime, evenings, and weekends. If early attempts were unsuccessful, a research assistant would e-mail, send a letter in the mail, or both, to attempt to reach families in other ways. All of these strategies contributed to the high call completion rate.
Maximizing recruitment and retention rates helps to increase the generalizability of RCT findings. Through the use of quality improvement methods during the enrollment and retention periods of an RCT, we achieved our a priori goals on 2 of the 4 core study process metrics. Furthermore, with these methods, over time, we increased the median percentage of patients enrolled in the study, decreased the median percentage of patients who refused, and increased the percentage of families who completed 14-day follow-up telephone calls. Although we did not meet our goal, we were able to maintain intervention completion rates at 88% despite staffing turnover and other challenges. Weekly data review, including the identification of failures by a multidisciplinary team of key stakeholders, allowed for the discussion and implementation of tests of change in real time.
The inclusion of key stakeholders in failure mitigation and intervention planning has been well described in clinical quality improvement work.11–14 By utilizing feedback from frontline staff, including the research assistants and scheduling nurses, we were able to test interventions that were seamlessly integrated into the workflow. The parent on our research team suggested text messaging as an effective way to contact families; this strategy improved the follow-up telephone call completion rate and was also used in the scheduling of the home visits. The importance of including the input of patients and their families in research studies has been increasingly documented, both as longitudinal team members15–18 and as sources of short-term, focused feedback on study processes.8
Although we were able to improve enrollment, refusal, and follow-up call completion rates, our interventions did not improve the home visit completion rate, and we fell just short of our a priori goal. We tried to optimize the enrollment processes to reduce the number of ineligible patients randomized (ie, those receiving traditional home nurse visits) and attempted to affect family willingness to complete the visits. Neither effort produced a positive change in the median visit completion rate. We believe that our efforts helped to sustain the rate, however. Previous studies that included nurse home visits found similar, or lower, completion rates,19 highlighting that for many families, even though they had previously expressed interest in completing a home visit, there are still barriers to having a nurse visit the home. Although our overall incompletion rate was small, it is possible that this group of patients is a particularly vulnerable population. Thus, more research should focus on determining interventions in this population that may be more acceptable to families who are uncomfortable with home visits (eg, nurse-led telephone calls, telehealth visits) to both increase recruitment of these families as well as ensure intervention completion.
Quality improvement methods are widely used in health care to improve patient outcomes. As such, this research has important implications for clinicians. First, researchers can partner with clinicians to leverage their quality improvement expertise and improve clinical research studies in innovative ways. Many clinicians, particularly in the inpatient setting, are well trained and experienced in quality improvement methods because such training is now an Accreditation Council for Graduate Medical Education requirement.20 Second, higher recruitment rates in trials bolster generalizability and thus give clinicians greater confidence in the study results and in their ability to apply trial results to their patients. Finally, the multi-stakeholder approach (in particular, the addition of parents) has broad utility. Parent participation in quality improvement efforts can contribute to patient- and family-centered approaches to improvement. Parents partnering with nurses and physicians can enhance clinical experiences and the acceptability of care processes such as previsit planning, immunizations, and well-child visits.
This research is not without limitations. Our RCT was conducted at an institution with a robust quality improvement culture,7,21–23 and other sites may have more challenges implementing these strategies within clinical trials. However, our primary improvement methods involved weekly meetings and review of data, totaling ∼1 hour per week. Although the study paid for the cost of the home visit, the nurses on our study team provided in-kind support to the improvement effort. We also benefited from the input of a family stakeholder, which is not always present in clinical trials. However, none of our interventions required extraordinary measures and thus are likely widely scalable in the context of a clinical trial. Finally, although recruitment quality improvement methods likely contributed to the known similarities between the enrolled, refused, and nonenrolled groups, there may be unmeasured differences that exist in other demographic or clinical characteristics.
Quality improvement methods can be used within the scope of a large research study to improve recruitment and retention rates. Weekly review of data and failures by a multidisciplinary study team is widely implementable in RCTs. By leveraging expertise of frontline stakeholders, including parents or patients, interventions can be designed to have the greatest impact while being seamlessly integrated into research workflows.
H2O Study Group
JoAnne Bachus, BSN; Andrew F. Beck, MD, MPH; Monica L. Borell, BSN; Lenisa Chang, MA, PhD; Patricia Crawford, CPN; Judy A. Heilman, RN; Jane C. Khoury, PhD; Pierce Kuhnell, MS; Karen Lawley, BSN; Allison Loechtenfeldt, BS; Logan Maag, BS; Colleen Mangeot, MS; Lynn O’Donnell, BSN; Rita H. Pickler, RN, PhD; Anita Shah, DO; Susan N. Sherman, DPA; Lauren G. Solan, MD, MEd; Heidi J. Sucharew, PhD; Karen P. Sullivan, BSN; Susan Wade-Murphy, MSN; and Christine M. White, MD, MAT.
- Accepted February 21, 2017.
- Address correspondence to Katherine A. Auger, MD, MSc, 3333 Burnet Ave, MLC 9016, Cincinnati, OH 45229. E-mail:
This work was presented in part at the Institute for Healthcare Improvement Scientific Symposium; December 7, 2015; Orlando, FL. It was also presented at the Pediatric Hospital Medicine Conference; July 29, 2016; Chicago, IL.
FINANCIAL DISCLOSURE: The authors have indicated they have no financial relationships relevant to this article to disclose.
FUNDING: Supported through a Patient-Centered Outcomes Research Institute Award (IHS-1306-0081 to Dr Shah). All statements in this report, including findings and conclusions, are solely those of the authors and do not necessarily represent the views of the Patient-Centered Outcomes Research Institute, its Board of Governors, or the Methodology Committee.
POTENTIAL CONFLICT OF INTEREST: The authors have indicated they have no potential conflicts of interest to disclose.
- Copyright © 2017 by the American Academy of Pediatrics