Objective. Appointment delays impede access to primary health care. By reducing appointment delays, open access (OA) scheduling may improve access to and the quality of primary health care. The objective of this pilot study was to assess the potential impact of OA on practice and patient outcomes by using pilot-study data from 4 North Carolina primary care practices.
Methods. We conducted an interrupted time-series pilot study of 4 North Carolina primary care practices (2 family medicine and 2 pediatric practices) participating in a quality-improvement (QI) collaborative from May 2001 to May 2002. The year-long collaborative comprised 25 practices and consisted of three 2-day meetings led by expert faculty, monthly data feedback, and monthly conference calls. Our main outcome measures were appointment delays, appointment no-shows, patient satisfaction, continuity of care, and staff satisfaction during the 12-month study period.
Results. Providers in all 4 practices successfully implemented OA. On average, providers reduced their delay to the third available preventive care appointment from 36 to 4 days. No-show rates declined (first quarter [Q1] rate: 16%; fourth quarter [Q4] rate: 11%; no-show reduction: 5% [95% confidence interval: 1%, 10%]), and overall patient satisfaction improved (Q1: 45% rated overall visit quality as excellent; Q4: 61% rated overall visit quality as excellent; change in satisfaction: 16% [95% confidence interval: 0.2%, 30%]). Continuity of care followed a similar pattern of improvement, but the change was not statistically significant. Staff satisfaction neither improved nor declined.
Conclusions. This pilot study suggests that primary care practices can implement OA successfully by using QI-collaborative methods. These results provide preliminary evidence that OA may improve practice and patient outcomes in primary care. These analyses should be repeated in larger groups of practices with longer follow-up.
Barriers to primary health care access represent a significant problem for US children.1 These barriers exist at multiple levels including the patient (eg, poverty, family health beliefs), health care system (eg, limited practice hours, scheduling problems), and policy (eg, health insurance, physician availability) levels.2 Despite the multilevel nature of this problem, research and interventions aimed at reducing barriers to health care access have focused primarily on policy-level barriers such as incomplete health insurance coverage. Efforts to improve access at the policy level, however, may be diluted if significant practice-level barriers persist.3
One important barrier to primary health care access for children is difficulty obtaining timely appointments. This barrier not only frustrates parents, but it also may lead to poor quality of care for children. A population-based study in Virginia revealed that delay in obtaining timely appointments in primary care practices is one of the most common barriers to immunization delivery and is associated with lower immunization rates.4
Open access (OA) (also known as “same-day scheduling” or “advanced access”) has been proposed as a solution to practice-level barriers by shortening wait times for appointments and improving practice efficiency.5–7 OA is an alternative scheduling system based on the principle that patient demand for appointments is predictable. Therefore, practices can match appointment capacity to anticipated demand, and patients can be offered same-day appointments with their primary care physician (PCP). Although many pediatric practices allow same-day appointments for acute care visits, OA expands same-day access to include routine and preventive care. OA has been implemented successfully in a variety of clinical venues, and anecdotal reports from multiple practices and several published case reports8–13 suggest that OA may decrease appointment no-shows, improve continuity of care, and increase both patient and staff satisfaction. However, to date, there is limited research demonstrating its feasibility or its advantages over traditional scheduling systems.
The primary objective of this study was to assess the potential impact of OA on practice and patient outcomes by using pilot-study data from 4 North Carolina primary care practices. We hypothesized that most or all practices could implement OA successfully over a 1-year period and that OA implementation would result in decreased appointment delays, decreased appointment no-shows, increased patient and staff satisfaction, and increased continuity of care.
Setting and Participants
We recruited a convenience sample of 5 North Carolina primary care practices that had expressed prior interest in quality improvement (QI). These 5 practices were recruited to participate in (and help assess the impact of) a national OA QI initiative led by the Institute for Healthcare Improvement from May 2001 to May 2002. In addition to participating in the QI initiative with the 20 other practices (located outside of North Carolina), pilot-study practices agreed to collect a standard set of monthly data for the study. Although the 20 national practices participated in the same intervention, these practices collected data (that were not standardized across practices) only for internal use. Thus, data collected in the national practices could not be included in the pilot study. One of the pilot practices was unable to complete the year-long intervention because it was closed by its parent health system. The present study includes data from the 4 remaining North Carolina pilot practices.
Pilot practices represented a variety of practice characteristics. Practice A was a not-for-profit, urban pediatric practice with 3 clinical office locations. It also served as the local health department provider. Most patients received public insurance, with 92% of the patients insured by Medicaid and 6% served by the North Carolina State Children's Health Insurance Program. Most remaining patients were uninsured. Practice A had the equivalent of 11 full-time health care professionals, split between physicians and nurse practitioners, and also served as a residency teaching site.
Practice B was a rural, private, pediatric practice with 3 clinical office locations and served as the local health department provider. Most patients received public insurance, with 70% of the patients insured by Medicaid, 15% served by the North Carolina State Children's Health Insurance Program, and 15% served by private, other, or no insurance. Practice B had the equivalent of 16 full-time clinicians, split between physicians and physician assistants.
Practice C was an urban, family medicine practice owned by a large health system. More than 80% of patients seen there were in managed care plans, and 11% were in Medicaid or Medicare plans. Practice C had the equivalent of 8.3 full-time clinicians, including physicians and midlevel providers.
Practice D was a rural, family medicine practice associated with the same large health system as Practice C. More than 21% of the patients seen there were in Medicaid or Medicare plans, and 66% were in managed care plans. Practice D employed 6 full-time providers (5 physicians and 1 midlevel provider).
After agreeing to participate, the pilot practices entered the OA QI collaborative. QI-collaborative methodology has been described elsewhere.14 Briefly, this QI collaborative involved multidisciplinary practice teams (usually a physician, nurse, and administrative staff member) from the 25 practices working together to implement OA principles over 12 months. The collaborative was led by faculty with expertise in OA or QI methods who used three 2-day workshop sessions to assist teams with implementation. Between meetings, practice teams applied what they had learned with assistance from faculty through monthly conference calls and e-mail listserv support.
In the collaborative, practices implemented the key components of OA (Table 1).6 After a period of baseline data gathering, teams temporarily increased their daily appointment capacity and improved office efficiency through the redesign of patient-flow processes to reduce the backlog of scheduled appointments. Practices began offering same-day appointments to all patients once the waiting time had been reduced to ≤1 week. Practices continued to monitor appointment demand and availability to maintain an appropriate balance and avoid reaccumulation of the waiting list.
All practices were asked to collect and analyze data to determine if the changes being made were associated with improvements in processes and outcomes. The 4 pilot practices participated in the collaborative exactly as the other 20 national practices, with 1 exception. The data-collection and feedback process for pilot practices involved a standardized set of measures that were handled separately by the research team, as described below.
Each pilot practice designated a data-collection coordinator (usually a nurse or clinical office assistant) who was responsible for all data collection in the practice. Data-collection coordinators were trained by the research team through 1 group conference call, 1 individual call, and 1 face-to-face training session at the first workshop. They also were provided a detailed measurement guide that addressed each of the study measures and procedures and were coached by a trained research assistant as needed. The research assistant and a physician member of the research team (G.D.R.) reviewed all submitted data forms for quality assurance (eg, missing data, illegible writing). Errors detected were reported quickly to the data-collection coordinators to improve data quality. All data were entered into the study database by a study research assistant. Results were reported to the practices each month.
To assess if successful implementation occurred, we measured the average number of days to the third available health maintenance appointment for each clinician. This measure is a proxy for overall appointment delays in primary care and is less susceptible to fluctuations caused by last-minute cancellations than, for example, measuring the next available appointment. We considered the implementation of OA to be successful if the time to the third available health maintenance appointment was reduced to ≤7 days. Each month, practices reported the number of days until the third available appointment for each clinician; clinicians were included in the analysis if they provided data in at least 3 of the 4 quarters.
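The delay measure described above can be expressed as a simple scan forward through a clinician's schedule until the third open health maintenance slot is found. The sketch below is purely illustrative (the function, data structure, and sample schedule are hypothetical, not the practices' actual scheduling software):

```python
from datetime import date, timedelta

def days_to_third_available(today, open_slots_by_day):
    """Count days from `today` until the day holding the clinician's
    third open health maintenance slot (the study's delay measure)."""
    seen = 0
    for offset in range(365):  # scan up to a year ahead
        day = today + timedelta(days=offset)
        seen += open_slots_by_day.get(day, 0)
        if seen >= 3:
            return offset
    return None  # no third open slot within a year

# Hypothetical schedule: 1 open slot today, 2 open slots in 4 days,
# so the third available appointment is 4 days away.
today = date(2001, 5, 7)
slots = {today: 1, today + timedelta(days=4): 2}
print(days_to_third_available(today, slots))  # → 4
```

Under OA, this value would fall to ≤7 for a clinician counted as having implemented successfully.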
Practices reported the total number of scheduled visits for the last week of each month. For each visit, practices indicated whether the patient kept their scheduled appointment by using data from their practice-management software.
Patient satisfaction with the practices was measured by using a questionnaire adapted from the Medical Outcomes Study.15 We collected 1270 patient surveys from the 4 practices over the 12-month study period. Pilot practices were encouraged to make changes that would result in excellent care; we therefore dichotomized the primary measure of patient satisfaction as the proportion of patients rating their overall satisfaction with their office visit as “excellent.” Two secondary measures of patient satisfaction were the proportion of patients rating the wait time to receive an appointment and time spent with the clinician during the appointment as “excellent.” Practices surveyed 30 consecutive patients on the same day of the week each month.
Continuity of Care
Practices measured continuity of care with the monthly patient survey (N = 1270). There are a variety of methods to define and measure continuity of care. We chose to measure it in a way that is intuitive to practicing clinicians, simple to collect in busy office practices, and reflects the patients' point of view. We defined successful continuity of care when patients responded “yes” to the question “Did you see the clinician (eg, doctor, physician assistant) that you prefer to see today?” Patients who responded “no” or “did not matter who I saw today” were defined as not having continuity of care for that visit. Patients answering “did not matter who I saw today” are unlikely to have an established relationship with a PCP, which is 1 prerequisite for maintaining continuity of care (the ability to routinely see the PCP being another).
Staff satisfaction was defined as the percentage of staff responding “strongly agree” to the statement “I would recommend this office practice as a great place to work.” All staff members in each practice were surveyed quarterly (N = 475 responses).
Data were aggregated quarterly to mute month-by-month variations such as those resulting from short-term staffing fluctuations or seasonal outbreaks of illness. Delays to the third available appointment were analyzed at the clinician level. Each clinician's mean delay in the first quarter (Q1) (or Q2, if Q1 data were missing) was compared with the mean Q4 delay (or Q3, if Q4 data were missing) by using a paired t test. We used binomial regression,16 with dummy variables for each quarter, to assess changes in the dichotomous outcomes over time (appointment no-shows, patient and staff satisfaction, and continuity of care). We adjusted each binomial model for intrapractice clustering. We used Stata 8 (College Station, TX) for all statistical analyses.
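As a worked example of the risk-difference arithmetic behind these models, the snippet below recomputes the Q1-to-Q4 no-show reduction from the counts reported in the Results, with a naive Wald confidence interval. Note this sketch ignores the intrapractice clustering adjustment used in the actual analysis, so its interval is narrower than the published one:

```python
from math import sqrt

# Reported counts: Q1, 263 no-shows of 1633 visits; Q4, 355 of 3248.
p1, n1 = 263 / 1633, 1633
p4, n4 = 355 / 3248, 3248

diff = p1 - p4  # risk difference (no-show reduction), ~5%
se = sqrt(p1 * (1 - p1) / n1 + p4 * (1 - p4) / n4)  # Wald standard error
lo, hi = diff - 1.96 * se, diff + 1.96 * se

print(f"reduction = {diff:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")
```

The cluster-adjusted binomial model widens this interval to the reported 1% to 10%.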
The study was reviewed and granted exemption status by the institutional review board at the University of North Carolina at Chapel Hill School of Medicine. All participating pilot practices provided informed consent regarding the storage and use of study data.
All 4 clinics successfully implemented OA. For the 30 providers with complete data (among the 35 total providers), the mean delay to the third available appointment was reduced by 32 days (95% confidence interval [CI]: 20, 44 days), from 36 days in Q1 to 4 days in Q4, an 89% reduction (Fig 1). This analysis did not account for possible intrapractice clustering. However, of 30 providers, 27 (90%) reduced their delay during the course of the collaborative, making significant within-practice effects unlikely.
More than 11 000 scheduled patient visits were analyzed to assess changes in appointment no-shows. No-shows decreased by almost one third during the intervention (Fig 2), from 16% in Q1 (263 no-shows for 1633 scheduled visits) to 11% in Q4 (355 no-shows for 3248 scheduled visits), a reduction of 5% (95% CI: 1%, 10%).
Overall patient satisfaction improved during the year-long intervention (Fig 3). The proportion of patients with excellent overall visit satisfaction increased from 45% in Q1 to 61% in Q4, an increase of 16% (95% CI: 0.2%, 30%). Two secondary measures of patient satisfaction showed a similar pattern of improvement over time, but neither was statistically significant. Patient satisfaction with the wait to receive an appointment improved from 37% in Q1 to 47% by Q4, a change of 10% (95% CI: −9%, 29%). The proportion of patients with excellent satisfaction regarding the length of time that they spent with the clinician during their visit increased from 40% in Q1 to 57% in Q4, a 17% improvement (95% CI: −2%, 36%).
Continuity of care followed a similar pattern of improvement, but the change was not statistically significant. Seventy-six percent of patients saw the provider they preferred to see in Q1, compared with 89% in Q4 (Fig 4), a change of 13% (95% CI: −7%, 32%).
Staff satisfaction was variable during the intervention, ranging from 27% (Q4) to 39% (Q3), with no consistent trend over time. Examining the clinicians (physicians, nurse practitioners, and physician assistants) and office staff separately did not change these results.
This pilot study suggests that primary care practices can implement OA successfully by using QI-collaborative methods. Each of the 4 practices was able to reduce appointment delays significantly by applying the 10 key changes of OA during the QI collaborative. In addition, the pilot data provide preliminary evidence that OA can result in improvements in practice and patient outcomes in primary care. We observed a consistent pattern of improvement across our outcomes over the course of the intervention, with statistically significant improvement in appointment delays, appointment no-shows, and overall patient satisfaction. We found neither improvement nor decline in staff satisfaction.
Our results are consistent with previous case reports8–13 that suggest that OA is both feasible and effective in improving access to and quality of care in primary care practices. Implementation of OA has been associated with reductions in wait time for appointments,8–13 decreased no-show rates,9,12 improved patient satisfaction,8–10,13 increased continuity of care,8,11,13 and improved staff satisfaction.10,11 Higher patient volumes, physician productivity, and revenues have been reported also.9,10,12,13 To our knowledge, there are no other published studies that have systematically evaluated the impact of OA in a group of unrelated practices.
Our study has several limitations. First, although the practices studied were fairly diverse, they may not be representative of many community practices. The sample size was small and represented a convenience sample of motivated practices from a limited geographic area. Second, our study did not involve a control group. In studies for which significant effort is required on the part of practices to collect and report data, enrolling clinics into a control group is difficult. In the absence of a control group, it is possible that secular trends or the Hawthorne effect (changes in behavior resulting from the attention that study participants are receiving from researchers and not the study exposure) explain the observed changes in our measures over time.17 However, we speculate that this is unlikely, given the magnitude of the improvements and because so many of the outcomes improved in the hypothesized direction. Third, continuity of care, patient satisfaction, and appointment no-shows were all still improving in Q3 and Q4; thus, our follow-up period may have been too short to fully capture changes in these outcomes. A longer follow-up period also would have allowed us to assess the sustainability of the intervention. Fourth, our study did not evaluate clinical health outcomes. Given the potential of OA to improve both access to and continuity of care, we suspect that OA could result also in improved clinical health outcomes.
Two of our measures warrant additional discussion. For our measure of continuity of care, some families may have answered "yes, I saw the clinician I preferred to see today" but not be referring to their PCP. In situations for which the wait to see their PCP was longer than the wait to see another provider, patients may have preferred to see the non-PCP provider (to be seen more quickly) and still answered "yes" to the continuity question. However, given the reduction in delays over time, these "false-positive" responses should have been more common earlier in the study; this misclassification would bias our result toward the null. Similarly, our measure of staff satisfaction may not have captured changes in staff satisfaction accurately: our results conflict with both previous case reports and the informal comments we received from participating pilot practices during this study, which suggested positive staff responses to the OA implementation. Future studies should consider addressing staff satisfaction with more recent, validated instruments18,19 or ones that are focused more specifically on changes in staff satisfaction associated with the implementation of OA.
OA may be a particularly attractive QI intervention, because it not only has potential to improve patient outcomes but also may offer important benefits to practices. In this study we found that OA decreased appointment no-shows by nearly one third. In our work with primary care practices, we have observed that many physicians are concerned about the negative impact of patient no-shows on their daily practice. Not only do no-shows interfere with clinical workflow, but they also represent lost revenue for primary care practices. By transforming more of each provider's working hours into billable time through the reduction of no-shows and improved clinic efficiency, we speculate that OA could lead to increases in revenue for primary care clinics. Although we did not collect financial data as part of this pilot study, future studies should evaluate the impact of OA scheduling on clinic finances.
The finding that OA may improve continuity of care is of particular interest. A recent study showed that more than half of US children cannot identify their regular physician for well-child care.20 Other studies have shown similar low rates of continuity of care for children.21 Given that there is a large body of evidence linking continuity of care to improved health outcomes for children22–29 and that there are few proven strategies to improve continuity of care, the effect of OA on continuity of care deserves additional study.
It is important to note that implementation of OA did not seem to occur at the expense of time spent between patient and provider (ie, shorter interactions between patients and providers). Indeed, patients in this study were more satisfied with the amount of time spent with their provider at the end of the intervention than at the beginning. Future studies could strengthen this finding by measuring whether OA affects the actual time spent between patients and providers, not just the perception of this time.
This pilot study provides preliminary evidence that OA may be a viable means for practices to reduce appointment delays, with resulting improvement in appointment no-shows, patient satisfaction, and possibly, continuity of care. As practices across the country consider adopting OA, these adoption decisions will be better informed if the potential benefits that OA may provide over existing primary care scheduling models are determined more precisely. Given the limitations of the present study, future studies should consider involving larger groups of practices, control groups, and longer follow-up. It also will be important to measure selected clinical outcomes that may be affected by appointment delays, such as immunization rates.
This work was supported by the Duke Endowment, the University of North Carolina at Chapel Hill Program on Health Outcomes, and the Robert Wood Johnson Clinical Scholars Program (grant 047948).
We thank the involved patients and practices for their participation; the Institute for Healthcare Improvement for leading the intervention; Leah Gilbert for data-management assistance; and Joanne M. Garrett and William C. Miller for statistical advice.
- Accepted February 28, 2005.
- Reprint requests to (D.G.B.) University of North Carolina, 5034 Old Clinic Building, CB #7105, Chapel Hill, NC 27599-7105. E-mail:
Conflict of interest: Dr Murray is the founder and principal of Mark Murray and Associates, a for-profit organization that assists practices with open access scheduling and other practice improvements.
- Morrow AL, Rosenthal J, Lakkis HD, et al. A population-based study of access to immunization among urban Virginia children served by public, private, and military health care systems. Pediatrics. 1998;101(2). Available at: www.pediatrics.org/cgi/content/full/101/2/e5
- Murray M, Tantau C. Same-day appointments: exploding the access paradigm. Fam Pract Manag. 2000;7(8):45–50
- Randolph GD, Murray M, Swanson JA, Margolis PA. Behind schedule: improving access to care for children one practice at a time. Pediatrics. 2004;113(3). Available at: http://www.pediatrics.org/cgi/content/full/113/3/e230
- Radel SJ, Norman AM, Notaro JC, Horrigan DR. Redesigning clinical office practices to improve performance levels in an individual practice association model HMO. J Healthc Qual. 2001;23(2):11–15, quiz 15, 52
- O'Hare CD, Corlett J. The outcomes of open-access scheduling. Fam Pract Manag. 2004;11(2):35–38
- Ovretveit J, Bate P, Cleary P, et al. Quality collaboratives: lessons from research. Qual Saf Health Care. 2002;11:345–351
- Wacholder S. Binomial regression in GLIM: estimating risk ratios and risk differences. Am J Epidemiol. 1986;123:174–184
- Williams ES, Konrad TR, Linzer M, et al. Refining the measurement of physician job satisfaction: results from the Physician Worklife Survey. SGIM Career Satisfaction Study Group. Society of General Internal Medicine. Med Care. 1999;37:1140–1154
- Inkelas M, Schuster MA, Olson LM, Park CH, Halfon N. Continuity of primary care clinician in early childhood. Pediatrics. 2004;113(6 suppl):1917–1925
- Mustard CA, Mayer T, Black C, Postl B. Continuity of pediatric ambulatory care in a universally insured population. Pediatrics. 1996;98:1028–1034
- Christakis DA, Mell L, Koepsell TD, Zimmerman FJ, Connell FA. Association of lower continuity of care with greater risk of emergency department use and hospitalization in children. Pediatrics. 2001;107:524–529
- Christakis DA, Wright JA, Koepsell TD, Emerson S, Connell FA. Is greater continuity of care associated with less emergency department utilization? Pediatrics. 1999;103:738–742
- Christakis DA, Wright JA, Zimmerman FJ, Bassett AL, Connell FA. Continuity of care is associated with high-quality care by parental report. Pediatrics. 2002;109(4). Available at: http://www.pediatrics.org/cgi/content/full/109/4/e54
- Ettlinger PR, Freeman GK. General practice compliance study: is it worth being a personal doctor? Br Med J (Clin Res Ed). 1981;282(6271):1192–1194
- Copyright © 2005 by the American Academy of Pediatrics