Context. The benefits of continuity of pediatric care remain controversial.
Objective. To determine whether there is an association between having a continuous relationship with a primary care pediatric provider and improved quality of care by parental report.
Design. Cross-sectional study.
Setting and Population. Seven hundred fifty-nine patients presenting to a primary care clinic completed surveys, which included validated measures of provider and clinic quality of care from the Consumer Assessment of Health Plan Survey.
Main Exposure Variable. A continuity of care index that quantifies the degree to which a patient has experienced continuous care with a provider.
Main Outcome Measures. The likelihood of parents reporting quality of care as high in several provider-specific items including reporting that providers respected what they had to say, treated them with courtesy and respect, listened to them carefully, explained things in a way they could understand, and spent enough time with their children. In addition, participants were asked to rate the overall quality of the clinic and their child's provider on a 10-point scale.
Results. In ordered logistic regression models, continuity of care was associated with statistically significantly higher Consumer Assessment of Health Plan Survey scores for 5 of the 6 items, including feeling that providers respected what parents had to say; listened carefully to them; explained things in a way that they could understand; asked about how their child was feeling, growing, and behaving; and spent enough time with their child. In addition, greater continuity of care was associated with a higher clinic rating, as well as a higher provider rating.
Conclusions. Greater continuity of primary care is associated with higher quality of care as reported by parents. Efforts to improve and maintain continuity may be warranted.
The recent report from the Institute of Medicine has underscored the urgent need to monitor and improve the quality of medical care in the United States.1 Traditional means of benchmarking quality have relied on administrative data, medical record review, and patient surveys. Given the limited outcome measures for general pediatric care, parental surveys are an important process measure.2–4 To date, many different survey instruments have been developed and used—an abundance that hampers direct comparisons across plans and clinics. As a result, in 1995, the Agency for Health Care Policy and Research funded the development of the Consumer Assessment of Health Plan Survey (CAHPS). This instrument was designed for widespread use and intended to facilitate the benchmarking of the performance of providers, clinics, and plans, and it has been validated.5 Scales specific to the care of pediatric patients have been developed as well. When disseminated to patients, CAHPS data have been shown to be important to patients' selection of providers and plans.6,7 In an increasingly competitive medical marketplace, consumer selection can therefore be used to drive quality improvement.8,9
Given health plans’ apparent interest in identifying modifiable attributes of care that might improve patient satisfaction and therefore their CAHPS scores, there is a need to identify features of care delivery that are associated with high CAHPS ratings.10 Some studies have compared plans and features of plans with respect to their assessments in CAHPS surveys.7 Other studies have examined the association of demographic variables and CAHPS responses.11 Few studies have focused on attributes of care delivery that may be associated with higher CAHPS assessments.
Although CAHPS was designed to be used at the health plan level, several items from it relate directly to the perceived quality of the relationships between patients and their providers and therefore are relevant at the practice level. In previous studies, continuity of care has been found to be associated with improved patient satisfaction, as well as with improved outcomes and utilization.12–22 We hypothesized that having a more consistent relationship with a provider (ie, greater continuity of care) would be associated with higher CAHPS ratings for these items.
MATERIALS AND METHODS
This was a cross-sectional survey conducted in a pediatric clinic affiliated with the University of Washington. The Institutional Review Board of the University of Washington approved the study protocol.
Participants were recruited from the Pediatric Care Center (PCC). The PCC is functionally 2 coexisting clinics: a primary care clinic staffed by 4 full-time clinicians (2 pediatricians and 2 nurse practitioners) and a resident teaching clinic precepted by pediatric faculty. The majority of patients (57%) are followed by and the majority of visits (60%) are made to the full-time clinicians. However, the same patients are seen by both groups of providers depending on availability. Patients, therefore, do cross over from one panel of providers to the other as needed.
All English-speaking patients presenting to the clinic for either well or acute care and who had made at least 3 previous visits were eligible for participation. Parents provided informed consent, and those who agreed completed a brief questionnaire.
The questionnaire included 6 questions relating to the quality of provider care from the CAHPS surveys for Child Medicaid Managed Care (Table 1). In addition, we included the CAHPS assessment of the overall quality of care delivered by their child’s clinic as well as their personal provider at that clinic. Both of these are 10-point scales. All of the aforementioned questions are also included in the core CAHPS survey, although respondents are asked to reflect on the past year rather than just on the past 6 months. For the purposes of this study, we asked all respondents to reflect on the last 6 months.
Surveys were distributed by a research assistant at the time of a visit. They were collected at the end of the visit. Parents were compensated $1 for their participation. Only people returning completed surveys were counted as participating in the study.
CAHPS data are typically reported as proportion of respondents selecting each of the 4 Likert anchors (never, sometimes, usually, always). In this study, parents who reported that they did not know the answers to certain questions or that their child did not have a personal provider were excluded from analyses. In an effort to model what would occur in actual consumer report cards, we used the CAHPS Likert scales as our outcome variables.
Our primary predictor variable was an index of continuity of care. Several such indices have been developed to quantify continuity of care. We opted to use the continuity of care (COC) index developed by Bice and Boxerman,23 which is of the general form:

COC = (Σ_{j=1}^{s} n_j^2 − N) / [N(N − 1)]

where N = total number of visits, n_j = number of visits to provider j, and s = number of providers.
The COC takes on values between 0 and 1. A value of 0 signifies maximum dispersion, which occurs when a different provider is seen for every visit. A value of 1 signifies minimum dispersion, which occurs when the same provider is seen at every visit. To demonstrate the behavior of the COC, several hypothetical patterns each involving 8 visits are shown in Table 2. Note that as the contacts with providers become more dispersed—from all visits with Provider A to every visit with a different provider—the COC moves from 1 to 0.
The PCC uses a computerized information system. This system is used for appointment scheduling as well as for billing. It reliably tracks which provider patients see at each visit. Because we were primarily interested in the association of continuity of primary care and parental perceived quality, we calculated patients’ COC indices based only on visits to primary care providers—both well-child and acute visits. Visits to specialists, subspecialists, or emergency departments were not included in computing the COC index. In addition, we excluded visits that were for procedures (eg, immunizations) only.
All visits made by the child up to the time of survey administration were included in the calculation.
We included race/ethnicity, number of visits at the time of survey, age of child, reported household income, and gender of child as covariates in our models. In addition, because the period of time that the children had been followed at the PCC might also confound our primary association of interest, we also included a variable, days at clinic, which was the number of days before the date that they completed the survey that they had been continuously enrolled at the PCC. Finally, because characteristics of individual providers may confound the association of interest, we included a dummy variable for each provider that participants identified as their child’s primary one in all models.
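The provider dummy variables described above can be sketched as follows (a hypothetical helper, not the study's actual data processing; note that in the regression itself one provider level is dropped as the reference category to avoid collinearity):

```python
def provider_dummies(primary_providers):
    """Build one 0/1 indicator column per identified primary provider.

    primary_providers: one provider name per surveyed patient.
    Returns (sorted provider names, one indicator row per patient).
    """
    levels = sorted(set(primary_providers))
    rows = [[1 if p == level else 0 for level in levels]
            for p in primary_providers]
    return levels, rows

# Hypothetical patients and providers:
levels, rows = provider_dummies(["Dr. A", "NP B", "Dr. A", "NP C"])
print(levels)  # ['Dr. A', 'NP B', 'NP C']
print(rows)    # [[1, 0, 0], [0, 1, 0], [1, 0, 0], [0, 0, 1]]
```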
Ordered logistic regression was used to estimate the relationship between our dependent variables and our independent variables. Ordered logistic regression is the best choice of model in a situation such as this, in which the outcome variable is categorical, there is a natural ordering among the categories (eg, from worst to best), and the increments between the categories cannot be assumed to represent regular increments in the outcome (ie, in contrast to a count model). In ordered logistic regression, the link function is the logistic link, which is the source of the second part of the name. In addition to estimating the coefficients associated with the explanatory variables, one estimates cutpoints that correspond to the thresholds dividing the continuous logistic index function into discrete categories. The probability that the outcome for patient i will be in category n (eg, never, sometimes, usually, always) of N total categories is accordingly represented as:

P(outcome_i = n) = P(c_{n−1} < X_i β + u_i ≤ c_n)

where X_i represents all of patient i's explanatory variables, β represents the coefficient estimates, u_i is the error term associated with patient i, and the c_n are the cutpoints to be estimated (with c_0 = −∞ and c_N = +∞). As can be appreciated from this representation, interpretation of the coefficients is challenging. A positive and statistically significant coefficient implies that the associated variable has a positive and significant relationship to the outcome.
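Under the cutpoint representation above, each category's probability is a difference of logistic cumulative distribution functions evaluated at adjacent cutpoints. A minimal sketch (the cutpoint values and linear index here are hypothetical illustrations, not our fitted estimates):

```python
import math

def logistic_cdf(z):
    """Cumulative distribution function of the standard logistic."""
    return 1.0 / (1.0 + math.exp(-z))

def category_probabilities(xb, cutpoints):
    """Ordered-logit probabilities for each outcome category.

    xb: the linear index X_i * beta for one patient.
    cutpoints: increasing thresholds c_1 < ... < c_{N-1}; the implicit
    c_0 = -inf and c_N = +inf complete the N categories.
    P(category n) = F(c_n - xb) - F(c_{n-1} - xb), F the logistic CDF.
    """
    bounds = [float("-inf")] + list(cutpoints) + [float("inf")]
    return [logistic_cdf(b_hi - xb) - logistic_cdf(b_lo - xb)
            for b_lo, b_hi in zip(bounds, bounds[1:])]

# Hypothetical example: 4 categories (never, sometimes, usually, always)
probs = category_probabilities(xb=1.2, cutpoints=[-2.0, -0.5, 1.0])
print(probs)  # one probability per category; the four values sum to 1
```

Because the per-category terms telescope, the probabilities always sum to 1 regardless of the coefficient or cutpoint values.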
Ordered logistic regression in this case offers several advantages. First, it maximizes statistical power. Second, it analyzes the data in the form in which they are reported: as the percentage of respondents who replied in each category. Third, it does not lump cognitively disparate categories into one solely for the purposes of applying logistic regression. In other words, although one could compare those who responded "always" to all others, the comparison group would include those who responded "never," "sometimes," or "usually," making the resultant odds ratio difficult to interpret.
However, because neither the coefficients of our associations nor the numeric values of our main predictor (COC) are directly interpretable, we used the estimated coefficients and cutpoints to predict the results (ie, the probabilities that a patient will respond with a particular score for each of the items studied) for different levels of the explanatory variable of interest. These simulations are intended to demonstrate how changes in visit patterns might be expected to affect changes in CAHPS scores for selected items. We used the 25th, 50th, and 75th percentiles of COC scores for children with 10 visits (the mean number in our sample). All analyses, including the prediction of the magnitude of the effects, were conducted using Stata version 7.0.
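The simulation step described above reduces to evaluating the fitted model at chosen COC values. The sketch below shows the idea for the probability of the top ("always") category; the coefficient, cutpoint, and percentile COC values are hypothetical placeholders, not the study's fitted estimates:

```python
import math

def p_always(coc, beta_coc, top_cutpoint, baseline_index=0.0):
    """Predicted probability of the top ordered-logit category:
    P(always) = 1 - F(c_top - X*beta), with F the logistic CDF.

    baseline_index stands in for the contribution of all other
    covariates held at fixed values.
    """
    xb = baseline_index + beta_coc * coc
    return 1.0 - 1.0 / (1.0 + math.exp(-(top_cutpoint - xb)))

# Compare predicted probabilities at illustrative 25th/50th/75th
# percentile COC values (hypothetical: 0.2, 0.4, 0.6):
for coc in (0.2, 0.4, 0.6):
    print(coc, round(p_always(coc, beta_coc=2.0, top_cutpoint=0.5), 3))
```

With a positive COC coefficient, the predicted probability of an "always" response rises monotonically across the percentiles, which is the pattern the simulations in Table 5 quantify.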
RESULTS
A total of 1457 eligible patients were seen in clinic during the study period, and 759 parents completed surveys (participation rate: 52%). There were no significant differences between respondents and nonrespondents with respect to age, insurance type, provider, or race (Table 3). The mean age of patients whose parents participated was 4.5 years. They had made an average of 10 visits to the clinic and had been enrolled there for an average of just under 2 years.
In general, quality of care was rated highly. Over 80% of respondents reported that providers always respected what they had to say, treated them with courtesy and respect, listened to them carefully, and explained things in a way they could understand. Fewer (approximately 70%) reported that providers always asked about how their child was feeling, growing, or behaving and that their provider spent enough time with their child. Approximately 6% of respondents reported that their child did not have a personal provider. Over 70% of respondents rated both the clinic and their providers as a 9 or 10. Figure 1 presents histograms of the overall ratings of the PCC clinic and patients' providers.
In the regression models, continuity of care was associated with statistically significantly higher CAHPS ratings for 5 of the 6 items, including feeling that providers respected what parents had to say (P < .01); listened carefully to them (P < .05); explained things in a way that they could understand (P = .05); asked about how their child was feeling, growing, and behaving (P < .01); and spent enough time with their child (P < .01; Table 4). In addition, greater continuity of care was associated with a higher clinic rating (P < .01), as well as a higher provider rating (P < .01).
In the simulation models (Table 5), the difference between the 25th and 75th percentile of the COC scores resulted in a 10% increased probability that parents would report that providers always ask about how their child is feeling, growing, or behaving and an 8% increased probability that they would report providers always spend enough time with their child. The differences in terms of global satisfaction scores were even more dramatic. There was a 21% increase in the probability that parents would rate their provider a “10” and a 15% increase in the probability that they would rate the clinic itself a “10.”
DISCUSSION
Although most parents of children in this study rated the quality of their child's health care as excellent, we found a significant association between continuity of care and overall satisfaction with care as well as 5 provider-specific CAHPS items. The statistical significance of this association, as measured by the P values on the coefficients in our regression models, consistently exceeded that of other potentially important covariates including number of visits, race, and income level. In addition, we found that differences in provider visit patterns could result in meaningful changes in the likelihood that quality of care will be perceived as high.
These findings are both plausible and important. Our finding that satisfaction with care is associated with continuity of care is consistent with the work of others.12 However, that study assessed overall satisfaction and did not explore more specific patient-provider domains. Our data give some indication as to why patients may report greater satisfaction with continuous relationships. For example, continuity of care was associated with the perception that providers spend adequate amounts of time with patients. This could be reflective of actual practice; that is, providers may in fact spend more time during visits with patients they know better. More likely, however, is the possibility that given the mutual knowledge that arises from consistent contact, the same amount of time—or even potentially less time—spent together may be viewed favorably by parents. Visits in the setting of mutual knowledge may be both more efficient and more rewarding. This is also suggested by the other items we explored in this study. For example, patients with greater continuity of care were also more likely to report feeling that what they had to say was respected, that they were listened to, and that things were explained well. Continuity of care has been shown to vary according to structural attributes of clinics.24 Our findings therefore suggest that it affords an opportunity for quality improvement activities that may result in demonstrable changes in quality of care.
These results are also interesting for several methodological reasons. First, this study controls for differences among individual providers by including dummy variables for each provider. Thus, we have controlled for the possibility that providers who are inherently more sensitive to their patients’ needs are more likely to have greater satisfaction ratings. Second, the use of ordered logit in this study more sensitively and comprehensively accounts for differences in satisfaction ratings across the entire spectrum of possible answers than does the usual practice of dichotomizing the outcome and using logistic regression.
There are some limitations to this study that warrant consideration. First, although the associations we have found are plausible, the cross-sectional nature of this study prohibits drawing causal conclusions. It could in fact be the case that the causality here is reversed—namely, parents who perceive the care their provider delivers as high quality make efforts to see them consistently. If true, this is also important, because increased continuity of care has been associated with improved health outcomes.14–18,20,25 Therefore, enabling parents to find providers with whom they are pleased may motivate them to form consistent relationships, although restrictions on choice imposed by some managed care plans may hamper such efforts. Second, this study should be generalized conservatively because it was conducted in a single clinic. However, the overall satisfaction with care, as well as the responses to the individual CAHPS items, is consistent with the reports of others.3
Despite these limitations, some meaningful implications are evident in this work. Many changes in care delivery arising in response to the increasingly competitive medical marketplace may potentially diminish continuity of care. The larger size of physician groups, the increasing use of physician extenders, and the shifting allegiances of health plans with providers all may hamper patients' or providers' attempts to establish and maintain consistent contact. This study indicates that consumers give providers and clinics a higher rating when they have a more continuous relationship with a provider. Accordingly, plans and medical practices should target continuity of care in their quality assurance efforts.
This study was funded, in part, by a Robert Wood Johnson Generalist Faculty Physician Scholars grant to Dimitri Christakis.
We thank the parents who participated in the survey, as well as Cindy Farrell, Jamee Redmond, and Miryah Hibbard for their assistance with data collection.
REFERENCES
- Institute of Medicine. Crossing the Quality Chasm. Washington, DC: National Academy Press; 2001
- Christakis DA, Johnston BD, Connell FA. Methodologic issues in pediatric outcomes research. Ambulatory Pediatr. 2001;1:59–62
- Hays RD, Shaul JA, Williams VS, et al. Psychometric properties of the CAHPS 1.0 survey measures. Consumer Assessment of Health Plans Study. Med Care. 1999;37(suppl):MS22–MS31
- Spranca M, Kanouse DE, Elliott M, Short PF, Farley DO, Hays RD. Do consumer reports of health plan quality affect health plan selection? Health Serv Res. 2000;35(5 Pt 1):933–947
- Enthoven AC. The history and principles of managed competition. Health Aff (Millwood). 1993;12(suppl):24–48
- Hjortdahl P, Laerum E. Continuity of care in general practice: effect on patient satisfaction. BMJ. 1992;304:1287–1290
- Christakis DA, Wright JA, Koepsell TD, Emerson S, Connell FA. Is greater continuity of care associated with less emergency department utilization? Pediatrics. 1999;103:738–742
- Christakis DA, Feudtner C, Pihoker C, Connell FA. Continuity and quality of care for Medicaid children. Ambulatory Pediatr. 2001;1:99–103
- Morgan M, Fenwick N, McKenzie C, Wolfe CD. Quality of midwifery led care: assessing the effects of different models of continuity for women's satisfaction. Qual Health Care. 1998;7:77–82
- Christakis DA, Mell L, Koepsell TD, Zimmerman FJ, Connell FA. Association of lower continuity of care with greater risk of emergency department use and hospitalization in children. Pediatrics. 2001;107:524–529
- Copyright © 2002 by the American Academy of Pediatrics