Background. There is widespread agreement among pediatric educators that continuity (following a panel of patients on a first contact basis for all their health care) is an important part of the education of pediatricians.
Objective. To measure continuity in a pediatric residency practice and to compare this continuity with 2 nearby private general pediatric group practices. We also examined measures of continuity suggested in the literature.
Design. Visit data were obtained from the practice billing system for a resident continuity practice and 2 nearby private practices for the 3-year period from July 1, 1992, to June 30, 1995. Visit data used were restricted to patients seen in the office of the practices during regular office hours. Continuity was measured using 5 different indices: 1) the usual provider of care index, visits by the usual clinician/total visits, 2) continuity for patient, the average proportion of visits that an individual patient was seen by his or her own physician, 3) continuity for physician (PHY), the average proportion of visits that an individual physician saw his or her own patients, 4) Continuity of Care Index (COC), and 5) the Modified, Modified Continuity Index. During the period examined, pediatric residents were present in the continuity practice for 1 half-day each week. The resident continuity practice (RCP) had 57 residents and saw 3386 patients for 18 955 visits. Private practice 1 (PP1) had 4 pediatricians who saw 4968 patients for 33 537 visits. Private practice 2 (PP2) had 5 pediatricians who saw 11 953 patients for 75 778 visits.
Results. For all visit types, continuity in the RCP was lower than in the private practices (PHY: RCP vs PP1, PP2; 53% vs 70%, 77%). However, continuity in RCP exceeded 50% on all measures except the COC index, which decreases precipitously as the number of clinicians seen increases. For health maintenance visits (PHY: RCP, PP2 vs PP1; 96%, 96% vs 82%), RCP equaled the best of the private practices. The percentage of patients not seen for a health maintenance visit during the study period was lowest in the resident practice (RCP/PP1/PP2: 15%/22%/30%).
Conclusion. Although continuity for all visits in this RCP was less than in private practice, it was surprisingly high, considering the limited time residents spend in clinic. In a particularly important area for continuity, health maintenance visits, continuity was identical to one and superior to the other private practice.
Continuity of care, a medical home for patients, and primary care are aspects of health care believed to be associated with enhanced quality of care for patients. “Medical home”1,2 and “primary care”3 are concepts that are understood and agreed on. The concept of continuity of care, however, is associated with confusion in the medical literature.4
Starfield4 has defined continuity as the orderly transfer of medical information concerning a patient from one visit to the next, regardless of who sees the patient. This is the way many specialists use the term. Dr Starfield prefers the term “longitudinality” to refer to the long-term care of one patient by one physician. She notes that many studies on the subject of continuity have been published, but most fail to distinguish between the separate concepts of longitudinality (the presence and use of a regular provider of care over time) and continuity (the sequence of visits in which there is a mechanism of information transfer).4,5
There have been >750 articles published concerning continuity of care since 1995. There are at least 10 different indices for measuring continuity, and each index addresses continuity in a somewhat different way.6–14 It is most common for the term continuity to be used for the concept Starfield defines as longitudinality. For the remainder of this article, the term continuity will refer to the use of the patient’s primary care provider.
Despite the inconsistencies in the definition and measurement of continuity, there is evidence that a relationship with one physician does enhance aspects of quality of care for patients. These include patient satisfaction, physician and staff satisfaction, increased health maintenance visits, increased immunization rates, fewer sick visits, decreased emergency department visits, decreased hospitalizations, increased compliance with appointments and medications, increased physician recognition and discussion of emotional and behavioral problems, and a decrease in laboratory and imaging studies.8,15–24 There is also evidence that it is the clinician, rather than the site of care, that matters for realizing the beneficial effects of continuity.25
There is inconsistent evidence that long-term continuity decreases visits to emergency departments or decreases hospitalizations. Also inconsistent are reports of effects on costs, and morbidity or mortality in association with changes in continuity.19,21
For a number of years there has been recognition of the disparity between training in all primary care residencies and the actual practice of those primary care disciplines. Nowhere has this mismatch been more glaring than in pediatrics.15,26–28 General pediatricians note that over 95% of their patient encounters occur in outpatient settings, whereas the majority of their residency training occurred in hospitals. Surveys of recent graduates from pediatric residency programs document that newly trained pediatricians feel well-trained to care for the critically ill child or newborn, but feel poorly trained in areas of care where pediatricians spend most of their time. These include health maintenance, the outpatient management of moderately ill children, the management of children with special health care needs, the recognition and management of emotional and behavioral problems, minor orthopedic problems, adolescent problems, and others.27–29
For these and other reasons, the Accreditation Council for Graduate Medical Education, and all primary care residency review committees (RRC) have encouraged residency training programs to offer a continuity experience.30,31 Since 1989, the Pediatric RRC has required that for at least 1 half-day each week, residents must be available to follow a panel of patients during the 3 years of training. The latest RRC requirements recommended increasing this to 2 half-days per week.
Recently, questions have been raised concerning how effective pediatric resident continuity practices have been in offering a meaningful continuity experience.27,29,32,33 Garfunkel et al34 also noted that there have been no reports comparing continuity in a resident clinic to a private pediatric practice.
The purpose of this study was to report a comparison of continuity between the residents’ continuity practice at the Medical University of South Carolina and 2 highly respected private group pediatric practices in Charleston. We examined measures of continuity suggested in the literature and proposed measures of continuity that may be more suitable for evaluating resident education.
Data from the 3 practices came from computer billing and appointment records. At each visit, all practices recorded the name of the patient, the examining physician, visit diagnoses and procedures, and the doctor assigned as that patient’s regular physician. All 3 practices assign each patient/family to a specific physician as the primary care provider for that patient/family.
All practices were located in Charleston, South Carolina. One of the private practices, PP1, has a satellite office located in a nearby town. The other private practice, PP2, and the resident continuity practice (RCP) each operated out of a single site. PP1 was a 3-pediatrician practice (over the course of the study there were 4 different pediatricians but never >3 at one time); PP2 was a 5-pediatrician practice; and RCP was a pediatric residency that accepts 12 interns each year. All practices saw adolescents. Visits were examined over the 3-year period from July 1, 1992, through June 30, 1995. This period was used because comparable data for all 3 practices were available. Data were extracted from the billing and/or appointment system at each site. PP1 and RCP both used OverSite (Medical Micro Systems, Inc, Charleston, SC). PP2 used CompuSystems (Columbia, SC). Both private practices (PP1 and PP2) had evening and weekend hours during the study period. All practices handled patient advice telephone calls within the office. In the private offices, a nurse handled most calls. In the resident practice, the residents or attending in the office would handle patient advice calls. In all practices referrals during office hours to the emergency department were not measured but were reported to be rare and only for true emergencies. All practices saw acute and/or walk-in patients in the office.
In RCP, as in the private practices, patients were seen 5 days a week, morning and afternoon. Residents were available to see their own patients whenever RCP was open; the only exception was between Christmas and New Year's Day, when RCP was open but staffed only for acute care. In RCP, the residents were scheduled to be at the practice 1 half-day per week during the period studied. During their continuity practice time, the residents saw health maintenance, return, and sick visits, both scheduled and unscheduled. They were excused from attendance only for illness, vacation, or out-of-town rotations. None of the faculty-attending pediatricians at RCP had assigned patients, although they might see patients (0–2 per session) as needed.
In the RCP, administrative and computer support systems were in place to maximize continuity. Practice policy was that health maintenance visits and return visits should be scheduled for the primary resident physician. Staff could easily identify, within the scheduling software, both the primary physician for each patient and the resident’s schedule.
Five different measures of continuity are presented in this study (see details of the calculation of continuity in the Appendix):
Usual Provider of Care (UPC).7 UPC is the proportion of visits in which a patient is seen by their assigned clinician. The calculation of UPC is straightforward and it has a denominator of patient visits.
Continuity for Physician (PHY). PHY is calculated for each clinician individually. It is the proportion of visits for each clinician in which they see their own patients. For each practice, the proportions for each clinician are added together and divided by the number of clinicians in that practice. The calculation of PHY is straightforward and it has a denominator of number of physicians. PHY relies on having an explicitly assigned clinician for each patient.
Continuity for Patient (PAT). PAT is calculated for each patient individually. It is the proportion of visits for each patient in which they see their own clinician. For each practice the proportions for each patient are added together and divided by the number of patients in that practice. The calculation of PAT is straightforward and it has a denominator of number of patients. PAT relies on having an explicitly assigned clinician for each patient.
Bice Index or Continuity of Care Index (COC).6 COC is a measure of dispersion. COC is calculated for each patient individually. For each practice, the individually calculated COC for each patient are added together and divided by the number of patients in that practice. Calculation of this measure does not require an assigned clinician; rather it looks at the number of different clinicians seen. The denominator of this measure is the number of patients. COC is difficult to calculate and the measure is not linearly related to, but is extremely sensitive to, the number of different clinicians seen.
Modified, Modified Continuity Index (MMCI).11 The MMCI is another measure of dispersion. The calculation is modified from COC to make the measure perform in a more linear fashion. The denominator of this measure is the number of patients.
Continuity can be calculated from the perspective of the physician, the patient, or the visit. This study examines 3 measures of continuity that are commonly used (UPC, COC, MMCI) and 2 measures of continuity (PAT, PHY) that we believed were more intuitive and easily calculated (see Appendix for details of the calculations).
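To make the 5 indices concrete, the sketch below computes each one from a toy visit log of (patient, clinician seen, assigned clinician) records. This is our illustration, not the study's analysis code; the record format and values are hypothetical, and edge cases (eg, patients with a single visit, for whom COC is undefined) are handled minimally.

```python
from collections import defaultdict

# Hypothetical visit records: (patient_id, clinician_seen, assigned_clinician).
visits = [
    ("p1", "drA", "drA"), ("p1", "drA", "drA"), ("p1", "drB", "drA"),
    ("p2", "drB", "drB"), ("p2", "drB", "drB"),
    ("p3", "drA", "drB"), ("p3", "drB", "drB"), ("p3", "drB", "drB"),
]

def upc(visits):
    """Usual Provider of Care: share of all visits seen by the assigned clinician."""
    return sum(seen == assigned for _, seen, assigned in visits) / len(visits)

def pat(visits):
    """Continuity for Patient: each patient's own-clinician share, averaged over patients."""
    own, tot = defaultdict(int), defaultdict(int)
    for pid, seen, assigned in visits:
        tot[pid] += 1
        own[pid] += (seen == assigned)
    return sum(own[p] / tot[p] for p in tot) / len(tot)

def phy(visits):
    """Continuity for Physician: each clinician's own-patient share, averaged over clinicians."""
    own, tot = defaultdict(int), defaultdict(int)
    for _, seen, assigned in visits:
        tot[seen] += 1
        own[seen] += (seen == assigned)
    return sum(own[d] / tot[d] for d in tot) / len(tot)

def coc(visits):
    """Bice-Boxerman COC per patient, averaged; defined only for patients with >1 visit."""
    by_patient = defaultdict(lambda: defaultdict(int))
    for pid, seen, _ in visits:
        by_patient[pid][seen] += 1
    scores = []
    for counts in by_patient.values():
        n = sum(counts.values())
        if n > 1:
            scores.append((sum(c * c for c in counts.values()) - n) / (n * (n - 1)))
    return sum(scores) / len(scores)

def mmci(visits):
    """Modified, Modified Continuity Index (Magill & Senf form), averaged over patients."""
    provs, nvis = defaultdict(set), defaultdict(int)
    for pid, seen, _ in visits:
        provs[pid].add(seen)
        nvis[pid] += 1
    scores = [(1 - len(provs[p]) / (nvis[p] + 0.1)) / (1 - 1 / (nvis[p] + 0.1))
              for p in nvis]
    return sum(scores) / len(scores)
```

Note that COC and MMCI ignore the assigned clinician entirely, whereas UPC, PAT, and PHY depend on it, mirroring the distinction drawn above.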
Visits that could never contribute to continuity were excluded from the analyses. Visits that did not result in seeing a physician were not used in the continuity calculations; examples include visits to a nurse, psychologist, or substance abuse counselor, and visits for laboratory testing only, immunizations only, weight checks, or research protocols.
Visits to physicians that were recorded differently between practices and are often discontinuous were also excluded: visits by patients aged 19 years or older, visits that occurred after hours or on weekends, and visits that occurred outside the office, such as emergency department and hospital visits. In RCP, visits occurring between Christmas and New Year's Day were excluded. Visits to PP1 and PP2 that occurred on a holiday, when only 1 physician was in the office, were also excluded.
Visits of patients with inadequate identifying information, such as an invalid medical record number or missing dates of birth, were dropped from the analysis.
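As an illustration, the exclusion rules above can be expressed as a single eligibility filter over visit records. The field names below are hypothetical; the study's actual billing schema is not described.

```python
from datetime import date

def eligible(visit):
    """Keep only visits that could count toward continuity:
    a physician was seen, patient under 19, regular weekday office hours,
    and adequate identifying information."""
    if visit["provider_type"] != "physician":
        return False  # nurse, lab-only, immunization-only, etc.
    if visit["age_years"] >= 19:
        return False
    if visit["after_hours"] or visit["weekend"]:
        return False
    if not visit["mrn"] or visit["dob"] is None:
        return False  # invalid medical record number or missing date of birth
    return True

# Hypothetical records: the second is a nurse-only visit and is dropped.
visits = [
    {"provider_type": "physician", "age_years": 4, "after_hours": False,
     "weekend": False, "mrn": "123", "dob": date(1991, 5, 1)},
    {"provider_type": "nurse", "age_years": 4, "after_hours": False,
     "weekend": False, "mrn": "124", "dob": date(1991, 6, 1)},
]
kept = [v for v in visits if eligible(v)]
```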
When calculating PHY, visits to clinicians who did not have assigned patients in that practice were excluded from those calculations. When calculating PAT, COC, and MMCI, visits to clinicians who did not have patients in the RCP were included, as these visits could potentially have been with the patient’s regular clinician. The measures of continuity described above and, more completely, in the Appendix can be characterized by the denominator of the measure. Most of the measures have a denominator of the patient (COC, MMCI, PAT). PHY has a denominator of the physician and UPC has a denominator of visits.
Each of these measures was calculated for all eligible visits to the practice. The UPC index and Continuity for Physicians were also calculated for health maintenance visits (International Classification of Diseases, Ninth Revision code V20.2). For each practice, the number of patients who were never seen for health maintenance over the 3-year period was calculated.
Examining the differences in proportions between the 3 practices was done using the binomial proportion and 95% confidence interval (CI). Comparing differences between practices for variables with dichotomous outcomes and controlling for confounding variables used logistic regression. The logistic parameter was converted to an odds ratio and 95% CI. Examining the differences in means between practices was done using linear regression analyses with least square means to estimate the mean and 95% CI. A 95% CI that excluded the other practices’ mean was considered significant. All analyses were done using SAS version 8.1 (SAS Institute, Cary, NC).
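As an illustration of the first of these calculations, a binomial proportion with a normal-approximation 95% CI can be sketched as follows. This is only a sketch: the study used SAS, and its exact interval method is not specified here.

```python
import math

def prop_ci(successes: int, n: int, z: float = 1.96):
    """Point estimate and normal-approximation 95% CI for a binomial proportion,
    clipped to [0, 1]."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical example: 6 of 8 visits were with the assigned clinician.
p, lo, hi = prop_ci(6, 8)
```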
This project was reviewed and approved by the Institutional Review Board for Human Research at the Medical University of South Carolina.
Each of the practices studied differed from the others, although the private practices tended to be more similar to each other than to the resident practice. The resident practice saw a more varied group of patients, with more racial minority and Medicaid-insured patients (Table 1). The age distribution of patient visits differed across practices, with PP2 having a higher mean and median age (mean age in months, PP1/PP2/RCP: 40/66/46, P < .05) and a higher proportion of adolescents (age ≥11 years) than the other 2 practices (Table 1). The percentage of visits for health maintenance was statistically significantly different (PP1/PP2/RCP: 39%/33%/36%, P < .05); much of this difference was explained by the differing age distributions of the practices.
The distribution of patients and physicians also differed among the practices. PP2 had the highest visit volume (visits/day, PP1/PP2/RCP: 44.4/97.9/25.7), whereas RCP had the most physicians seeing patients (physicians/day, PP1/PP2/RCP: 2.5/3.9/5.7). From the data, we could not differentiate a physician seeing patients for a full day versus half a day. The practices also differed in the number of visits per patient (visits/patient, PP1/PP2/RCP: 6.8/6.3/5.6, P < .05), with RCP seeing its patients significantly less often. This difference persisted after adjusting for the differing age distributions of the practices.
As expected, measures of continuity across all visits showed both private practices with greater continuity than the resident practice. The measures of continuity are presented in Table 2. Within each practice, the measures differed by as much as 30.5 percentage points (PAT minus COC in RCP) and by as little as 16.8 percentage points (MMCI minus COC in PP2). All measures of continuity with denominators of visits (UPC) or patients (PAT, COC, MMCI) demonstrated statistically significant differences between all practices. These statistically significant differences reflect the large number of observations (visits, patients) and may not represent clinically meaningful differences.
UPC, with a denominator of visits, was statistically different between all practices, with PP2 at 77.4%, PP1 at 70.4%, and RCP at 52.8% (Table 2). PAT, COC, and MMCI, measures with denominators of patients, were statistically different between all practices. Continuity was highest when measured by MMCI in both private practices (MMCI, PP1/PP2/RCP: 84.9/83.6/52.5). Continuity measured by COC was lowest in all practices and substantially lower in RCP (COC, PP1/PP2/RCP: 63.5/66.8/24.0; Table 2).
PHY, with a denominator of physicians, was statistically different between the private practices and RCP, but no statistical difference was found between PP1 and PP2 (PHY, PP1/PP2/RCP, 68.4/76.4/53.1).
Within each practice, the different measures of continuity varied by 21.4 percentage points for PP1, 16.8 for PP2, and 30.5 for RCP. The relative rank of the practices was less variable; the private practices had higher continuity than the resident practice on every measure. Between the 2 private practices, PP2 was generally higher than PP1, although this relationship was reversed for MMCI. With the exception of COC, the RCP was very stable across measures of continuity, with a maximum difference between measures of only 2.6 percentage points.
Clinicians and patients may value continuity differently depending on the type of visit. We examined health maintenance visits (International Classification of Diseases, Ninth Revision code V20.2), a type of visit valued by both patients and educators. The 3 practices differed in the percentage of patients never seen for health maintenance: RCP had a significantly lower percentage of patients who were never seen for health maintenance over the period studied (PP1/PP2/RCP: 21.6/29.6/15.1, P < .05). On the 2 measures of continuity for health maintenance visits (UPC-HM and PHY-HM), the resident practice performed extraordinarily well. Residents saw their assigned patients for health maintenance 96% of the time, indistinguishable from the best performing private practice (PHY-HM, PP1/PP2/RCP: 82.0/96.2/95.9; Table 3).
Continuity, as the term is commonly used, is measured in many ways. When to use the various indices is an open question. The 5 measures examined tend to vary in the same direction, but they are not identical.
Most measures examine continuity from the perspective of the patient and have the number of patients as the denominator (COC, MMCI, PAT). Conceptually, this seems appropriate for the attainment of patient goals related to continuity. It is not clear that any one measure is superior to the others in its association with patient outcomes. The Bice Index, or COC, has 2 disadvantages: it is extremely sensitive to the number of clinicians a patient has seen, and it is not a linear measure.6,9 The MMCI is more linear but is also relatively difficult to calculate. Continuity for patient, or PAT, is an intuitive measure, easy to understand and to calculate. Its disadvantage is that it requires each patient to have an identified primary care clinician within the data.
The UPC measure, with a denominator of visits, is conceptually related to an episode of care. If the data contains an identified primary care clinician, this measure is the easiest to calculate. The disadvantage of UPC is that the measure is not conceptually directly related to either patient or physician outcomes.
We have proposed a measure of continuity from the perspective of the physician (PHY). This measure is easy to calculate; however, it requires that the assigned physician for each patient visit be identified in the data. PHY is appropriate when outcomes related to physicians are relevant and should be considered when examining the educational experience of trainees such as pediatric residents.
Each of the measures of continuity examined varied significantly both between and within practices. Continuity was clearly better in these private practices than in the resident practice. However, continuity in the residents' practice compares favorably when one considers that residents were present for only 1 half-day per week, whereas the private pediatricians were in the office 8 or 9 half-days per week. The high proportion of health maintenance visits at which patients saw their own resident physician in RCP is notable, and the percentage of patients never seen for a health maintenance visit was higher in the private practices.
There are studies that show that patients value immediate access to health care more highly than they value continuity.22,35–38 Most of these studies indicate that it is for acute illnesses that patients are most anxious to have immediate access to any competent clinician. There are also studies that suggest that for certain types of visits, parents prefer to wait to see their child’s regular physician.8,12,23,39 In examining the available evidence, our interpretation is that parents value their child being seen by their own clinician for: health maintenance visits, visits for behavior/emotional problems, school performance problems, long-term management of chronic health problems, and follow-up for recurring acute illnesses.
The types of visits where parents value long-term continuity with the same physician are the types of visits where continuity can be accommodated in a RCP or in a private group practice. Both types of practices can also provide a type of continuity for acute illnesses, by a physician who has easy access to the patient’s record, thus providing the orderly transfer of information between clinicians.5
The results of this project should be interpreted with some caution. It involves a relatively small number of practices in one area of the country and may not be generalizable to practices or residencies in different areas or with different organizational structures. We relied on billing data in all practices rather than data specifically gathered to measure continuity. Another issue with this study is the substantial differences in the types of patients and volume of patients. Some differences between the private practices and resident practices may be attributable to practice characteristics other than residency training.
The types of visits where parents highly value continuity are the types of visits for which many practicing generalist pediatricians report they were poorly prepared by their residency programs.27–29 Care for these patients and visit types is best taught in a continuity experience. It is also worth noting that it is in the continuity practice that residents provide a valuable service to underserved children and learn to enjoy being advocates for this segment of our population. In most programs, these opportunities are offered only in the resident's continuity practice, whether in a hospital outpatient setting or in a private practice.
It seems reasonable that there are types of visits for which continuity is associated with a better outcome and other types for which this is not true. Perhaps those visits for which continuity is valued by parents are the types of visits that are associated with better outcomes and so indicate quality care. To be most informative, studies of continuity need to differentiate between types of visits and their outcomes.
A comparison of continuity in 1 pediatric residency practice to 2 private group pediatric practices shows continuity for patients and physicians to be higher in the private groups. However, the level of continuity in the resident’s continuity practice compares favorably, particularly in health maintenance visits and in the percentage of patients seen for health maintenance.
Continuity in resident practices is espoused as a desirable educational goal. Residency programs should be monitoring the continuity of their residents as an outcome of the residency training and correlating continuity with patient outcomes. This monitoring can be problematic within systems that do not routinely collect resident information. These problems are magnified when residents are in multiple sites, especially when those sites include both hospital-based and office-based practices.
APPENDIX: MEASURES OF CONTINUITY
Usual Provider of Care (UPC):7
Calculation of the UPC determines the overall proportion of visits during which a patient was seen by his or her assigned physician.
Cupc = Continuity, usual provider of care
nc = Number of visits in which a patient saw the assigned provider
nt = Total number of visits to the office
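In symbols, using the definitions above:

```latex
C_{upc} = \frac{n_c}{n_t}
```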
Continuity for Physicians (PHY): PHY is the average proportion of patients seen by each provider who were his or her assigned patients.
Cphy is Continuity for Physicians
npi is the number of patients that physician p has seen who are his or her own assigned patients
npt is the total number of patients that physician p has seen
Pt is the total number of physicians
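In symbols, averaging each physician's own-patient proportion over the Pt physicians:

```latex
C_{phy} = \frac{1}{P_t} \sum_{p=1}^{P_t} \frac{n_{pi}}{n_{pt}}
```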
Continuity for Patients (PAT): PAT is the average proportion of visits for each patient in which they were seen by their assigned physician.
Cpat is Continuity for patients
nsi is the number of visits in which patient s was seen by his or her own physician
nst is the total number of visits that patient s made
St is the total number of patients
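In symbols, averaging each patient's own-physician proportion over the St patients:

```latex
C_{pat} = \frac{1}{S_t} \sum_{s=1}^{S_t} \frac{n_{si}}{n_{st}}
```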
Bice Index or Continuity of Care Index (COC):6 The Bice Index is a measure of dispersion. It does not take into account the assigned clinician; rather, it looks at the number of clinicians seen.
Ccoc is the Bice Index, or Continuity of Care Index
nj is the total number of visits by the patient to provider j
n is the total number of visits to the practice by the patient
pn is the total number of patients in the practice
The COC is calculated for each patient individually, summed over all patients in the practice, and then divided by the total number of patients in the practice.
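In symbols, following the published Bice–Boxerman form6 (the per-patient index, averaged over the pn patients in the practice):

```latex
C_{coc} = \frac{1}{p_n} \sum_{\text{patients}} \frac{\left( \sum_{j} n_j^2 \right) - n}{n(n-1)}
```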
Modified, Modified Continuity Index (MMCI):11 MMCI is a measure of dispersion. It is modified from the Bice Index to make the measure perform in a more linear fashion. This measure does not take into account the assigned physician but only the number of physicians seen.
Cmmci is Modified, Modified Continuity Index
np is the number of physicians the patient saw
nv is the total number of visits to the practice
pn is the total number of patients in the practice
The MMCI is calculated for each patient individually, summed over all patients in the practice, and then divided by the total number of patients in the practice.
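In symbols, following the form published by Magill and Senf11 (the 0.1 terms are their correction, which keeps the ratio defined for all visit counts):

```latex
C_{mmci} = \frac{1}{p_n} \sum_{\text{patients}} \frac{1 - \dfrac{n_p}{n_v + 0.1}}{1 - \dfrac{1}{n_v + 0.1}}
```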
This study was supported, in part, by the Healthy South Carolina Initiative, Medical University of South Carolina, Charleston, South Carolina.
We thank the physicians and staff of Parkwood Pediatrics and Charleston Pediatrics, Charleston, South Carolina, for help in obtaining and understanding data on their practices.
- Received November 17, 2000.
- Accepted May 29, 2001.
- Address correspondence to Paul M. Darden, MD, Department of Pediatrics, Medical University of South Carolina, 326 Calhoun St, Box 250106, Charleston, SC 29425. E-mail:
Portions of this research were presented at the Southern Societies Meeting; New Orleans, LA; February 18–20, 1999; and at the Pediatric Academic Societies Meeting; San Francisco, CA; May 1–5, 1999.
- American Academy of Pediatrics, Ad Hoc Task Force on Definition of the Medical Home. The medical home. Pediatrics. 1992;90:774
- American Academy of Pediatrics. The medical home statement addendum: Pediatric Primary Health Care (RE9262). Pediatric News. 1993
- Starfield B. What is primary care? In: Primary Care: Concepts, Evaluation, and Policy. New York: Oxford University Press, Inc; 1992:3–9
- Starfield B. Longitudinality and managed care. In: Primary Care: Concepts, Evaluation, and Policy. New York: Oxford University Press, Inc; 1992:41–55
- Alpert JJ, Robertson LS, Kosa J, Heagarty MC, Haggerty RJ. Delivery of health care for children: report of an experiment. Pediatrics. 1976;57:917–930
- Christakis DA, Wright JA, Koepsell TD, Emerson S, Connell FA. Is greater continuity of care associated with less emergency department utilization? Pediatrics. 1999;103:738–742
- Christakis DA, Mell L, Koepsell TD, Zimmerman FJ, Connell FA. Association of lower continuity of care with greater risk of emergency department use and hospitalization in children. Pediatrics. 2001;107:524–529
- McCune YD, Richardson MM, Powell JA. Psychosocial health issues in pediatric practices: parents’ knowledge and concerns. Pediatrics. 1984;74:183–190
- Charney E. The education of pediatricians for primary care: the score after two score years. Pediatrics. 1995;95:270–272
- Roberts KB, Starr S, DeWitt TG. The University of Massachusetts Medical Center office-based continuity experience: are we preparing pediatrics residents for primary care practice? Pediatrics. 1997;100(4):e2. Available at: http://www.pediatrics.org/cgi/content/full/100/4/e2
- Accreditation Council for Graduate Medical Education (US). Manual of Policies and Procedures for Graduate Medical Education Review Committees. Chicago, IL: Accreditation Council for Graduate Medical Education; 1999
- American Medical Association, Accreditation Council for Graduate Medical Education (US). Graduate Medical Education Directory, 1999–2000. Chicago, IL: American Medical Association; 1999
- Dumont-Driscoll MC, Barbian LT, Pollock BH. Pediatric residents’ continuity clinics: how are we really doing? Pediatrics. 1995;96:616–621
- Garfunkel LC, Byrd RS, McConnochie KM, Auinger P. Resident and family continuity in pediatric continuity clinic: nine years of observation. Pediatrics. 1998;101:37–42
- Dixon RA, Williams BT. Patient satisfaction with general practitioner deputising services. BMJ. 1998;297:1519
- Rizos J, Anglin P, Grava-Gubins I, Lazar C. Walk-in clinics: implications for family practice [see comments]. Can Med Assoc J. 1990;143:740–745