Effect of Method of Defining the Active Patient Population on Measured Immunization Rates in Predominantly Medicaid and Non-Medicaid Practices
Objective. To examine the effect of patient selection criteria on immunization practice assessment outcomes.
Methods. In 3 high- (50%–85%) and 7 low- (<25%) Medicaid pediatric practices in urban eastern Virginia, we assessed immunization rates of children 12 and 24 months old, comparing the standard criteria (charts in the active files, excluding those that documented the child moved or went elsewhere) with 3 alternative criteria for selecting active patients: 1) follow-up: the chart contained a complete immunization record or the patient was found to be active in the practice through follow-up contact by phone or mail; 2) seen in the past year: the chart indicated that the patient was seen in the practice in the past year; 3) consecutive: patients who were seen consecutively for any reason.
Results. Of the 1823 charts assessed in the high- and low-Medicaid practices, follow-up identified 61% and 83%, respectively, as active patients; 78% and 95% had been seen in the past year. At 24 months, mean practice immunization rates were lower for standard (70%) than for all 3 alternative criteria (78%–86%). Immunization rate differences between standard and alternative criteria were greater in high- (17%–23%) than in low-Medicaid practices (5%–13%).
Conclusion. The standard for practice assessment should be based on a consistent definition of active patients as the immunization rate denominator.
- immunization assessment
- quality of pediatric care
- methodologic research
- assessment of preventive services
The immunization status of children <2 years old remains a leading indicator of quality of care.1–5 Routine assessment of immunization rates is a national standard for pediatric practice,4 and results in increased practice immunization rates.6–8 The national standard for immunization practice assessment, established by the Centers for Disease Control and Prevention (CDC), involves random or systematic selection of the records of age-eligible children documented as having at least 1 visit to the practice, which is operationalized in different states either as 1 medical or immunization visit or as 1 well-child or immunization visit.9 Charts are excluded if they contain adequate documentation that the patient has moved or gone elsewhere for care.9,10 However, this standard is being reevaluated to promote widespread adoption of immunization practice assessment in the private sector.7,8,11,12 A key issue in immunization assessment methodology is defining the patient population for which the private pediatric provider is responsible, and the appropriate selection of charts for assessment. The ideal target population to include in practice assessment may be patients who are active in the practice as defined by the parent or guardian, but this definition is difficult to operationalize. Practical alternative criteria for defining the active patients to be assessed include consecutive patients or patients seen in the practice in the past year.12 A methodologic study in the Pediatric Research in Office Settings (PROS) network found that the CDC standard assessment yielded mean immunization rates 8 to 10 percentage points below the rates obtained from assessment of consecutive patients or patients determined to be active in the practice by follow-up contact.11 Given the national emphasis on quality assurance assessment in private practice, more methodologic studies are needed.
In 1996–1997, we conducted a physician-led quality improvement initiative that increased immunization rates in 10 pediatric practices in eastern Virginia.8 During this initiative, many participating pediatricians indicated that they considered the current national assessment standard to be biased because it fails to exclude inactive patients. To address this concern, we evaluated the immunization rates obtained from practice assessments conducted using standard criteria for patient selection compared with 3 alternative criteria used to define active patients: 1) follow-up: patients included in the standard assessment who had a complete immunization record in their chart or were identified as active in the practice by phone or mail follow-up contact; 2) seen in the past year: patients included in the standard assessment whose chart noted that they were seen in the practice in the past year for any reason; and 3) consecutive: a separate survey of patients seen consecutively in the practice in the past month for any reason. We compared assessment outcomes for the practices overall, and between predominantly Medicaid and non-Medicaid practices to examine the relationship between immunization assessment methodology and type of practice or patient population.
Study Design and Population
This study included 10 pediatric practices that were participating in a previously reported immunization quality improvement initiative.8 The study population consisted of practice patients between 12 and 30 months old on December 12, 1997. In 3 practices, defined as high-Medicaid, 50% to 85% of patients were Medicaid-insured. In 7 practices, defined as low-Medicaid, <25% of patients were Medicaid-insured. The percent of patients who were Medicaid-insured was estimated by office managers based on patient visit data. Participating practices varied in size, from 2 to 10 full-time pediatricians, and in their policy for purging records from their active files. High-Medicaid practices tended to maintain records of all patients seen, purging the records of inactive patients only every few years, while low-Medicaid practices typically purged the records of inactive patients annually or semiannually, with inactive status based on not having been seen in the past year.
The primary outcome measures were the percent of children up-to-date (UTD) with immunizations at 12 (UTD12) and 24 (UTD24) months old, defined as having all recommended doses of diphtheria-tetanus-whole cell or acellular pertussis (DTP), polio, measles-mumps-rubella (MMR), and Haemophilus influenzae type b (Hib) vaccines. UTD12 was defined as having 3 DTP, 2 polio, and 2 Hib vaccine doses; UTD24 was defined as having 4 DTP, 3 polio, 1 MMR, and 3 Hib vaccine doses. A secondary outcome was the percent of patient charts included in the standard assessment that were classified as active based on independent criteria.
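The UTD12 and UTD24 definitions above reduce to simple dose-count thresholds. A minimal sketch in Python (the function name and data layout are illustrative, not taken from the study's assessment software):

```python
# Dose counts required to be up-to-date at 12 and 24 months, per the study definitions.
UTD12_REQUIRED = {"DTP": 3, "polio": 2, "Hib": 2}
UTD24_REQUIRED = {"DTP": 4, "polio": 3, "MMR": 1, "Hib": 3}

def is_up_to_date(doses, required):
    """Return True if every required vaccine has at least the required dose count.

    `doses` maps vaccine name -> number of documented doses; missing
    vaccines count as zero doses.
    """
    return all(doses.get(vaccine, 0) >= n for vaccine, n in required.items())
```

For example, a child with 3 DTP, 2 polio, and 2 Hib doses is UTD12, but the same child lacking an MMR dose at 24 months is not UTD24.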
All practices were assessed comparing 4 different criteria for defining the active patient population and selecting patient charts, as detailed below: standard, follow-up, seen in the past year, and consecutive assessment criteria. This study was approved by the Institutional Review Board of Eastern Virginia Medical School.
We conducted practice assessments using the Clinic Assessment Software Application (CASA),9,10 which determined the number of records to be assessed per practice based on the practice size. To ensure that all charts had an equal chance of inclusion, assessed charts were systematically selected from active patient files using a randomly selected start point and a calculated sampling interval. Charts were eligible for inclusion if the patient fell within the target age range and had at least 1 well-child or immunization visit in the practice, but were excluded if the chart documented that the patient had moved or gone elsewhere for care.9 Based on these criteria, we excluded charts that documented that the patient was receiving all of their immunizations from a military or public health clinic. In practices with 2 systems of record-keeping (eg, paper and computerized medical records), we searched both systems as needed. All assessments were performed by well-trained staff who were blind to study hypotheses.
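Systematic selection of this kind (a random start point plus a calculated sampling interval) can be sketched as follows; the function and its interface are ours for illustration, not CASA's:

```python
import random

def systematic_sample(charts, n_needed):
    """Systematically select roughly n_needed charts from an ordered file.

    Every chart has an equal chance of inclusion: the sampling interval is
    calculated from the file size, and the start point is chosen at random
    within the first interval.
    """
    interval = max(1, len(charts) // n_needed)  # calculated sampling interval
    start = random.randrange(interval)          # random start point
    return charts[start::interval]
```

With a file of 100 charts and 20 records needed, the interval is 5 and every 5th chart is pulled, beginning at a random position among the first 5.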
Based on follow-up criteria, patients were defined as active patients of the practice if their immunization record in the practice was complete or if they were otherwise identified as active patients of the practice through contact with a parent or guardian by telephone or mail. Children lacking 1 or more doses of DTP, polio, MMR, or Hib based on their age at assessment were identified using CASA software. Current contact information was obtained from each practice for all children who were overdue for 1 or more immunizations given their month of age. Trained interviewers representing the practice network attempted to contact the family by telephone, with at least 3 attempts made, as necessary, on different occasions. When the patient's parent or guardian was contacted, they were informed that the call was part of a quality assurance initiative of their doctor's practice and that their participation was helpful but not required. Parents who agreed to participate were asked whether the practice was their child's current, usual source of well-child care and immunizations, which other providers the child had ever seen for immunizations, and, when possible, to read the child's immunization record to the interviewer over the phone.
A 1-page print version of the survey was mailed to families that could not be reached by telephone, with a preaddressed, postage-paid return envelope and a cover letter from the medical director (A.B.F.) of the physician-hospital organization, to which all of the practices belonged. The external envelope requested the US Postal Service to provide an address correction, if needed. A second wave of surveys was mailed to families that did not respond initially. Thus, we made up to 5 attempts to contact the family by telephone and mail.
To complete partial records, we requested immunization records from all providers listed by the family except for military and out-of-country providers. Updated information was provided to each participating practice as part of their ongoing quality assurance efforts. Based on the follow-up results, patients identified as moved or gone elsewhere or with unknown status were considered inactive and excluded from the assessment. The primary analysis of immunization outcomes was restricted to the practices' original immunization data; however, a secondary analysis assessed the effect of having augmented immunization records.
Seen in the Past Year
This assessment used the standard assessment patients, excluding those not seen in the practice in the past year. During the standard assessment, staff recorded the date that the child was last seen in the practice for any reason, and this date was used to classify children as seen or not seen in the past year.
The consecutive assessment involved a separate survey of 100 patients who were seen consecutively for any reason. Each practice generated a list of patients who visited the practice between November 12 and December 12, 1997. We analyzed only those patients who were between 12 and 30 months old on November 12, 1997.
We calculated immunization rates for each practice and assessment method, and the mean and 95% confidence interval of the practice-level rates. All patients were included in the UTD12 calculations; patients 24 to 30 months old were included in the UTD24 calculations. The mean differences in practice-level immunization rates between the different assessment methods were analyzed using paired t tests, or when the data were not normally distributed, by the Wilcoxon signed rank test. The differences in mean immunization rates between high- and low-Medicaid practices were analyzed by the 2-sample t test. We examined differences in patient follow-up between high- versus low-Medicaid practices using a χ2 test adjusted for practice-level clustering.13 Data analysis was conducted using SAS statistical software (version 6.12; SAS Institute, Cary, NC).
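The paired comparison of practice-level rates can be illustrated with a hand-rolled paired t statistic; this is a sketch using only the Python standard library (the study itself used SAS):

```python
import math
import statistics

def paired_t(x, y):
    """Paired t statistic for matched practice-level rates.

    Each practice contributes one rate under each assessment method, so the
    test operates on within-practice differences: t = mean(d) / (sd(d) / sqrt(n)).
    """
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    return statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
```

For example, four practices whose alternative-criteria rates exceed their standard rates by 1 to 4 points yield a positive t statistic that would be referred to a t distribution with n − 1 degrees of freedom.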
Practice Charts Assessed
As shown in Table 1, the standard assessment included a total of 1823 patient charts, 641 in the high-Medicaid practices and 1182 in the low-Medicaid practices. The consecutive patient assessment included an additional 851 patient charts, 241 in the high-Medicaid and 610 in the low-Medicaid practices.
Of the 1823 patient charts included in the standard assessment, 63% had complete immunization histories and were presumed to be active patients. Telephone and mail surveys documented an additional 13% to be active patients, 18% to have unknown status, and 6% to have moved or gone to another practice. As shown in Fig 1, the follow-up status of patients differed significantly between high- and low-Medicaid practices (χ2adj = 25.5; P < .001). For high- and low-Medicaid practices alike, patients of unknown status were 20 to 25 percentage points below documented active patients in terms of immunization rates and the percent who were seen in the past year, but were similar to patients documented to have moved or gone elsewhere. This pattern confirmed our decision to classify patients of unknown status as not active in the practice.
Comparison of Active Patient Definitions
Based on the charts included in the standard assessment, we compared the percent of patients classified as active by the 2 independent criteria used in this study: follow-up and being seen in the past year (Fig 2). Consecutive patients were not compared because they were obtained by a separate survey and, by definition, all were considered active patients. Classification of patients as active or inactive by follow-up and seen in the past year criteria was significantly correlated (r = .44; P = .024). However, significantly (P < .001; paired t test) fewer patients were classified as active based on follow-up (76%) compared with seen in the past year criteria (89%). Further, high-Medicaid practices had significantly (P < .001; 2-sample t test) fewer patient charts classified as active than low-Medicaid practices, whether by follow-up (61% vs 83%) or by seen in the past year criteria (78% vs 95%). Compared with follow-up criteria as the gold standard, having been seen in the past year provided high sensitivity as a screening criterion for identifying active patients: 367 of 394 (93%) active patients in the high-Medicaid and 974 of 985 (99%) in the low-Medicaid practices. On the other hand, the specificity was modest to poor for identifying inactive patients: 112 of 247 (45%) inactive patients in the high-Medicaid and 53 of 197 (27%) in the low- Medicaid practices. Thus, defining active patients as those seen in the past year did not exclude many of the patients who were classified as inactive by follow-up.
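The sensitivity and specificity figures above follow directly from the cross-classification counts. A sketch of the calculation, with counts taken from the high-Medicaid comparison reported here ("seen in the past year" as the screen, follow-up as the gold standard):

```python
def screen_performance(tp, fn, tn, fp):
    """Sensitivity and specificity of a screening criterion.

    tp: active patients flagged as seen in the past year
    fn: active patients not flagged
    tn: inactive patients correctly excluded (not seen in the past year)
    fp: inactive patients incorrectly retained
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# High-Medicaid practices: 367 of 394 active patients flagged,
# 112 of 247 inactive patients excluded.
sens, spec = screen_performance(367, 394 - 367, 112, 247 - 112)
```

This reproduces the 93% sensitivity and 45% specificity reported for the high-Medicaid practices.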
The mean practice immunization rates obtained using the 4 different selection criteria are shown in Table 1, and the mean differences in practice immunization rates between alternative and standard criteria are shown in Fig 3. Use of any of the alternative patient selection criteria resulted in immunization rates that were consistently higher than the standard (all practices: 4 to 10 percentage points at 12 months; 8 to 16 percentage points at 24 months); all of these comparisons were significant (P ≤ .05) by paired t test or by Wilcoxon signed rank test. Further, the mean differences in immunization rates were greater for high- than low-Medicaid practices, comparing alternative to standard assessments (Table 2). At 12 months, the mean differences in immunization rates between alternative and standard assessments ranged from 7 to 15 percentage points for high-Medicaid practices versus 3 to 7 percentage points for low-Medicaid practices. At 24 months, the mean differences ranged from 17 to 23 percentage points for high-Medicaid practices versus 5 to 13 percentage points for low-Medicaid practices. In general, the alternative assessment rates were similar and could not be readily distinguished from each other (Table 1), but follow-up assessments tended to have the highest rates. In the high-Medicaid practices at 24 months, the mean immunization rate for the follow-up assessment was 5 percentage points higher than the seen in the past year assessment rate (P < .05; paired t test).
As previously noted, the follow-up assessment immunization rates presented in this article used only the original data from each practice to compare the effects of patient selection criteria. However, we conducted a secondary analysis to examine the effect of combining all sources of immunization data. The combined data yielded rates 0 to 4 percentage points higher than those obtained using the original practice data only.
Our study, conducted in a single urban area and network of pediatric practices, demonstrates that substantial differences in immunization rates can occur because of variation in functional definitions of the patient population denominator. The mean practice immunization rates were significantly lower using the standard criteria compared with 3 independent alternative criteria for defining and selecting active patients: at 12 months, 4 to 10 percentage points, and at 24 months, 8 to 16 percentage points. In this methodologic study, the immunization rates obtained using the standard criteria compared with the alternative criteria differed more for high-Medicaid practices (at 24 months, 17 to 23 percentage points) than for low-Medicaid practices (at 24 months, 5 to 13 percentage points).
Using the standard criteria, the patient denominator is largely defined by the charts found in the active files of each practice and the extent to which practices document that patients have moved or gone elsewhere for care.9,12 We found, as previously reported, that pediatric practices vary in their policy and routine of purging patient records, as well as in the percent of charts maintained in active files that pertain to patients seen in the past year (only 78% of high-Medicaid practice charts versus 95% of low-Medicaid practice charts). After exhaustive follow-up efforts to determine the status of patients lacking a complete immunization history, the percent of patients considered active in the practice also differed between high-Medicaid (61%) and low-Medicaid (83%) practices. Our findings suggest that some practices, notably those that care for predominantly Medicaid patients, may appear to have substantially lower immunization rates when assessed using the standard criteria because they maintain records of patients not seen in the past year in the active patient files, and/or because they have greater difficulty in identifying the status of former patients, despite reasonable follow-up efforts. We found that the wide gap in immunization rates between high- and low-Medicaid practices seen in the standard assessments narrowed but did not wholly disappear when alternative assessment criteria were used. Differences in record-purging policies therefore appear to explain some, but not all, of the reason for lower immunization rates measured in high-Medicaid practices.
Health departments throughout the country work with private practices to conduct assessments, typically using a variant of the standard assessment method first issued by the CDC in 1992. The current guidelines are applied somewhat flexibly, allowing assessors to include the charts of patients with 1 or more medical visits or 1 or more well-child visits.9 Review of our excluded charts indicated that, had we used any medical visit rather than a well-child visit as the inclusion criterion in this study, the standard assessment rates would have decreased 1 to 2 percentage points, and the differences between standard and follow-up or seen in the past year assessments would have increased accordingly. Thus, some differences in immunization rates can occur as a result of using these different standard criteria. Another variant of the standard assessment, which is increasingly used, restricts assessment to patients seen in the past year. We found that using only the charts of patients seen in the past year is a simple and appropriate modification to current practice assessment methodology.
In several aspects, this study confirms the findings of the previous methodologic study that involved 15 practices of the PROS network.11 Darden and others found that immunization rates at 24 months were 8 to 10 percentage points higher for assessments based on consecutive patients or active follow-up compared with standard chart selection. Our findings for the low-Medicaid practices resemble those of the PROS network, while our high-Medicaid practices had much greater disparities between alternative and standard assessment rates than previously reported. Also, the PROS network practices had notably better patient follow-up (unknown: 7%) than we could achieve in our highly mobile urban population (unknown: 11%, low-Medicaid practices; 30%, high-Medicaid practices).
While contributing new insights to practice assessment, our study has several limitations. First, this study was conducted in a single region, and thus may not be generalized to the United States as a whole. Nevertheless, our region and practice sample may be similar to practices in urban areas with mobile populations, and provides a complement to the PROS network sample.11 Second, the follow-up criteria used in this study assumed that patients with a complete immunization history were active with the practice. This approach is consistent with the advice given to providers, ie, charts lacking complete immunization data can be excluded if providers document that the patient has moved or gone elsewhere for care. We estimate that our approach may have biased the measured immunization rates upward by about 6 percentage points. However, we did not include immunization data from outside sources in the primary analysis, which is a downward bias of up to 4 percentage points. Thus, on balance, the overall bias in the follow-up assessment rates reported in this study is likely to be minimal.
To our knowledge, this is the first published study comparing the relative impact of differing patient definition and selection criteria on assessment outcomes in different types of practices (high- and low-Medicaid), and to examine the effect of restricting chart selection to patients seen in the past year. Our study demonstrates the importance of standardizing the patient denominator, and suggests that immunization assessments should be conducted using an appropriate, operationally feasible definition of active patient. Assessment of consecutive patients is easiest for practices to enact, but Darden and Taylor note that the approach tends to oversample frequent users of the health care system.12 Assessment of patients seen in the past year is consistent with the record-purging policy of many practices, is less likely to oversample frequent users of the health care system, and appears to be a conservative measure of being active with the practice. Finally, our findings underscore the problem of discontinuity in medical care,14 especially in the Medicaid population, and the need to strengthen the medical home as part of improving immunization rates and quality of care in urban, mobile populations.
This work was supported in part by Children's Hospital of The King's Daughters Health Foundation and Virginia Department of Health (VDH), Division of Immunization.
We gratefully acknowledge the funding provided by the Children's Hospital of The King's Daughters Health System and VDH, Division of Immunization, the support of Jim Farrell (VDH), Dr Jorge Rosenthal (CDC), and the technical consultation provided by Igor Bulim and John Stevenson (CDC). This project would not have been possible without the excellent pediatricians and staff of Children's Hospital of The King's Daughters and Its Physician Partners; the staff of the Center for Pediatric Research, including J. Andrew McCraw, Krystal Hilton, Nermina Nakas, Cynthia Collins-Odoms, Nancy Stromann, and Anne Wright; and local health department staff. We thank them all for their invaluable assistance with this project.
- Received August 2, 1999.
- Accepted December 29, 1999.
Reprint requests to (A.L.M.) Center for Pediatric Research, 855 W Brambleton Ave, Norfolk, VA 23510. E-mail:
- CDC = Centers for Disease Control and Prevention
- PROS = Pediatric Research in Office Settings (network)
- UTD = up-to-date
- DTP = diphtheria-tetanus-whole cell or acellular pertussis vaccine
- MMR = measles-mumps-rubella vaccine
- Hib = Haemophilus influenzae type b vaccine
- CASA = Clinic Assessment Software Application
- Rodewald L, Maes E, Stevenson J, Lyons B, Stokley S, Szilagyi P
- Fairbrother G, Friedman S, DuMont K, Lobach K
- ↵National Committee for Quality Assurance. Health Plan Employer Data Information Set (HEDIS). Washington, DC: National Committee for Quality Assurance; 1999
- Massoudi MS, Walsh J, Stokley S, et al.
- ↵Centers for Disease Control and Prevention. Records in Private and Public Settings: Revised Assessment Methods. Atlanta, GA: National Immunization Program; 1997
- ↵Centers for Disease Control and Prevention. CASA User's Guide Version 3.2a. Atlanta, GA: Centers for Disease Control and Prevention; 1997
- Donner A, Klar N
- Copyright © 2000 American Academy of Pediatrics