Most childhood immunizations in the United States are provided by office-based practitioners.1 The assessment of immunization rates is key to any attempt to increase immunization rates in office practice. The measurement of immunization rates and feedback of that information to providers, as an isolated intervention, has been shown to be effective in improving immunization coverage.2 The article by Morrow et al3 in this issue of the Journal represents an important contribution to our knowledge about the measurement of immunization rates in office practice.
Measuring immunization rates in office practice is conceptually easy to understand: one simply measures the immunization status of the patients in the practice. As Morrow et al3 demonstrate, however, implementing this concept is fraught with difficulties. The authors compare several different methods of assessing immunization rates. Two of these methods (the standard and the consecutive methods) are currently recommended for practices to assess their immunization rates.4,5 All of the methods used the same definition of up-to-date immunizations—the numerator. The difference between the methods lies in how each defines the patients in the practice—the denominator. The biases inherent in each of these methods have only recently begun to be recognized and explored.
The method for assessing immunization rates in offices that is most commonly recommended by the Centers for Disease Control and Prevention (CDC), and epitomized by the Clinic Assessment Software Application (CASA) audit, defines the active patient population by the medical records in the chart room (the standard method).4 This method clearly biases the measured immunization rate in an office downward. Morrow et al3 provide evidence that this method is not only biased but also confounds the association between practice immunization rate and the percentage of Medicaid patients in the practice. In this study, the standard method of measuring immunization rates is associated with both the percentage of Medicaid patients in the practice and the measured immunization rate.
The consecutive method used in this article is one that has been proposed as easier to perform in office practice than a CASA audit.5 As this study and a previous study show, the consecutive method is a more accurate measure of the practice immunization rate than a standard CASA audit is.6
An additional issue is that the implementation of these methods does not necessarily conform to a standard. It is clear from discussions with staff in health departments and at the CDC that CASA, while thought to be a uniform process, is actually implemented in different ways in different locations and, perhaps, differently over time. What effect does changing the definition have on the measured rates? The most “rigorous” definition of an active patient in a CASA audit would be any patient ever seen in the practice who does not have an explicit indication in the medical record of having changed physicians. What happens when “ever seen” turns into “ever seen for a well visit” or “seen in the last year” or, perhaps, into “seen for a well visit in the last year”? As Morrow et al3 demonstrate, these changes in definition will substantially change the measured immunization rate.
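The arithmetic behind these definitional shifts can be made concrete with a small sketch. The roster below is entirely synthetic and illustrative, and the field names and the 365-day threshold are assumptions for the example, not the CASA specification; the point is only that the same numerator divided by progressively narrower denominators yields very different "practice" rates.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    days_since_last_visit: int   # days since the most recent visit of any kind
    days_since_well_visit: int   # days since the most recent well-child visit
    up_to_date: bool             # immunizations complete for age (the numerator)

# Synthetic roster for illustration only; not real practice data.
roster = [
    Patient(30, 30, True),
    Patient(90, 400, True),
    Patient(200, 200, False),
    Patient(500, 500, False),   # likely moved away, but chart never purged
    Patient(800, 800, False),   # likely moved away, but chart never purged
]

def rate(patients):
    """Immunization rate: up-to-date patients / patients in the denominator."""
    return sum(p.up_to_date for p in patients) / len(patients)

# "Ever seen": every chart in the chart room counts (the standard method).
ever_seen = roster
# "Seen in the last year": drop charts with no visit in 365 days.
seen_past_year = [p for p in roster if p.days_since_last_visit <= 365]
# "Well visit in the last year": the narrowest definition sketched above.
well_past_year = [p for p in roster if p.days_since_well_visit <= 365]

print(f"ever seen:            {rate(ever_seen):.0%}")       # 40%
print(f"seen in past year:    {rate(seen_past_year):.0%}")  # 67%
print(f"well visit past year: {rate(well_past_year):.0%}")  # 50%
```

With unpurged inactive charts left in the denominator, the measured rate drops well below what the same practice scores under a "seen in the past year" definition, which is the downward bias described above.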
The reasons why different denominators are used, even with a supposedly standard methodology such as CASA, are illuminating. Practitioners do not want to be held accountable for patients they do not consider their own, and they resist the most rigorous definitions. Local health departments, to be able to perform CASAs in private practices, may modify the CASA definition to one that practitioners will accept as corresponding more closely, in the practitioners' view, to their patients. The practitioners may well be on the right track. However, we need to clearly define the population we are trying to measure and try to understand how the compromises that we make affect the measurement.
The standard assessment appears to have a downward bias that increases with the age of the patient. The bias also appears to be related to practice characteristics, perhaps reflecting differences in the purging of medical records, as the authors hypothesize. The consecutive method and the CASA “seen in past year” method provide much more consistent measures and, interestingly and unlike all the other assessment methods, the difference between them appears to shrink at the 2-year assessment.
When immunization rates of patients are examined by practice, the major determinant of a patient's immunization rate is the practice to which the patient belongs.7,8 Some sources of variation between practices seem evident. Studies have shown the effectiveness of various interventions to increase immunization in practice, such as the assessment of immunization rates with feedback to providers, immunization prompts, reminder and recall systems, and standing orders for immunization.2 Patient factors, such as socioeconomic status and race/ethnicity, account to a lesser extent for some of the differences seen. Still, most of the variation between practices remains unexplained.
Assessing immunization rates in practice settings is one of a few clearly effective methods for increasing immunization rates in practices. When assessment is used for that purpose, defining the assessment methodology precisely, beyond what is needed for consistency, may not be necessary.
Once we begin comparing practices, however, the methodology of assessment becomes crucial. Comparisons are being made. We define good and bad immunization rates. Managed care organizations are mandated to measure rates for their subscribers. We need to explore why some practices do such a good job of immunizing their patients and others do such a poor job.
Can we now recommend the best method for assessing immunization rates in office practice? Probably not; information about how these measures perform, including their performance over time, is not yet available. However, the available information seems to indicate that the consecutive method and the CASA method with a denominator of “seen in the past year” are comparable and valid methods of assessing immunization rates in office practice. The CASA audit, when implemented as the standard method was in this article, seems an inappropriate measure of practice immunization rates.
Abbreviations: CDC, Centers for Disease Control and Prevention; CASA, Clinic Assessment Software Application.
Centers for Disease Control. Guidelines for Assessing Vaccination Levels of the 2-Year-Old Population in a Clinic Setting. Atlanta, GA: US Department of Health and Human Services, Public Health Service, Division of Immunization; 1992:1–82
- Copyright © 2000 American Academy of Pediatrics