Background. Children may fall behind on preventive services because they do not receive needed services at the time of an office visit (a missed opportunity). However, methods are needed to measure problems in the care delivery process that lead to missed opportunities. We developed a method to examine the key steps in the preventive service delivery process and identify problems; we assessed the feasibility and validity of the method in primary care practices for children.
Methods. Using 3 data collection methods, we measured key steps in the process of preventive service delivery in primary care offices: a chart audit was used to measure each child's preventive service status before and after an office visit, a brief parent exit interview was used to assess preventive service delivery not documented in the chart, and a staff checklist was used to assess the role of nursing and other office staff. The feasibility of using this combination of measures to identify problems in the care delivery process was evaluated in 3 representative primary care practices (2 pediatric, 1 family practice) among children 5 years and younger.
Results. The measurement method was implemented in all 3 practices. The validity of the method was supported by its ability to detect differences among practices in the proportion of children eligible for immunizations and screening tests and in the proportion of children undergoing key steps in the process of preventive service delivery. The practice with the lowest proportion of children whose charts were screened for preventive services needs had the lowest performance of preventive services.
Conclusions. It is possible to assess specific elements in the process of preventive service delivery in primary care practices. Use of this approach may help practices design and monitor interventions to improve the quality of preventive care delivery.
Despite widespread professional and public support for immunizations and other preventive services, the delivery of such services is less than optimal. For example, chart reviews in both public clinics and private practice settings have documented only 60% to 70% completion of such services for children by 2 years of age.1–3
Children may fall behind on preventive services because they do not come in at appropriate intervals for preventive care, or because they do not receive needed services at the time of an office visit (a missed opportunity). For example, several studies have assessed the frequency of missed opportunities in the delivery of immunization,4–6 and identified reasons why they occur (such as a failure to review the chart to determine if an immunization is needed, or a failure to give all needed vaccines simultaneously). The Centers for Disease Control and Prevention has created software that will calculate missed opportunities. However, merely counting them will not reveal their root causes, and studies to date have not provided a method to identify problems in the process of care delivery that lead to such failures.
Effective preventive service delivery in a primary care office can be conceptualized as a series of 6 steps that lead the office staff, physicians and patients through a process that results in the provision of the service7:
Step 1. The patient's need for preventive care is assessed.
Step 2. The health care provider is made aware of these needs.
Step 3. The needed preventive service is performed.
Step 4. The service is documented in the patient's chart.
Step 5. The patient is provided with education about the importance and timing of future preventive care and encouragement to obtain it.
Step 6. Appropriate follow-up is conducted. If the patient does not return for routine preventive care at the appropriate interval, the patient is recalled. If a patient is referred elsewhere for preventive services, such as the health department for immunizations, efforts are made to obtain the records.
Adherence to this general process should lead to high rates of delivery of preventive services and appropriate follow-up.
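As an illustration only (not part of the original study), the 6-step audit can be pictured as per-visit records from which step-specific failure rates are tallied; the field names and sample data below are hypothetical.

```python
# Hypothetical sketch: tally failure rates for the 6 steps of the
# preventive service delivery process from per-visit audit records.
# Field names and data are illustrative, not from the study.

STEPS = [
    "1_need_assessed",       # chart screened for services due
    "2_provider_alerted",    # clinician made aware of needs
    "3_service_performed",   # needed service delivered
    "4_service_documented",  # service recorded in the chart
    "5_parent_educated",     # parent told about future care
    "6_followup_arranged",   # recall/referral records pursued
]

def step_failure_rates(visits):
    """Return the fraction of applicable visits at which each step failed."""
    rates = {}
    for step in STEPS:
        applicable = [v for v in visits if step in v]  # step was assessed
        if applicable:
            failures = sum(1 for v in applicable if not v[step])
            rates[step] = failures / len(applicable)
    return rates

visits = [
    {"1_need_assessed": True, "2_provider_alerted": True, "3_service_performed": True},
    {"1_need_assessed": False, "2_provider_alerted": False, "3_service_performed": False},
]
print(step_failure_rates(visits))
# Step 1 fails in 1 of 2 applicable visits -> 0.5 (likewise Steps 2 and 3)
```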
Although the process is conceptually simple, in practice it is complex, as shown by the flow diagram of the 6-step process in Fig 1. If practices do not have a systematic way to identify which steps are associated with a problem such as a missed opportunity, it is difficult to make changes that will lead to improvements. For example, training office staff about valid contraindications to vaccines will not lead to improvements in immunization rates if immunizations are poorly documented and staff cannot easily locate information about a child's vaccination status.
Primary care practices for children need a method for identifying problem points in the process so they can develop targeted plans to improve their care. This study was designed to test the feasibility of a comprehensive method for measuring the performance of the 6 key steps in the preventive services delivery process and identifying the problem points.
Our approach to identifying problems in preventive services delivery was based on the algorithm shown in Fig 1. In developing a measurement approach, we concentrated on children <6 years old, because children in this age group require the greatest number of preventive services. We focused on a range of representative preventive services: immunizations, and screening for lead, anemia, tuberculosis, blood pressure and vision. To quantify the frequency with which all 6 steps in the preventive service delivery process were completed, we combined 3 different methods of collecting information: a method to assess needed services and determine which ones were provided, a method to detect services that were performed but not documented in the chart, and a method to assess staff participation in preventive care delivery. These methods are described in more detail below.
1. A chart audit was used to identify the preventive services for which a child was eligible on the date of the assessment. A child was deemed eligible if the preventive service had not already been performed and, based strictly on age criteria, the recommended service could take place at that visit. The chart audit was also used to assess which services were performed. All services recorded on the bill and in the chart note, including any risk assessments for lead, anemia, or tuberculosis, or any contraindication to a service, were noted. Finally, the chart audit was used to collect demographic data such as the child's date of birth, age, sex, race, insurance status, and the reason for the visit.
2. A parent exit interview was used to detect preventive service delivery that might not have been documented in the chart. Parents were asked about any interactions they had had with the front office staff, the nurse, or the practitioner regarding preventive care. Because practices may conduct risk-based screening for anemia and TB exposure, this interview asked specifically about risk assessment questions that might have been asked, preventive services performed, education that had taken place, and discussions of follow-up appointments and plans. The interview was also used to assess whether the child had obtained immunizations at the health department. This interview required <3 minutes.
3. An office staff/nursing checklist was used to determine the extent and manner in which staff were involved in the process of care delivery. The checklist consisted of a 1-page sheet listing 12 activities and was completed by a front office person or a nurse before the patient's encounter with the physician. Embedded in the list were 2 items to determine whether the patient's need for preventive care had been assessed (Step 1) and whether the provider had been made aware of it (Step 2). For Step 1, the checklist asked if the staff member had reviewed the chart to check the patient's immunizations. For Step 2, the checklist asked if the health care provider had been made aware of the needed preventive services (eg, the staff person made a note for the clinician about shots, lab tests, or other screening due that day).
The checklist also asked staff to record who performed each activity, because this information could be used to further characterize the practice's preventive service process. Steps 1 and 2, for example, could be performed by the nurse, the clinician, or both. We assumed that if a child received a service but the nurse did not indicate having screened the chart or alerted the clinician, then the clinician must have performed both steps.
A preventive service was defined as not completed at a visit if a child was eligible for any service, had no valid contraindication noted, and did not receive the service or have a risk assessment performed for that service. The 1996 American Academy of Pediatrics/Advisory Committee on Immunization Practices/American Academy of Family Physicians (AAP/ACIP/AAFP) schedule for immunizations was used as the basis for determining a child's eligibility for immunizations. The AAP criteria have a flexible age range for immunization administration; therefore, any child within this range was considered eligible. We did not assess administration of the varicella vaccine, given the relatively recent addition of this vaccine. Hepatitis B vaccine was also not included in the assessment because at the time of the study, many children 4 to 5 years old were born after the recommendation to initiate the hepatitis B series in all newborn infants. A child was deemed to have undergone screening if there was documentation of a risk assessment or a screening test (eg, hematocrit, purified protein derivative). Children were considered to have been adequately screened for a particular problem if they had had at least 1 screening procedure in the following age ranges: lead and tuberculosis (9–24 months of age), anemia (6–24 months of age), and vision and blood pressure (formal testing between 3 and 5 years of age). The schedule for these preventive services was in accord with the AAP's “Recommendations for Preventive Pediatric Health Care”.8
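The screening age windows above can be expressed as a simple eligibility check. The age cutoffs come from the text; the function, record format, and the 36–60 month rendering of "3 to 5 years" are illustrative assumptions.

```python
# Sketch of screening eligibility by age, using the windows stated in
# the text: lead and TB 9-24 months, anemia 6-24 months, vision and
# blood pressure 3-5 years (rendered here as 36-60 months, an assumption).

SCREENING_WINDOWS_MONTHS = {
    "lead": (9, 24),
    "tuberculosis": (9, 24),
    "anemia": (6, 24),
    "vision": (36, 60),
    "blood_pressure": (36, 60),
}

def eligible_screenings(age_months, already_screened):
    """Screenings a child is age-eligible for and has not yet received."""
    return [
        svc
        for svc, (lo, hi) in SCREENING_WINDOWS_MONTHS.items()
        if lo <= age_months <= hi and svc not in already_screened
    ]

# A 12-month-old already screened for anemia is due for lead and TB screening.
print(eligible_screenings(12, {"anemia"}))  # -> ['lead', 'tuberculosis']
```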
Assessment of Measurement Approach
We examined the ability of the measurement approach to detect steps in the care delivery process in 3 primary care practices in North Carolina, selected because they represented a spectrum of the settings and ways in which children receive preventive services. Two of the practices (a private pediatric practice and a private family medicine practice) were located in an urban area and primarily served insured clients. A second private pediatric practice was located in a rural area and served mostly low-income patients. This practice was selected because it had participated in the development of an intervention to improve preventive care delivery and was therefore expected to show better rates of completion of each step in the process than the other practices. The study was approved by the Institutional Review Board of the University of North Carolina.
Because opportunities for preventive care exist during all types of visits (acute, follow-up, well-child), we sampled children coming to each practice for any reason. We assumed that assessing patients' needs for multiple preventive services would enable clinicians to decide how and when to deal with any particular need. Such an approach enables clinicians to avoid inadvertent omissions of preventive care. We restricted the sample to English-speaking families. All age-eligible children were enrolled. The number of patients sampled in each practice varied depending on the type of practice, the patient volume per day, the age distribution of the patients, and the number of days that the practice agreed to participate in the study. Our objective was to sample at least 30 consecutive charts per practice, because a sample of this size yields a standard error of approximately .07 for an estimated proportion.
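For context, the precision figure can be reproduced with the usual standard error of a proportion; the assumed proportion of 0.8 is our illustration and is not stated in the text.

```python
import math

def se_proportion(p, n):
    """Standard error of an estimated proportion p with sample size n."""
    return math.sqrt(p * (1 - p) / n)

# With n = 30 and a completion proportion near 0.8, the standard error
# is about 0.07, matching the precision cited in the text.
print(round(se_proportion(0.8, 30), 2))  # -> 0.07
```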
Because one of the practices (the family practice) had very few pediatric patients, it was not feasible to station a research assistant there to conduct parent exit interviews. Despite the lack of this interview information, we were able to acquire some information on Steps 1 through 4 in this practice.
Assessment of Validity
The validity of the 3-part method was assessed in 3 ways. First, the method was reviewed by individuals knowledgeable about office systems to determine if it appeared to be measuring all the steps in the process of preventive service delivery (face validity). Second, we examined whether the method discriminated among the 3 practices and revealed different process problems in each practice (discriminant validity). Third, we determined whether the performance of Steps 1 and 2 (screening the chart and making the provider aware of a need for a service) was associated with higher rates of completion of preventive services (Step 3) (criterion validity). We did not examine the relationship between Steps 4 through 6 and the overall rates of preventive service delivery because neither documentation of the service, provision of parental/patient education, nor arrangements for follow-up preventive care are necessary for Step 3 to occur. These latter steps may have more impact on provision of future preventive services. We did not conduct a formal assessment of the reliability of the measurement approach; however, we did assess whether the method provided similar information about the same practice on consecutive days. Because of the small number of practices and the small sample of charts in each, the assessment of discriminant and criterion validity should be regarded as preliminary.
Each practice's results were compared with the idealized AAP/ACIP/AAFP recommended schedule for immunization and other preventive services, and with the practice's own clinical preventive services schedule. Frequencies were generated for the 6 steps, and results were stratified by type of visit and practice.
Table 1 shows the characteristics of the 3 practices involved in the project. Of note, the family practice (practice 2) had a lower immunization rate and saw a much lower volume of children than the pediatric practices (practices 1 and 3). In addition, the mean age of the sampled children was much higher in this practice. The rural pediatric practice saw a larger volume of Medicaid patients, and a slightly greater proportion of the visits sampled were for acute care.
Data from the chart audit (Table 2) indicated that, according to the practices' own schedules, approximately two-thirds of children presenting for acute or well-child care were eligible for at least 1 preventive service. As this Table indicates, there was considerable variation among the 3 practices in the proportion of preventive services for which children were eligible. In practices 1 and 3, nearly one-third of the children needed immunizations alone, while only 8% of those in practice 2 required immunizations alone. More than one-third of children due for services in practices 1 (35%) and 2 (42%) required only services other than immunizations (lead, tuberculosis, anemia, vision, or blood pressure screening, or >1 of the above), whereas only 17% of children in practice 3 needed only these other preventive services. The results were similar when the AAP's schedule of preventive services was used as the standard against which eligibility was assessed.
Virtually all of the children presenting for well-child care were eligible for a preventive service (80%–100%). Over half of the children presenting for sick visits were eligible for preventive services. As expected, practices were more successful in providing preventive services at well-child visits than at sick visits. Practice 1 provided preventive services to 7% of eligible children during sick visits, while the other 2 practices provided no preventive services during such visits.
Although the data from the chart audit indicated the frequency of opportunities to provide preventive services and whether services were provided, they did not indicate the source of problems or whether problems differed among practices. Data from the staff checklist completed before the encounter with the physician and from the parent exit interview (Table 3) showed the proportion of patients in each practice for whom some steps in the delivery of preventive services were not performed. There was substantial variation among the 3 practices in the frequency with which the various steps were omitted. For example, practice 1 staff failed to screen the charts of 37% of children seeking care and failed to make the clinician aware of the service needs of eligible children 44% of the time. Correspondingly, the practice failed to provide services 44% of the time. However, for patients who did receive a preventive service, the practice was successful in documenting the services provided (only 3% failure; Step 4), and most parents said that the staff told them when they were due for future preventive services (only 10% failure; Step 5). Thus, the problems in this practice appeared to lie at the level of chart screening and clinician prompting.
It was more difficult to assess the failure rates for the steps in practice 2 because of the absence of interview data. The checklist alone indicated that this practice had an 81% failure rate at Step 1 (not assessing the need for a service) and a 59% failure rate at Step 3 (performing the service). Given the latter, and because no nurse indicated on the checklist having notified the clinician of services due, we assumed a 59% failure rate at Step 2 as well. In this practice, it was unclear who was doing the screening for preventive services. Problems clearly existed at the first 2 steps; the practice, however, documented services well.
Practice 3 had the lowest failure rate, 27%, at Step 1. As noted earlier, this practice had participated in an intervention program to improve preventive service delivery and thus was expected to differ from the other practices in failure rates. For example, it had worked for some time on Step 1 and had begun placing colored post-it prompts on the charts of children in need of immunizations. This practice also had the lowest failure rate, 31%, at Step 2, yet its failure rate at Step 3 (performing the service) was 58%, similar to that of the other 2 practices. When we examined the use of the post-its more closely, we found that they were used in only 63% of well-child visits and 43% of sick visits. In addition, the post-its were sometimes inaccurate. In a sample of 40 post-its, the sensitivity of the nurse screening was 63% and the specificity was 100%. In other words, when a nurse indicated that a child needed a screening test, she was always correct, but 37% of children who actually needed a test were marked as not needing one. This failure to flag needed services may explain the high failure rate at Step 3 if providers were relying on this information to determine service eligibility.
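The post-it accuracy figures can be reconstructed from a standard 2×2 table. The counts below are hypothetical but consistent with the reported 63% sensitivity and 100% specificity in a sample of 40 post-its.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for 40 post-its: 19 children truly needed a test
# (12 flagged, 7 missed); 21 did not need one (none incorrectly flagged).
sens, spec = sensitivity_specificity(tp=12, fn=7, tn=21, fp=0)
print(round(sens, 2), spec)  # -> 0.63 1.0
```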
Overall, our measurement approach proved feasible in all 3 study practices, with some modifications based on practice-specific features. Staff in each of the 3 practices commented that the method of preventive service assessment was not disruptive, and when the results were presented to the practices, representative clinicians, nursing staff, and office personnel reported that the patterns observed reflected the current state of their preventive service delivery systems. The method also appeared to be reliable: when the data for practices 1 and 3 were sorted by date of the study or by the interviewer present for the parent interview, the patterns of responses to the interview questions and the completion of the checklist were similar.
Our study suggests that it is possible to assess the process of preventive service delivery in primary care practices for children and detect the degree of success with which practices execute key steps in the process. The method was feasible in 3 busy primary care practices and proved flexible enough to adapt to the differences among practices. Practices reported that the results of the assessment accurately reflected their preventive services delivery and said that the information was useful in office redesign efforts.
To date, most efforts to explore practice-level barriers to preventive care delivery have focused on isolated services such as immunizations. Studies of missed opportunities to immunize children have detailed the pervasiveness of the problems, provided possible reasons for their occurrence, and estimated their contribution to underimmunization.4,9,10 For example, Szilagyi et al,4 in a study of 7 primary care settings, found that missed opportunities contributed 13% to the total undervaccination time in a suburban practice, 27% in a hospital-based clinic, and >40% in a diverse array of other clinical settings including a neighborhood health center, a group model health-maintenance organization, a rural health center, a rural private practice, and an urban group practice. Surveys of immunization practices have suggested broad changes to improve service delivery.5,6,11 For example, practitioners are urged to offer preventive services at both acute and well-child visits, to administer all needed immunizations at the same visit, and to cease deferring immunizations for invalid contraindications such as low-grade fevers or upper respiratory tract infections.12
These studies have relied primarily on questionnaires or chart audits and reviews of missed opportunities, rather than focusing specifically on the process of care delivery in detail. Such studies miss the contribution of other office personnel to the preventive care delivery process and fail to describe adequately the complex nature of the process. As Berwick and Nolan13 point out, “In health care, the tendency has been to seek improvement by trying to perfect the elements of care—to make doctors better at doctoring, to make nurses better at nursing. All of this ‘discipline-specific’ improvement helps, but modern systems theory suggests that greater leverage often lies in changing the patterns of interactions and in redesigning the overall flow of work.” A missed opportunity represents a failure of the preventive service delivery system, and a systems approach to the problem is needed. Focusing on elements of care may not provide enough information to make useful improvements. Clinicians may be unaware of the duties of office personnel, and chart documentation of encounters is often incomplete. Charts do not document whether the child's chart was assessed for services due, whether the provider had that information, or why the provider decided to postpone an immunization. This information is critical if practice-level interventions are to be effective.
The results from the practices reported here provide evidence of the validity of this type of measurement approach. We selected practices because we anticipated differences among them. Although we did not test for differences among the practices statistically, we observed the anticipated differences. For example, the practice that had devoted energy to improving chart screening (Step 1) had the lowest proportion of patients failing at this step. The direction and magnitude of the differences observed suggest that this approach may be useful in identifying problem points in care delivery. The 3 complementary data collection methods are necessary to gain a comprehensive picture, although whether to use all 3 methods or only 1 or 2 might depend on the improvement issue at hand.
Although the measurement approach proposed in this article appears feasible, it has several weaknesses. The accuracy of the different measurement components may vary. For example, much of what takes place during clinical encounters is not documented in the chart. Therefore, a chart note may not accurately reflect clinical decisions, particularly an active decision not to provide a preventive service. Further, the use of the checklist may lead to overestimation of the frequency of chart screening if staff check off socially desirable responses indicating that screening occurred whether or not it took place. Finally, while this study provides evidence of validity, given the small sample and the potential for bias, the results should be considered preliminary. Despite these limitations, the method appeared to provide useful information to practices and helped to catalyze further efforts at improvement. For example, all 3 practices found that they did a much better job of providing preventive services at well visits than at sick visits. In fact, essentially no preventive services were provided during acute care visits, reflecting the traditional division of pediatric care into well and sick visits. The better performance of practice 3 in Steps 1 and 2 occurred despite the fact that more visits in this practice were for acute care. This suggests that measuring the process of preventive services delivery at non–well-child visits may be useful in developing a comprehensive system for preventive services delivery.
Screening the chart for preventive services (Step 1) appears to be one of the keys to service provision. The practices in this study did a much better job of screening for immunizations than for any other preventive service. Although many of the children who presented to the practices were eligible for immunizations, many were also eligible for other preventive services. Screening must be directed to these services as well if rates are to improve. In addition, once a practice decides to implement a tool such as a post-it prompt to increase rates of preventive service delivery, all involved staff must be adequately trained and use it consistently. Such tools require ongoing monitoring to ensure proper implementation.
To avoid problems such as missed opportunities, it is necessary to understand why they occur and to target solutions to the root causes of service delivery failures. The method described here provides a way to detect what the problems are and to measure whether changes are leading to improvements in the process of care delivery. This process-oriented method is a diagnostic tool, not a solution to problems. However, it examines the process of pediatric preventive service delivery in office practices from both a quantitative and qualitative perspective and makes it possible to target office system interventions to the particular problem(s) of the practice, and monitor a new process once it has been implemented. The availability of a method to assess the process of preventive services delivery in practices is but 1 of the elements needed for practices to improve the effectiveness of care. Practices need to develop realistic, practical solutions for their setting, and gain the cooperation of the staff in implementing change.
We are grateful for the participation and enthusiasm of the physicians and staff of the Lumberton Children's Clinic, Durham Pediatrics, and Triangle Family Practice for this project. We also appreciate the assistance of Laura Dominguez and Beth Holloway, RN. Elizabeth Tornquist provided assistance editing the manuscript.
- Received July 13, 1999.
- Accepted May 1, 2000.
- Address correspondence to Peter Margolis, MD, PhD, Pediatrics and Epidemiology, University of North Carolina, Children's Primary Care Research Group, 1700 Airport Rd CB 7226, Chapel Hill, NC 27599-7226. E-mail:
Dr Margolis was a Robert Wood Johnson Generalist Faculty Scholar during the execution of this project.
- AAP/ACIP/AAFP =
- American Academy of Pediatrics/Advisory Committee on Immunization Practices/American Academy of Family Physicians
- American Academy of Pediatrics. Guidelines for Health Supervision. Elk Grove Village, IL: American Academy of Pediatrics; 1997:257
- Copyright © 2000 American Academy of Pediatrics