SUPPLEMENT

Development and Evaluation of a Satisfaction Scale for Parents of Children With Special Health Care Needs

Henry T. Ireys and Jamie J. Perry
Pediatrics November 1999, 104 (Supplement 6) 1182-1191;

Abstract

Objective. This article describes the development and evaluation of the Multidimensional Assessment of Parental Satisfaction (MAPS) for Children With Special Needs, a tool for measuring satisfaction with providers at the individual level of care.

Methods. Two studies were conducted. The first study created and pilot-tested the scale, using data from 158 parents of children with 4 selected chronic conditions to calculate estimates of reliability and validity. Initial psychometric characteristics were sufficiently strong to warrant further testing. The second study was a field trial of the 12-item MAPS, using data from 302 parents of children with diverse chronic conditions.

Results. Reliability estimates were >.85. The scale's discriminative validity was supported by sharp distinctions between satisfaction ratings for different types of providers. Correlations in the .80s with general satisfaction items indicated strong concurrent validity. Factor analysis revealed a single factor.

Conclusions. The MAPS has psychometric integrity. Assessing satisfaction for children with special health care needs is a complex, necessary part of a comprehensive assessment of quality of care.

  • satisfaction with care
  • children with special needs
  • chronic illness
  • childhood disability
  • health services

Increasing numbers of children, including children with disabilities and chronic illnesses, are enrolling in both commercial and public managed care plans.1 This trend has underscored the need for a better conceptualization and assessment of quality of care.2–5 An important component in evaluating health care interventions is patient satisfaction.6

For children with special health care needs, satisfaction data may be especially important because few other general indices of outcome have been identified to date. Satisfaction as an outcome of care can be measured and compared across the wide variety of chronic diseases and disabilities affecting this population, thereby assuring adequate data for statistical analyses. The usefulness of satisfaction data also stems from its unique role as a personal appraisal of care that cannot be obtained from abstracting administrative data or by observing care directly.7 Credible satisfaction data can be obtained efficiently and will reflect the personal and psychological realities of care for these children.

Because most health-oriented researchers, policymakers, and administrators agree that patient satisfaction is an important measure of quality and, hence, of system and health plan performance, the majority of managed care health plans collect substantial satisfaction data as part of their ongoing administrative procedures. In recent years, there has been an effort to develop standardized satisfaction tools, not only as a means of monitoring quality but also as a means of comparing satisfaction across diverse managed care and traditional health plans.5,9 Although some of this work has focused on children with special health care needs, no single instrument has yet been widely adopted.

It is essential that pediatricians and other pediatric health care providers understand the complexities and implications of assessing satisfaction for this group of children because the results of satisfaction surveys will be used in developing policies for service delivery and financing. This article describes the development and evaluation of a satisfaction instrument specifically for children with special needs and their families, and illustrates the major challenges in assuring that satisfaction scales reflect adequately the characteristics of the complex service system for these children. Specifically, this article presents the methods and results of 2 studies. The first study involved the initial development of items and pilot testing of the instrument, and led to a scale that we named the Multidimensional Assessment of Parental Satisfaction (MAPS) for Children With Special Needs (or MAPS/CSN). The second study was a larger field trial of the MAPS that provided additional estimates of its psychometric characteristics and illustrated its potential usefulness for distinguishing among different types of providers under contract with a managed care organization.

STUDY 1: DEVELOPMENT OF A PILOT INSTRUMENT

Critical Conceptual Issues

Many children with special needs receive services from several providers, including general pediatricians, subspecialty pediatricians, speech and physical therapists, nutritionists, psychologists, social workers, and case managers. As a whole, this population varies widely in the number and type of providers used and in the relative importance of these providers in managing the child's condition. For example, some children with chronic illnesses or disabilities may have a primary care physician and a subspecialist, each with distinct roles; other children in this population may have a primary care pediatrician who handles most or all of the specialized care; still others may have a subspecialty pediatrician who meets all or most of their primary care needs. Each provider has a different but potentially overlapping set of roles and responsibilities.10,11 Specific arrangements vary from family to family, and may vary within a particular family over time as the child moves through new developmental stages or as the condition changes in its expression. A survey that refers to only 1 provider (such as the primary care provider [PCP]) will fail to capture satisfaction with other members of the health care team who may be equally or more important to the child's health status and medical outcomes.

A second conceptual issue in assessing satisfaction involves the multilevel nature of the health care system.2 Satisfaction with health care for children with special needs can be measured at the 1) individual level, where it seeks to evaluate the quality of the interaction between patients and their health care providers; 2) health provider network or plan level, where the focus is on satisfaction with receipt of services and outcomes of care; or 3) community level, where it attempts to measure the effectiveness with which multiple provider networks, public health programs, and community-based programs share responsibility for addressing the health needs of the entire population. Because each level involves different issues, a comprehensive assessment of quality of care must address them separately; satisfaction items must be constructed so that they clearly refer to only one particular level of care. For our purposes, we focused on creating a tool to assess satisfaction at the individual level of care.

Related to the issue of assessing satisfaction separately for each level of care is the concept of multidimensionality of care at each level. Multidimensionality refers to the different aspects of care such as cost, coordination, and provider competence. Each level of the health care system has its own set of these dimensions across which satisfaction can be measured. For instance, dimensions of care relevant to the individual level for children with special needs include such features as developmentally appropriate care and technically competent care, while dimensions related to the health plan level might include such aspects as physical access to facilities and availability of services. A comprehensive quality assessment measure should be able to capture differences between families in their experiences across the multiple dimensions of care. Furthermore, patients and their families may express more satisfaction with some dimensions of care than others.12 Measuring satisfaction for children with disabilities and chronic illnesses must include items that tap the diverse dimensions that are particularly relevant to this population.

A final conceptual issue involves the distinction between disease-specific and general measures of outcomes. Given that >200 chronic conditions affect children, a satisfaction measure that can be used generically would be more practical than one that is linked to specific chronic conditions.

The satisfaction scale described in this article was designed to account for these conceptual issues within its structure and administrative procedures. Specifically, it was designed to: 1) account for different members of a health care team who are interacting with a family of a child with special needs, 2) include items related to the multiple relevant dimensions of care that pertain to this population, 3) distinguish among the different levels of care, and 4) be useable with parents of children who have any serious ongoing physical health condition, as defined by a noncategorical, functional approach.13

Item Development and Refinement

Our first step involved specifying relevant dimensions of care that would provide the conceptual foundation for survey content. In reviewing the literature, we brought together previous work on defining quality of care for this population14–16 with methodologic studies on satisfaction.7,17–20 Five dimensions of care, described in Table 1, were identified as influential in determining parental satisfaction with the many persons who provide health-related services for children with special needs.

Table 1. Dimensions of Care Pertaining to Parental Satisfaction With Providers

Using these 5 dimensions of care as guides, items for the instrument were developed by reviewing existing scales, drawing on extensive clinical experience, and discussing concepts and items with parents and professionals. About 60 items were developed through these sources, and their relevance and clarity were reviewed with parents in a focus group format. The resulting pool of 25 items was reviewed by several clinicians and researchers in this field for conceptual clarity, specificity, and lack of redundancy. This process yielded 20 items. Ten face-to-face test interviews with these items were conducted with parents from 2 subspecialty clinics serving children with special needs. These interviews were used to identify and address potential problems with the technical components of administering the instrument, such as the mechanism for provider identification, unclear terminology, and survey length. Some items were reworded based on these interviews. The final set of 20 items was used in Study 1.

Technical Issues in Scale Construction

Referent Identification and Item Wording

In the first study, we first asked the respondent to list all the health care providers that the child had seen in the last year. The respondent was then asked to identify which person on the list was most important for providing care for the child's condition (ie, subspecialty care) and which person was most important for providing general health care (ie, primary care). The 20 satisfaction items were asked twice if 2 different persons were identified. The items were asked once if the same person was identified as most important in providing both types of care.

The actual items are worded so that the referent can be any type of health care provider or even a specific clinic that has been chosen by the respondent. This mechanism allows us to measure satisfaction at the individual provider-family level within the context of varied health care arrangements. In addition, it creates a means by which valuable information can be obtained regarding the relative importance families place on different providers of care.

Reference Period

Children with disabilities and chronic illnesses often have multiple providers, some of whom may be quite important although seen as infrequently as once a year. A reference period of 12 months was selected because this period of time will include the majority of a child's regular providers and maximize the likelihood that the child will have had at least 1 visit with important providers.

Response Categories

Each item has a 5-point anchored response scale (excellent, very good, good, fair, and poor), with numbers assigned in descending order from 5 to 1. We selected this approach based on research that judged this scaling method superior to others (eg, a 5-choice Likert-type scale or a “very satisfied” to “very dissatisfied” scale).20 In addition, respondents may indicate that the item is not applicable to the provider in question. For example, a respondent may judge that “helping to coordinate services” is not part of the responsibility of a particular provider and thus would view this item as not applicable to a satisfaction rating. This feature turned out to be extremely important in assessing satisfaction.
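
To make the response coding concrete, the sketch below (an illustration only; the variable names and the use of Python are assumptions, since the studies themselves relied on SPSS) maps the anchored categories to the 5-to-1 values described above and keeps "not applicable" out of the numeric scoring.

    # Illustrative sketch only (not the authors' code): maps the anchored
    # response categories to numeric values and treats "not applicable"
    # as missing rather than as a scale point.
    RESPONSE_VALUES = {
        "excellent": 5,
        "very good": 4,
        "good": 3,
        "fair": 2,
        "poor": 1,
    }

    def code_response(answer):
        """Return the numeric value for a response, or None if not applicable."""
        answer = answer.strip().lower()
        if answer in ("not applicable", "does not apply"):
            return None
        return RESPONSE_VALUES[answer]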

Scoring Strategies

Response data from satisfaction scales are likely to be highly skewed.20–22 As a result, the range of total scale scores within a population may be seriously attenuated. Estimates of a scale's psychometric characteristics must take into account this potential skewness. From an insurer's perspective, scoring a scale in the positive direction (ie, leading to satisfaction scores) may be important for purposes of social marketing.22 In contrast, scoring a scale in the negative direction (ie, leading to dissatisfaction scores) may serve to emphasize small but important differences between subgroups. Estimating psychometric characteristics can be accomplished with similar results using either satisfaction or dissatisfaction scores; in our studies, we investigated both of these scoring methods.

Length of Scale

A long scale is unlikely to be used in actual clinical settings; a scale with only a few items may fail to represent critical dimensions. Our goal was to create a psychometrically sound, provider-focused satisfaction scale with 12 to 15 items.

Scale Administration Method

The scale can be formatted for use in a self-administered pencil-and-paper format or by a trained interviewer in a telephone or face-to-face interview. In our studies, telephone interviews were used in protocols approved by the Institutional Review Board at the Johns Hopkins School of Hygiene and Public Health.

Study Methods

Sample

Study 1 participants were 163 mothers of children aged 7 to 10 with diabetes, sickle cell anemia, cystic fibrosis, or moderate to severe asthma; families were recruited through subspecialty clinics and private practices as part of an ongoing evaluation of a family support intervention. Five mothers did not wish to answer the satisfaction questions or were noted by interviewers to have “little understanding of most items” as a result of language or intellectual limitations; these mothers were excluded from the analyses. Study 1 analyses used data from 158 respondents. About 56% of these respondents were white and 40% were black; 68% of respondents reported having taken 1 or more years of college courses; 11% were receiving welfare payments.

Interviewing and Analytic Procedures

Trained interviewers administered the satisfaction scale in the context of a longer 20- to 25-minute telephone interview that included items not relevant to this study. All completed interviews were reviewed by the lead author. Data were missing on <3% of all items. Responses were entered from the interviews into a dBASE program that allowed for the creation of SPSS data files. All analyses were performed using SPSS Base 7.0.23

RESULTS

Response Distribution by Item

We first examined response distributions for all items. Three criteria were used to identify weak items. The first criterion was extent of skewness. In light of the positive skew that is typical in satisfaction surveys, we specifically looked for items where this positive skew was extreme (ie, where >90% of the respondents used the “excellent” or “very good” choices for all referents). The second criterion was a high rate (>60%) of “don't know” or “does not apply” responses for all referents. The final criterion was interviewer assessment of respondents' understanding of an item, as indicated by the number of respondent requests to repeat or clarify the item.
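
The sketch below restates the first 2 screening criteria in code (a hypothetical rendering for clarity, not the authors' procedure; it ignores the per-referent detail and the interviewer-assessment criterion, which required judgment rather than counting).

    # Illustrative sketch (an assumption, not the original analysis code):
    # flag items meeting the first two deletion criteria described above.
    def flag_weak_items(item_responses):
        """item_responses: dict mapping item name to a list of responses such as
        'excellent', 'fair', "don't know", or 'does not apply'."""
        flagged = {}
        for item, responses in item_responses.items():
            n = len(responses)
            top_two = sum(1 for r in responses if r in ("excellent", "very good"))
            unusable = sum(1 for r in responses if r in ("don't know", "does not apply"))
            reasons = []
            if top_two / n > 0.90:
                reasons.append("extreme positive skew")
            if unusable / n > 0.60:
                reasons.append("high rate of don't know / does not apply")
            if reasons:
                flagged[item] = reasons
        return flagged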

Eight items were deleted using these criteria: 2 because of extreme skew, 3 because of high rates of “don't know” or “does not apply,” and 3 because of apparent respondent uncertainty about item meaning. A ninth item was subsequently deleted because of high inter-item correlations with 2 other items. Additional information about items that were removed can be obtained from the lead author (H.T.I.). The revised scale that underwent further assessment in Study 1 had 11 items.

Type of Service System

Of the 158 respondents, 93 (58.9%) reported that they had both a subspecialist and a PCP for their child; these respondents, therefore, completed the satisfaction scale twice. Fifty-four respondents (34.2%) indicated that the child had a PCP who managed the child's condition. Eleven respondents (7.0%) indicated that the child had only a subspecialist.

Respondent Burden

For the 93 parents who completed the scale twice (ie, in relation to their child's subspecialist and PCP), postinterview ratings were made by interviewers on whether the respondent was impatient with answering the same questions twice with different referents. Only 7% of the respondents were judged to be “moderately” impatient; none were judged to be “very” impatient. These results suggest that iterative rating of 2 different providers does not create serious respondent burden.

Defining Provider Role

We first examined the question of whether different activities were differentially judged to be “not applicable” for different provider types. Table 2 presents these data. As expected, in the group of 93 children who had both a PCP and a subspecialist, a large percentage of respondents judged that the PCP had no role in managing the child's condition and the subspecialist had no role in providing primary care. In contrast, all 54 mothers of children who had only a PCP expected this person to have a role in managing the child's condition (ie, none said this activity was not applicable). About 20% of mothers of children who had both a PCP and a subspecialist for their child said that “referring to other doctors or services that your child needs” was not part of the subspecialist's role. About 30% said that “putting me in touch with other parents who have similar concerns” was not part of the PCP's role. To avoid small sample sizes in these analyses, we excluded data from the 11 mothers whose child had only a subspecialist.

Table 2. Percent of Study 1 Respondents Judging Specific Provider Efforts to Be “Not Applicable,” by Service System Group

Scoring

We calculated both satisfaction and dissatisfaction scores. As expected, satisfaction scores were distributed in a skewed fashion. Forty percent of the sample had scores in the top 6% of the range. We also calculated dissatisfaction scores by summing the number of fair or poor responses for each provider, dividing by total number of responses (excluding items judged to be “not applicable”), and multiplying by 10. For example, out of 11 items a respondent could have rated 1 item as not applicable and given a poor or fair rating on 2, yielding a score of 2.0. Respondents' scores were then averaged. Scores could range from .00 to 10.00; higher scores indicate greater dissatisfaction.
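
The following sketch restates this dissatisfaction scoring rule in code (a hypothetical Python rendering for clarity; the original analyses were run in SPSS), including the worked example above of 11 items with 1 rated not applicable and 2 rated fair or poor.

    # Illustrative sketch (not the original SPSS code): number of "fair" or
    # "poor" ratings, divided by the number of applicable items, times 10.
    def dissatisfaction_score(ratings):
        """ratings: list of responses such as 'excellent', 'fair', 'not applicable'."""
        applicable = [r for r in ratings if r not in ("not applicable", "does not apply")]
        if not applicable:
            return None  # no applicable items; score is undefined
        negatives = sum(1 for r in applicable if r in ("fair", "poor"))
        return 10.0 * negatives / len(applicable)

    # Worked example from the text: 11 items, 1 not applicable, 2 fair or poor
    example = ["not applicable", "fair", "poor"] + ["good"] * 8
    assert dissatisfaction_score(example) == 2.0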

We first compared dissatisfaction scores for the subspecialist and the PCP within the group that had both. Mean scores were 1.17 (standard deviation [SD] = 2.0) for the PCP and .60 (SD = 1.4) for the subspecialist, suggesting greater dissatisfaction with the PCP than with the subspecialist. For the group of respondents who indicated that the child had only a PCP, the mean score was .38 (SD = .81).

We calculated the percent of each group that judged the referent to be fair or poor on each item, after removing all “not applicables.” Results are presented in Table 3. This table illustrates several key points. First, percent of poor or fair ratings varies widely across items, suggesting that respondents are making distinctions among the various activities. Second, the pattern of percentages differs by type of provider. In general, for children who have 2 providers, poor and fair ratings are given more often to the PCP than the subspecialist (consistent with the analyses of mean scores described above). Third, providers are rated more poorly on their efforts to link parents to other parents than on any other item.

Table 3. Percent of Study 1 Respondents Judging Provider Efforts to Be “Fair” or “Poor,” by Service System Group

Diagnostic Group Analyses

Diagnostic groups differed in whether the child was reported to have both a subspecialist and a PCP. In the diabetes group, 58 of 64 respondents (90.6%) so indicated. In the sickle cell group, 12 of 32 respondents (37.5%) so indicated. The percentages were 50.0% and 32.7% for the cystic fibrosis and asthma groups, respectively. In this sample, children with asthma were less likely to have 2 physicians involved in their care than children with other conditions. We elected not to conduct additional analyses focused on diagnostic groupings for 2 reasons. First, this would have led to extremely small cell sizes. Second, the primary objective of the study did not require these analyses. This issue remains an important topic for further study.

Reliability Estimates

The standardized α coefficient, a measure of internal consistency, was estimated for the satisfaction scale that all 158 respondents completed in relation to the provider or clinic identified as most important for the child's subspecialty care. This index of reliability was .87, well above the value of .70 often cited to support claims of internal reliability.
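
For readers who want to see how a standardized α is computed, the sketch below gives the usual formula based on the mean inter-item correlation (a hypothetical reimplementation; the study used SPSS, and the function name and use of NumPy are assumptions).

    import numpy as np

    # Illustrative sketch: standardized alpha = k * r_bar / (1 + (k - 1) * r_bar),
    # where k is the number of items and r_bar is the mean inter-item correlation.
    def standardized_alpha(item_scores):
        """item_scores: respondents x items array of numeric ratings (no missing values)."""
        k = item_scores.shape[1]
        corr = np.corrcoef(item_scores, rowvar=False)
        r_bar = (corr.sum() - k) / (k * (k - 1))  # mean of the off-diagonal correlations
        return k * r_bar / (1 + (k - 1) * r_bar)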

Validity Estimates

Pearson correlation coefficients were calculated between the satisfaction scale and the mean score of 3 items7,12,14,15 used to assess general satisfaction: “When it comes to doing everything possible to help (child), this provider does an excellent, very good, good, fair, or poor job”; “If the parents of another child with (condition) asked you about this provider, you would tell them that he/she is … ”; “Overall, the medical care that (child) receives from this provider is …” We combined these items into a single index rather than calculating multiple correlations with each of the 3 items. The standardized αs for the 3 items were between .85 and .91 for different subgroups; this result indicates that combining the 3 items leads to a reliable index of overall satisfaction.

Correlations between this index and our satisfaction scale were .79 for the group overall (N = 158) and .86 (N = 94) for the subgroup that completed the scale in relation to the PCP. These high correlations suggest that the scale as a whole does assess the construct of satisfaction as defined by commonly used items. It is noteworthy that the correlation between the 2 satisfaction scales for the group that had both a subspecialist and a PCP was .26, suggesting that respondents were making substantial distinctions between the 2 referents.
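
As a sketch of how this concurrent-validity check might be computed (an assumption for illustration, not the original analysis code), the 3 general items are averaged into an index and correlated with the scale total.

    import numpy as np

    # Illustrative sketch: correlate total scale scores with the mean of the
    # 3 general-satisfaction items described above.
    def concurrent_validity(scale_totals, general_items):
        """scale_totals: (n,) scale totals; general_items: (n, 3) general-satisfaction ratings."""
        general_index = np.asarray(general_items).mean(axis=1)
        return float(np.corrcoef(scale_totals, general_index)[0, 1])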

Claims of construct validity are strengthened if the scale can distinguish among different providers. Therefore, we calculated dissatisfaction scores for 3 subspecialists who had 14 or more patients in the sample. Scores were .23, .42, and .90—a range that reflects substantial differences in level of dissatisfaction. Although differences in characteristics of the 3 respondent groups may have influenced the ratings, these results nonetheless suggest that the scale can detect differences among providers.

Finalizing Scale for Further Field Testing

These initial findings indicated that the scale had sufficient promise to warrant additional testing. A new item was added based on experience in a separate study at another institution (S.G. Epstein, personal communication, January 1997). This item concerned the ability of the provider to understand how the child's condition influenced family functioning. Thus, the scale that was used in the subsequent field test included the 12 items listed in the Appendix. Each of the domains noted in Table 1 is represented by at least 1 item in the scale. At this point, the scale was named the Multidimensional Assessment of Parental Satisfaction (MAPS) for Children with Special Needs (or MAPS/CSN).

STUDY 2: FIELD TEST OF THE MAPS

Methods

Sample

A sample of 302 participants was drawn randomly from the population of approximately 1800 families enrolled in a managed care organization in Washington, DC, Health Services for Children With Special Needs (HSCSN), Inc. HSCSN was established in 1996 specifically for children receiving Supplemental Security Income payments. Only families who had been enrolled in HSCSN for at least 6 months were eligible.

Telephone calls were placed to a total of 686 families in April and May 1997. Repeated attempts were made to contact selected families; call-backs were scheduled as needed. One hundred forty-three telephone numbers were either disconnected or wrong numbers, with no correct number found. Seventy calls were either not answered, answered with recorded messages, rang busy on repeated attempts, or were found to be duplicates. Fifty-six parents agreed to participate but call-backs could not be scheduled during the study period. Fifty parents refused participation. Twenty-five calls were terminated for diverse reasons. A total of 350 respondents completed the survey. Posthoc ratings by interviewers identified 46 families (13%) who appeared to have little understanding of most of the questions, either because of language problems or intellectual limitations. In most cases, it appeared that respondents' primary language was Spanish. Because the scale was administered in English only, we elected to exclude data from these respondents. Based on the available information, the target children in these families were somewhat more likely to be male and to have a primary diagnosis of a developmental disability.

Subsequent analyses were completed on the remaining 302 families. For some analyses, data from 21 respondents were deleted because of strategically missing data that made comparisons less valid. Less than 3% of the respondents in the sample were not the mothers or grandmothers of the target children. All had income low enough for the children to qualify for Supplemental Security Income. About 61% of the children were male and about 83% were African-American.

Twenty-two respondents (7.3%) were unable to provide the child's primary diagnosis. The remaining 280 were grouped into 7 categories, based on parental report of child's primary diagnosis: developmental disabilities (N = 129; 46.1%), neurologic disorders (N = 73; 26.1%), sensory deficits (N = 23; 8.2%), serious mental health disorders (N = 21; 7.5%), hematologic/oncological conditions (N = 13; 4.6%), pulmonary disorders (N = 12; 4.3%), and other disorders (N = 9; 3.2%).

Procedures

Items for the MAPS were included in a telephone survey completed in May 1997 by trained interviewers used by National Research, Inc, a public-opinion survey firm based in Washington, DC. The lead author participated in the training of the interviewers. At enrollment, HSCSN designates a PCP for the child. In most instances, this is a physician with whom the family already has an ongoing relationship. If a child does not have a PCP, one is assigned based on geographic proximity. All participating children, therefore, have been assigned a PCP.

RESULTS

Of the 302 respondents, 135 indicated that their child had both a PCP and a subspecialist, 54 indicated that their child had a PCP who also provided needed subspecialty-type care (ie, a PCP who managed the child's condition), and 113 indicated that their child had a PCP only and was not receiving subspecialty care at the time of the survey. This last group may represent children with an unmet need for subspecialty care. Table 4 illustrates that, as expected, some items were judged by respondents to be not applicable to either the subspecialist or the PCP. For example, 27% of respondents whose children had both a subspecialist and a PCP judged that the item “managing the child's medical condition” was not applicable to the PCP; 48% of this group judged that the item “general health care” was not applicable to the subspecialist. These results indicate that many respondents are able to distinguish between what items are relevant to different types of providers. It is noteworthy that one-quarter to one-third of respondents in all groups felt that “putting you in touch with other parents who have similar concerns” was not part of the provider's role.

Table 4. Percent of Study 2 Respondents Judging Provider Efforts to Be “Not Applicable,” by Service System Group

Response Distribution by Item

Table 5 shows, for each item, the percentage of respondents in each subgroup who gave a rating of poor or fair, after “not applicable” ratings were excluded. This table suggests that, within the group of respondents whose children had both a PCP and a subspecialist, the PCP was more likely than the subspecialist to receive ratings of poor or fair on most items. Respondents whose children had a PCP but no subspecialty care rated this PCP on most items much as respondents in the 2-provider group rated their PCP. In the group that said the PCP also provided subspecialty-type care, ratings generally fell in the middle.

Table 5. Percent of Study 2 Respondents Judging Provider Efforts to Be “Fair” or “Poor,” by Service System Group

Mean dissatisfaction scores showed a similar pattern. Within the group of respondents reporting that their child had 2 providers, the PCP had a mean score of 1.27 and the subspecialist had a score of .66. In the group of respondents whose child had a PCP but no subspecialty care, the score was .98. In the group where the PCP also provided subspecialty-type care, the score was .60, indicating the least dissatisfaction.

Reliability Estimates

Standardized item αs were calculated using the group of respondents (N = 133) whose children had 2 providers. These reliability estimates were .91 and .92 for the subspecialist and PCP satisfaction ratings, respectively. No deletion of any item would have increased the αs.

We also examined corrected item-total correlations to determine if any items showed poor relationships to the overall score. In analyses with 2 subgroups, responses to the item, “putting you in touch with other parents” were correlated with the overall score at a lower level than other items (in the .35 to .40 range as compared with the .50 to .80 range). However, even deletion of this item would not have raised the standardized α coefficient.

Validity Estimates

As done in Study 1, Pearson correlation coefficients were calculated between the satisfaction scale and 3 items used to assess general satisfaction. These correlations were .85 for the PCP and .80 for the subspecialist ratings (N = 302). Again, these high correlations suggest that the scale as a whole does assess the construct of satisfaction as defined by commonly used items. It is noteworthy that the correlation between the 2 satisfaction scales for the group that had both a subspecialist and a PCP was .31, suggesting again that respondents were making substantial distinctions between the 2 referents.

Factor Structure

Unlike Study 1, this study had sufficient cases to conduct an exploratory factor analysis to examine the internal structure of the scale. The sample for these analyses included all 302 respondents; for those who completed the scale twice, the PCP ratings were used. Two items (“skill in managing the condition” and “linking to other parents”) were excluded because of high “not applicable” rates. A principal components analysis with varimax rotation extracted a single factor that had an eigenvalue of 5.8 and explained 58.4% of the variance. These results suggest that the scale measures a single construct, assuming that “not applicable” items are accounted for.
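
A minimal sketch of this kind of principal components check is shown below (an assumption for illustration; the original analysis used SPSS with varimax rotation, which leaves a single-component solution unchanged).

    import numpy as np

    # Illustrative sketch: eigenvalues of the item correlation matrix; the first
    # eigenvalue and its share of total variance correspond to the single
    # extracted component described in the text.
    def first_component(item_scores):
        """item_scores: respondents x items array, with high-'not applicable' items excluded."""
        corr = np.corrcoef(item_scores, rowvar=False)
        eigenvalues = np.linalg.eigvalsh(corr)[::-1]  # sorted in descending order
        pct_variance = 100.0 * eigenvalues[0] / eigenvalues.sum()
        return eigenvalues[0], pct_variance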

Diagnostic Groupings

As in the first study, the diagnostic groups differed with respect to type of provider system. In the groups of children with developmental disabilities, neurologic problems, and sensory deficits, the percentages of children with both types of providers were 31.8, 63.0, and 43.5, respectively. Again, further analyses were not pursued for the purposes of this article.

DISCUSSION

Our analyses indicate that the MAPS has psychometric merit. Standardized α coefficients were >.85 in both studies; item-total correlations were, with the exception of 1 item, >.55. These results indicate excellent scale reliability. Several analyses support its validity. The items themselves were derived from relevant conceptual domains and refined through a process based on iterative reviews by professionals and parents; discriminant validity was indicated by evidence for differential satisfaction ratings for different types of providers as well as for different individual providers; strong construct validity was indicated by correlations >.75 with items often used to assess general satisfaction.

Our analyses also raise several other important issues. First, if the MAPS is correlated with a few general items of satisfaction, why use the longer scale? Does it provide any additional information that the more limited set of items does not? The answer depends on the purpose for using the data. If a general satisfaction score is needed for “report cards” or to make a global assessment of outcomes, then the MAPS contributes little beyond the general items.

If a more nuanced assessment is needed for purposes of identifying areas for quality improvement, the MAPS (or a similar instrument) can provide a great deal of additional information. For example, data from both studies suggest that parents are especially dissatisfied with providers' efforts to link them with other families, and that this is especially true for PCPs. This information points to the need for improvements in providers' knowledge and capacity to address this dimension of care. It is also important to distinguish between overall ratings of satisfaction and specific satisfaction items that may serve as “red flags.” For example, a heightened number of fair or poor ratings on “referrals to other providers” may signal that parents are perceiving serious problems in accessing subspecialists or related therapists.

A second important issue relates to the use of satisfaction scales for both quality assessment and marketing purposes.8 Pediatricians and other pediatric providers need to understand critical components of scale structure and composition to make judgments about the integrity and value of satisfaction measures. Poorly developed or validated instruments, or scales that inadvertently mix different levels of items (individual, plan, community), may yield scores that are not useful for quality assessment, or that may be misused in the service of inappropriate financial or organizational objectives. Our studies demonstrate the applicability of the MAPS to the diverse provider-family arrangements that characterize the service system for this population.

Third, our results suggest that parental satisfaction is highly dependent on service delivery structure. Satisfaction with a PCP may be influenced substantially by whether the child also has a subspecialist and by that subspecialist's actions. Furthermore, parental expectations may influence satisfaction. Assessment of satisfaction must account for how the parents define the roles and responsibilities of the provider they are judging. This type of interaction was also noted in a study of the relationship between satisfaction and use of services by adults, which found that this relationship is influenced by the context in which medical services are delivered.24

Results from our studies indicate that parents are generally more satisfied (or at least less dissatisfied) with their subspecialists than with their PCPs. A number of interacting factors may account for this finding. First, parents may have more contact with subspecialists and see them as the person who is contacted first when a problem comes up; as a result, they are more likely to seek out and stay with a physician whom they like. In this sense, continuity will be both the cause and consequence of heightened satisfaction.25 Furthermore, the subspecialty physicians may have more of an opportunity to provide information and counseling about the challenges of dealing with the child's health and chronic condition—elements that have been linked to higher levels of satisfaction.26

Overall, our studies provide initial support for the integrity and validity of the MAPS and are generally consistent with previous work. However, our work represents early results on a new scale and it has several limitations. First, it used samples of convenience, and hence its generalizability is limited. Second, sample size limitations constrained our ability to assess effects of important variables such as a child's diagnosis, functional capacity, ethnic background, and socioeconomic status. Understanding potentially complex interactions among these variables, and particularly how they might affect estimates of validity, must await subsequent studies. Finally, resources did not permit longitudinal investigation of how satisfaction may change over time in relation to changes in the child's condition or access to insurance coverage.

Evaluating satisfaction is an important component of a comprehensive assessment of quality of care for children with special needs. Moreover, satisfaction appears to be highly dependent on service delivery structure. Our efforts illustrate the conceptual and technical complexity in developing, implementing, and evaluating a satisfaction scale for this vulnerable population. Child health professionals and researchers need to be aware of these conceptual and methodologic challenges to ensure that satisfaction data account for the complexities of the service systems through which children with special needs receive their care.

APPENDIX

Before completing the satisfaction items, the respondent is asked to identify the person who provides care for the child's condition and then the person who provides “general health care, like the care the child would need if he had a cold or the flu.” If the respondent indicates that it is the same person, the items are asked only once with the provider's name used as the referent. If 2 providers are indicated, the items are asked a second time, as a complete set, with a different referent. Response categories are Excellent, Very Good, Good, Fair, Poor, or Does Not Apply. Scale items are listed below. The domains from which they were originally derived are indicated also; these domain categories are not included in the actual scale formatting.

  1. (This person)'s skill in managing your child's condition is… (Technical competence)

  2. (This person)'s ability to provide general health care, like the care your child would need for a cold or the flu is… (Technical competence)

  3. When it comes to helping you coordinate services for your child, (this person) does a(n) __ job. (Coordinated care)

  4. When it comes to communicating with other professionals about your child's care, (this person) does a(n) __ job. (Coordinated care)

  5. (This person)'s effort to be flexible in the way that he/she works with your family is… (Family-centered care)

  6. (This person)'s sensitivity to your family's cultural background and your beliefs about health is… (Family-centered care)

  7. When it comes to really listening to your opinions about your child's care, (this person) does a(n) __ job. (Interpersonal competence)

  8. (This person)'s ability to answer your questions regarding your child's condition is… (Interpersonal competence)

  9. The amount of information and guidance (this person) gives you to help prevent future problems for your child is… (Developmentally appropriate care)

  10. When it comes to referring you to other doctors or services that your child needs, (this person) does a(n) __ job. (Coordinated care)

  11. (This person)'s effort to put you in touch with other parents who have similar concerns is… (Coordinated care)

  12. When it comes to understanding how your child's condition affects your family, (this person) has a(n) __ understanding. (Family-centered care)

ACKNOWLEDGMENTS

This work was supported by funds from the National Policy Center for Children With Special Health Care Needs (under cooperative agreement with the Maternal and Child Health Bureau; Title V, Social Security Act, Health Resources and Services Administration, Department of Health and Human Services, #MCU24 MCP2), by Grant MCJ-240804 from the Maternal and Child Health Bureau, and by funds from the Women's and Children's Health Policy Center (under cooperative agreement with the Maternal and Child Health Bureau, Title V, Social Security Act, Health Resources and Services Administration, Department of Health and Human Services, #MCU243A19). This project was made possible through collaboration with Health Services for Children with Special Needs, Inc, in Washington, DC.

We thank David Corro and Steve Massey of HSCSN for considerable support and assistance; Amy Martin for help in the project's early stages; Kathy DeVet for data analysis; the Institute for Family Centered Care for assistance in organizing parent focus groups; and the parents who completed the survey.

Footnotes

    • Received April 26, 1999.
    • Accepted August 2, 1999.
  • Reprint requests to (H.T.I.) Johns Hopkins School of Hygiene and Public Health, Department of Population and Family Health Sciences, 624 N Broadway, Rm 247, Baltimore, MD 21205. E-mail: hireys@jhsph.edu

  • MAPS = Multidimensional Assessment of Parental Satisfaction
  • MAPS/CSN = Multidimensional Assessment of Parental Satisfaction for Children With Special Needs
  • PCP = primary care provider
  • SD = standard deviation
  • HSCSN = Health Services for Children With Special Needs

REFERENCES

  1. Leatherman S, McCarthy D. Opportunities and challenges for promoting children's health in managed care organizations. In: Stein REK, ed. Health Care for Children: What's Right, What's Wrong, What's Next. New York, NY: United Hospital Fund; 1997.
  2. Ireys HT, Grason HA, Guyer B. Assuring quality of care for children with special needs in managed care organizations: roles for pediatricians. Pediatrics. 1996;98:178–185.
  3. Newacheck PW, Stein REK, Walker DK, Gortmaker SL, Kuhlthau K, Perrin JM. Monitoring and evaluating managed care for children with chronic illnesses and disabilities. Pediatrics. 1996;98:952–958.
  4. Rosenbaum S. Protecting children: defining, measuring, and enforcing quality in managed care. In: Stein REK, ed. Health Care for Children: What's Right, What's Wrong, What's Next. New York, NY: United Hospital Fund; 1997.
  5. Perrin JM, Kuhlthau K, Walker DK, Stein REK, Newacheck PW, Gortmaker SL. Monitoring health care for children with chronic conditions in a managed care environment. Matern Child Health J. 1997;1:15–23.
  6. Edgman-Levitan S, Cleary PD. What information do consumers want and need? Health Aff (Millwood). 1996;15:42–56.
  7. Ware JE, Snyder MK, Wright WR, Davies AR. Defining and measuring patient satisfaction with medical care. Evaluation Program Plann. 1983;6:247–263.
  8. Cooperman T. Member satisfaction information as competitive intelligence: a new tool for increasing market share and reducing costs. Managed Care Q. 1995;3:36–40.
  9. Ford RC, Bach SA, Fottler MD. Methods of measuring patient satisfaction in health care organizations. Health Care Manage Rev. 1997;22:74–89.
  10. Liptak GS, Revel GM. Community physician's role in case management of children with chronic illnesses. Pediatrics. 1989;84:465–471.
  11. Young PC, Shyr Y, Schork MA. The role of the primary care physician in the care of children with serious heart disease. Pediatrics. 1994;94:284–290.
  12. Hall JA, Dornan MC. What patients like about their medical care and how often they are asked: a meta-analysis of the satisfaction literature. Soc Sci Med. 1988;27:935–939.
  13. Stein REK, Coupey S, Bauman L, Westbrook L, Ireys H. Framework for identifying children who have chronic conditions: the case for a new definition. J Pediatr. 1997;122:343–347.
  14. Epstein SG, Taylor AB, Halberg AS, Gardner JD, Walker DK, Crocker AC. Enhancing Quality: Standards and Indicators of Quality Care for Children With Special Health Care Needs. Boston, MA: New England SERVE; 1989.
  15. Kelley MA, Alexander CS, Morris NM. Maternal satisfaction with primary care for children with selected chronic conditions. J Community Health. 1991;16:213–224.
  16. Kvist SBM, Rajantie J, Kvist M, Siimes MA. Perceptions of problematic events and quality of care among patients and parents after successful therapy of the child's malignant disease. Soc Sci Med. 1991;33:249–256.
  17. Hanes P, Tenison M, Capizzi J, Mohr-Peterson J. 1997 OHP Parent Satisfaction Survey. Portland, OR: Child Development and Rehabilitation Center, Oregon Health Sciences University; 1998.
  18. King SM, Rosenbaum PL, King GA. Parents' perceptions of caregiving: development and validation of a measure of processes. Dev Med Child Neurol. 1996;38:757–772.
  19. Stump TE, Dexter PR, Tierney WM, Wolinsky FD. Measuring patient satisfaction with physicians among older and diseased adults in a primary care municipal outpatient setting. Med Care. 1995;33:958–972.
  20. Ware JE, Hays RD. Methods for measuring patient satisfaction with specific medical encounters. Med Care. 1988;26:393–401.
  21. Steiber SR, Krowinski WJ. Measuring and Managing Patient Satisfaction. Washington, DC: American Hospital Publishing, Inc; 1990.
  22. Strasser S, Davis RM. Measuring Patient Satisfaction for Improved Patient Services. Ann Arbor, MI: Health Administration Press; 1991.
  23. SPSS, Inc. SPSS Base 7.0 for Windows. Chicago, IL: SPSS, Inc; 1996.
  24. Zastowny TR, Roghmann KJ, Cafferata GL. Patient satisfaction and the use of health services. Med Care. 1989;27:705–723.
  25. Breslau N, Mortimer EA. Seeing the same doctor: determinants of satisfaction with specialty care for disabled children. Med Care. 1981;19:741–758.
  26. Brody DS, Miller SM, Lerman CE, Smith DG, Lazaro CG, Blum MJ. The relationship between patients' satisfaction with their physicians and perceptions about interventions they desired and received. Med Care. 1989;27:1027–1090.
Copyright © 1999 American Academy of Pediatrics