Objective. Patient assessments of care are increasingly being considered an important dimension of quality of care. Few studies have examined the types and extent of problems identified by parents in the care of hospitalized children and whether hospital characteristics are associated with some of these problems. The objective of this study was to describe the quality of pediatric inpatient care as perceived by parents of hospitalized children and test whether hospital characteristics (academic status, market competition, freestanding children’s hospital) are associated with variations in quality.
Methods. We performed a cross-sectional analysis of surveys from 6030 parents of children who were discharged for a medical condition from 38 hospitals that used the Picker Institute’s Pediatric Inpatient Survey. The Pediatric Inpatient Survey measures 7 dimensions of inpatient care quality: partnership, coordination, information to parent, information to child, physical comfort, confidence and trust, and continuity and transition. Our main outcome measures included an overall quality of care rating (1 = poor, 5 = excellent), as well as overall and dimension-specific problem scores (0 = no problems, 100 = problems with 100% of processes asked about in the survey). We used Pearson correlation to determine the strength of association between the overall quality of care rating and dimension problem scores. We tested for associations between hospital characteristics and problem scores using linear regression models, controlling for patient health status and other socioeconomic status variables.
Results. Parents on average rated their child’s care as very good (mean: 4.2) but reported problems with 27% of the survey’s hospital process measures. Information to the child (33%) and coordination of care (30%) had the highest problem rates. Parent communication problems correlated most strongly with overall quality of care ratings (r = −0.49). Parents of children who were hospitalized at academic health centers (AHCs) reported 4% more problems overall (29.8% vs 25.5%) and almost 9% more problems with coordination of care (34.1% vs 25.6%) compared with those at non-AHCs. Parents in more competitive markets reported almost 3% more problems than those in the less competitive ones (28.9% vs 26.3%). The freestanding children’s hospital classification was not associated with overall problem scores. We found wide variation in problem scores by hospital, even among AHCs. Hospital and patient characteristics explained only 6% of the variance in problem scores.
Conclusions. Despite high subjective ratings of quality of care, measures of specific processes of care reveal significant variations among hospitals and identify areas with opportunities for improvement. Improving the quality of communication with the parent of a hospitalized child may have the most positive impact on a hospital’s overall quality of care rating. AHCs and hospitals in more competitive markets may be more prone to problems. With wide variation in parental perceptions of hospital quality of care, a systems analysis of individual hospitals may provide strategies for hospitals to deliver higher quality care.
The Institute of Medicine has indicated that patients can provide information about important aspects of quality, information that should be included in the National Healthcare Quality Report to be developed by the Agency for Healthcare Research and Quality annually starting in 2003.1 A recent Institute of Medicine report2 also highlighted the adverse effects that poorly structured health care systems can have and suggested that including the patient in the design of in-hospital processes could help. Processes of care that are more patient-centered have been associated with improved health outcomes.3 Given this, the use of patient perceptions of care for systems improvement4 may lead to better health outcomes.
Parents of hospitalized children can act as proxies for children in identifying shortcomings of hospital systems that health care professionals cannot. Studies of adult patients have identified the dimensions of care that matter most to hospitalized patients. These include respect for patient preferences, coordination of care, information and education, physical comfort, emotional support, involvement of family and friends, and continuity and transition.5 Investigators at the Picker Institute have developed instruments for surveying patients (or parents of children) about their assessment of these processes of care in both inpatient and outpatient settings.
Studies of the relationship of structural characteristics such as academic status and ownership to outcomes such as morbidity and mortality in various conditions have had mixed findings.6–10 Few, if any, studies have examined the relationship of these and other structural measures to parental perceptions of the quality of care.11 Identifying hospital characteristics that predispose to certain types of problems would be valuable for planning targeted quality improvement efforts.
To assess the variation in parent-reported quality measures among pediatric inpatients, we used reports from a diverse group of 38 hospitals that used the Picker Institute’s Pediatric Inpatient Survey (PIS) to describe the quality of care for hospitalized children as experienced by parents. We sought to determine the extent of problems in each of the dimensions of care available in the PIS and assessed the association between patient ratings of care and reports of problems. In addition, we investigated the extent to which selected hospital structural characteristics (academic status, market stage, and freestanding children’s hospital status) were associated with parental reports of quality of care for hospitalized children. Because previous work has shown some association of academic status and market stage with other dimensions of quality,8–10 we hypothesized that these variables may be related to parental reports of quality. Studying the relationship between hospital characteristics and parental reports can help both in understanding what influences quality and in devising quality improvement efforts for different hospital types.
We performed a cross-sectional analysis of parent responses to the PIS. Hospital use of Picker instruments was voluntary. Hospitals that used the survey either 1) were members of hospital consortia interested in quality improvement or 2) independently decided to use the PIS for quality improvement. Responses to the PIS were obtained from the Picker Institute with permission from each participating hospital. All hospitals that used the survey approved the use of their data for research purposes. Study investigators were blinded to the identities of both the parents and the hospitals participating in the study, because the independent survey company (DataStat) that administered the survey assigned study codes for both. The Massachusetts General Hospital Institutional Review Board approved this study.
Parents of children who were discharged for medical (nonsurgical, non-intensive care unit) conditions from the 38 hospitals that used the PIS, within 6 weeks of the beginning of a survey cycle during the study period (January 1997 to December 1999), were eligible for the study. DataStat randomly selected parents from the eligible population. Data regarding nonrespondents were not available. To decrease potential bias associated with differences in case-mix among hospitals, we limited our investigation to nonsurgical, non-intensive care unit patients by using survey responses to exclude admissions that included surgical care or an intensive care unit stay.
The PIS was developed through collaboration between the Picker Institute and Children’s Hospital of Boston. The PIS was designed to measure aspects of a child’s hospitalization that were important to parents, with the intent of providing hospitals with useful information for quality improvement efforts. Picker’s Adult Inpatient Survey and Children’s Hospital’s own inpatient quality of care survey12 provided the foundation for survey development. PIS development included literature review and focus groups both to revise existing items and to add new ones (Table 1). The PIS was targeted to a high school reading level.
The PIS solicits parental reports on inpatient care quality in 7 dimensions: partnership, coordination, information to parent, information to child, physical comfort, confidence and trust, and continuity and transition. Psychometric analysis conducted by Picker has shown the instrument to be reliable: items correlate highly with overall satisfaction and with their dimension problem scores, and most dimensions have a Cronbach’s α of 0.7 or higher.13,14
Quality of Care Measures
We developed 3 groups of quality measures: an overall quality of care rating, dimension-specific problem scores, and an overall problem score. For the overall rating of care, parents were asked to rate the general quality of hospital care on a 5-point Likert scale (1 = poor, 5 = excellent). Additional PIS items inquired about specific dimensions of care. We dichotomized individual item responses as “problems” or “not problems.” For items with >2 response categories, we considered responses to represent problems when any of the least preferable 2 of 4 or 3 of 5 categories was chosen. For example, for questions with response categories of “poor,” “fair,” “good,” “very good,” and “excellent,” responses of “poor,” “fair,” or “good” would be considered problems. For each patient, we calculated each dimension’s problem score as the percentage of items in that dimension reported to be a problem. Each item is included in only 1 dimension. The overall problem score represents an average of the problem scores of all of the dimensions. Problem scores for similar dimensions of care have been used in instruments devised by Picker to evaluate adult care as well as in the Consumer Assessment of Health Plans Surveys, the industry standard for rating health plans.15 Measures of specific processes and dimensions of care provide useful information for quality improvement efforts that ratings cannot and are associated with other indicators of quality.16
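The scoring rules above can be sketched in code. This is an illustrative sketch only (not the authors’ scoring program); the dimension names, item responses, and the assumption that every item uses the 5-category rating scale are hypothetical.

```python
# Responses treated as "problems" for a 5-category rating item:
# the least preferable 3 of 5 categories.
PROBLEM_CHOICES = {"poor", "fair", "good"}

def dimension_problem_score(responses):
    """Percentage of answered items in one dimension flagged as problems."""
    flags = [1 if r in PROBLEM_CHOICES else 0 for r in responses]
    return 100.0 * sum(flags) / len(flags)

def overall_problem_score(by_dimension):
    """Overall score = unweighted mean of the dimension problem scores."""
    scores = [dimension_problem_score(items) for items in by_dimension.values()]
    return sum(scores) / len(scores)

# Hypothetical respondent: two dimensions, three items each.
respondent = {
    "information to parent": ["excellent", "good", "very good"],  # 1/3 problems
    "coordination":          ["fair", "poor", "excellent"],       # 2/3 problems
}
print(round(overall_problem_score(respondent), 1))  # 50.0
```

Note that because the overall score averages dimension scores rather than items, dimensions with few items carry the same weight as dimensions with many.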
Parents were mailed the survey approximately 2 weeks after their child was discharged from the hospital. Postcard reminders were sent to parents who had not returned the survey at either 2 or 4 weeks after mailing. Parents who had not returned the survey within 1 month after the second reminder were considered nonresponders. This survey was used for benchmarking and quality improvement purposes by numerous members of the University HealthSystem Consortium (UHC) and the National Association of Children’s Hospitals and Related Institutions. Hospitals that used the PIS were given data regarding their performance as well as the performance of other hospitals that used the survey.
We used data from the American Hospital Association (AHA) Guide 2000 Edition17 as well as the UHC to gather information on hospital characteristics, with study co-investigators confirming structural classification. We classified hospitals as nonteaching when they did not have a residency-training program in pediatrics and were not affiliated with the Council of Teaching Hospitals. Teaching hospitals had to have either a residency-training program in pediatrics or Council of Teaching Hospitals membership. Academic health centers (AHCs) were teaching hospitals whose faculty conduct research as part of their work. Market stage (or competition) of the hospital area was determined by the UHC Market Stage Classification system.18,19 Because UHC market stage was available only for cities that contain a hospital that belongs to the UHC, only 28 of the 38 hospitals in our sample were in cities with a market classification. Market stage class incorporates a wide range of market characteristics, including managed care penetration, hospital integration, physician integration, and hospital utilization patterns. The stages are described as “unstructured” (stage I), “loose framework” (stage II), “consolidation” (stage III), and “hyper-competitive” (stage IV). Freestanding children’s hospitals were hospitals that were classified as children’s general or children’s other specialty hospitals by the AHA. Hospital ownership (public vs private) was determined using AHA data and hospital self-report.
We used information on patient and parent characteristics collected from the PIS to control for demographic and health status characteristics in our analyses. These included main language spoken at home (English or other), child ethnicity (white, black, Hispanic, Asian, or other), child age (in years), child gender, child health status, and parental education (less than high school, high school or GED, some college, college graduate, postgraduate). Child health status assessment included 2 measures: 1) child chronic health status (the child was defined as having a chronic illness when the parent indicated at the time of the survey that the child had a medical condition that had lasted >3 months) and 2) rating of the child’s health at the time of the survey (1 = poor, 5 = excellent).
To assess the quality of inpatient care, we used the patient as the unit of analysis in computing averages for the overall quality of care rating, as well as the overall and dimension-specific problem scores. Problem scores were treated as continuous variables ranging from 0 to 100 (higher problem score is worse). We used Pearson correlation to determine the associations of individual dimension problem scores with the overall ratings of care. Using bivariate linear regression, we determined unadjusted estimates of the associations of hospital structural and patient characteristics with overall and dimension-specific care. We performed multiple linear regression to obtain adjusted estimates of hospital characteristics effects on dimension-specific and overall problem scores. Both hospital and patient characteristics that were significantly related (P ≤ .05) to problem scores were included in the final models. When using the patient as the unit of analysis, we controlled for clustering effects by hospital in our statistical modeling. Finally, we calculated overall and dimension-specific problem scores using the hospital as the unit of analysis to determine the range of problem scores achieved by hospitals. Hospital-specific problem scores have been used by several hospital consortia for the purposes of benchmarking and improvement.
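The correlation step of the analysis above can be sketched as follows. This is an illustrative sketch (not the authors’ analysis code), and the patient data below are hypothetical; it shows only why dimension problem scores correlate negatively with the 1-to-5 overall rating.

```python
import math

def pearson_r(x, y):
    """Plain Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical patients: higher problem scores tend to accompany lower overall
# ratings, so r should be negative (cf. r = -0.49 reported for information to
# parent in this study).
problem_scores = [0, 10, 25, 40, 60, 80]   # dimension problem score, 0-100
overall_rating = [5, 5, 4, 4, 3, 2]        # overall quality rating, 1-5
print(round(pearson_r(problem_scores, overall_rating), 2))
```

The regression models in the study additionally adjust standard errors for clustering of patients within hospitals, which a plain correlation like this does not do.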
Overall, 26 250 surveys were mailed, with 12 600 parents responding (48% response rate). Hospital response rates ranged from 32% to 59% and were not associated with hospital type. Of the 12 600 respondents, 6030 (48%) were parents whose children were hospitalized for a nonsurgical, non-intensive care unit condition. Table 2 describes the characteristics of the children and their families who were included in the study sample. Ninety-four percent of the responding parents came from homes where English was the predominant language, and 60% of the respondents were white. Fifty-two percent of the parents reported that their child had a chronic condition.
Table 3 describes the characteristics of the hospitals that participated in the study. Of the 38 hospitals, 66% (25 of 38) of the institutions were academic medical centers; 8% (3 of 38) were nonteaching, nonacademic medical centers; 13% (5 of 38) were freestanding children’s hospitals; and 61% (23 of 38) were public institutions. Although the AHA considered only 5 of the hospitals in the sample to be freestanding, this classification may be restrictive: hospitals that are not freestanding in physical space may nonetheless offer services comparable to those of freestanding hospitals. Of the 28 hospitals that were in cities where market stage was available, 18% (5 of 28) were in less competitive markets (I and II) and 61% were in stage III markets.
Parents generally rated their care as very good (mean quality of care rating: 4.2) yet reported problems with 27% of the hospital process measures (Fig 1). Information to the child (33%) and coordination of care (30%) had the highest problem rates. Information to parents was the dimension of care most inversely correlated (r = −0.49) with overall quality ratings (Table 4), followed by partnership in care (r = −0.48). Physical comfort was least correlated (r = −0.27) with overall quality ratings.
We investigated the extent to which the hospital characteristics that we had collected were associated with parental reports of problems. Table 5 presents parameter estimates only for those dimensions in which we found at least 1 hospital structural variable to be significantly related in an adjusted model. Compared with non-AHCs, parents of children who were hospitalized at AHCs reported 4% more problems overall (29.8% vs 25.5%; P < .0001), with 8% more problems in coordination (34.1% vs 25.6%; P < .0001). Patients in more competitive markets (market stage III or IV) reported almost 3% more problems than those in the least competitive ones (P < .001). Children’s hospital status had no significant association with overall problem scores and was significantly related to only 1 dimension (coordination). Hospital ownership (public vs private) had no significant association with overall or dimension-specific problem scores. We found no association between hospital characteristics and problem scores in the dimensions of partnership, information to the child, and continuity and transition.
To assess the level of hospital variation, we calculated problem scores for hospitals. We also stratified the hospitals in our sample according to AHC status. The distribution of overall problem scores of the AHC and non-AHC hospitals has significant overlap (Fig 2). In the linear regressions, hospital and patient characteristics explained only 6% of the variance in overall problem scores (adjusted r2: 0.06).
We found that although parents on average rated the hospital care that their child received as very good to excellent, they still reported problems with a significant percentage of specific hospital processes. With the exception of physical comfort, all dimensions had problem scores above 20%, indicating significant room for improvement in most aspects of care. We also found that overall ratings of care were associated most closely with improved communication with parents and partnership in care, indicating that parents view being kept informed and involved in the care of their child as the highest priority dimensions of patient-centered quality of care.
Parents of children who were hospitalized in AHCs or in hospitals in more competitive markets (market stage III or IV) tended to report lower quality of inpatient care across most dimensions, as measured by parent reports on the PIS. Except for coordination of care, freestanding children’s hospitals were not associated with either fewer or more problems. Although parents of children who were hospitalized at AHCs reported more problems in all dimensions compared with those at non-AHCs, the difference was greatest in coordination of care.
Several hypotheses may explain the greater number of problems at AHCs. First, patients who are hospitalized at AHCs may have more complex problems and thus require more complicated care than those at non-AHCs. Some studies have shown that academic/teaching status is associated with higher rates of adverse events but also with a lower percentage of adverse events as a result of negligence.6 We attempted to control for severity of illness using measures for both current health status and the presence of a chronic illness in the child, but these measures may not fully capture the complexity of caring for these children. Another hypothesis is that the teaching mission of AHCs may result in a larger number of providers for each patient and thus more potential problems in miscommunication and coordination of care.20 Our study findings were consistent with AHCs’ having more problems with care coordination. Finally, the competing missions of research, teaching, and patient care at AHCs may also cause conflicts for time of attending physicians that may be reflected in these quality measures.21 Bedside case presentations at teaching institutions have been shown to increase patient perceptions of physician time spent with the patient and thus may positively affect perceived level of physician/patient partnership in care.22 Our study found no relationship between AHC status and partnership problem scores. Instead, AHC physicians in our study had more problems in establishing confidence and trust with their patients compared with physicians at non-AHCs.
Parents of children who were hospitalized in more competitive markets reported lower quality of care compared with those who were hospitalized in less competitive markets. Children’s hospitals may not respond to market pressures the same way as general hospitals do secondary to differences in mission and business relationships with other health care organizations.23,24 There are competing hypotheses regarding how competition in the hospital area may affect quality of care. In a free market, more competition may spur organizations to improve quality as a means to distinguish themselves and gain market share. Alternatively, hospitals in highly competitive areas may seek to contain costs by limiting funds devoted to patient-centered care programs. Although in our study we cannot explain the mechanism for the effect of market competition on quality, our findings are consistent with the latter hypothesis.
Freestanding children’s hospital status was not significantly related to parent reports of quality. There is some evidence that institutions that deliver condition-specific care may deliver higher quality of care than those that provide care for a more diverse set of conditions.25,26 Such centers become proficient at treating a narrow spectrum of conditions because their resources and programs are devoted to those conditions and because repeated exposure to affected patients builds expertise. Unmeasured selection bias with regard to disease severity and case-mix may also account for these centers’ having higher reported quality of care. Although freestanding children’s hospitals may be focused on treating children, this focus may not be sufficiently narrow to produce measurable differences on these quality measures relative to the pediatric service of a general hospital. Additional investigation with a larger, more representative group of hospitals is indicated.
This study had several limitations. First, the sample of hospitals in our study was not randomly selected or nationally representative; it overrepresented AHCs and nonfreestanding children’s hospitals. Differential selection of hospital participants could account for some of our findings: if, for example, hospitals in more competitive markets were both more likely to participate and more likely to have higher or lower problem scores, this alone could produce associations between market characteristics and problem scores. Second, the overall survey response rate was 48%, with the majority of respondents being white and English speaking. The low response rate may be attributable to the survey length (70 items) but is consistent with response rates for patient surveys. Because the intent of the survey is to obtain parental reports of specific processes of care in many dimensions, shortening the survey would have been difficult. Parents who responded to surveys may have different characteristics compared with nonrespondents.27 Although respondent bias might change the problem frequency in each dimension, it would be less likely to change which dimensions rank higher in problem frequency. Furthermore, studies have shown patients to be a valid source of information for assessing health care quality.28,29 The associations (or lack thereof) of parent-reported problems with hospital characteristics should be viewed as exploratory. Additional research to confirm or refute these findings is indicated.
Despite these limitations, this study has some important implications. First, AHCs may have more problems providing high-quality patient-centered care than non-AHCs. Although AHCs seemed to perform worse in most dimensions, coordination of care seemed particularly problematic. Insofar as improved coordination of care leads to better health outcomes,16 AHCs should develop strategies to improve coordination without adversely affecting their teaching and research missions. This may be particularly challenging given the imminent restrictions in resident duty hours for all residency training programs starting in July 2003.30 Effects of these changes on patient-centered care quality should be monitored closely.
Second, if higher competition is associated with lower parent-reported quality of care, then institutions in highly competitive markets must be especially creative in providing high-quality, patient-centered care without compromising the organization’s financial viability. Furthermore, our study does not support the notion that freestanding children’s hospitals provide higher quality pediatric care than nonfreestanding facilities.
Finally, the wide variation in problem scores suggests that there is ample room for improvement in all hospitals and hospital types. The finding that some AHCs in our sample achieved overall problem scores similar to those of non-AHCs suggests that AHCs have the ability to deliver parent-reported quality of care equivalent to that of non-AHCs. This, combined with the low explanatory power of our model, suggests that unmeasured factors, including underlying hospital organizational structure, may play a large role in affecting the quality of care delivered. Examples could include a hospital culture for improvement, more successful use of data for improvement, and the use of hospitalists or service lines on the inpatient units. A case-based systems analysis of individual hospitals, including their structure and the services that they provide, may provide information about how hospitals may be able to improve quality of care on these patient-centered measures.
This work was conducted during a Harvard Pediatric Health Services Research Fellowship Program sponsored by the Agency for Healthcare Research and Quality (formerly the Agency for Health Care Policy and Research), grant T32HS00063, and also supported by the Deborah Munroe Noonan Memorial Fund.
We thank David Drachman and Ellen Schwalenstocker for assistance in obtaining the data for our study.
- Institute of Medicine. Envisioning the National Health Care Quality Report. Washington, DC: National Academy Press; 2001
- Kohn LT, Corrigan JM, Donaldson MS. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 2000
- Cleary PD. The increasing importance of patient surveys. BMJ. 1999;319:720–721
- Cleary PD, Edgman-Levitan S, Roberts M, et al. Patients evaluate their hospital care: a national survey. Health Aff. 1991;10:254–267
- Mitchell PH, Shortell SM. Adverse outcomes and variations in organization of care delivery. Med Care. 1997;35:S19
- Reliability and Validity of Picker Questionnaires. Boston, MA: The Picker Institute; 1999
- Review of the Pediatric Inpatient Survey. Boston, MA: The Picker Institute; 1999
- Young GJ, Charns MP, Desai K, et al. Patterns of coordination and clinical outcomes: a study of surgical services. Health Serv Res. 1998;33(5 Pt 1):1211–1236
- AHA Guide to the Health Care Field, 1999–2000 Edition. Chicago, IL: American Hospital Association; 1999
- Bourne S, Malcolm C. 1996 Market Classification and Revisions and Review. Chicago, IL: University HealthSystem Consortium; 1996
- Burns LR, Bazzoli GJ, Dynan L, Wholey DR. Managed care, market stages, and integrated delivery systems: is there a relationship? Health Aff (Millwood). 1997;16:204–218
- Ferris TG, Dougherty D, Blumenthal D, Perrin JM. A report card on quality improvement for children’s health care. Pediatrics. 2001;107:143–155
- Herzlinger RE. Focused factories. Giving consumers what they want. Interview by Mark Hagland. Healthc Forum J. 1997;40:22–26
- Herzlinger RE. Market-Driven Healthcare: Who Wins, Who Loses in the Transformation of America’s Largest Service Industry. Cambridge, MA: Perseus Publishing; 1999
- Streiner DL, Norman GR. Health Measurement Scales. Oxford, UK: Oxford University Press; 2000:189–205
- Draper M, Cohen P, Buchan H. Seeking consumer views: what use are results of hospital patient satisfaction surveys? Int J Qual Health Care. 2001;13:463–468
- Accreditation Council for Graduate Medical Education. Resident Duty Hours. Available at: http://www.acgme.org/DutyHours/dutyHoursLanguage.pdf
- Copyright © 2003 by the American Academy of Pediatrics