Children’s Hospital Characteristics and Readmission Metrics
BACKGROUND AND OBJECTIVE: Like their adult counterparts, pediatric hospitals are increasingly at risk for financial penalties based on readmissions. Limited information is available on how the composition of a hospital’s patient population affects performance on this metric and hence affects reimbursement for hospitals providing pediatric care. We sought to determine whether applying different readmission metrics differentially affects hospital performance based on the characteristics of patients a hospital serves.
METHODS: We performed a cross-sectional analysis of 64 children’s hospitals from the Children’s Hospital Association Case Mix Comparative Database 2012 and 2013. We calculated 30-day observed-to-expected readmission ratios by using both all-cause (AC) and Potentially Preventable Readmissions (PPR) metrics. We examined the association between observed-to-expected rates and hospital characteristics by using multivariable linear regression.
RESULTS: We examined a total of 1 416 716 hospitalizations. The mean AC 30-day readmission rate was 11.3% (range 4.3%–19.6%); the mean PPR rate was 4.9% (range 2.9%–6.9%). The average 30-day AC observed-to-expected ratio was 0.96 (range 0.63–1.20), compared with 0.95 (range 0.65–1.23) for PPR; 59% of hospitals performed better than expected on both measures. Hospitals with higher volumes, lower percentages of infants, and higher percentages of patients from low-income neighborhoods performed worse than expected on PPR.
CONCLUSIONS: High-volume hospitals, those that serve fewer infants, and those with a high percentage of patients from low-income neighborhoods have higher than expected PPR rates and are at higher risk of reimbursement penalties.
- AC — all cause
- APR-DRG — All Patient Refined Diagnosis-Related Group
- DRG — diagnosis-related group
- PPR — Potentially Preventable Readmissions
What’s Known on This Subject:
State Medicaid offices often use Potentially Preventable Readmissions software to calculate pediatric readmission penalties. Little is known about factors that affect hospital performance on this metric.
What This Study Adds:
Hospitals with high volumes, those that serve fewer infants, and those with a high percentage of patients from low-income neighborhoods have higher than expected potentially preventable readmission rates and are at higher risk of reimbursement penalties.
Readmission rates are increasingly used as a measure of hospital quality.1,2 The Centers for Medicare & Medicaid Services reduces payments to hospitals with high rates of Medicare readmissions for a select but growing list of conditions.3 Particularly relevant to pediatrics, in several states Medicaid reimbursement penalties focus on overall, rather than condition-specific, hospital readmission rates.4–10 As a result, pediatric hospitals and health care systems highly dependent on Medicaid reimbursement are becoming financially liable for their overall readmission rates, prompting a greater focus on valid metrics including true preventability in a diverse population of readmitted patients.
Although there are several methods to calculate pediatric readmission rates,11–13 many state Medicaid offices use proprietary software developed by 3M (Saint Paul, MN) called Potentially Preventable Readmissions (PPR).12 PPR software considers certain conditions nonpreventable (eg, neonatal conditions); such conditions are excluded from PPR software determination of readmissions and are not considered when calculating penalties. These exclusions were developed from expert panel opinion and not validated with medical record review. As a result of these diagnosis-based exclusions, hospitals may perform differently on their overall all-cause (AC) readmission rate and their PPR readmission rate. Understanding the differential performance on different readmission metrics is essential for improvement efforts to lower readmission rates.
Patient socioeconomic status is linked to overall health outcomes14; risk adjustments for social determinants of health have been shown to affect hospital performance on readmission rates as measured by PPR software.15 It is unknown whether hospital performance as measured by PPR software is a true reflection of care quality or represents the inclusion and exclusion of conditions inherent to the PPR algorithm. Thus, to assess hospitals’ performance on the PPR metric as compared with overall readmissions, we compared hospital PPR rates with AC readmission rates (ie, including all readmissions regardless of any consideration of preventability). Our primary aim was to determine the extent to which patient demographics affect a hospital’s performance difference in PPR and AC readmission rates.
We used data from 64 children’s hospitals included in the Children’s Hospital Association (Washington, DC) Case Mix Comparative Database and included index hospitalizations from 2012 and 2013. The Children’s Hospital Association and participating hospitals jointly ensure data quality.16 We excluded hospitalizations for normal newborn birth (diagnosis-related groups [DRGs] 626 and 640) because they are typically not included in examination of readmissions for children.11,17,18 We calculated 30-day readmission rates by using 3M PPR software (version 30). Observed AC and PPR rates were calculated for each hospital at an All Patient Refined Diagnosis-Related Group (APR-DRG) severity of illness level, then aggregated to an overall rate for the individual hospital. Expected readmission rates were defined as the average of the rates for all hospitals in this study at each APR-DRG severity level, aggregated to the level of the individual hospital based on that hospital’s case mix.
Observed-to-expected ratios (observed AC/expected AC and observed PPR/expected PPR) were calculated for each institution based on APR-DRG severity of illness. Policymakers often use observed-to-expected ratios to assess quality by comparing actual performance (observed rates) with theoretical performance (expected rates). An observed-to-expected ratio of 1 means that a hospital has the same number of readmissions as is expected given their level of patient severity. An observed-to-expected ratio <1 means that a hospital has fewer readmissions than would be expected given the severity of the patients at that hospital. Likewise, an observed-to-expected ratio >1 indicates more readmissions than would be expected. For example, an observed-to-expected ratio of 1.1 indicates that a hospital has 10% more readmissions than would be expected given their patient population, accounting for severity. Because observed-to-expected ratios can be difficult to interpret in a clinical setting, we also calculated a risk-standardized readmission rate for each hospital (methods and results are presented in the Supplemental Information). We constructed scatter plots to visualize variation in hospital observed AC and PPR rates and observed-to-expected ratios.
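The case-mix adjustment described above can be illustrated with a minimal Python sketch. This is not the 3M implementation; the hospitals, severity strata, and 0/1 readmission outcomes below are hypothetical, and the strata simply stand in for APR-DRG severity levels. The expected rate for a hospital is the all-hospital rate within each stratum, weighted by that hospital's own case mix.

```python
from collections import defaultdict

# Hypothetical 0/1 readmission outcomes per (hospital, severity stratum).
admissions = {
    ("A", "low"): [1, 0, 0, 0],
    ("A", "high"): [1, 1],
    ("B", "low"): [0, 0],
    ("B", "high"): [1, 0, 0, 0],
}

# Pooled readmission rate per stratum across all hospitals:
# the benchmark used to compute "expected" rates.
pooled = defaultdict(lambda: [0, 0])  # stratum -> [readmissions, cases]
for (hosp, stratum), outcomes in admissions.items():
    pooled[stratum][0] += sum(outcomes)
    pooled[stratum][1] += len(outcomes)
stratum_rate = {s: r / n for s, (r, n) in pooled.items()}

def oe_ratio(hospital):
    """Observed/expected ratio for one hospital, adjusted for its case mix."""
    observed = expected = 0.0
    for (hosp, stratum), outcomes in admissions.items():
        if hosp == hospital:
            observed += sum(outcomes)
            expected += stratum_rate[stratum] * len(outcomes)
    return observed / expected

print(oe_ratio("A"))  # > 1: more readmissions than expected for its case mix
print(oe_ratio("B"))  # < 1: fewer readmissions than expected
```

With these toy data, both hospitals see the same strata but with different mixes, so a raw rate comparison would mislead; the ratio isolates performance from case mix.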
We examined multiple hospital characteristics including payer mix (as the percentage of Medicaid patients), percentage of minority patients, hospital admission volume, case mix index, percentage of infants (patients <1 year old), hospital type (freestanding children’s hospital versus children’s hospital within an adult system), percentage of pediatric intensive care beds, nursing magnet status, and region of the country. We also examined the Commonwealth Fund State Health Ranking, because it has been associated with pediatric readmission.19 We linked patient zip code to median annual household income from the US census to determine the hospital’s percentage of patients living in zip codes with median household incomes <1.5 times the federal poverty limit for a family of 4 ($33 525). We chose this household income given previous work demonstrating that patients with selected conditions living in zip codes with median household incomes lower than this level have higher inpatient costs.20
Linear regression was used to evaluate the association of hospital characteristics with the observed-to-expected ratios for both 30-day AC and PPR rates. We verified the plausibility of nonmissing outlier values for payer mix and percentage of minority patients; for example, we ensured that a hospital serving 90% Medicaid patients was located in a neighborhood with a high poverty rate. Hospitals with >10% missing data for key characteristics were excluded from the regression analysis out of concern for the accuracy of the calculated hospital characteristics. Multivariable linear regression was performed on variables that were significant in bivariate analyses or were considered a priori to be relevant to hospital performance (ie, percentage of patients living in poor neighborhoods and percentage of minority patients). We constructed scatter plots of AC and PPR observed-to-expected ratios for each variable significant in multivariable modeling to visually compare hospitals in the upper and lower quartiles for that variable.
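The regression step can be sketched as follows. This is an illustration only: the hospital-level data are simulated (with effect directions matching the study's reported findings), the variable names are placeholders, and ordinary least squares via NumPy stands in for whatever statistical package was actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated hospital-level data: one row per hospital, predictors standardized.
n = 64
volume = rng.normal(0, 1, n)          # annual admission volume
pct_infants = rng.normal(0, 1, n)     # % of patients <1 year old
pct_low_income = rng.normal(0, 1, n)  # % from low-income zip codes

# Simulated PPR observed-to-expected ratio, with effects in the directions
# the study reports (volume up, infants down, low income up), plus noise.
oe_ppr = (1.0 + 0.05 * volume - 0.04 * pct_infants
          + 0.03 * pct_low_income + rng.normal(0, 0.02, n))

# Multivariable linear regression via ordinary least squares.
X = np.column_stack([np.ones(n), volume, pct_infants, pct_low_income])
coef, *_ = np.linalg.lstsq(X, oe_ppr, rcond=None)
print(dict(zip(["intercept", "volume", "pct_infants", "pct_low_income"],
               coef.round(3))))
```

The fitted coefficients recover the simulated effect directions: a positive coefficient on a predictor means hospitals with more of that characteristic have higher (worse) observed-to-expected ratios.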
The 64 hospitals over the 2-year study period included 1 416 716 hospitalizations. Hospitals had a wide range of payer mix and a variety of patient populations served. Most were freestanding children’s hospitals located throughout the United States (Table 1).
The average AC 30-day readmission rate was 11.3% (range 4.3%–19.6%). The average PPR 30-day readmission rate was 4.9% (range 2.9%–6.9%). The observed AC rates varied more than the observed PPR rates (Fig 1A).
The average 30-day AC observed-to-expected ratio was 0.96, with a range across hospitals of 0.63 to 1.20. Despite less variation in the observed PPR rates, observed-to-expected PPR ratios also varied across the hospitals. The average 30-day PPR observed-to-expected ratio was 0.95, with a range of 0.65 to 1.23 (Fig 1B).
Overall Performance on AC and PPR Metrics
Most hospitals (n = 38; 59%) performed better than expected on both the PPR and AC measures (as indicated by observed-to-expected ratios of <1) (Fig 2, quadrant III). The second largest group of hospitals (n = 20; 31%) performed worse than expected on both PPR and AC measures (Fig 2, quadrant I). A few hospitals performed worse than expected on the PPR metric but better than expected on the AC metric (Fig 2, quadrant IV; n = 6). No hospitals performed better than expected on the PPR metric but worse than expected on the AC metric (Fig 2, quadrant II).
Relative Performance on AC and PPR Rates
Fewer than half (n = 28) of the hospitals had similar performances on the PPR and AC metrics; these hospitals had PPR and AC observed-to-expected ratios within 0.05 of each other (denoted by triangles, Fig 2). Sixteen hospitals performed worse on PPR than on AC (denoted by red squares, Fig 2); 20 hospitals performed better on PPR than on AC (denoted by green circles, Fig 2). Similar data are displayed with standardized readmission rates in the Supplemental Information.
Regression Analysis: 30-Day Readmission Rates
In bivariate linear regression, several hospital characteristics were associated with both AC and PPR observed-to-expected ratios. Hospitals with higher annual patient volumes and freestanding children’s hospitals had statistically significantly higher observed-to-expected ratios, whereas hospitals with more infants had significantly lower observed-to-expected ratios. Increasing case mix index was significantly associated with higher observed-to-expected ratios for AC readmission but not for PPR (Supplemental Information).
In multivariable linear regression, 3 attributes were significantly associated with AC observed-to-expected performance (Supplemental Information): hospital volume (such that larger hospitals performed worse), case mix index (hospitals with more complex patients performed worse), and infant volume (hospitals with more infants performed better). Three hospital attributes were also significantly associated with PPR observed-to-expected performance: hospital volume (such that larger hospitals performed worse), infant volume (hospitals with more infants performed better), and percentage of patients living in poor neighborhoods (such that the more patients a hospital sees from a low-income area, the worse the PPR performance).
Because regression coefficients are difficult to interpret, we present each of these significant hospital attributes graphically in Fig 3. The hospitals with the highest hospital volume (Fig 3A) tended to perform worse on both metrics; therefore, they are to the right and above low-volume hospitals (ie, more high-volume hospitals are in quadrant I, more low-volume hospitals are in quadrant III). Hospitals with a high case mix index performed worse on AC but not PPR metrics. Graphically (Fig 3B), these hospitals plot to the right of low–case mix hospitals but are similarly distributed up and down. Hospitals with high volumes of infants performed better on both AC and PPR metrics; therefore, these hospitals are down and to the left of hospitals with low infant volumes (Fig 3C). Finally, as the percentage of a hospital’s patients living in low-income neighborhoods increased, the hospital PPR observed-to-expected ratio increased, indicating worse performance. Hospitals with high volumes of patients from low-income neighborhoods plot above hospitals with lower volumes of patients from poor neighborhoods. Because there was no association between the volume of low-income patients and hospital AC performance, the hospitals with different volumes of low-income patients are not shifted left or right (Fig 3D).
Significant hospital-level variation exists in 30-day AC readmission rates; in contrast, hospital-level variation in PPRs was minor. Despite the lack of variation in overall PPR rates, observed-to-expected ratios varied for both AC and PPR. The majority of hospitals performed better than expected on both PPR and AC metrics; conversely, approximately one-third of hospitals performed worse than expected on both metrics. More than half of hospitals performed differently on the AC and PPR measures, with 25% of all hospitals performing worse on PPR than on AC. Thus, a quarter of children’s hospitals that track their readmission performance based on AC rates would be surprised to find their performance worse if their state Medicaid performance measure is based on PPR. Hospital characteristics were associated with varying performance on the AC 30-day observed-to-expected ratios: hospitals with higher volumes, higher case mix indexes, and fewer infants had significantly higher observed-to-expected ratios (ie, more readmissions than would be expected for illness severity). Findings were similar when readmissions were measured with PPR software: hospitals with higher volumes and lower percentages of infants had higher PPR observed-to-expected ratios (ie, more PPRs than would be expected for illness severity). Unlike AC, PPR observed-to-expected ratios were also higher for hospitals with a higher percentage of low-income patients.
Variation exists for both AC and PPR observed-to-expected ratios. Some hospitals are high performers, with rates much lower than expected based on APR-DRG severity of illness. For example, 1 hospital had an AC observed-to-expected ratio of 0.63, indicating that it had only 63% of the readmissions expected for its patient population. Other hospitals are poor performers, with as many as 20% more AC readmissions than expected. Similar variation in observed-to-expected ratios was noted for PPRs.
More than half of hospitals perform differently when measured by the PPR metric compared with the AC metric (depicted in red or green in Fig 2). The relative differences between these metrics make performance improvement difficult for hospitals. For example, 1 hospital had an AC observed-to-expected ratio of 0.86, indicating that overall it had 14% fewer readmissions than would be expected. However, the same hospital had a PPR observed-to-expected ratio of 1.08, indicating 8% more readmissions than expected on the PPR metric. In some states, this hospital would be penalized for poor performance, even though its overall AC readmission rate would be considered stellar. Because the PPR software is proprietary, understanding how to improve PPR performance with an already low overall AC readmission rate would be particularly difficult.
Hospital quality metrics must be examined for clinical relevance, validity, and reliability.1,21–24 Important discrepancies may exist between administrative and clinical review of readmission preventability.25,26 The algorithms in the PPR software have not been validated against medical record review for detecting preventable readmissions. Nevertheless, ≥5 states are using PPR software to assess payment penalties. For Medicaid policy, readmission rates are compared within a single state. The comparisons needed to calculate expected rates within a state are particularly difficult because many states have only a few children’s hospitals. In a state with a single dominant pediatric hospital, this limitation could mean that a pediatric hospital system whose only true comparator or benchmark is itself is instead compared with smaller community hospitals that probably have a different case mix and serve socioeconomically different patient populations. These policies could mask poor performers or fail to highlight good performers who would become apparent if matched against a broader national pediatric sample. Given our findings of poorer performance among hospitals with high patient volume (higher observed rates compared with expected rates), the paucity of within-state comparators may have direct financial consequences for large hospitals serving children.
We also found that hospitals that serve a larger percentage of infants perform better than hospitals that serve a smaller percentage of infants. One possible reason for this finding is that infants may be less medically complex, and therefore they do not have as many readmissions. The PPR software classifies readmissions after all newborn DRGs (including complex neonatal DRGs) as not potentially preventable. Although the exclusion of many neonatal DRGs may appear to create a more level playing field in the PPR algorithm, infant volume was still significantly associated with PPR observed-to-expected ratios.
Neither AC readmission rates nor PPR software rates account for patients’ sociodemographic factors that may drive health and health outcomes. In our multivariable analysis, we found that hospitals with a high percentage of children living in low-income neighborhoods performed worse on PPR calculations (higher observed-to-expected ratios) even when other potential confounders were controlled for. However, this association was not seen in the AC model. Two possible explanations exist for why patient income would be significant in the PPR model but not the AC model. First, assuming that the PPR algorithm truly captures preventable readmissions, worse performance on PPR with an equal overall AC readmission rate would imply relatively “better” performance on nonpreventable readmissions. Thus, hospitals that serve low-income children would appear “better” at preventing planned or scheduled readmissions; in reality, this may reflect a lack of access to the scheduled procedures and therapies that necessitate hospitalization. Alternatively, the PPR software may not accurately capture preventable readmissions, because the algorithm has not been validated. In fact, a previous study in adults comparing the PPR algorithm with chart review determination of potentially preventable readmissions found poor concordance between the methods.25 Through its diagnosis-level exclusions, the algorithm could count more of the conditions that low-income children experience as potentially preventable. Thus, the difference may be a function not of the quality of care but of the algorithm itself.
This study should be considered in the context of several limitations. First, we included children’s hospitals, and generalizability to adult hospitals caring for children is limited. Also, because we chose to examine hospital-level factors, our ability to examine every proposed factor was limited by the number of hospitals in the analysis. Therefore, we chose to examine characteristics in the multivariable model that were either significant in the bivariate model or chosen a priori. As with all analyses of existing data, we are limited by the quality of the data reported to the Children’s Hospital Association. However, these data are reviewed to ensure integrity. Also, we verified that outliers of payer mix and race or ethnicity reflected the neighborhoods where the hospitals exist.
Although 30-day AC readmission rates vary widely across institutions, the PPR rates vary to a lesser extent. PPR observed-to-expected ratios, which are used to assess pediatric readmission penalties in some states, vary across institutions. More than half of hospitals perform substantially differently on the AC and PPR metrics. The majority of hospitals perform better than expected on both AC and PPR metrics, and one-third of hospitals perform worse than expected on both metrics. Hospitals with higher volumes, lower percentages of infants, and higher percentages of low-income patients perform worse on PPR observed-to-expected assessments and are at higher risk for reimbursement penalties.
- Accepted November 17, 2016.
- Address correspondence to Katherine A. Auger, MD, MSc, 3333 Burnet Ave, MLC 9016, Cincinnati, OH 45229. E-mail:
FINANCIAL DISCLOSURE: The authors have indicated they have no financial relationships relevant to this article to disclose.
FUNDING: No external funding.
POTENTIAL CONFLICT OF INTEREST: Children’s Hospital Association (formerly the National Association of Children’s Hospitals and Related Institutions) was a research partner with 3M Health Information Systems (3M-HIS), which participated in the development of 3M grouping software. CHA currently receives royalties from 3M for past participation in these efforts. 3M-HIS was given a copy of the current manuscript before submission but had no input into study design, the collection, analysis, and interpretation of data, the writing of the report, or the decision to submit the manuscript for publication. This work and the views and opinions expressed herein are solely those of the authors and not of the Children’s Hospital Association. The authors have indicated they have no potential conflicts of interest to disclose.
- Copyright © 2017 by the American Academy of Pediatrics