Abstract
BACKGROUND. Children with complex chronic conditions depend on both their families and systems of pediatric health care, social services, and financing. Investigations into the workings of this ecology of care would be advanced by more accurate methods for population-level prediction of the likelihood of future hospitalization.
METHODS. This was a retrospective cohort study. Hospital administrative data were collected from 38 children's hospitals in the United States for the years 2003–2005. Participants included patients between 2 and 18 years of age discharged from an index hospitalization during 2004. Patient characteristics documented during the index hospitalization or any previous hospitalization during the preceding 365 days were included. The main outcome measure was readmission to the hospital during the 365 days after discharge from the index admission.
RESULTS. Among the cohort of 186,856 patients discharged from the participating hospitals during 2004, the mean age was 9.2 years, with 54.4% male and 52.9% identified as non-Hispanic white. A total of 17.4% were admitted during the previous 365 days, and among those discharged alive (0.6% died during the admission), 16.7% were readmitted during the ensuing 365 days. The final readmission model exhibited a c statistic of 0.81 across all hospitals, ranging from 0.76 to 0.84 within individual hospitals. Bootstrap-based assessments demonstrated the stability of the final model.
CONCLUSIONS. Accurate population-level prediction of hospital readmissions is possible, and the resulting predicted probability of hospital readmission may prove useful for health services research and planning.
- pediatric
- child
- chronic conditions
- hospital readmission
- prediction model
- longitudinal observational study
The health and well-being of children are influenced by an ecology of care. This complex network includes many interacting components, ranging from family nurturing to interactions with the educational, health care, and social welfare systems, to the sources of finances for these systems, and ultimately to the societal perceptions and priorities that shape and sustain a particular ecology of care. Outcomes for children with chronic conditions especially depend on the effective functioning of this network. Across these diverse systems, children receive care that is managed or coordinated both explicitly (by parents, case managers, care coordinators, physicians, nurses, social workers, or medical homes, for example) and implicitly (such as by electronic medical charts, insurance benefits, and rules regarding the exchange of information between systems).
How can we study the effect of variations in the overall ecology of care on children with chronic conditions? One technique would be to focus on the occurrence of a sentinel event, such as the readmission of a previously hospitalized child, and to determine whether the likelihood of this event differs across various configurations of these systems. To do this would require an accurate predictive model for future hospital use, which would enable comparison of use and outcomes across different settings and populations, and in turn help identify systemic factors in hospital care or the broader child care environment that influence outcomes. An accurate predictive model might also help define patient groups at increased risk of readmission and thereby enable targeting of special services through case management or other methods.1
Currently, 2 problems hinder the advance of accurate prediction. First, few studies compare different forms of predictive models, and the comparative studies that do exist have examined models aiming to guide clinical care, assess quality of care, or identify potential subjects for clinical trials by predicting mortality with extreme accuracy.2–5 Second, to our knowledge, existing predictive models are limited to information about a single admission, yet there exists an important population of patients who are admitted frequently, motivating the inclusion of longitudinal information in predictive models.
We developed models to predict hospital readmissions based on 3 hypotheses. First, we drew on evidence in the literature to hypothesize that a core set of demographic and clinical features of patients would predict future hospitalizations. Previous studies examining the rate of hospital readmission of children with specific diagnoses (such as ambulatory-care sensitive conditions,6 pediatric asthma,7 and extremely low birth weight infants8), or of the elderly9,10 have highlighted the importance of demographic factors,7,10–14 diagnosis and comorbidities,10,13,15–17 geographic location,11,18,19 and psychological and social factors.10,20 Second, we postulated that classification of patients into the all-patient refined diagnosis-related groups (APR-DRG, version 20 [3M Corp, Minneapolis, MN]) and severity levels for the index admission would also be associated with hospital readmission. The APR-DRG group and severity level are assigned at the time a patient is discharged from a hospital, incorporating information about diagnoses and events occurring during the admission. Perhaps not surprisingly, the APR-DRG severity level has been shown to correlate with length of stay and single-encounter resource use for adults21,22 and neonates23 (but was found inferior to an International Classification of Disease, Ninth Revision–based illness severity score in predicting survival of trauma patients24); no studies have evaluated the predictive characteristics of the APR-DRG system for future hospitalizations. Finally, based on studies that have demonstrated an association between the number of previous admissions and the likelihood of future admissions,11,15 we hypothesized that a model including demographic, clinical, and APR-DRG information from an index admission would be improved by including information about any previous hospitalizations for that individual during the 12 months that preceded the index hospital admission.
Guided by these hypotheses, we developed and validated a population-level prediction model, using administrative data, in a retrospective cohort of children admitted to children's hospitals. Specifically, as depicted in Fig 1, after assembling a cohort of children who were discharged from a children's hospital during 2004, we tested (hypotheses 1 and 2) whether information about the patient and the index hospital admission could predict any future hospitalization during the year after discharge and (hypothesis 3) whether inclusion of information from any previous hospitalizations during the year before the index admission improved the performance of the predictive model.
FIGURE 1. Schematic of study design and the 2 primary hypotheses.
METHODS
Human Subjects Protections
The protocol for the conduct of this study was reviewed and approved by the Children's Hospital of Philadelphia Committee for the Protection of Human Subjects.
Data Source and Quality
Data for this study were obtained from the Pediatric Health Information System administrative database that was developed by the Child Health Corporation of America (Shawnee Mission, KS), a business alliance of children's hospitals. The database contains inpatient demographic, diagnostic, and procedural data from 38 not-for-profit, freestanding pediatric hospitals in the United States. Data are subjected to a number of reliability and validity checks and processed into data quality reports.
Eligibility
Eligible patients were 2 to 18 years of age during 2004, with at least 1 hospital discharge during the index year from any of the 38 participating hospitals. The lower eligible age limit was set at 2 years at the time of the index hospital admission because infants have substantially different patterns of hospitalization than do children beyond infancy, and because our study design required a 1-year look-back period to assess the history of previous hospital use uniformly across all subjects.
Dependent Variable
The dependent variable was readmission to the hospital within 365 days after discharge from the index admission.
Independent Variables
Independent variables from the index hospitalization (randomly selected if a patient had more than 1 hospital discharge) included the following demographic variables: age, gender, race, and primary payer. Clinical covariates included the APR-DRG v20 classification, complex chronic conditions (CCCs) based on International Classification of Disease, Ninth Revision, Clinical Modification codes,25–28 previous admissions, and the number of operative procedures. Discharge disposition categories included discharged to home, discharged to home with home health services, transferred to another facility, left against medical advice, or died. Patients who died during the index admission were excluded from this study.
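To make the cohort construction concrete, the sketch below shows one way to select a random index discharge per patient and count prior admissions during the 365-day look-back window. It is a minimal illustration, not the study's code; the DataFrame `discharges` and its columns (`patient_id`, `admit_date`) are hypothetical names.

```python
# Minimal sketch (not the study's code) of index-admission selection and the
# 365-day look-back count of previous admissions. Column names are hypothetical.
import pandas as pd

def build_index_cohort(discharges: pd.DataFrame, seed: int = 42) -> pd.DataFrame:
    # Randomly select one index discharge per patient (for patients with >1 discharge).
    index_admissions = (
        discharges.groupby("patient_id", group_keys=False)
        .sample(n=1, random_state=seed)
        .copy()
    )

    # Count each patient's admissions during the 365 days before the index admission.
    def count_previous(row: pd.Series) -> int:
        prior = discharges[
            (discharges["patient_id"] == row["patient_id"])
            & (discharges["admit_date"] < row["admit_date"])
            & (discharges["admit_date"] >= row["admit_date"] - pd.Timedelta(days=365))
        ]
        return len(prior)

    index_admissions["n_previous_admissions"] = index_admissions.apply(count_previous, axis=1)
    return index_admissions
```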
Statistical Analysis
Covariates were compared between patients who had a readmission within 1 year of the index stay discharge and those who did not by using the χ2 statistic.
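As an illustration of this bivariate comparison (a sketch with hypothetical column names such as `payer`, not the study's code), each categorical covariate can be cross-tabulated against the readmission outcome and tested with a χ2 statistic:

```python
# Sketch of a covariate-by-readmission chi-square comparison.
# `df` is assumed to hold one row per patient, with a 0/1 `readmitted` outcome
# and categorical covariates; all names here are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency

def compare_covariate(df: pd.DataFrame, covariate: str, outcome: str = "readmitted"):
    """Cross-tabulate a covariate against readmission and return the chi-square test."""
    table = pd.crosstab(df[covariate], df[outcome])
    chi2_stat, p_value, dof, _ = chi2_contingency(table)
    return chi2_stat, p_value, dof

# Example usage with a hypothetical covariate:
# chi2_stat, p, dof = compare_covariate(df, "payer")
```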
Multivariable logistic regression models were built with readmission within 1 year of the index hospitalization (or not) as the response variable. Covariates in the model included those displayed in the tables as well as the interaction between APR-DRG v20 group and severity. We sequentially built models of increasing complexity to determine the optimal model, using information from the index and previous admissions, for predicting a readmission within 1 year of the index stay. Model 1 used covariates from the index stay only. To this model, we added APR-DRG v20 group and severity along with the interaction of these terms for model 2. Model 3 added to model 2 the number of admissions within 1 year before the index admission. For the full model (model 4), we added the previous admission covariates. Because the models were nested, we used the likelihood ratio test to statistically compare them. For the final model (model 5), we performed backward elimination on model 4 to determine the most parsimonious model.
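The sketch below illustrates this nested-model strategy and the likelihood ratio test between two of the models; the formula terms and column names are hypothetical simplifications, not the study's actual model specification.

```python
# Illustrative sketch of fitting nested logistic models and comparing them with
# a likelihood ratio test. `df` and all column names are hypothetical.
import statsmodels.formula.api as smf
from scipy.stats import chi2

# Model 1: covariates from the index stay only (simplified, hypothetical terms).
m1 = smf.logit("readmitted ~ age + C(gender) + C(race) + C(payer) + los", data=df).fit(disp=0)

# Model 2: model 1 plus APR-DRG group, severity, and their interaction.
m2 = smf.logit(
    "readmitted ~ age + C(gender) + C(race) + C(payer) + los + C(aprdrg) * C(severity)",
    data=df,
).fit(disp=0)

# Likelihood ratio test for nested models: 2 * (logL_full - logL_reduced),
# referred to a chi-square distribution on the difference in parameter counts.
lr_stat = 2 * (m2.llf - m1.llf)
df_diff = m2.df_model - m1.df_model
p_value = chi2.sf(lr_stat, df_diff)
```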
For each model, we provide the c statistic, the rescaled pseudo-R2,29 the Brier score (which conveys the average prediction error across the range of observed values30) and the Akaike Information Criterion.31 To demonstrate the performance of the chosen model, we also provide a classification of the observed and predicted readmission rates and the sensitivity and specificity at specific probability of readmission percentile cut-points (90th, 95th, and 99th percentiles).
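A minimal sketch of computing these metrics, assuming NumPy arrays `y` (observed readmission, 0/1) and `p` (predicted probabilities) from a fitted model, follows; note that the AIC and a pseudo-R2 (McFadden's, rather than the rescaled version used in the study) are also available directly from a fitted statsmodels object via its `aic` and `prsquared` attributes.

```python
# Sketch of the reported performance metrics; `y` and `p` are assumed NumPy arrays.
import numpy as np
from sklearn.metrics import roc_auc_score, brier_score_loss

c_statistic = roc_auc_score(y, p)    # discrimination (area under the ROC curve)
brier = brier_score_loss(y, p)       # average squared prediction error

# Sensitivity, specificity, and positive predictive value at the 90th, 95th,
# and 99th percentile cut-points of the predicted probability (cf. Table 4).
for pct in (90, 95, 99):
    cut = np.percentile(p, pct)
    flagged = p >= cut
    sensitivity = (flagged & (y == 1)).sum() / (y == 1).sum()
    specificity = (~flagged & (y == 0)).sum() / (y == 0).sum()
    ppv = (flagged & (y == 1)).sum() / flagged.sum()
    print(f"{pct}th percentile cut {cut:.2f}: sens={sensitivity:.2f}, "
          f"spec={specificity:.2f}, PPV={ppv:.2f}")
```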
Internal Validation Using Bootstrap Technique
We performed internal validation of the final prediction model using the regular bootstrap,32 for the c statistic, the Brier score, and the pseudo-R2 for the readmission model.
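One standard way to implement such a bootstrap validation is the optimism correction sketched below; the study's exact procedure may have differed. The DataFrame `df`, the simplified `formula`, and the `readmitted` column are hypothetical names, and only the c statistic is shown.

```python
# Sketch of an optimism-corrected bootstrap for the c statistic (100 replications).
# The study's exact bootstrap procedure may have differed; names are hypothetical.
import numpy as np
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

formula = "readmitted ~ age + los + n_previous_admissions"  # simplified, hypothetical

full_fit = smf.logit(formula, data=df).fit(disp=0)
apparent_c = roc_auc_score(df["readmitted"], full_fit.predict(df))

optimisms = []
for i in range(100):
    boot = df.sample(frac=1.0, replace=True, random_state=i)
    boot_fit = smf.logit(formula, data=boot).fit(disp=0)
    c_boot = roc_auc_score(boot["readmitted"], boot_fit.predict(boot))  # apparent in resample
    c_test = roc_auc_score(df["readmitted"], boot_fit.predict(df))      # tested on original data
    optimisms.append(c_boot - c_test)

optimism_corrected_c = apparent_c - np.mean(optimisms)
```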
All statistical analyses were performed by using the statistical software packages SAS 9.1 (SAS Institute, Inc, Cary, NC) and Stata (MP version 9.2 [Stata Corp, College Station, TX]), and P values <.001 were considered statistically significant.
RESULTS
Characteristics of Subjects
There were 186,856 patients between the ages of 2 and 18 years who were discharged from 1 of the 38 participating children's hospitals in 2004 (Table 1). Among these patients who were discharged alive, 16.7% were subsequently readmitted within a year (Table 1). The likelihood of readmission was most strongly associated with the patient's specified primary payer, the number of previous admissions, the diagnosis of a complex chronic condition, and longer lengths of stay during the index admission.
TABLE 1. Characteristics of Subjects on Entry Into Cohort
Comparison of Sequentially More Extensive Models
We fitted a sequence of logistic regression models with the dichotomous outcome variable always being whether the patient was readmitted during the 365 days after the index hospital discharge, but with an increasingly extensive set of predictor variables that conveyed information either about the patient's index admission or about previous hospital admissions (Table 2). With each successive addition of variables, the accuracy and explanatory power of the model improved (as conveyed by the increasing c statistic and pseudo-R2 values), even after accounting for the improvement that would be expected from simply adding more parameters to the model (as conveyed by the decline in the Akaike Information Criterion). Specifically, the addition of information about past hospital admissions significantly improved the fit of the models.
TABLE 2. Comparison of Sequentially More Extensive Readmission Models
Parameter Estimates of the Final Model
The final readmission model included variables regarding patients' demographic and clinical characteristics as reported at the time of the index admission, and clinical information regarding any observed admissions during the year before the index admission (Table 3). Among demographic features, female gender, older age, black race, and public insurance coverage were all associated with a greater likelihood of readmission. Patients with any complex chronic condition (except for hematologic/immunologic CCCs) were more likely to be readmitted than patients without CCCs; among CCCs, malignancy and neurologic conditions displayed the greatest association with readmission.
TABLE 3. Parameter Estimates for Final Readmission Model
Patients who had been admitted previously were more likely to be readmitted, and much more likely if the interval of time between the index admission and the previous admission was short. Among those patients who had a malignancy diagnosis during the index hospitalization and a previous malignancy diagnosis (a group that represented 46% of all patients with malignancy diagnoses during the index hospitalization), the adjusted odds ratio (OR) of readmission was 1.43 (95% confidence interval [CI]: 1.31–1.58), or 25% lower than that of patients with a malignancy diagnosis during the index hospitalization but no previous malignancy diagnosis (as indicated by the OR of 0.72 displayed in Table 3 for previous history of malignancy diagnosis). The opposite pattern was displayed by those patients with a hematologic/immunologic condition (65% of whom were diagnosed with hereditary immunodeficiency), with a previous history of a hospitalization diagnosis (which occurred in 17% of these cases) increasing the odds of readmission by 27% (OR 1.16; 95% CI: 0.94–1.44). Among these parameters, the largest standardized effects (which provide another measure of the relative influence of various variables on the outcome) were those for the number of past admissions, the time since the past admission, and the length of stay for the index admission.
Graphical Assessment of Fit of the Final Model
To assess the calibration of the readmission model, we plotted the observed proportion of patients who were in fact readmitted against the predicted probability of readmission (Fig 2). The close correspondence between observed and predicted values suggests that the model's output could be used either with a wide range of cut-points (which may vary depending on the specific setting and application) to form 2 or more groups of patients or as a continuous variable.
FIGURE 2. Calibration of the final readmission model.
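A calibration check of this kind can be sketched as follows, using the same hypothetical arrays `y` and `p` as above; the decile binning is illustrative and may not match the grouping used for Fig 2.

```python
# Sketch of a calibration plot: observed readmission proportions vs mean predicted
# probabilities within deciles of predicted risk. `y` and `p` are assumed arrays.
import pandas as pd
import matplotlib.pyplot as plt

bins = pd.qcut(p, q=10, duplicates="drop")  # deciles of predicted risk
calib = (
    pd.DataFrame({"y": y, "p": p})
    .groupby(bins, observed=True)
    .agg(observed=("y", "mean"), predicted=("p", "mean"))
)

plt.plot(calib["predicted"], calib["observed"], "o-", label="model")
plt.plot([0, 1], [0, 1], "--", color="gray", label="perfect calibration")
plt.xlabel("Predicted probability of readmission")
plt.ylabel("Observed proportion readmitted")
plt.legend()
plt.show()
```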
Assessment of Predictive Accuracy of the Final Model
We provide in Table 4 more detailed information regarding 3 potential cut-points of the predicted probability of readmission: the 90th, 95th, and 99th percentiles of the predicted probability for the entire cohort (which corresponded in these data to predicted readmission probabilities of 0.43, 0.60, and 0.86, respectively).
TABLE 4. Positive Predictive Value and Sensitivity for 3 Percentile Cut-Points of the Predicted Probability of Readmission
Assessment of Fit of the Final Model Across Hospitals
We assessed how well the readmission model performed not only across the whole sample but also within each hospital. As depicted in Fig 3, the c statistic for the readmission model ranged from 0.76 to 0.84 across the 38 hospitals, which, in the standard qualitative terms attached to c statistic values of 0.7 and 0.8, ranges from the high end of “good” to “excellent.”
FIGURE 3. Stability of final readmission model across hospitals. Standard qualitative terms describe the discriminative abilities of a model as “good” if the c statistic is >0.7 and “excellent” if >0.8.
Sensitivity Analyses of the Final Model
We conducted 2 sensitivity analyses to determine the degree to which the predictive performance of the final readmission model would be affected by either (1) omitting all patients who on their index admission had been transferred from another facility (who may represent patients with an increased risk of readmission because of the severity of their conditions or patients with a decreased risk of readmission because of their likely greater distance from the hospital33) or (2) omitting all oncology patients (who may be readmitted in a predictable manner for chemotherapy). The model without transferred patients displayed an essentially identical c statistic of 0.811, whereas the model without oncology patients displayed a slightly lower c statistic of 0.79.
Internal Validation of the Final Model Using Bootstrap Techniques
To assess the degree to which the performance characteristics were dependent on the particular composition of this large data set, we reperformed our analysis of the performance characteristics using 100 bootstrap replications with replacement resampling (Table 5). After adjusting for the estimated optimism, the final model still displayed good discrimination, calibration, and explanation of variance in the outcome.
TABLE 5. Bootstrap Assessment of Performance of Final Readmission Model (100 Repetitions)
DISCUSSION
This study sought to advance the empirical investigation of the “ecology of care” for children by determining whether, among a cohort of children who have been hospitalized, we could identify those children who during the ensuing year are at increased risk for readmission to the hospital. Using readily available hospital administrative and billing information, including the diagnostic and severity-of-illness information classified by the APR-DRG system, we have demonstrated that reasonably accurate predictions of readmission risk can be made and that these predictions are enhanced by using information not only from the most recent admission but also from any previous admissions during the preceding year.
This study offers several innovative methods and findings. First, it is the first to develop a readmission model applicable to the wide variety of pediatric conditions that result in hospitalization in children's hospitals. Second, it is the first study to quantify the performance characteristics of the APR-DRG system as a patient classification that can be used to inform models aiming to predict future hospital use for pediatric patients. Third, our findings confirm the value, for forecasting readmissions, of information regarding not only the most recent hospital admission but also the history of hospitalizations during the preceding 12 months.
The findings of this study should be interpreted with 4 limitations kept in mind. First, we could not determine whether our subjects, who by definition had been discharged from 1 of the participating children's hospitals during 2004, were admitted during the following year to a different hospital. If such events occurred in significant numbers, the resulting misclassification error (whereby children who were readmitted to a different hospital were not counted as “readmitted”) most likely would have impaired the performance accuracy of our model. Second, we could not assess whether some patients, after being discharged from the index admission, died out of hospital (most likely at home34) instead of being readmitted; this plausible sequence of events may explain why patients with diagnoses of a malignancy during a previous admission or a previous discharge to a skilled nursing facility were less likely to be readmitted. Third, some key variables in the models, such as the APR-DRG and the length of stay, can be determined only at the time that the patient is discharged from the hospital, thereby limiting the ability of these models to identify groups of currently hospitalized children in terms of their risk of readmission. Finally, the population that we studied consisted of children admitted to a large group of children's hospitals that contribute to the Pediatric Health Information System database, so our findings cannot yet be generalized beyond the specific boundaries of this study but instead require replication in other types of hospitals and health care systems.
With these limitations kept in mind, how can a population-level readmission model similar to ours be put to use? For researchers, the model could create a set of expectations (the predicted number of patients who would likely be readmitted among a given population of hospitalized patients, or the probability that a particular type of patient [as characterized by their covariates in the model] would be readmitted) that could then be compared with what is actually observed. For instance, the implications of different aspects of the ecology of care (such as electronic medical charts, or the level of spending of Medicaid dollars on children) could be evaluated by determining whether these features are associated with the difference between predicted and observed rates of readmission. These investigations would still have to account for potential confounding between the feature under study and other correlated factors that would influence outcomes (for instance, having an electronic medical chart may be associated with more protocol-based delivery of care, as might different schemes of managed care across different states for patients covered by the Medicaid or State Children's Health Insurance Program programs), and would have to consider whether the effect of the variable under study is so potent that it influences not only the outcomes but also variables included in the models (such as raising or diminishing the likelihood of having been admitted previously in a manner that alters the relationship of this past history to future hospitalizations). Equipped with an accurate means of prediction, however, these analytic challenges become more sharply defined and tractable. Advancing this domain of research is timely and important, given both the dependency of children with chronic conditions on complex systems of care and the likely alteration of these systems in the coming years because of financial considerations and constraints.
How might a given hospital or health care system use the readmission model? The answer would depend on the goals and resources of the organization, and whether the primary aim was to target services to a particular population of patients or to calculate a set of expected future values for readmission rates that could then be used for comparative purposes. For example, the readmission model could be used to inform a system of targeted case management, which might aim to prospectively identify groups of patients with the greatest likelihood of being readmitted to the hospital. Ultimately, the underlying notion that case management efforts would become more effective if guided by data-based methods for identifying patients at elevated risk of future hospitalizations will need to be rigorously evaluated, because previous studies evaluating the effectiveness of home-based support for the elderly after hospital discharge have shown mixed results.35–37
CONCLUSIONS
Future pediatric hospital use can be predicted with sufficient accuracy and stability to advance health services research and potentially improve the care of children.
Acknowledgments
This work was supported, in part, by grant R21-NR008614 (principal investigator, Dr Feudtner) from the National Institute of Nursing Research, and in part by grant number 1K23-HD052553 (principal investigator, Dr Srivastava) from the National Institute of Child Health and Human Development, both of the National Institutes of Health. These funders had no role in the design and conduct of the study, in the collection, analysis, and interpretation of the data, and in the preparation, review, or approval of the manuscript.
Footnotes
- Accepted April 22, 2008.
- Address correspondence to Chris Feudtner, MD, PhD, MPH, Children's Hospital of Philadelphia, General Pediatrics, 3535 Market, Room 1523, 34th Street and Civic Center Boulevard, Philadelphia, PA 19104. E-mail: feudtner@email.chop.edu
The authors have indicated they have no financial relationships relevant to this article to disclose.
What's Known on This Subject
Very little is known about factors that influence the likelihood that a pediatric patient will be readmitted; as importantly, methods for studying this question are underdeveloped.
What This Study Adds
We developed and validated a method that predicts hospital readmission with a high level of accuracy. This methodology will open new and important avenues for health services research regarding readmissions as a quality-of-care indicator.
REFERENCES
- Copyright © 2009 by the American Academy of Pediatrics