Measuring Hospital Quality Using Pediatric Readmission and Revisit Rates
OBJECTIVE: To assess variation among hospitals on pediatric readmission and revisit rates and to determine the number of high- and low-performing hospitals.
METHODS: In a retrospective analysis using the State Inpatient and Emergency Department Databases from the Healthcare Cost and Utilization Project with revisit linkages available, we identified pediatric (ages 1–20 years) visits with 1 of 7 common inpatient pediatric conditions (asthma, dehydration, pneumonia, appendicitis, skin infections, mood disorders, and epilepsy). For each condition, we calculated rates of all-cause readmissions and rates of revisits (readmission or presentation to the emergency department) within 30 and 60 days of discharge. We used mixed logistic models to estimate hospital-level risk-standardized 30-day revisit rates and to identify hospitals that had performance statistically different from the group mean.
RESULTS: Thirty-day readmission rates were low (<10.0%) for all conditions. Thirty-day rates of revisit to the inpatient or emergency department setting ranged from 6.2% (appendicitis) to 11.0% (mood disorders). Study hospitals (n = 958) had low condition-specific visit volumes (37.0%–82.8% of hospitals had <25 visits). The only condition with >1% of hospitals labeled as different from the mean on 30-day risk-standardized revisit rates was mood disorders (4.2% of hospitals [n = 15], range of hospital performance 6.3%–15.9%).
CONCLUSIONS: We found that when comparing hospitals’ performances to the average, few hospitals that care for children are identified as high- or low-performers for revisits, even for common pediatric diagnoses, likely due to low hospital volumes. This limits the usefulness of condition-specific readmission or revisit measures in pediatric quality measurement.
- readmission rates
- hospital performance variation
- quality measurement
- child health research
- delivery of care
- health policy
- AHA — American Hospital Association
- APR-DRG — All-Patient Refined Diagnosis Related Group
- BLUP — best linear unbiased prediction
- CCC — complex chronic condition
- CCS — Clinical Classification Software
- CI — confidence interval
- CMS — Centers for Medicare and Medicaid Services
- ED — emergency department
- ICD-9 — International Classification of Diseases, Ninth Revision
- OR — odds ratio
What’s Known on This Subject:
Readmissions have been identified as a priority area for pediatric inpatient quality measurement nationally. However, it is unknown whether readmission rates vary meaningfully across hospitals and how many hospitals would be identified as high- or low-performers.
What This Study Adds:
Only a few hospitals that care for children are high- or low-performers when their condition-specific revisit rates are compared with average rates across hospitals. This limits the usefulness of condition-specific readmission or revisit measures in pediatric quality measurement.
Preventable hospital readmissions are a topic of national focus as potential indicators of clinical failure and unnecessary expenditures.1 In pediatrics, readmissions within a year of an index admission are common and cost more than $1 billion annually.2,3 As a result, development of a pediatric readmissions measure has been identified as a priority for national quality reporting programs.4,5 Adult condition-specific readmission rates for Medicare patients are already publicly reported online in Medicare’s national quality report. Outlier hospitals are identified by whether they have statistically different readmission rates compared with national benchmarks (18.5% for pneumonia, 19.7% for heart attack, and 24.7% for congestive heart failure).6 Starting in fiscal year 2013, Medicare began reducing reimbursements to hospitals with excess readmissions.7
However, it is not known whether there is sufficient variation in readmission rates among hospitals admitting children to ensure that using those rates would allow meaningful comparative performance measurement. In particular, studies of other pediatric performance measures have shown that low event rates and low patient volume make identification of performance outliers challenging.8–10 If we can identify outlier hospitals, studying those hospitals can better assess whether readmissions are preventable and help delineate best practices in pediatric inpatient care.11
The existing literature on hospital variations in pediatric readmissions is limited, likely due to the difficulty of tracking readmissions over time and across hospitals. The Agency for Healthcare Research and Quality reports readmission statistics from a nationally representative sample of hospitals12; however, it does not describe hospital-level performance variation. Several other studies report pediatric readmission rates for multiple centers, but most were limited to freestanding children’s hospitals.2,3,13–15 Other large studies did not describe hospital-level performance variation.16–18 Berry et al described hospital-level variation on 30-day readmissions for multiple conditions, but data were again limited to freestanding children’s hospitals, which the authors estimated receive no more than 25% of all pediatric admissions nationally.19 Thus, it is not known how much variation in performance on readmission rates exists for the majority of pediatric providers.
We address this gap in the literature using a multistate database of pediatric hospitalizations to calculate condition-specific, risk-standardized readmission rates for common pediatric inpatient diagnoses and to assess the numbers of outlier hospitals with readmission rates that are higher or lower compared with the overall sample. Additionally, because previous research suggests that event rates for pediatric measures might be too low to distinguish among hospitals9,10 and because unplanned visits to the emergency department (ED) may represent potentially preventable failures in care, we decided a priori also to evaluate a composite measure of “revisit” rates, that is, either a readmission or a return to the ED.
Data and Setting
We used statewide administrative databases because these are the only large data sets that identify readmissions or revisits to hospitals other than the index hospital. States participating in the State Inpatient Databases and State Emergency Department Databases send discharge abstracts from all non-federal hospitals to the Healthcare Cost and Utilization Project annually.20 Several states also send unique patient identifiers that allow identification of patient readmissions or returns to the ED, whether at the index institution or any other in the state, after an index hospitalization. For this study, we used all State Inpatient Databases and State Emergency Department Databases states and years with revisit linkages available (California, Florida, North Carolina, and Nebraska for 2008–2009; Arizona and Utah for 2006–2007),21 combining data sets across years to broaden the study’s generalizability.
We compared characteristics of study hospitals to other hospitals nationally (number of beds, teaching status, profit-status and control, urban versus rural, and percent Medicaid patients) by using the American Hospital Association (AHA) database. Because previous readmissions work has focused on freestanding children’s hospitals, we also looked at proportion of freestanding children’s hospitals in our study sample and nationally, identifying them with the Children’s Hospital Association membership list.22
We excluded visits for which records did not include a unique identifier because revisits for these records could not be recognized. Because unique identifiers in state administrative databases are often social security numbers, we hypothesized that very young patients would be disproportionately likely to be missing a unique identifier. We confirmed this hypothesis (Supplemental Table 6) and excluded patients <1 year old (n = 1 905 936) and then the remaining patients without a unique identifier (n = 419 792).
We chose to examine patients up to age 21 because we anticipated that some patients who were at greater risk of being readmitted were children with complex chronic conditions (CCCs),2 who may be cared for in the pediatric setting up until age 21.23 We excluded patients who were transferred from 1 facility to another because there is no consensus approach to attribution of the readmission in such cases. We also excluded patients who died during the index hospitalization.
Diagnoses of Interest
We focused on the most common diagnoses for pediatric admissions because these have larger sample sizes than other diagnoses, and the objective of the study was to assess hospital performance variation on condition-specific readmission rates. The diagnoses were identified by querying the top 10 principal admission diagnoses using the Agency for Healthcare Research and Quality’s Clinical Classification Software (CCS)24 groupings in the nationally representative pediatric inpatient KID database.25 We did not analyze pregnancy and childbirth visits (CCS 224) because obstetrics is an independently measured area of quality.26 We also did not analyze bronchiolitis visits because 75% of patients with bronchiolitis in the KID 2009 database are <1 year old, and so it was not a top diagnosis for our study population.
We calculated 30-day and 60-day all-cause readmission and revisit rates for individual conditions. A revisit was defined as either a readmission or a return to the ED after discharge, even if that ED visit did not lead to readmission. For all numerators, each admission or ED visit to any hospital (the index hospital or another hospital) for any diagnosis was counted as a readmission or revisit. The denominators included any hospital discharges with a principal diagnosis for the disease.
If a patient had ≥1 additional admissions within 30 (or 60) days of discharge, we did not consider the additional admissions as index admissions. Thus, any admission was either an index admission or a readmission, but not both.27,28
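This classification rule can be sketched in code. The sketch below is illustrative only: the function name and data layout are our own, and the handling of a readmission's own discharge (which the rule does not specify) is an assumption noted in the docstring.

```python
from datetime import date, timedelta

def classify_admissions(admissions, window_days=30):
    """Label each of one patient's admissions as 'index' or 'readmission'.

    admissions: list of (admit_date, discharge_date) tuples for a single
    patient, sorted by admit date. An admission within window_days of the
    most recent index admission's discharge counts as a readmission, and
    readmissions are never treated as index admissions themselves.
    (Assumption: a readmission's own discharge does not open a new
    readmission window in this sketch.)
    """
    labels = []
    window = timedelta(days=window_days)
    last_index_discharge = None
    for admit, discharge in admissions:
        if (last_index_discharge is not None
                and admit - last_index_discharge <= window):
            labels.append("readmission")
        else:
            labels.append("index")
            last_index_discharge = discharge
    return labels
```

For example, a patient discharged January 4, readmitted January 20, and admitted again March 1 would be labeled index, readmission, index, because the third admission falls outside the 30-day window of the index discharge.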
We used Student’s t and χ2 tests to compare hospital characteristics.
For each diagnostic group, we developed a separate risk-adjusted model for 30-day revisit rates. The base model for each group included age, gender, and the presence or absence of a CCC,29 using a recently published list of CCC categories30 because having a CCC is associated with an increased risk of readmission.2,3 We used a binary “any CCC” indicator because cell numbers were too small to use individual CCCs. Because the epilepsy case definition shared some International Classification of Diseases, Ninth Revision (ICD-9) codes with the CCC ICD-9 codes, the model for epilepsy used a modified CCC variable that did not count the epilepsy diagnoses as a CCC. No other condition shared ICD-9 codes with the CCC definitions.
From our base models, we tested for all interactions. We included state as a fixed effect because pediatric readmission rates have been shown to vary across states and because many public reports are done at the state level, using only within-state comparisons.31 State was not included in the epilepsy model because of collinearity between individual hospitals and the state variable. This collinearity was due to a concentration of epilepsy patients in 1 or 2 hospitals in some states. For instance, ∼70% of epilepsy patients in Utah were seen in 1 hospital. Supplemental Table 7 shows the variables in each risk adjustment model, as well as the discrimination and calibration metrics for each model.
Because the Centers for Medicare and Medicaid Services (CMS) is currently adopting pediatric quality measures for meaningful use reporting,32 we adapted the CMS hospital readmissions methodology to calculate risk-adjusted rates and identify outliers.28 CMS uses a hierarchical model to stabilize performance estimates for low-volume hospitals and avoid penalizing these hospitals for high readmission rates that may be due to chance.28 This is particularly important in pediatrics, given the low pediatric volumes for many hospitals admitting children.9,19
Like CMS, we fit a random effects logistic model to obtain best linear unbiased predictions (BLUPs) of risk-adjusted hospital random effects. The model was implemented by using the xtmelogit procedure in Stata (Stata Corp, College Station, TX). Odds ratios (ORs), calculated as the antilogit transformation of the BLUP for each hospital, capture the deviation of the predicted rate for each hospital from its expected rate, based on the risk-adjustment variables included in the model. The model provides fairer estimates of performance for low-volume hospitals by differentially “shrinking” their BLUPs closer to the group mean. We based inferences about outlier status on 95% confidence intervals (CIs) for the ORs, calculated using model-based SEs. Standardized readmission rates Rs for each hospital were calculated using the formula Rs = Re * OR/[1 + Re * (OR – 1)], where Re is the expected rate, and OR is the hospital-specific OR based on the BLUP; 95% confidence limits for Rs were obtained by using the same equation, replacing OR with either the upper or lower confidence limits. Outliers were defined as those hospitals for which the entire 95% CI for OR was >1 (“worse” performer) or <1 (“better” performer).
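The standardization formula and the outlier rule above can be illustrated with a short sketch. Only the formula and the CI-based outlier definition come from the text; the function names and the numbers in the usage note are hypothetical.

```python
def standardized_rate(expected_rate, odds_ratio):
    """Risk-standardized rate: Rs = Re * OR / (1 + Re * (OR - 1)).

    expected_rate (Re): the hospital's expected rate under the
    risk-adjustment model; odds_ratio (OR): the hospital-specific OR
    derived from its BLUP. The formula applies the OR to Re on the
    odds scale and converts back to a probability.
    """
    return expected_rate * odds_ratio / (1.0 + expected_rate * (odds_ratio - 1.0))

def outlier_status(or_lower, or_upper):
    """Classify a hospital from the 95% CI of its OR, as in the text:
    the entire CI must lie above 1 ('worse') or below 1 ('better')."""
    if or_lower > 1.0:
        return "worse"
    if or_upper < 1.0:
        return "better"
    return "not an outlier"
```

For example, a hospital with an expected rate of 5% and an OR of 2.0 would have a standardized rate of roughly 9.5%, but it would be flagged as a "worse" performer only if the lower confidence limit of its OR exceeded 1.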
To assess for hospital variation overall, we used, for each condition, a likelihood-ratio test comparing the risk-adjustment model to a simpler model omitting the hospital-level random effects. This tests for between-hospital variation not accounted for by the risk adjustment variables, with P values <.05 indicating variation in risk-adjusted readmissions across the group of hospitals.
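As a minimal illustration (with hypothetical log-likelihoods; the actual fits were done in Stata), the test reduces to a chi-square comparison of the two maximized log-likelihoods:

```python
import math

def lr_test(loglik_full, loglik_null):
    """Likelihood-ratio test of the hospital random effect.

    loglik_full: maximized log-likelihood of the model with hospital
    random intercepts; loglik_null: the same model without them.
    Returns (LR statistic, p-value) against a chi-square reference
    with 1 df; for 1 df the survival function is P(X > x) = erfc(sqrt(x/2)).
    Note: because the null (zero random-intercept variance) lies on the
    boundary of the parameter space, this p-value is conservative.
    """
    lr = 2.0 * (loglik_full - loglik_null)
    p = math.erfc(math.sqrt(lr / 2.0)) if lr > 0 else 1.0
    return lr, p
```

A full-model log-likelihood 5 points higher than the null's gives LR = 10, which is well below the P = .05 threshold used in the analysis.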
We performed several sensitivity analyses. First, we excluded hospitals with <25 discharges for the condition of interest over the measurement period, following CMS practice.28 Because hospitals with extensive missing data could artificially inflate variation in the performance estimates, adding noise that would make outliers harder to identify, we also performed an analysis excluding hospitals with >50% of visits missing unique identifiers. In addition, we treated hospital as a fixed rather than random effect, providing nonshrunken estimates of the ratio of observed to expected readmission rates. We based inference on 95% exact binomial CIs for the observed readmission rate, treating the expected rate as known. This analysis was restricted to hospitals with ≥25 discharges because fixed effects estimates can unfairly penalize low-volume hospitals. Finally, because there is no gold standard for risk-adjusting pediatric readmission rates, we reran the hierarchical analysis using the All-Patient Refined Diagnosis Related Group (APR-DRG) code for severity of illness as a case-mix adjuster, in place of the binary CCC indicator. We used data from Utah because no other state provided APR-DRG data from 2006 through 2009. APR-DRG severity classification is a 4-level categorical variable for degree of loss of function that uses proprietary methodology from 3M.33
All analyses used Stata 12. The University of California at San Francisco Committee on Human Research considered this study exempt.
Compared with other hospitals nationally, hospitals in our study (n = 958) were more frequently large- or medium-sized and urban, and they had higher proportions of Medicaid patients (Table 1).
The numbers of eligible visits for each disease category are shown in Table 2. The median and interquartile ranges for hospitals admitting ≥1 patient for each condition were low, and the percent of hospitals admitting at least 25 patients within 2 years was <40% except for appendicitis (49.3%) and mood disorders (63.0%).
Thirty-day readmission rates were <5%, except for epilepsy (6.1%), dehydration (6.0%), and mood disorders (7.6%). Including ED visits in the outcome resulted in a 1.0- to 2.3-fold increase in rates. Sixty-day revisit rates were somewhat higher (range: 8.5%–19.1%), but rates were still ≤15% for all conditions except mood disorders (19.1%, Table 3).
There was a wide range of raw hospital-level 30-day revisit rates (Table 4), with smaller ranges after risk-standardization. Few outliers were identified for any diagnostic group: among hospitals admitting any children with mood disorders, 15 hospitals (4.2%) were outliers, including 5 “worse” performers; for all other conditions there were ≤5 outliers (Table 4). The variation in hospital random intercepts was significantly (P < .05) greater than zero for all diagnostic categories except pneumonia (P = .06). In sensitivity analyses that included only hospitals with ≥25 admissions for the diagnostic category (Table 5, Fig 1), included only hospitals with ≥50% of visits having a unique identifier (Supplemental Table 8), or used APR-DRGs for risk adjustment instead of CCCs (data not shown), there was almost no change in the number of outlier hospitals identified. A fixed effects model identified more outliers; nonetheless, except for mood disorders (40 outliers [17%]) and epilepsy (14 outliers [12.8%], Supplemental Table 9), the proportion remained under 5% of hospitals.
In this first multistate, all-hospital study assessing variation in 30-day pediatric readmission and revisit rates for common pediatric conditions, admission volumes as well as readmission and revisit rates were low. For condition-specific revisit measures, there were few hospital performance outliers, a finding that persisted in sensitivity analyses.
The mean readmission rates we observed are consistent with previous studies8,13,14,19 and with rates available from the HCUPnet tool.12 Furthermore, the range of rates we observed is similar to previous studies. For instance, Brogan et al report risk-adjusted 14-day revisit rates for pneumonia from 1.5% to 4.4%.14 Thus, it is unlikely that the overall low number of hospital outliers we found reflects substantial underrecognition of readmissions or anomalies of the performance distribution in our study sample.
Although the hospitals in our sample had more beds than hospitals nationally, we detected no outliers for some diagnostic categories with low median hospital volume, such as dehydration and skin and soft tissue infections. Our proportions of outliers are, for all conditions except mood disorders, smaller than the 3% to 6% of outlier hospitals that CMS reports for adult readmissions. In addition, the clinical relevance of adult readmission measures is greater because the absolute numbers of hospitals and patients are higher.34
A recent analysis by Berry et al of all-condition (hospital-wide) 30-day readmission rates at freestanding children’s hospitals found higher proportions of outliers, likely because of higher admission volumes created by pooling all conditions (median hospital volume = 6943).9,19 Our study and Berry et al’s detected statistically significant overall variation across hospital random effects for condition-specific readmissions, implying that there may be variations in quality of care across all hospitals. However, we demonstrate that it is difficult to identify individual hospitals as performing statistically significantly different from average. This is likely due to low patient volumes for specific conditions and to mean readmission rates mostly hovering around 5%, which leaves little room below the mean (eg, 0% to <5%) for hospitals to be identified as having statistically significantly better performance. Hence, providers and policymakers engaged in quality improvement cannot determine, with 95% confidence, whether a hospital with a 0% readmission rate truly performs better than the 5% average, and therefore cannot identify hospitals whose care may be preventing readmissions. Given these limitations, from both a policy and quality improvement perspective, our findings suggest that pediatric condition-specific readmission rates may not be useful performance measures, in particular for general hospitals admitting children.
The methods we used, adapted from CMS national public reporting methods, are not the CMS methods for decreasing reimbursement rates to hospitals for adult readmissions, so our findings do not reflect the effect of a similar reimbursement policy in pediatrics. The reimbursement rate calculations reward or penalize all hospitals with a predicted/expected ratio of readmissions different from 1,7 so many adult hospitals are affected, even if their readmission rates are not statistically significantly different from expected using the CMS public reporting methods. However, CMS adult readmissions reporting is long-standing, with evidence that some adult readmissions are potentially preventable, giving greater credence to the decision to link payment and performance, whereas the study of pediatric readmissions is still a developing field.
One potential alternative approach that might improve pediatric readmissions measurement is to pool patients with similar conditions, which might lead to identification of more outliers by increasing the sample size at each hospital. If a combined readmissions measure for common diseases was adopted, potential interventions could focus on delivering guideline-recommended care across similar diseases. Previous studies implementing care pathways and improving compliance with guidelines have reduced pediatric readmissions for bronchiolitis35 and asthma,36 although other studies have not shown improvements,8,37 and thus additional work remains to be done. A pooled measure of readmissions for hospitals that admit a large number of complex chronically ill children,38 who are known to be at risk for frequent readmissions,2,3 could focus improvements specifically on these children during their transitions of care. Focusing on improving transitions has been shown to be effective at reducing readmissions for adults with chronic illnesses.39,40 If any of these pooled measures identify more statistically significant outliers than we found with condition-specific measures, and if studies show that focusing quality improvement on these pooled populations could reduce readmission rates, then CMS would have stronger evidence to justify basing payment on such measures.
There are several limitations to our analysis. First, our 6-state database is not nationally representative. There may be greater hospital performance variation in other states, and thus more condition-specific outliers. However, our sample does include hospitals serving 1 in 4 children in the United States.41
Second, we used administrative data, the only currently available source that can capture readmissions or revisits outside the index institution on a statewide basis. However, because patients <1 year of age usually lack a unique identifier, our approach may not be feasible for conditions affecting the youngest pediatric patients, such as bronchiolitis. It is unclear, however, whether a bronchiolitis readmission measure calculated at individual hospitals using medical record numbers (and hence excluding returns outside the index institution) would perform better in identifying outliers. A previous study at a large pediatric hospital had an annual average of 154 bronchiolitis admissions and only 7 readmissions.42 We also excluded visits lacking unique identifiers, which affected volume at some hospitals, although excluding hospitals with >50% of visits missing the unique identifier did not change our results. Administrative data also provide limited information for risk adjustment. However, this probably would not adversely affect our power to detect outliers, because more between-hospital variation is left unexplained.
In addition, the inclusion of nonpreventable readmissions in this analysis may have introduced noise that reduced our ability to detect hospitals with excess preventable readmissions. However, excluding any readmissions would also have reduced event rates even further, so it is not clear that an approach using readmission rates for only preventable readmissions would find more outliers. Nonetheless, a measure excluding nonpreventable readmissions may be easier to interpret and more reliably lead to quality improvements.
Using currently available data and nationally accepted public reporting methods, these analyses demonstrate that although there is statistically significant variation overall across hospitals on condition-specific pediatric 30-day revisit rates, few performance outliers can be identified, likely because of low patient volumes at most hospitals. Pooling across similar conditions, collecting better data for patient tracking, and focusing on children with CCCs in high-volume centers may have some potential to improve the utility of readmission rates as a performance measure.
We thank Steven G. DuBois, MD, and Megumi J. Okumura, MD, MAS, for input on risk adjustment, and Benedict Marafino, BA, for assistance with figure creation.
- Accepted May 30, 2013.
- Address correspondence to Naomi S. Bardach, MD, MAS, 3333 California St, Suite 265, San Francisco, CA 94118. E-mail:
Dr Bardach conceptualized and designed the study, supervised data management and conducted analyses, and drafted the initial manuscript; Dr Vittinghoff helped design the study and assisted with analyses, and critically reviewed and revised the manuscript; Ms Penaloza carried out data management and cleaning, assisted with the analyses, and reviewed the manuscript; Dr Edwards assisted with data management and critically reviewed the manuscript; Dr Yazdany contributed to study design and critically reviewed the manuscript; Dr Lee contributed to study design and reviewed and revised the manuscript; Dr Boscardin contributed to the biostatistical approach, assisted with analyses and critically reviewed and revised the manuscript; Dr Cabana contributed to study design and critically reviewed the manuscript; and Dr Dudley assisted in supervising the study, contributing to design and analytic approach, and reviewed and revised the manuscript. All authors approved the final manuscript as submitted.
FINANCIAL DISCLOSURE: The authors have indicated they have no financial relationships relevant to this article to disclose.
FUNDING: All phases of this study were supported by the National Institute of Child Health and Human Development (grant K23 HD065836) and by the National Center for Research Resources, the National Center for Advancing Translational Sciences, and the Office of the Director, National Institutes of Health, through the University of California San Francisco Clinical and Translational Science Institute (grant KL2 RR024130-05). Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the NIH. Funded by the National Institutes of Health (NIH).
POTENTIAL CONFLICT OF INTEREST: The authors have indicated they have no potential conflicts of interest to disclose.
COMPANION PAPER: A companion to this article can be found on page 569, and online at www.pediatrics.org/cgi/doi/10.1542/peds.2013-1755.
- ↵Partnership for Patients. Better care, lower costs. 2011. Available at: www.healthcare.gov/news/factsheets/2011/04/partnership04122011a.html. Accessed June 11, 2012
- Feudtner C, Levin JE, Srivastava R, et al
- ↵CHIPRA measures by CHIPRA categories: initial core set and PQMP COE measure assignments. 2012. Available at: www.ahrq.gov/chipra/pqmpmeasures.htm. Accessed May 29, 2012
- ↵Hospital Compare. A quality tool provided by Medicare. 2012. Available at: www.hospitalcompare.hhs.gov. Accessed November 25, 2012
- ↵Readmissions Reduction Program. August 2012. Available at: www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/AcuteInpatientPPS/Readmissions-Reduction-Program.html. Accessed January 30, 2013
- ↵Agency for Healthcare Research and Quality. HCUPnet, Healthcare Cost and Utilization Project (KID). 2009. Available at: http://hcupnet.ahrq.gov/HCUPnet.jsp?Id=4578AE155E24E350&Form=SelDB&JS=Y&Action=%3E%3ENext%3E%3E&_DB=NIS09KID. Accessed March 30, 2012
- ↵Lorch SA, Zhang X, Rosenbaum PR, Evan-Shoshan O, Silber JH. Equivalent lengths of stay of pediatric patients hospitalized in rural and nonrural hospitals. Pediatrics. 2004;114(4). Available at: www.pediatrics.org/cgi/content/full/114/4/e400
- ↵Nationwide and state-specific HCUP databases 2007 and 2009. Available at: www.hcup-us.ahrq.gov/databases.jsp. Accessed May 29, 2012
- ↵HCUP supplemental variables for revisit analyses. 2010. Available at: www.hcup-us.ahrq.gov/toolssoftware/revisit/revisit.jsp. Accessed May 29, 2012
- ↵NACHRI and NACH Champions for Children’s Health. Hospital directory. Available at: www.childrenshospitals.net/am/Template.cfm?Section=Hospital_Directory1. Accessed May 5, 2012
- ↵American Academy of Pediatrics, American Academy of Family Physicians, American College of Physicians, Transitions Clinical Report Authoring Group. Supporting the health care transition from adolescence to adulthood in the medical home. Pediatrics. 2011;128(1):182–200
- ↵Healthcare Cost and Utilization Project (HCUP). Clinical Classifications Software (CCS) for ICD-9-CM. 2009. Available at: www.hcup-us.ahrq.gov/toolssoftware/ccs/ccs.jsp. Accessed May 29, 2012
- ↵Agency for Healthcare Research and Quality. National estimates on use of hospitals by children from the HCUP Kids’ Inpatient Database (KID) 2009. Available at: http://hcupnet.ahrq.gov/HCUPnet.jsp?Id=62B35F9AC0D60F81&Form=MAINSEL&JS=Y&Action=%3E%3ENext%3E%3E&_MAINSEL=For%20Children%20Only. Accessed June 3, 2012
- ↵National Quality Forum. Quality positioning system. 2012. Available at: www.qualityforum.org/QPS/QPSTool.aspx?Exact=false&Keyword=readmissions. Accessed November 1, 2012
- ↵Medicare Hospital Compare Information for Professionals. QualityNet measure methodology frequently asked questions. 2012. Available at: www.qualitynet.org/dcs/ContentServer?c=Page&pagename=QnetPublic%2FPage%2FQnetTier4&cid=1219069855841. Accessed May 15, 2012
- ↵Feudtner C, Hays RM, Haynes G, Geyer JR, Neff JM, Koepsell TD. Deaths attributed to pediatric complex chronic conditions: national trends and implications for supportive care services. Pediatrics. 2001;107(6). Available at: www.pediatrics.org/cgi/content/full/107/6/e99
- ↵O’Neil S, Schurrer J, Simon S. Environmental Scan of Public Reporting Programs and Analysis: Final Report. Princeton, NJ: Mathematica Policy Research, Inc.; 2010
- ↵2014 Clinical quality measures. 2013. Available at: www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/2014_ClinicalQualityMeasures.html. Accessed March 8, 2013
- ↵HCUP Central Distributor SID description of data elements—multiple variables for all states Healthcare Cost and Utilization Project (HCUP). April 2008. Available at: www.hcup-us.ahrq.gov/db/state/siddist/sid_multivar.jsp. Accessed February 9, 2013
- ↵Rau J. Medicare IDs few hospitals as outliers in readmissions. Kaiser Health News. 2012. Available at: http://capsules.kaiserhealthnews.org/index.php/2012/07/medicare-ids-few-hospitals-as-outliers-in-readmissions/. Accessed January 12, 2013
- Fassl BA, Nkoy FL, Stone BL, et al
- ↵Hain PD, Gay JC, Berutti TW, Whitney GM, Wang W, Saville BR. Preventability of early readmissions at a children’s hospital. Pediatrics. 2013;131(1). Available at: www.pediatrics.org/cgi/content/full/131/1/e171
- ↵Cohen E, Kuo DZ, Agrawal R, et al. Children with medical complexity: an emerging population for clinical and research initiatives. Pediatrics. 2011;127(3):529–538
- ↵Wonder Census Projections Request CDC. 2009. Available at: http://wonder.cdc.gov/population-projections.html. Accessed June 2, 2012
- Kemper AR, Kennedy EJ, Dechert RE, Saint S
- Copyright © 2013 by the American Academy of Pediatrics