Recognizing Differences in Hospital Quality Performance for Pediatric Inpatient Care
BACKGROUND: Hospital quality-of-care measures are publicly reported to inform consumer choice and stimulate quality improvement. The number of hospitals and states with enough pediatric hospital discharges to detect worse-than-average inpatient care remains unknown.
METHODS: This study was a retrospective analysis of hospital discharges for children aged 0 to 17 years from 3974 hospitals in 44 states in the 2009 Kids’ Inpatient Database. For 11 measures of all-condition or condition-specific quality, we assessed the number of hospitals and states that met a “power standard” of 80% power for a 5% level significance test to detect when care is 20% worse than average over a 3-year period. For this assessment, we approximated volume as 3 times actual 2009 admission volumes.
RESULTS: For all-condition quality, 1380 hospitals (87% of all pediatric discharges) and all states met the power standard for the family experience-of-care measure; 1958 hospitals (95% of discharges) and all states met the standard for adverse drug events. For condition-specific quality measures of asthma, birth, and mental health, 203 to 482 hospitals (52%–90% of condition-specific discharges) met the power standard and 40 to 44 states met the standard. One hospital and 16 states met the standard for sickle cell disease. No hospital and ≤27 states met the standard for the remaining measures studied (appendectomy, cerebrospinal fluid shunt surgery, gastroenteritis, heart surgery, and seizure).
CONCLUSIONS: Most children are admitted to hospitals in which all-condition measures of quality have adequate power to show modest differences in performance from average, but most condition-specific measures do not. Policies regarding incentives for pediatric inpatient quality should take these findings into account.
- AHRQ — Agency for Healthcare Research and Quality
- CMS — Centers for Medicare & Medicaid Services
- IQR — interquartile range
- KID — Kids’ Inpatient Database
- NACHRI — National Association of Children’s Hospitals and Related Institutions
What’s Known on This Subject:
Hospital quality-of-care measures are publicly reported to inform consumer choice and stimulate quality improvement. The number of hospitals and states with a sufficient number of pediatric hospital discharges to detect worse-than-average pediatric inpatient care quality remains unknown.
What This Study Adds:
Most children are admitted to hospitals in which all-condition measures of inpatient quality are powered to show differences in performance from average, but most condition-specific measures are not. Policy on incentives for pediatric inpatient quality should take these findings into account.
Since the 1980s, the Centers for Medicare & Medicaid Services (CMS) has publicly reported hospitals’ performance on quality measures to inform consumer choice and stimulate quality improvement.1–7 These and other measures have been used by federal agencies (eg, the Agency for Healthcare Research and Quality [AHRQ]), state and local governments, private accreditation organizations, health systems, and insurance plans to facilitate comparisons of hospital performance against other hospitals and established benchmarks (eg, the national average).3,8–12 Financial penalties are imposed on hospitals with worse-than-average performance.13,14
An accurate assessment of performance requires that hospitals have enough discharges (ie, the measure’s denominator) and enough measured events of quality (ie, the measure’s numerator) to statistically test how their results compare with a standard (eg, the national average).15 When performance is measured for hospitals with too few discharges or events, the results may be misleading16,17 because random variation can substantially affect performance measures.18–20 At such hospitals, statistical tests may have insufficient power to detect modest, yet clinically significant, differences in performance (eg, 20% worse than average), and suboptimal performances may not be recognized. For these reasons, CMS withholds public reporting of performance for hospitals with few adult discharges21 and aggregates selected performance data for state-level reporting.22
Public reporting of hospital performance for adult patients has been performed for decades but is much newer for children. The Children’s Health Insurance Program Reauthorization Act of 2009 calls for the development of new quality measures and the enhancement of existing ones to be used by Medicaid and Children’s Health Insurance Programs to assess hospital- and state-level performance.23 The legislation requires that the pediatric measures apply to large populations and cover prevalent and consequential clinical events that indicate quality of care.23–25
It is currently unknown what effect public reporting of hospital performance for children will have on consumer choice and quality improvement. Because children experience fewer hospitalizations and consequential clinical events (eg, in-hospital mortality26) than adults, many hospitals may have too few pediatric discharges to support accurate measurement and interpretation of their performance.26–28 Therefore, in this national study, we estimated the number of hospitals and states that have a sufficient number of pediatric hospital discharges to detect worse-than-average pediatric care quality on a variety of measures for all-condition discharges (ie, admission for any reason) and condition-specific discharges (ie, admission for a specific condition).
Study Design, Setting, and Population
This study used the Healthcare Cost and Utilization Project Kids’ Inpatient Database (KID) 2009, the largest, multistate database of US hospitalizations for children aged 0 to 17 years. KID includes hospital discharges from 3974 acute care hospitals in 44 states. Discharges from the 147 nonacute care hospitals (eg, rehabilitation hospitals) in KID were excluded from analysis.
KID includes a 10% random sample of uncomplicated-birth discharges and an 80% random sample of complicated-birth and nonbirth discharges from each hospital. The numbers of uncomplicated-birth discharges as well as complicated-birth and nonbirth discharges from each hospital were therefore adjusted in the data set by factors of 10 and 1.25, respectively, to obtain an estimate of each hospital’s discharge volume.29 The annual discharge volumes were then multiplied by 3 to obtain each hospital’s estimated 3-year volume. Three years is a standard period of time for measuring and reporting inpatient quality indicators.20,30,31 We calculated the SE of each hospital’s volume of discharges to account for the imprecision created by these adjustments. Children’s and non-children’s hospitals were included, distinguishing them with the 2009 specifications from the National Association of Children’s Hospitals and Related Institutions (NACHRI).32 Hospitals with a missing NACHRI specification (9% of hospitals) were excluded when reporting hospital volumes specifically for children’s and non-children’s hospitals.
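The weighting arithmetic described above can be sketched as follows. The hospital's sample counts in the example are hypothetical, but the weights (10 for the 10% sample of uncomplicated births, 1.25 for the 80% sample of complicated-birth and nonbirth discharges) and the 3-year multiplier follow the adjustment described in the text.

```python
# Estimate a hospital's 3-year discharge volume from KID sample counts.
# The sample counts passed in are hypothetical; the weights follow the
# KID sampling design described above.

UNCOMPLICATED_BIRTH_WEIGHT = 10    # 10% random sample -> multiply by 10
OTHER_DISCHARGE_WEIGHT = 1.25      # 80% random sample -> multiply by 1.25
YEARS = 3                          # standard reporting period for inpatient quality

def estimated_volume(uncomplicated_birth_n, other_n, years=YEARS):
    """Weighted annual discharge volume, scaled to the multiyear period."""
    annual = (uncomplicated_birth_n * UNCOMPLICATED_BIRTH_WEIGHT
              + other_n * OTHER_DISCHARGE_WEIGHT)
    return annual * years

# Hypothetical hospital: 50 sampled uncomplicated births and 400 sampled
# other discharges imply 50*10 + 400*1.25 = 1000 discharges per year.
print(estimated_volume(50, 400))
```

This sketch omits the SE of the weighted volume, which the study carried forward into its sensitivity analysis.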
Discharge Diagnoses and Quality Measures
We analyzed discharge diagnoses for which there are pediatric inpatient quality measures by using published reports of the average measure performance across hospitals (Table 1). Measures of “never” events (ie, inexcusable events that should never happen such as wrong-site surgery33,34) were not assessed because these events are extremely rare in children, and differences in event rates across hospitals are not discernible.27 Hospital and state volumes were estimated for all-condition discharges (ie, hospitalizations that occurred for any reason other than births), as well as condition-specific discharges of appendectomy, asthma, birth, cardiac surgery, gastroenteritis, cerebrospinal fluid shunt surgery, mental health, seizure, and sickle cell disease. Condition-specific discharges were identified with the instructions provided for each measure.35–51 Depending on the measure, AHRQ’s Clinical Classification System, 3M Health Information System’s All Patient Refined Diagnosis Related Groups, or individual International Classification of Diseases, Ninth Revision, Clinical Modification, principal diagnosis and procedure codes were used (Supplemental Table 4).52
Main Outcome Measure
The main outcome measures are the numbers (and shares of discharges) of hospitals and states that had enough pediatric discharges to show relative differences in quality of care measures of 20%, 50%, and 100% from average. The relative differences from average were chosen based on the effect sizes used in clinical trials and in quality improvement studies. Average performance was chosen as the benchmark because CMS uses this method when publicly reporting hospital performance.3 Published studies and reports were used to identify average performance for each quality measure (Table 1).
For consistency of presentation, the direction of the measures for family experience of care and receipt of an asthma action plan were reversed so that higher performance rates always indicated worse care quality (eg, the percentage of patients receiving an asthma action plan became the percentage of patients who did not receive an asthma action plan). For the measure of family experience of care, we used a standard response rate (33%) reported in the literature53; we therefore multiplied by 3 the number of discharges needed to determine the number of hospitals and states that had enough discharges to show differences in quality.
For each quality measure, the sample size (ie, the number of discharges for each measure) required at a hospital to detect a hypothesized difference from the national average was calculated by using a 1-sample 2-sided test at the 5% level with 80% statistical power when the actual performance rate on the quality measure is 20%, 50%, or 100% above (ie, worse than) the average (henceforth referred to as the power standard). The critical value and statistical power for the test were estimated from a normal approximation with continuity correction because each quality measure in this study is a proportion that follows the discrete binomial distribution.54,55 The SE of each hospital’s volume of discharges was used in a sensitivity analysis to help determine the best- and worst-case scenarios for the number of hospitals that had a sufficient number of discharges to meet the power standard for each of the quality measures. In the worst-case scenario, hospitals did not meet the standard if the SE of their volume of discharges overlapped with the sample size threshold. In the best-case scenario, hospitals met the standard if the SE of their volume of discharges overlapped with the threshold. The POWER procedure in SAS version 9.3 (SAS Institute, Inc, Cary, NC) was used for all analyses.
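A minimal sketch of this calculation is below. The 5% baseline event rate is hypothetical (the study drew average rates from published reports, per Table 1), and the continuity correction shown (adding 1/|p1 − p0| to the uncorrected normal-approximation sample size) is one common form for a 1-sample proportion test; the SAS PROC POWER implementation the authors used may differ slightly, so the printed sizes are illustrative only.

```python
# Discharges needed for a 1-sample, 2-sided test of a proportion against a
# fixed benchmark (the national average) at the 5% level with 80% power,
# using the normal approximation with a simple continuity correction.
from math import ceil, sqrt
from statistics import NormalDist

def discharges_needed(p0, relative_difference, alpha=0.05, power=0.80):
    """Sample size to detect performance `relative_difference` worse than
    the average event rate p0 (e.g., 0.20 means 20% worse than average)."""
    p1 = p0 * (1 + relative_difference)        # worse-than-average rate
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # 2-sided critical value
    z_b = NormalDist().inv_cdf(power)          # quantile for 80% power
    delta = abs(p1 - p0)
    n = ((z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(p1 * (1 - p1))) / delta) ** 2
    return ceil(n + 1 / delta)                 # continuity correction

# Hypothetical 5% baseline rate: larger differences need far fewer discharges.
for rel in (0.20, 0.50, 1.00):
    print(rel, discharges_needed(0.05, rel))
```

The steep drop in required discharges as the detectable difference grows mirrors the pattern in Table 3; for a survey-based measure such as family experience of care, dividing the result by a 33% response rate roughly triples the required volume, as described above.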
There were 5 639 982 hospital discharges for children aged 0 to 17 years from 3974 hospitals in 2009 with an estimated 16 919 946 hospital discharges over 3 years. In 2009, the mean ± SD age at admission was 2.4 ± 7.2 years; 73.3% of discharges were for infants <1 year old, 8.4% for 1- to 4-year-olds, 8.5% for 5- to 12-year-olds, and 9.8% for 13- to 17-year-olds. Fifty-one percent of admitted children were male, and 47% used public insurance. Twenty-one percent of discharges occurred at 141 children’s hospitals, 70% occurred at 3482 non-children’s hospitals, and 9% occurred at 351 hospitals of undetermined type.
Annual Hospital Volume
The estimated median hospital volume for all-condition discharges (n = 16 919 946) at all hospitals over 3 years was 1623 (interquartile range [IQR]: 120–5301) (Table 2). Nearly two-thirds (64.0%) of all-condition discharges were births (n = 10 833 006). The median hospital volumes for birth discharges and for nonbirth, all-condition discharges (n = 6 086 940) were 2328 (IQR: 915–5370) and 300 (IQR: 72–1089), respectively. The median volumes for the condition-specific discharges ranged from 9 to 36. Children’s hospitals had median hospital volumes across the different types of discharges that were 4 to 49 times higher than those in non-children’s hospitals; these differences were significant (P < .001 for all).
Sample Size Needed for Hospitals to Meet the Power Standard
For adverse drug events, the number of discharges that a hospital needed to detect a 20%, 50%, and 100% difference in quality worse than average was 1694, 298, and 85, respectively (Table 3). For family experience of care, 3307, 579, and 164 discharges were needed. For condition-specific discharges, receipt of an asthma action plan was the measure with the fewest number of discharges needed for a hospital to detect a difference worse than average (385 to detect a 20% difference, 66 to detect a 50% difference, and 17 to detect a 100% difference); birth trauma was the measure with the greatest number of discharges needed for a hospital to detect a difference worse than average (7106 for a 20% difference, 1258 for a 50% difference, and 361 for a 100% difference).
Hospitals That Met the Power Standard for All-Condition Measures
For adverse drug events (Table 3), 1958 hospitals met the power standard to show a difference in performance that was 20% worse than average; these hospitals had 95% of all-condition discharges (Fig 1). There were 2766 and 3082 hospitals that met the power standard to show a difference in performance that was 50% and 100%, respectively, worse than average; these hospitals had 99% of all-condition discharges. For family experience of care, 1380 hospitals met the power standard to show a difference in performance that was 20% worse than average; these hospitals had 87% of all-condition discharges. There were 2552 and 2904 hospitals that met the power standard to show a difference in performance that was 50% and 100%, respectively, worse than average; these hospitals had 99% of all-condition discharges. For family experience of care and adverse drug events, all 44 states (Fig 2) and all but 2 children’s hospitals met the power standard to show differences in performance that were 20%, 50%, or 100% worse than average.
Hospitals That Met the Power Standard for Condition-Specific Measures
Between 203 and 482 hospitals met the power standard to show a difference in performance on each of the quality measures for asthma (receipt of asthma action plan), birth (birth trauma), and mental health (use of multiple antipsychotic medications without appropriate justification) that was 20% worse than average (Table 3); these hospitals had 52% to 90% of all discharges for these conditions (Fig 1). Between 326 and 1864 hospitals met the power standard to show a difference in performance that was 50% worse than average; these hospitals had 90% to 96% of discharges. Between 523 and 2471 hospitals could show a difference that was 100% worse than average; these hospitals had 97% to 99% of discharges. For these conditions (asthma, birth, and mental health), 40 to 44 states could show differences in performance that were 20% and 50% worse than average; all states could show a difference in performance that was 100% worse than average (Fig 2).
For sickle cell disease, only 1 hospital met the power standard to show a difference in performance that was 20% worse than average; this hospital had 3% of all discharges. Only 79 hospitals could show a difference for sickle cell disease that was 50% worse than average and 175 hospitals could show a difference that was 100% worse than average. These hospitals had 61% and 84%, respectively, of all discharges (Fig 1). Although only 16 states met the power standard to show a difference in performance for sickle cell disease that was 20% worse than average, 29 states met the power standard to show a 50% difference worse than average and 33 states could show a 100% difference worse than average in performance (Fig 2).
No hospitals met the power standard to show a difference in performance that was 20% worse than average for appendectomy, cerebrospinal fluid shunt surgery, gastroenteritis, heart surgery, and seizure measures. For these measures, ≤96 hospitals met the power standard to show a difference that was 50% worse than average; these hospitals had ≤57% of all discharges. Between 115 and 465 hospitals met the power standard to show a 100% difference worse than average; these hospitals had 38% to 84% of all discharges (Fig 1). For these measures, ≤27 and ≤41 states met the power standard to show a significant difference in performance that was 20% and 50% worse, respectively, than average; at least 39 states met the power standard to show a 100% difference in performance (Fig 2).
In the sensitivity analysis, when incorporating the SE of each hospital’s volume of condition-specific discharges, the best- and worst-case scenarios of the number of hospitals, share of patients, and states with a sufficient number of discharges to meet the power standards were similar to the results without using SEs (Supplemental Table 5).
Assessing Quality of Care With 3 Years Versus 1 Year of Pediatric Discharges
For each quality measure, using 3 years of discharge data increased the number of hospitals that met the power standard (Supplemental Fig 3). This method led to an additional 865 and 927 hospitals that met the power standard to show a 20% difference in performance worse than average when measuring family experience of care and adverse drug events, respectively; an additional 122, 151, and 460 hospitals when measuring mental health, asthma, and birth; and 1 additional hospital when measuring sickle cell disease. For the remaining 5 measures, no hospital met the power standard to show a 20% difference in performance worse than average when using 3 years of discharges. However, for these measures, an additional 13 to 85 hospitals met the power standard to show a 50% difference in performance from average.
Assessing Better-Than-Average Quality of Care
In general, for each quality measure at each level of performance (ie, 20%, 50%, and 100% different from average), fewer hospitals and states met the power standard to identify better-than-average performance compared with the number of hospitals and states that met the power standard to identify worse-than-average performance (Supplemental Fig 4).
The present study found that most children are admitted to hospitals in which measurement of general inpatient quality (ie, using all-condition measures) is precise enough to show a modest (eg, 20%) difference in performance from average. This finding occurs because pediatric inpatient care is concentrated in a subset of hospitals that have sufficient patient populations to reveal this difference. For condition-specific measures, most hospitals have volumes that are orders of magnitude smaller than the number of hospital discharges needed to distinguish their performance. Therefore, a much smaller percentage of children are admitted to hospitals in which measurement of condition-specific quality is sensitive enough to show a modest difference. Condition-specific measures are more sensitive for showing a modest difference at the state level than at the hospital level. Aggregating data across hospitals to the state level may show differences for some measures in which the number of pediatric discharges is insufficient in individual hospitals.
The threshold at which the difference from average in a hospital’s performance becomes clinically meaningful may be controversial. A 20% difference is routinely used in clinical trials to detect when 1 treatment is superior or inferior to another.56,57 Although most hospitals in our study did not meet the power standard to show a 20% difference in performance from average for condition-specific quality, many hospitals could show a much larger (eg, 50% or 100%) difference. A 100% difference worse than average has been used as a conservative threshold to detect outlier hospitals.27,58 Hospitals may strive to improve their quality of care well before their performance reaches 100% worse than average.
There may be ways to structure the measurement of pediatric inpatient quality to achieve adequate statistical power for comparison with average performance. Some hospitals with a small number of pediatric discharges might be part of a network of hospitals.59–61 In this situation, the networked hospitals may share many of the same inpatient pediatric providers, practice guidelines, and patients,62–64 making it appropriate to combine the data from the hospitals for quality measurement. Smaller hospitals without an opportunity to collaborate with a hospital network may need to explore other options to achieve adequate statistical power. For example, enhanced statistical power might be achieved when combining condition-specific measures into a composite measure of pediatric inpatient quality. Composite measures could be appropriate when they are used to assess a particular aspect of quality (eg, patient safety) that is already contained in each condition-specific measure.65–67
Performance comparisons at the state level might be more feasible. CMS reports health care utilization and spending on Medicare beneficiaries according to state.22 AHRQ reports state-specific performance of outpatient quality of care for adult patients with comparisons versus the national average.12,68,69 Less is known about reporting state-level performance on the quality of pediatric care.70,71 Although the Maternal and Child Health Bureau reports state-level outpatient quality measures (eg, access to care) for children with special health care needs,72 it does not publicly report pediatric inpatient quality of care measures. Further investigation is needed to interpret and use a state’s performance on pediatric inpatient measures and to determine whether state-level reporting can stimulate action to improve pediatric quality. In some situations, state-level reporting might be too far removed from frontline clinical care to effect change.
Alternative strategies that do not rely on comparison with average performance might help hospitals interpret their quality of pediatric inpatient care. Some government agencies and hospital systems publicly report how individual hospitals compare with the “best practice” cohort of hospitals instead of the average performance. For example, the AHRQ Achievable Benchmark of Care73 and the Australian Council on Healthcare Standards74 use the top 10th and 20th percentiles, respectively, as target performance values. These targets are meant to encourage hospitals to improve their performance and to shift average performance toward the best practice target.
This study has several limitations. KID contains information on hospital discharges but not individual patients. Because KID does not capture all discharges from its hospitals and consecutive years of KID data are unavailable, the estimates of 3-year volumes for the number of patient discharges and hospital and state volumes have some imprecision. However, extremely large amounts of imprecision would be necessary to affect the study results. In fact, the imprecision for cardiac surgery would need to be 255 times larger than the estimated hospital volume to approach the sample size needed to show a 20% difference in performance from average and therefore affect the results. In addition, measuring hospital quality over 3 years may conflate recent with more distant performances. However, this method can more accurately discern better or worse performance compared with measurement over shorter periods of time.20 The 2009 H1N1 influenza pandemic could have transiently increased the number of hospital discharges assessed in our study.75,76 We compared the number of all-condition and condition-specific discharges in the KID 2009 versus discharges in prior years by using KID data in 2000, 2003, and 2006. An atypical increase in discharges during 2009 was not observed.76 There may be variation in International Classification of Diseases, Ninth Revision, Clinical Modification, coding practices across hospitals. We relied on previous literature to assess rates of events for each quality measure because most of these events could not be determined from the data set. Accordingly, we were unable to account for case-mix differences across hospitals.
Nevertheless, our findings have implications for the assessment of performance on measures of quality of pediatric inpatient care. Sample size and statistical power calculations are routinely used when designing analyses that compare the health outcomes between populations of patients. The findings from the present study suggest that thousands of US hospitals are adequately powered to show differences in pediatric quality of care from average performance for all-condition discharges such as family experience of care and adverse drug events because the sample size of children in the hospitals is sufficient. Therefore, publicly reporting hospitals’ performance compared with the average on inpatient quality for all-condition measures may aid consumers in selecting hospitals for children and stimulate improvements in pediatric quality of care.
For some hospitals, the all-condition measures may be too general to effect change; these hospitals may prefer to focus their quality improvement efforts on specific conditions. Most hospitals will not admit enough patients with a specific condition to distinguish their performance. Aggregating data across hospitals (eg, on a network or state level) might help increase the number of children eligible to have their condition-specific quality of care measured and help improve the value of public reporting. Until these conventions are evaluated further, performance may not be judged equally across hospitals when assessing pediatric inpatient quality; the subset of large hospitals that have enough pediatric patients to detect modest differences in quality of care compared with the average will more likely be affected by performance policy than smaller hospitals with an insufficient number of patients. Policy on incentives for pediatric inpatient quality should take these findings into account.
Consideration of statistical power might be helpful when interpreting differences or the lack of differences in performance on pediatric hospital quality across hospitals. This assessment may be particularly important to perform when a hospital with a small number of admissions reports a clinically meaningful relative difference in performance with a negative test of statistical significance against a standard (eg, a smaller hospital’s pediatric surgery mortality rate is twice as high as the national average, but the difference is not statistically significant). Even with the best or worst performance possible, insufficient statistical power may preclude such hospitals from distinguishing themselves. Future studies should assess the prevalence of these occurrences and what approach is best for hospitals to respond to them.
Most children are admitted to hospitals in which all-condition measures of quality have adequate power to show modest differences in performance from average, but most condition-specific measures do not. Policies regarding incentives for pediatric inpatient quality should take these findings into account.
- Accepted May 28, 2015.
- Address correspondence to Jay G. Berry, MD, MPH, Division of General Pediatrics, Boston Children’s Hospital, Harvard Medical School, 300 Longwood Ave, Boston, MA 02115. E-mail:
FINANCIAL DISCLOSURE: The authors have indicated they have no financial relationships relevant to this article to disclose.
FUNDING: Ms Bryant, Ms Jang, Mr Kaplan, Mr Klein, and Drs Chien, Schuster, Toomey, and Zaslavsky were supported by the Agency for Healthcare Research and Quality (U18 HS020513). Dr Berry was supported by the Eunice Kennedy Shriver National Institute of Child Health and Human Development (K23 HD058092). The funders were not involved in the design and conduct of the study; in the collection, analysis, and interpretation of the data; or in the preparation, review, or approval of the manuscript. Funded by the National Institutes of Health (NIH).
POTENTIAL CONFLICT OF INTEREST: The authors have indicated they have no potential conflicts of interest to disclose. Dr Berry had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
- ↵Centers for Medicare & Medicaid Services. Hospital compare. Available at: www.hospitalcompare.hhs.gov/. Accessed March 19, 2015
- Jha AK,
- Epstein AM
- ↵Commonwealth of Massachusetts Executive Office of Health and Human Services. Massachusetts 2012 HAI data report statewide hospital summary. Available at: www.mass.gov/eohhs/docs/dph/quality/healthcare/hai/hai-data-summary-2012.pdf. Accessed March 19, 2015
- Massachusetts Health Quality Partners. MQHP reports statewide patient experiences in primary care. Available at: www.mhqp.org/default.asp?nav=010000. Accessed March 18, 2015
- State of Maryland Department of Health and Mental Hygiene. Measures. http://dhmh.maryland.gov/ship/SitePages/measures.aspx. Accessed March 19, 2015
- Commonwealth of Massachusetts Executive Office of Health and Human Services. Reports. Available at: www.mass.gov/eohhs/provider/licensing/facilities/health-care-facilities/hospitals/healthcare-assoc-infections/healthcare-associated-infections-reports.html. Accessed February 27, 2014
- ↵Agency for Healthcare Research and Quality. 2011 State snapshots. National healthcare quality report. Available at: http://statesnapshots.ahrq.gov/snaps11/. Accessed February 1, 2015
- ↵Rau J. Hospitals face pressure to avert readmissions. The New York Times. November 26, 2012
- Dillner L
- Marshall EC, Spiegelhalter DJ. Reliability of league tables of in vitro fertilisation clinics: retrospective analysis of live birth rates. BMJ. 1998;316(7146):1701–1704; discussion 1705
- Zaslavsky AM
- ↵Centers for Medicare & Medicaid Services. Frequently asked questions (FAQs): CMS 30-day risk-standardized readmission measures for acute myocardial infarction (AMI), heart failure (HF), and pneumonia (PN). Available at: www.ihatoday.org/uploadDocs/1/cmsreadmissionfaqs.pdf. Accessed March 19, 2015
- ↵Centers for Medicare & Medicaid Services. State reports. Available at: www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/Chronic-Conditions/CCStateReports.html. Accessed February 27, 2015
- ↵Agency for Healthcare Research and Quality. Initial core set of children's healthcare quality measures. Available at: www.ahrq.gov/chipra/listtable.htm. Accessed March 18, 2015
- Children's Health Insurance Program Reauthorization Act of 2009. Public Law No. 111-3, 123 Stat. 81 (2009). Available at: http://frwebgate.access.gpo.gov/cgi-bin/getdoc.cgi?dbname=111_cong_public_laws&docid=f:publ003.111. Accessed March 19, 2015
- ↵Children's Health Insurance Program Reauthorization Act of 2009. Public Law No. 111-3, 123 Stat. 36 (2009)
- Bardach NS,
- Vittinghoff E,
- Asteria-Peñaloza R,
- et al
- Audet A-MJ
- ↵Centers for Medicare & Medicaid Services. Readmissions reduction program. Available at: www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/AcuteInpatientPPS/Readmissions-Reduction-Program.html. Accessed March 19, 2015
- ↵National Association of Children's Hospitals and Related Institutions. Case mix comparative data program. Available at: www.childrenshospitals.net/AM/Template.cfm?Section=Database&Template=/CM/HTMLDisplay.cfm&ContentID=45023. Accessed March 19, 2015
- ↵National Quality Forum. Serious Reportable Events in Healthcare: A Consensus Report. Washington, DC: National Quality Forum; 2002
- ↵National Quality Forum. Safe Practices for Better Healthcare: A Consensus Report. Washington, DC: National Quality Forum; 2003
- ↵Agency for Healthcare Research and Quality. Percent of pediatric asthma inpatients with documentation that they or their caregivers were given a written home management plan of care (HMPC) document. Available at: www.qualitymeasures.ahrq.gov/content.aspx?id=48109. Accessed March 19, 2015
- Agency for Healthcare Research and Quality. Percentage of children with a pre-operative diagnosis of acute appendicitis, who undergo appendectomy with normal histology, but significant other intra-abdominal pathology, during the 6 month time period. Available at: www.qualitymeasures.ahrq.gov/content.aspx?id=33467&search=pediatric. Accessed March 19, 2015
- Agency for Healthcare Research and Quality. Percentage of children with a pre-operative diagnosis of acute appendicitis, who undergo appendectomy with normal histology, during the 6 month time period. Available at: www.qualitymeasures.ahrq.gov/content.aspx?id=33465&search=appendectomy. Accessed March 19, 2015
- Agency for Healthcare Research and Quality. Inpatient pediatric satisfaction: mean section score for “overall assessment” questions on Inpatient Pediatric Survey. Available at: http://qualitymeasures.ahrq.gov/content.aspx?id=28180. Accessed March 19, 2015
- Berry JG, Hall MA, Sharma V, Goumnerova L, Slonim AD, Shah SS. A multi-institutional, 5-year analysis of initial and multiple ventricular shunt revisions in children. Neurosurgery. 2008;62(2):445–453; discussion 453–454
- Shah SS, Hall M, Slonim AD, Hornig GW, Berry JG, Sharma V. A multicenter study of factors influencing cerebrospinal fluid shunt survival in infants and children. Neurosurgery. 2008;62(5):1095–1102; discussion 1102–1103
- Co JP, Ferris TG, Marino BL, Homer CJ, Perrin JM
- Agency for Healthcare Research and Quality. National healthcare quality and disparities reports. Available at: http://nhqrnet.ahrq.gov/nhqrdr/jsp/nhqrdr.jsp#snhere. Accessed February 22, 2015
- Agency for Healthcare Research and Quality. Child health care quality toolbox: established child health care quality measures—AHRQ quality indicators. Available at: www.ahrq.gov/chtoolbx/measure3.htm. Accessed February 27, 2015
- Institute for Healthcare Improvement. Percent of admissions with an adverse drug event. Available at: www.ihi.org/knowledge/Pages/Measures/PercentofAdmissionswithanAdverseDrugEvent.aspx. Accessed March 19, 2015
- Healthcare Cost and Utilization Project. Clinical classifications software (CCS) for ICD-9-CM. Available at: www.hcup-us.ahrq.gov/toolssoftware/ccs/ccs.jsp. Accessed March 19, 2015
- Centers for Medicare & Medicaid Services. Summary of HCAHPS survey results. Available at: www.hcahpsonline.org/files/Report_July_2014_States.pdf. Accessed March 19, 2015
- SAS Institute Inc. The POWER Procedure. SAS/STAT(R) 9.3 User's Guide. Available at: http://support.sas.com/documentation/cdl/en/statug/63962/HTML/default/viewer.htm#statug_power_a0000000996.htm. Accessed February 27, 2014
- Ciani O, Buyse M, Garside R, et al
- Hempel S, Suttorp MJ, Miles JNV, et al. Empirical Evidence of Associations Between Trial Quality and Effect Size. 2011 ed. Rockville, MD: Agency for Healthcare Research and Quality; 2011
- Kemp K
- Lorch SA, Myers S, Carr B
- Miller M
- Ohio Children's Hospitals Solutions for Patient Safety. Available at: http://solutionsforpatientsafety.org/. Accessed February 16, 2014
- Wirtschafter DD, Powers RJ, Pettit JS, et al
- Department of Health and Human Services
- National Quality Forum (NQF). Composite Measure Evaluation Framework and National Voluntary Consensus Standards for Mortality and Safety—Composite Measures: A Consensus Report. Washington, DC: NQF; 2009
- Texas Department of State Health Services. Center for Health Statistics Texas Health Care Information Collection. Available at: www.dshs.state.tx.us/THCIC/Publications/Hospitals/PDIReport/PDIReport.shtm. Accessed February 12, 2014
- Vermont Department of Financial Regulation. Volume and mortality for selected procedures. Available at: www.dfr.vermont.gov/insurance/insurance-consumer/volume-mortality-selected-procedures. Accessed March 19, 2015
- McDonald KM, Davies SM, Haberland CA, Geppert JJ, Ku A, Romano PS
- Health Resources and Services Administration. State data. The National Survey of Children with Special Health Care Needs Chartbook 2005-2006. Available at: http://mchb.hrsa.gov/cshcn05/SD/intro.htm. Accessed March 19, 2015
- Howley PP, Gibberd R
- National Quality Forum. NQF patient safety terms and definitions. Available at: www.dshs.state.tx.us/WorkArea/linkit.aspx?LinkIdentifier=id&ItemID=8589971070. Accessed February 27, 2014
- American Nurses Association. National database of nursing quality indicators. Available at: www.nursingquality.org. Accessed February 27, 2015
- Bachur RG, Hennelly K, Callahan MJ, Chen C, Monuteaux MC
- The Joint Commission. Improving America's Hospitals: The Joint Commission's Annual Report on Quality and Safety. Washington, DC: The Joint Commission; 2010
- National Quality Forum. Quality positioning system. Available at: www.qualityforum.org/QPS/. Accessed February 27, 2015
- The Joint Commission
- Copyright © 2015 by the American Academy of Pediatrics