Special Article

Quality Indicators for High Acuity Pediatric Conditions

Antonia S. Stang, Sharon E. Straus, Jennifer Crotts, David W. Johnson and Astrid Guttmann
Pediatrics October 2013, 132 (4) 752-762; DOI: https://doi.org/10.1542/peds.2013-0854
Author affiliations:
Antonia S. Stang, Division of Emergency Medicine, Alberta Children’s Hospital, Alberta Children’s Hospital Research Institute, and Departments of Pediatrics and Community Health Sciences, University of Calgary, Calgary, Alberta, Canada;
Sharon E. Straus, Departments of Medicine and Geriatric Medicine, University of Toronto, and Knowledge Translation Program, Li Ka Shing Knowledge Institute, St. Michael’s Hospital, Toronto, Ontario, Canada;
Jennifer Crotts, Division of Emergency Medicine, Alberta Children’s Hospital, Alberta Children’s Hospital Research Institute, and Department of Pediatrics, University of Calgary, Calgary, Alberta, Canada;
David W. Johnson, Division of Emergency Medicine, Alberta Children’s Hospital, Alberta Children’s Hospital Research Institute, and Departments of Pediatrics, Physiology and Pharmacology, University of Calgary, Calgary, Alberta, Canada;
Astrid Guttmann, Division of Pediatric Medicine, Hospital for Sick Children, Department of Pediatrics and Health Policy, Management and Evaluation, University of Toronto, and Institute for Clinical Evaluative Sciences, Toronto, Ontario, Canada

Abstract

OBJECTIVE: Identifying gaps in care and improving outcomes for severely ill children requires the development of evidence-based performance measures. We used a systematic process involving multiple stakeholders to identify and develop evidence-based quality indicators for high acuity pediatric conditions relevant to any emergency department (ED) setting where children are seen.

METHODS: A prioritized list of clinical conditions was selected by an advisory panel. A systematic review of the literature was conducted to identify existing indicators, as well as guidelines and evidence that could be used to inform the creation of new indicators. A multiphase, RAND-modified Delphi method consisting of anonymous questionnaires and a face-to-face meeting of an expert panel was used for indicator selection. Measure specifications and evidence grading were created for each indicator, and the feasibility and reliability of measurement were assessed in a tertiary care pediatric ED.

RESULTS: The conditions selected for indicator development were diabetic ketoacidosis, status asthmaticus, anaphylaxis, status epilepticus, severe head injury, and sepsis. The majority of the 62 selected indicators reflect ED processes (84%) with few indicators reflecting structures (11%) or outcomes (5%). Thirty-seven percent (n = 23) of the selected indicators are based on moderate or high quality evidence. Data were available and interrater reliability acceptable for the majority of indicators.

CONCLUSIONS: A systematic process involving multiple stakeholders was used to develop evidence-based quality indicators for high acuity pediatric conditions. Future work will test the reliability and feasibility of data collection on these indicators across the spectrum of ED settings that provide care for children.

  • quality improvement
  • quality indicators
  • performance measurement
  • emergency department
  • Abbreviations:
    AHRQ — Agency for Healthcare Research and Quality
    CNS — central nervous system
    CT — computed tomography
    DKA — diabetic ketoacidosis
    ED — emergency department
    GRADE — Grading of Recommendations Assessment, Development and Evaluation
    ICC — intraclass correlation
    ICD-10 — International Classification of Diseases, 10th Revision
    IO — intraosseous
    IV — intravenous
    NQF — National Quality Forum
    SRs — systematic reviews
Assessing the quality of health care is an international priority.1–4 Research has revealed that performance measurement improves health care outcomes.5,6 According to the commonly referenced Donabedian framework, quality indicators are explicitly defined and measurable items referring to the structures (staff, equipment, and facilities), processes (prescribing, investigations, interactions between professionals and patients), or outcomes (mortality, morbidity, or patient satisfaction) of care.4,7,8 Quality indicators have been developed for a number of health care settings, including emergency departments (EDs).9–13

    However, despite the fact that children are frequent users of emergency care,14 there is a lack of research on indicators specific to the pediatric population. For example, <5% of children are affected by the 3 conditions most frequently addressed in adult outcomes research (diabetes, heart disease, and arthritis).15 Similarly, quality measures that are part of pediatric emergency practice have not been systematically developed or validated.15 Performance measures specific to pediatrics and pediatric emergency medicine have been identified as a research priority.16–18

    Evidence indicates that there is substantial practice variation for pediatric patients among emergency care providers, and that many providers do not optimally manage seriously injured or ill children.19–21 Most of the recent work on practice variation and lack of adherence to practice guidelines in the pediatric ED setting has been done on common, often lower acuity conditions,21–26 despite evidence of a similar gap between knowledge and practice in severely ill and injured children.20 Identifying gaps in care for high acuity conditions, where improvement is likely to have the largest impact on quality of life and longevity,19 requires valid and reliable quality indicators. The objective of this project was to use a systematic process involving multiple stakeholders to review existing indicators and develop new indicators for high acuity pediatric conditions relevant to any ED setting where children are seen.

    Methods

We used a systematic, multiphase, RAND-modified Delphi method congruent with the process for quality indicator development as outlined by the Agency for Healthcare Research and Quality (AHRQ)27 and used in previous indicator work.4,7,10,11,13,28 Ethics approval for this study was obtained from the Conjoint Health Research Ethics Board of the University of Calgary.

    Phase 1: Selection of Target Conditions

We convened a 32-member advisory panel to select target conditions. The panel included representatives from stakeholder organizations, emergency medicine clinicians, administrators, and decision-makers from the United States and Canada. Panel members were identified by contacting stakeholder organizations, the ED directors of all pediatric EDs, and a sample of rural and general EDs across Canada, and asking them to provide names of individuals with expertise in pediatrics, ED care, or quality improvement. We analyzed health administrative data from 2006 to 2008 on the main diagnosis for high acuity pediatric patients, defined as patients aged 0 to 19 years prioritized as resuscitation and emergent by using the Canadian Triage and Acuity Scale29 (Supplemental Information 3). We used these data on the most frequent main diagnoses seen in all EDs in Ontario and Alberta, which together represent 50% of Canada’s population, to construct an initial list of potential conditions. We provided panelists with frequency data on the initial list and invited them to suggest additional conditions. In an e-mail survey, panelists were asked to use a scale from 1 (strongly disagree) to 9 (strongly agree)27,30 based on National Quality Forum (NQF) measure evaluation criteria31 to rate the final list of conditions on the following: importance (potential for morbidity or mortality associated with the condition), impact (potential to address the gap between current and best practice), and validity (adequacy of scientific evidence linking performance of care to patient outcome). The survey was tested for face validity before dissemination, and e-mail reminders were sent at weeks 2 and 3 to optimize response. It was decided a priori that conditions with mean scores ≥7 across all 3 criteria by ≥70% of respondents would be retained; one reading of this rule is sketched below.
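
A minimal sketch of this retention rule follows. It is illustrative rather than the authors’ analysis code, and it assumes one reading of the rule: for each of the 3 criteria, at least 70% of responding panelists gave a rating of ≥7. All ratings in the example are hypothetical; Phase 3 below applies an analogous threshold for indicator selection.

```python
# Sketch of the a priori retention rule, assuming "scores >=7 by >=70% of
# respondents" means: on each criterion, at least 70% of responding
# panelists gave a rating of 7 or higher.

RATING_THRESHOLD = 7      # "moderately agree" on the 1-9 scale
RESPONDENT_QUORUM = 0.70  # required fraction of respondents

def retain_condition(ratings_by_criterion):
    """ratings_by_criterion maps each criterion ('importance', 'impact',
    'validity') to a list with one 1-9 rating per responding panelist."""
    for ratings in ratings_by_criterion.values():
        agreeing = sum(1 for r in ratings if r >= RATING_THRESHOLD)
        if agreeing / len(ratings) < RESPONDENT_QUORUM:
            return False  # quorum not met on this criterion
    return True

# Hypothetical ratings from 10 respondents; 9 of 10 meet the threshold on
# every criterion, so the condition is retained.
example = {
    "importance": [9, 8, 7, 7, 8, 9, 7, 6, 8, 7],
    "impact":     [8, 7, 7, 9, 8, 7, 7, 7, 5, 8],
    "validity":   [7, 7, 8, 8, 9, 7, 7, 7, 7, 6],
}
print(retain_condition(example))  # True
```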

    Phase 2: Indicator Identification and Development

    We conducted a systematic review of the literature to identify existing indicators and high quality national and international guidelines, systematic reviews (SRs), and randomized controlled trials that could be used to generate new indicators for the selected conditions. The search strategy (Supplemental Table 5) was developed by a medical research librarian in consultation with the research team. We searched the following bibliographic databases: PubMed, Cumulative Index to Nursing and Allied Health Literature, Embase, the Cochrane Collaboration and Evidence-Based Emergency Medicine, Database of Abstracts of Reviews of Effects for SRs, the National Guideline Clearinghouse, the Canadian Medical Association InfoBase, the National Electronic Library for Health, Turning Research into Practice, and Best Bets from 1980 to September 2010. Targeted hand searches of relevant journals and conference proceedings (Supplemental Table 6) were conducted for a 3-year period (2007–2010). Due to resource constraints, only articles in English were included. Guidelines and existing indicators were also identified by searching specialty society Web sites, internationally recognized guidelines such as Pediatric Advanced Life Support and Advanced Trauma Life Support, and Web sites that focus on quality and performance improvement (Supplemental Table 7).

Two research team members (Dr Stang and Ms Crotts) independently screened all titles, abstracts, and guidelines. The reviewers included for full text review any articles or guidelines that either reviewer thought might provide existing indicators for the target conditions or relevant clinical recommendations that could be used to guide indicator development. Two reviewers (Dr Stang and Ms Crotts) then independently reviewed all full text articles and selected for final inclusion any articles that reported on quality indicators for the identified conditions. The Appraisal of Guidelines for Research and Evaluation instrument was applied independently by 2 reviewers (Dr Stang and Ms Crotts) to assess the quality of the guidelines.32 We developed new indicators from high quality national or international guidelines, defining high quality as those rated recommended or strongly recommended by both reviewers using the Appraisal of Guidelines for Research and Evaluation instrument. Criteria for indicator development included the following: (1) the strength of the recommendation, with only strong recommendations considered33; (2) the consistency of the recommendation between guidelines; and (3) the strength of the evidence linking ED structure or care process to patient outcome.8 We developed new indicators based on consensus between 2 researchers (Dr Stang and Ms Crotts) and review by the remaining authors (Drs Guttmann, Straus, and Johnson). Two reviewers (Dr Stang and Ms Crotts) independently assessed the quality of the evidence upon which each indicator is based by using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system,34 with 1 = very low quality (expert opinion), 2 = low quality, 3 = moderate quality, and 4 = high quality. We used SRs and randomized controlled trials identified in the literature search to grade the strength of the evidence supporting a link between the performance of care specified by the indicator and patient outcomes.

    Phase 3: Indicator Selection

    We convened an expert panel of 14 individuals, consisting of general and pediatric emergency medicine clinicians, a nurse manager, a pediatric intensivist, quality improvement and safety researchers, and ED administrators (Supplemental Information 2). The panelists were selected based on recommendations by members of the advisory panel and represented the full spectrum of ED settings where children are seen.

    We used a modified Delphi technique consisting of 2 rounds of anonymous questionnaires and a face-to-face meeting of the expert panel to generate a final list of indicators. Before completing the first questionnaire, panelists were e-mailed a description of the goals of the research project, the full list of existing and newly developed indicators classified according to Donabedian’s framework (structure, process, and outcome),8 and the grading of the quality of evidence. The first e-mailed questionnaire, sent in December 2010, was pilot tested for face validity, and e-mail reminders were sent at weeks 3 and 4 to optimize response. Panelists were asked to rate the identified indicators on the criteria of (1) relevance to the care of high acuity pediatric patients seen in any ED setting and (2) the degree to which measurement of the indicator would impact the quality of care provided. Panelists were asked to rate each indicator on both criteria by using a Likert scale from 1 (strongly disagree) to 9 (strongly agree).27,30 We used a predetermined decision rule that any indicators rated ≤3 on both criteria by all panelists would be discarded from further consideration.

The expert panel met in person in January 2011 after completion of the first survey. At the meeting, the panelists reviewed anonymized ratings for each indicator. Panelists were also provided with their individual ratings from the first survey and given the opportunity to suggest additional indicators. At the end of the meeting, panelists were asked to independently re-rate each indicator by using the same criteria of relevance and impact and to prioritize, for each condition, the 5 indicators considered most important to measure to improve quality of care and patient outcomes. Based on previous indicator development work28 and consensus by the expert panel, we used a predetermined decision rule that indicators rated ≥7 (moderately agree) on both criteria by ≥70% of panelists would be included in the final list. The meeting was documented with transcripts.

    Phase 4: Feasibility of Indicator Measurement

Data from January 2009 to December 2010 on the selected indicators were collected retrospectively from a tertiary care, pediatric ED with 65 000 annual visits. The goal of this phase was to determine the feasibility of data collection, described by the NQF as the extent to which the data are readily available, retrievable without undue burden, and implementable for performance measurement, as well as the reliability of data collection.31 Based on the NQF description and previous pediatric indicator development work,35 we defined feasible measures as those that could be generated by using existing data sources, including chart review, physician order entry, and ED patient tracking systems. Performance measurement in a pediatric ED also provided an initial estimate of practice variation and provider compliance with the care processes or structures specified by the indicators. We created a standard profile of measure specifications: the methods by which the target population is identified and the data actually collected.36 This included the International Classification of Diseases, 10th Revision (ICD-10) diagnostic codes and inclusion criteria for each condition, and the specific data elements, such as numerator, denominator, and exclusions, for each indicator (Supplemental Information 3). Data abstractors (3 experienced chart reviewers) used a standardized database (Access 2010) that was piloted for accuracy and clarity on a sample of 10 patient visits for each condition. Interrater reliability was calculated for a random sample (10%) of charts by using intraclass correlation (ICC), Cohen’s unweighted κ, or proportion agreement, depending on the data elements abstracted.37–40 A brief sketch of these computations follows.
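
As an illustration of the two computations this phase relies on, the sketch below derives an indicator rate from numerator/denominator/exclusion specifications and computes Cohen’s unweighted κ for a reliability sample. It is not the study’s code; the predicates, visit records, and chart codes are hypothetical.

```python
# Illustrative sketch: an indicator rate from measure specifications, and
# Cohen's unweighted kappa (Cohen 1960) for an interrater-reliability
# sample. All data below are hypothetical.
from collections import Counter

def indicator_rate(visits, in_numerator, is_excluded):
    """Proportion of eligible (non-excluded) visits meeting the numerator
    criterion; the predicates come from the measure specification."""
    eligible = [v for v in visits if not is_excluded(v)]  # denominator
    if not eligible:
        return None  # no eligible visits; rate undefined
    return sum(1 for v in eligible if in_numerator(v)) / len(eligible)

def cohens_kappa(rater_a, rater_b):
    """Unweighted kappa: chance-corrected agreement between two raters'
    categorical codes on the same charts."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical visits: steroid given? transferred in (excluded)?
visits = [{"steroid": True, "transfer": False},
          {"steroid": False, "transfer": False},
          {"steroid": True, "transfer": True}]
rate = indicator_rate(visits,
                      in_numerator=lambda v: v["steroid"],
                      is_excluded=lambda v: v["transfer"])
print(f"{rate:.0%}")  # 50% of eligible visits met the indicator

# Two abstractors coding 10 charts ("Y" = care process documented):
a = ["Y", "Y", "N", "Y", "N", "Y", "Y", "N", "Y", "Y"]
b = ["Y", "Y", "N", "Y", "Y", "Y", "Y", "N", "Y", "Y"]
print(round(cohens_kappa(a, b), 2))  # 0.74, above the 0.6 cutoff used here
```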

    Results

    Figure 1 summarizes the process of indicator selection.

FIGURE 1. Process of indicator selection.

    Phase 1: Selection of Target Conditions

    Ninety-one percent (32/35) of invited advisory panel members agreed to participate and identified 13 potential conditions. The number of high acuity pediatric ED visits for these conditions over a 2-year period ranged from 24 for meningitis to 1458 for burns (Table 1). Eighty-one percent (26/32) of the advisory panel members completed the condition prioritization survey, and 6 conditions had a mean score of ≥7 by ≥70% of respondents (Table 1).

TABLE 1. Volume of Pediatric Patients (Age 0–19 Years) With Identified Conditions Seen in Alberta and Ontario in 2006–2008 and Results of Advisory Panel Condition Selection

    Phase 2: Indicator Identification and Development

Table 2 shows the results of the search of the literature and quality improvement Web sites. We identified 46 existing indicators for the 6 targeted conditions. We derived 51 new indicators from recommendations contained in national and international guidelines. The interrater reliability for determining the level of evidence upon which each indicator was based was acceptable (κ > 0.6)39 for all conditions except status epilepticus (κ = 0.15). The overall interrater reliability for the GRADE rating was κ = 0.68.

TABLE 2. Results of the Systematic Review of the Literature and Indicator Selection Process

    Phase 3: Indicator Selection

    We presented 97 indicators to the expert panel for initial rating, and none of the indicators were discarded after the first survey. The expert panel suggested an additional 17 indicators for discussion at the face-to-face meeting. The expert panel selected 62 quality indicators. In addition to the indicators for each condition, the panel selected 2 general measures relevant for all high acuity pediatric patients (Table 3). The majority of the indicators reflect ED processes (84%, n = 52), with few indicators reflecting structures (11%, n = 7) or outcomes (5%, n = 3).8 Thirty-seven percent (n = 23) of the indicators selected are based on moderate or high quality evidence.

TABLE 3. Final List of Indicators Selected by Expert Panel and Results of Indicator Measurement at a Pediatric ED

    Phase 4: Feasibility of Indicator Measurement

Table 4 shows the age and proportion of patients who met the inclusion criteria for each of the conditions. A total of 1681 unique visits were identified based on age, acuity, and ICD-10 codes. The proportion of patients meeting the inclusion criteria for each condition ranged from 22% for severe head injury to 84% for anaphylaxis. The interrater reliability for determining which patients met the inclusion criteria was acceptable (κ > 0.6)39 for all conditions except severe sepsis (κ = 0.23).

TABLE 4. Demographic Data and Proportion of Patients Who Met Inclusion Criteria for High Acuity Conditions

    Results of indicator measurement in a pediatric ED are shown in Table 3. For the indicators reflecting timeliness of care, ED arrival (first time recorded) was used as time zero based on consensus of the expert panel. For diabetic ketoacidosis (DKA), data were accessible from chart review and a physician order entry system for all of the applicable indicators with high interrater reliability. Compliance with the processes of care specified by the indicators was good with minimal practice variation identified. For example, no patients received bicarbonate (n = 62), and the expert panel agreed that this number should be low (ie, <1%). The majority of patients received potassium replacement (91%, n = 65) and were treated with the appropriate insulin dose and route (88%, n = 59). Required data were available for the status asthmaticus indicators, but interrater reliability was more variable. The majority of patients received a systemic corticosteroid during the visit (99%, n = 180), and β2-agonists and systemic steroids were provided in a median of 19 and 27 minutes, respectively. Reliability was lower for indicators that relied on information written in the chart by a clinician (compared with data from physician order entry or patient tracking systems), such as the “percentage of admitted patients with objective assessment of severity of their condition” (49%, κ = 0.08) and “patients referred to an asthma education program” (32%, κ = 0.48). From the currently available data sources, it is not possible to determine if the performance on these indicators was low due to practice variation and poor provider compliance or lack of documentation. For anaphylaxis, 68% of patients received epinephrine in the ED, and 94% of patients who received epinephrine in the ED were treated by the appropriate route. Interrater reliability was good for the anaphylaxis indicators.

For the status epilepticus indicators, the median time to second-line anticonvulsant administration was 31 minutes (ICC = 0.89), with 87% (κ = 1.0) of patients receiving a benzodiazepine as initial therapy and 86% (κ = 1.0) of patients having a rapid bedside glucose documented. Interrater reliability was not calculated for “attainment of seizure control within 30 minutes” and “receipt of an antiepileptic within 10 minutes” due to the small number of cases available for reliability comparison.38 Four of the severe head injury indicators were specific to referring (nontrauma) centers (Table 3) and were not applicable to the center where data were collected. Data for 2 of the indicators, “head computed tomography (CT) scan performed and analyzed within 1 hour of request” and “neurosurgeon response time >30 minutes,” were not available from existing data sources. Compliance was high with respect to documentation of central nervous system (CNS), blood pressure, and oxygen saturation monitoring. However, 9.5% (n = 21) of patients were not intubated before leaving the ED, and only 46% (n = 13) of intubated patients had documented end tidal CO2 monitoring. Interrater reliability was high for the head injury indicators, with the exception of “hourly CNS monitoring” (κ = 0.41) and “CT within 1 hour of arrival” (agreement = 0.44). For severe sepsis/septic shock, the median time from ED arrival was 68 minutes to isotonic fluid bolus, 63 minutes to intravenous (IV)/intraosseous (IO) insertion, and 189 minutes to antibiotic administration. Sample size was also small, and interrater reliability therefore not calculated, for the severe sepsis indicators that measured fluid refractory shock (n = 7), dopamine resistant shock (n = 3), and patients treated with pressors who had not received 60 cc/kg of fluid (n = 8). The median time to first provider for all resuscitation and emergent patients was 34 minutes. In addition, 1.3% of 12 636 resuscitation and emergent patients discharged from the hospital returned within 48 hours and were admitted. Data on time to provider and return visits came from an administrative database, so interrater reliability could not be assessed.

    Discussion

This rigorous process provides 62 evidence- and expert consensus-based quality indicators for high acuity conditions relevant to any ED setting where children are seen. Previous work on indicators for pediatric ED patients has focused on administrative and clinical measures that are not presentation or condition specific, such as length of stay in the ED after admission41,42; on common conditions seen in any ED setting28; and on the creation of a balanced scorecard to reflect all facets of pediatric emergency care.43 None of the previous work targets high acuity conditions. A recently published analysis of existing pediatric measures relevant to emergency care revealed that most disease specific measures address a few common pediatric conditions and suggested that future measures should consider illness severity.44

    The 4 phases of this study followed the process for indicator development and assessment as outlined by the AHRQ.27 These phases included the following: expert engagement of an advisory panel to identify conditions for indicator development, identification of candidate indicators including literature review and summary of evidence (using GRADE), expert panel review and selection of indicators by using a modified Delphi process, and assessment of feasibility of candidate indicators including empirical analyses.

One of the strengths of this project was the comprehensive search for existing indicators and high quality guidelines for the development of new indicators, and the systematic application of GRADE in assessing the level of evidence upon which each indicator is based. Although the GRADE system was a useful means of summarizing the evidence for the expert panel, we identified a number of challenges with its use, including the need for significant time, resources, and research methodology expertise. Even for raters with clinical and research backgrounds (a physician with master’s-level epidemiology training and an experienced research nurse), the interrater reliability for GRADE assignment was variable. Not surprisingly, the κ was lower for conditions with a less developed evidence base, such as status epilepticus, as compared with asthma or DKA.

    Given the large number of indicators considered and the variable quality of evidence available, the opportunity for the expert panel to discuss the indicators in person was an integral part of the indicator selection process. Previous work has also emphasized the importance of a face-to-face meeting of the expert panel.28 Another similarity with previous work on pediatric quality measurement was that the majority of indicators selected by the expert panel reflected ED processes.28,44 The only structural indicators retained by the expert panel assessed the presence of clinical guidelines for each of the conditions, despite a paucity of evidence linking guidelines to patient outcome. These findings illustrate the need for further work developing outcome indicators and establishing links between structure and process indicators and patient outcome.

A final strength of the project was the inclusion of a data collection phase to assess the feasibility and reliability of indicator measurement. Previous work on quality measures for the pediatric population has emphasized the importance of testing measures in the real world settings where care will be assessed.45 The measurement stage of this study highlighted a number of issues that are relevant to the interpretation and application of the indicators. For example, a challenge identified in the data collection phase was the difficulty of assigning time zero for complex conditions such as severe sepsis, DKA, and status epilepticus. We decided a priori to use ED arrival as time zero but recognize that this may not be accurate, as a septic child may decompensate while in the ED and not have met the criteria for severe sepsis at presentation. Similarly, a child may start seizing while in the ED, such that the time from ED arrival to first anticonvulsant treatment may not accurately describe the timeliness of seizure treatment. Difficulty in assigning a time zero may account in part for the relatively long times from ED arrival to isotonic fluid bolus, IV/IO insertion, and antibiotic administration (Table 3) for patients with severe sepsis. These results highlight the need for data collection across multiple centers to establish reasonable benchmarks for these indicators, especially for conditions such as severe sepsis, where identifying the denominator is a challenge, as illustrated by our poor interrater reliability (κ = 0.23) in applying an operational definition based on an international consensus definition.46

    Another challenge we encountered was the small sample size for a number of the indicators. The combination of low event rates and small numbers of eligible patients is a recognized issue in performance measurement,47 particularly in pediatrics.48 The conventional minimum sample size is ≥30 eligible patients.48 Our experience collecting 2 years of feasibility data suggests that even tertiary care pediatric centers may not be able to accrue sufficient numbers to adequately measure performance for conditions such as severe head injury or severe sepsis (Table 4).

A number of methods have been suggested to address the issue of small sample size in indicator reporting. One approach is to report only on institutions with adequate numbers of eligible patients (≥30).48 However, applying measures only to institutions with a particular volume of high acuity cases would miss a significant portion of patients who are seen in smaller centers, and it is in these centers that practice variation and the potential for improvement may be greatest.21,24–26,49 Many of the indicators developed here would be useful even for smaller volume centers for local quality improvement initiatives, such as measuring the impact of a new clinical pathway. Another solution, already in use by the US Department of Health and Human Services, is to aggregate data over 3 years.48 In addition to changing the time frame of reporting, other methods that could be used to adapt certain indicators for public reporting and accountability for smaller volume centers include the following: changing reporting conventions to reflect uncertainty when it exists48; using a composite measure of multiple outcomes48; applying statistical methods such as indirect estimation and hierarchical modeling47; and using selected measures on a regional rather than an institutional level. One way of making that uncertainty explicit is sketched below.
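
As a minimal sketch of the “reflect uncertainty” option, the code below reports a Wilson score interval around a small-denominator proportion. The Wilson interval is one standard choice offered here as an illustration, not a method prescribed by the cited papers, and the counts in the example are hypothetical.

```python
# Minimal sketch: report a 95% Wilson score interval alongside a
# small-denominator indicator rate so the imprecision is explicit.
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (z=1.96 -> 95%)."""
    if n == 0:
        return None  # no eligible patients; nothing to report
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical small-volume center: 5 of 7 eligible patients met an
# indicator. The wide interval shows how little the point estimate says.
lo, hi = wilson_interval(5, 7)
print(f"5/7 = 71% (95% CI {lo:.0%}-{hi:.0%})")  # about 36%-92%
```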

    In addition to the challenges identified above, the results of this study are subject to a few limitations. First, despite our efforts to incorporate the best possible evidence, less than half of the final indicators selected by the expert panel are based on moderate or high quality evidence. Unfortunately, this is likely a reflection of gaps in the overall quantity and quality of child health research50 and reinforces the need for further high quality research in pediatrics. Second, information on the volume of patients seen and the availability of data may not be generalizable to other pediatric institutions and is certainly not applicable to smaller, nonpediatric hospitals. However, the measure specifications and reliability data should be applicable to most settings. A final limitation is the dependence on written documentation for some of the indicators. In general, interrater reliability and compliance with the process or structure specified by the indicator was lower for indicators that required clinician documentation, as compared with indicators where data were available from a patient tracking or physician order entry system. This limitation needs to be taken into account in future applications of the indicators, either at the data collection phase through the use of alternate data such as physician billing, electronic health records, and pharmacy data or in the benchmarking/reporting phase.

    Conclusions

    This evidence and expert consensus based process provides indicators for high acuity pediatric conditions potentially suitable for a range of applications from local quality improvement initiatives to public reporting. The results of this study contribute significantly to the existing body of quality indicators for the emergency care of pediatric patients. Future work will focus on multicenter benchmarking and data collection to test the validity and feasibility of these indicators across the spectrum of ED settings that provide care for children. This research provides clinicians, researchers, and policy makers with tools to improve the quality of pediatric care for severely ill children seen in any ED setting.

    Footnotes

      • Accepted July 30, 2013.
• Address correspondence to Antonia S. Stang, MDCM, MBA, MSc, Alberta Children’s Hospital, 2888 Shaganappi Trail, Calgary AB, T3B 6A8. E-mail: antonia.stang@albertahealthservices.ca
• Dr Stang conceptualized and designed the study, secured funding, designed and piloted surveys and data extraction forms, screened articles, extracted data, interpreted data, and drafted and revised the manuscript; Dr Straus provided methodological advice, reviewed and revised surveys, reviewed and revised tables and figures, and revised the manuscript; Ms Crotts screened articles, extracted data, coordinated the expert panel meeting, created the Access database, reviewed charts, and revised tables and figures; Dr Johnson provided methodological advice, reviewed and revised surveys, reviewed and revised tables and figures, and revised the manuscript; and Dr Guttmann provided methodological advice, reviewed and revised surveys, facilitated the expert panel meeting, interpreted data, reviewed and revised tables and figures, and revised the manuscript.

    • FINANCIAL DISCLOSURE: The authors have indicated they have no financial relationships relevant to this article to disclose.

• FUNDING: All stages of this work were funded by the Canadian Institutes of Health Research (CIHR) MOP-102676.

    • POTENTIAL CONFLICT OF INTEREST: The authors have indicated they have no potential conflicts of interest to disclose.

    References

1. Institute of Medicine. Statement on Quality of Care: National Roundtable on Health Care Quality–The Urgent Need to Improve Health Care Quality. Washington, DC: National Academies Press; 1998.
2. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academies Press; 2001.
3. Beal AC, Co JP, Dougherty D, et al. Quality measures for children’s health care. Pediatrics. 2004;113(1 pt 2):199–209.
4. Campbell SM, Cantrill JA, Roberts D. Prescribing indicators for UK general practice: Delphi consultation study. BMJ. 2000;321(7258):425–428.
5. Lee TH. Eulogy for a quality measure. N Engl J Med. 2007;357(12):1175–1177.
6. Bradley EH, Holmboe ES, Mattera JA, Roumanis SA, Radford MJ, Krumholz HM. A qualitative study of increasing beta-blocker use after myocardial infarction: why do some hospitals succeed? JAMA. 2001;285(20):2604–2611.
7. Campbell SM, Braspenning J, Hutchinson A, Marshall MN. Research methods used in developing and applying quality indicators in primary care. BMJ. 2003;326(7393):816–819.
8. Donabedian A. The quality of care. How can it be assessed? JAMA. 1988;260(12):1743–1748.
9. Graff L, Stevens C, Spaite D, Foody J. Measuring and improving quality in emergency medicine. Acad Emerg Med. 2002;9(11):1091–1107.
10. Lindsay P, Schull M, Bronskill S, Anderson G. The development of indicators to measure the quality of clinical care in emergency departments following a modified-Delphi approach. Acad Emerg Med. 2002;9(11):1131–1139.
11. Mourad SM, Hermens RP, Nelen WL, Braat DD, Grol RP, Kremer JA. Guideline-based development of quality indicators for subfertility care. Hum Reprod. 2007;22(10):2665–2672.
12. McGory ML, Shekelle PG, Ko CY. Development of quality indicators for patients undergoing colorectal cancer surgery. J Natl Cancer Inst. 2006;98(22):1623–1633.
13. Guru V, Anderson GM, Fremes SE, O’Connor GT, Grover FL, Tu JV; Canadian CABG Surgery Quality Indicator Consensus Panel. The identification and development of Canadian coronary artery bypass graft surgery quality indicators. J Thorac Cardiovasc Surg. 2005;130(5):1257.
14. National Center for Health Statistics. Health, United States, 2012: With Special Feature on Emergency Care. Hyattsville, MD: National Center for Health Statistics; 2013.
15. Bordley WC. Outcomes research and emergency medical services for children: domains, challenges, and opportunities. Ambul Pediatr. 2002;2(suppl 4):306–310.
16. Clancy CM, Dougherty D, Walker E. The importance of outcomes research in pediatric emergency medicine. Ambul Pediatr. 2002;2(suppl 4):293–300.
17. Moody-Williams JD, Krug S, O’Connor R, Shook JE, Athey JL, Holleran RS. Practice guidelines and performance measures in emergency medical services for children. Ann Emerg Med. 2002;39(4):404–412.
18. Dougherty D, Simpson LA. Measuring the quality of children’s health care: a prerequisite to action. Pediatrics. 2004;113(1 pt 2):185–198.
19. Institute of Medicine, Committee on the Future of Emergency Care in the United States Health System. Emergency Care for Children: Growing Pains. Washington, DC: National Academies Press; 2007.
20. Zebrack M, Dandoy C, Hansen K, Scaife E, Mann NC, Bratton SL. Early resuscitation of children with moderate-to-severe traumatic brain injury. Pediatrics. 2009;124(1):56–64.
21. Hampers LC, Trainor JL, Listernick R, et al. Setting-based practice variation in the management of simple febrile seizure. Acad Emerg Med. 2000;7(1):21–27.
22. Knapp JF, Simon SD, Sharma V. Quality of care for common pediatric respiratory illnesses in United States emergency departments: analysis of 2005 National Hospital Ambulatory Medical Care Survey Data. Pediatrics. 2008;122(6):1165–1170.
23. Schweich PJ, Smith KM, Dowd MD, Walkley EI. Pediatric emergency medicine practice patterns: a comparison of pediatric and general emergency physicians. Pediatr Emerg Care. 1998;14(2):89–94.
24. Petrack EM, Christopher NC, Kriwinsky J. Pain management in the emergency department: patterns of analgesic utilization. Pediatrics. 1997;99(5):711–714.
25. Plint AC, Johnson DW, Wiebe N, et al. Practice variation among pediatric emergency departments in the treatment of bronchiolitis. Acad Emerg Med. 2004;11(4):353–360.
26. Isaacman DJ, Kaminer K, Veligeti H, Jones M, Davis P, Mason JD. Comparative practice patterns of emergency medicine physicians and pediatric emergency medicine physicians managing fever in young children. Pediatrics. 2001;108(2):354–358.
27. Fabian LA, Geppert J. Quality Indicator Measure Development, Implementation, Maintenance, and Retirement Summary (Prepared by Battelle, under Contract No. 290-04-0020). Rockville, MD: Agency for Healthcare Research and Quality; May 2011.
28. Guttmann A, Razzaq A, Lindsay P, Zagorski B, Anderson GM. Development of measures of the quality of emergency department care for children using a structured panel process. Pediatrics. 2006;118(1):114–123.
29. Canadian Paediatric Triage and Acuity Scale: implementation guidelines for emergency departments. Canadian Journal of Emergency Medicine. 2001;3(4).
30. Fitch K, Bernstein SJ, Aguilar MD, et al. The RAND/UCLA Appropriateness Method User’s Manual. Santa Monica, CA: RAND; 2001.
31. National Quality Forum. Measure evaluation criteria. December 2009. Available at: www.qualityforum.org/docs/measure_evaluation_criteria.aspx. Accessed August 13, 2013.
32. Appraisal of Guidelines for Research and Evaluation (AGREE) Instrument. Available at: www.agreetrust.org/wp-content/uploads/2013/06/AGREE_II_Users_Manual_and_23-item_Instrument_ENGLISH.pdf. Accessed August 22, 2013.
33. Guyatt GH, Oxman AD, Kunz R, et al; GRADE Working Group. Going from evidence to recommendations. BMJ. 2008;336(7652):1049–1051.
34. Atkins D, Best D, Briss PA, et al; GRADE Working Group. Grading quality of evidence and strength of recommendations. BMJ. 2004;328(7454):1490.
35. Nkoy FL, Fassl BA, Simon TD, et al. Quality of care for children hospitalized with asthma. Pediatrics. 2008;122(5):1055–1063.
36. Rubin HR, Pronovost P, Diette GB. From a process of care to a measure: the development and testing of a quality indicator. Int J Qual Health Care. 2001;13(6):489–496.
37. Reeves MJ, Mullard AJ, Wehner S. Inter-rater reliability of data elements from a prototype of the Paul Coverdell National Acute Stroke Registry. BMC Neurol. 2008;8:19.
38. Walter SD, Eliasziw M, Donner A. Sample size and optimal designs for reliability studies. Stat Med. 1998;17(1):101–110.
39. Worster A, Haines T. Advanced statistics: understanding medical record review (MRR) studies. Acad Emerg Med. 2004;11(2):187–192.
40. Cohen J. A coefficient of agreement for nominal scales. Educ Psychol Meas. 1960;20(1):37–46.
41. Hung GR, Chalut D. A consensus-established set of important indicators of pediatric emergency department performance. Pediatr Emerg Care. 2008;24(1):9–15.
42. Shaw KN, Ruddy RM, Gorelick MH. Pediatric emergency department directors’ benchmarking survey: fiscal year 2001. Pediatr Emerg Care. 2003;19(3):143–147.
43. Alessandrini E, Alpern E, Varadarajan K, et al. Developing quality performance measures for pediatric emergency care (abstract). E-PAS. 2009;2725.1.
44. Alessandrini E, Varadarajan K, Alpern ER, et al; Pediatric Emergency Care Applied Research Network. Emergency department quality: an analysis of existing pediatric measures. Acad Emerg Med. 2011;18(5):519–526.
45. Lannon C, Peterson LE, Goudie A. Quality measures for the care of children with otitis media with effusion. Pediatrics. 2011;127(6). Available at: www.pediatrics.org/cgi/content/full/127/6/e1490.
46. Goldstein B, Giroir B, Randolph A; International Consensus Conference on Pediatric Sepsis. International pediatric sepsis consensus conference: definitions for sepsis and organ dysfunction in pediatrics. Pediatr Crit Care Med. 2005;6(1):2–8.
47. Zaslavsky AM. Statistical issues in reporting quality data: small samples and casemix variation. Int J Qual Health Care. 2001;13(6):481–488.
48. Bardach NS, Chien AT, Dudley RA. Small numbers limit the use of the inpatient pediatric quality indicators for hospital comparison. Acad Pediatr. 2010;10(4):266–273.
49. Grol R. Successes and failures in the implementation of evidence-based guidelines for clinical practice. Med Care. 2001;39(8 suppl 2):II46–II54.
50. Zylke JW, Rivara FP, Bauchner H. Challenges to excellence in child health research: call for papers. JAMA. 2012;308(10):1040–1041.
51. Dunger DB, Sperling MA, Acerini CL, et al; ESPE; LWPES. ESPE/LWPES consensus statement on diabetic ketoacidosis in children and adolescents. Arch Dis Child. 2004;89(2):188–194.
52. Chalut DS, Ducharme FM, Davis GM. The Preschool Respiratory Assessment Measure (PRAM): a responsive index of acute asthma severity. J Pediatr. 2000;137(6):762–768.
53. Sampson HA, Muñoz-Furlong A, Campbell RL, et al. Second symposium on the definition and management of anaphylaxis: summary report—Second National Institute of Allergy and Infectious Disease/Food Allergy and Anaphylaxis Network symposium. J Allergy Clin Immunol. 2006;117(2):391–397.
54. Epilepsy Foundation. Prolonged or serial seizures (status epilepticus). Available at: www.epilepsyfoundation.org/about/types/types/statusepilepticus.cfm. Accessed August 13, 2013.
55. Ghajar J. Traumatic brain injury. Lancet. 2000;356(9233):923–929.
Copyright © 2013 by the American Academy of Pediatrics