BACKGROUND. Performance measures are essential components of public reporting and quality improvement. To date, few such measures exist to provide a comprehensive assessment of the quality of emergency department services for children.
OBJECTIVES. Our goal was to use a systematic process to develop measures of emergency department care for children (0–19 years) that are (1) based on research evidence and expert opinion, (2) representative of a range of conditions treated in most emergency departments, (3) related to links between processes and outcomes, and (4) feasible to measure.
METHODS. We presented a panel of providers and managers with data on emergency department use to identify common conditions, across levels of patient acuity, that could be targets for quality improvement. We used a structured panel process informed by a literature review to (1) identify condition-specific links between processes of care and defined outcomes and (2) select indicators to assess these process-outcome links. We determined the feasibility of calculating these indicators using an administrative data set of emergency department visits for Ontario, Canada.
RESULTS. The panel identified 18 clinical conditions for indicator development and 61 condition-specific links between processes of care and outcomes. After 2 rounds of ratings, the panel defined 76 specific clinical indicators for the following conditions: adolescent mental health problems, ankle injury, asthma, bronchiolitis, croup, diabetes, fever, gastroenteritis, minor head injury, neonatal jaundice, seizures, and urinary tract infections. Visits for these conditions account for 23% of all pediatric emergency department use. Using an administrative data set, we were able to calculate 19 indicators, covering 9 conditions, representing 20% of all emergency department visits by children.
CONCLUSIONS. Using a structured panel process, data on emergency department use, and literature review, it was possible to define indicators of emergency department care for children. The feasibility of these indicators will depend on the availability of high-quality data.
Public reporting on performance and quality improvement is increasingly recognized as a professional and institutional priority. Performance measures are an essential component of both of these activities. These measures allow health care providers, funders, accreditors, and researchers to identify areas in clinical care that require improvement, benchmark performance, and set minimum standards of care. Emergency department (ED) care is an integral component of the care provided by hospitals, yet there has been relatively little work done to develop performance measures in this area, especially for the care of children. A number of articles have highlighted the need for reliable and valid performance measures of the care that children receive in EDs.1–4 The Emergency Medical Services for Children Managed Care Task Force recommended that such performance measures be (1) developed using a systematic process, (2) designed to answer questions about quality of care across a range of conditions, (3) applicable to most EDs, (4) reflective of both research evidence and input from multiple providers, (5) related to both processes and outcomes, and (6) designed so that data can be collected without imposing a burden on providers or resources.4
The current literature on measures of quality of ED care for children, such as a recent benchmarking of the performance of pediatric EDs,5 has focused predominantly on general measures, such as wait times, children who leave without being seen, staff training, revisits, and use of guidelines.6 However, for quality improvement purposes, measures related to specific clinical conditions are desirable.4 To date, little has been published on efforts to define a set of process and outcome measures specific to multiple clinical conditions and age groups that would provide a more comprehensive assessment of the ED care that children receive.
Starting with the RAND–University of California, Los Angeles work in the 1980s, there has been a long history of using structured panel processes to develop measures to assess quality of care.7–9 These methods are explicitly designed to incorporate both research evidence and input from providers. Recently, we used a modified version of these methods to define a set of quality-of-care indicators for overall ED care that have been used in public reports and to guide quality improvement efforts.10,11 The objective of this article is to draw on these methods to (1) define a set of common conditions and important outcomes by age group to assess pediatric ED care; (2) identify links between processes of care and outcomes for each of these conditions; (3) define an explicit set of process and outcome indicators for these conditions; and (4) determine the extent to which these indicators can be measured using an existing population-based administrative data set in Ontario, Canada.
We used a multiphase research design, modeled on our earlier ED indicator work,10 to develop, define, and test measures specific to ED care for children (0–19 years of age). The phases of this design, including an expert panel to define relevant clinical conditions and outcomes for indicator development, a modified Delphi survey process informed by a detailed literature review to define and rate potential indicators, and statistical model development for feasibility testing using administrative data, are described in detail below. Ethics approval for this study was obtained from the research ethics board of Sunnybrook and Women's College Health Sciences Centre.
Phase 1: Identification of Clinical Conditions for Indicator Development
We convened a 9-member expert advisory panel to identify the clinical conditions for indicator development. We identified panel members by writing to the chief executive officers of all Ontario acute care hospitals asking for potential candidates. The panel was chosen to ensure both geographic and practice diversity. It consisted of 2 chiefs of emergency medicine from tertiary care hospitals (1 pediatric and 1 general), 1 chief of emergency medicine from a large community hospital serving rural Northern Ontario, 2 general ED physicians (1 from a tertiary care ED and 1 from a small rural ED), 2 administrative directors of emergency care, an ED nurse educator, and a community-based pediatrician who consults to a rural ED.
The panel met for 1 day. As the basis for discussion, we presented data from the National Ambulatory Care Reporting System (NACRS), a population-based data set of all ED visits in Ontario, abstracted by all hospitals and collected by the Canadian Institute for Health Information. We summarized the top 50 conditions for which children 0 to 19 years of age sought ED care, by age group and triage category, from April 1, 2003, to March 31, 2004. Following nominal group techniques,12 the panelists were each asked to recommend 3 conditions on the basis of the following criteria: (1) are common, (2) are treated in most EDs, (3) encompass a range of patient acuity, and (4) have evidence for best practices to improve outcomes or enhance clinical efficiency. The panelists then rated and agreed on a final list of conditions for inclusion and made recommendations on the age groups to be assessed for each condition. We also presented the 8 outcomes used previously for indicator development: mortality; morbidity; ED length of stay; inappropriate admissions; unplanned return ED visits; unplanned primary care visits; use of diagnostic tests and imaging equipment; and use of ED personnel.10 The panel members were unanimous that these were the most appropriate outcomes for consideration.
Phase 2: Identification of Process-Outcome Links for Each Clinical Condition
We developed a questionnaire for panelists to assess which of the 8 outcomes were linked with clinical processes of ED care for each clinical condition. We slightly changed the composition of our original expert panel. One of the administrators for a tertiary care pediatric center was replaced (at his request) with the chief of that ED. We also added the medical chief and nurse manager of a large urban ED and a physician trained in pediatric emergency medicine from a large general ED, for a panel of 12. In this mailed questionnaire, the panelists were asked to evaluate specifically whether best-practice evidence and guidelines, if implemented, could improve each of the 8 outcomes by condition. The panelists rated each process-outcome link using a 9-point Likert-type scale that ranged from “strongly disagree” (1) to “strongly agree” (9). According to a predetermined decision rule, >70% of the panelists had to rate the link in the high “agreement” tertile of scores (7, 8, or 9) for the pair to be selected for further inclusion and development. Panelists were also invited to delete any conditions they felt should not be included or to add any they thought appropriate to consider in future phases of this project. Two of the participants (a nurse manager and physician from a rural ED) requested to respond together to the questionnaire, for a total of 11 responses.
Phase 3: Articulation of Indicators
The purpose of the next step was to identify potential measures, or indicators, of the quality of care related to the specific process-outcome links rated highly by the panel. This was done by performing a systematic review of the literature to identify studies that described links between clinical processes of care and outcomes or guidelines for the management of the conditions identified in Phase 2. We searched Ovid Medline, Ovid Embase, PubMed, the Cochrane database, and DARE for systematic reviews; the National Guideline Clearinghouse and the Canadian Medical Association Infobase (for Canadian guidelines); the National Electronic Library for Health; TRIP (a specialty search engine for systematic reviews, guidelines, and critical appraisals); BestBets; and the Web sites of the Canadian Association of Emergency Physicians, the Canadian Paediatric Society, and the American Academy of Pediatrics. Two of the investigators (A.G. and A.R.) reviewed all of the articles and guidelines. We did not explicitly rate the level of evidence for each clinical condition but included only studies that both investigators deemed to have sufficient descriptive information to ensure generalizability, adequate consideration of confounding factors, and explicit standards for measuring outcomes. We did not perform any quantitative analysis of inter-rater reliability. We included only the most recent clinical guidelines and focused on those from North American sources, such as the Canadian Association of Emergency Physicians, the Canadian Paediatric Society, and the American Academy of Pediatrics, because these are most commonly used in Canada. For topics covered by a Cochrane or DARE review (such as the use of systemic corticosteroids in acute asthma),13 we did not review earlier articles on the same subject. A full bibliography of included sources is available from the authors.
From the chosen sources, one author (A.G.) articulated process indicators based on the processes of care shown to have an impact on the previously rated outcomes for each clinical condition. For conditions for which diagnostic testing was an outcome of interest, indicators were chosen to represent both appropriate and inappropriate testing. The outcome indicators articulated reflected the outcomes chosen by the panel for each condition. The literature was used either to specify details, such as the timing of return visits, or to combine a process with an outcome indicator (such as the proportion of children admitted to a hospital for croup without having been treated with corticosteroids in the ED).
Phase 4: Rating of Indicators
With a second mailed questionnaire, we asked our panelists to rate the identified indicators for the specific conditions according to the following 3 criteria: (1) validity (sufficient scientific evidence to support a link between the performance of the indicator and overall positive outcomes to patients); (2) relevance (the extent to which specific indicators [ie, death in the ED versus death within 30 days] could serve as a useful measure of each outcome [mortality]); and (3) opportunity for improvement (whether the processes or outcomes measured are within the control of an ED or hospital). Again we used a 9-point Likert-type scale ranging from strongly disagree (1) to strongly agree (9) for this questionnaire. We had a predetermined decision rule of >70% of panelists strongly agreeing (rating of 7, 8, or 9) as consensus for inclusion, >70% strongly disagreeing (rating of 1, 2, or 3) as consensus for noninclusion, and all others as indeterminate.
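The three-way decision rule above can be sketched in code. This is an illustrative implementation only, not software used in the study, and the example ratings are hypothetical:

```python
# Hypothetical sketch of the panel decision rule: an indicator reaches
# consensus for inclusion if >70% of panelists rate it in the top tertile
# (7, 8, or 9), consensus for noninclusion if >70% rate it in the bottom
# tertile (1, 2, or 3), and is otherwise indeterminate.

def classify_indicator(ratings, threshold=0.70):
    """Classify a list of 9-point Likert ratings from the panel."""
    n = len(ratings)
    high = sum(1 for r in ratings if 7 <= r <= 9) / n
    low = sum(1 for r in ratings if 1 <= r <= 3) / n
    if high > threshold:
        return "include"
    if low > threshold:
        return "exclude"
    return "indeterminate"

# Example: 9 of 11 panelists (82%) rate the indicator 7-9
print(classify_indicator([9, 8, 8, 7, 9, 7, 8, 9, 7, 5, 4]))  # prints "include"
```

Indeterminate indicators were not discarded at this stage; as described below, they were discussed at the face-to-face meeting and rerated.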
Using modified Delphi techniques,12,14 the expert advisory panel met face-to-face on completion of the second questionnaire. At this meeting, we reviewed anonymized ratings of all of the indicators. We had copies of all of the reviewed literature available for reference during the meeting. The main goals of this panel meeting were to refine any indicators that had been rated highly enough for inclusion, to allow discussion of the indeterminate indicators, and to assess whether any of the indicators would require risk adjustment. Finally, we asked panel members to anonymously rerate all of the indicators on the basis of the discussion. We used the same 70% rule of strong agreement to generate the final list of indicators.
Phase 5: Feasibility Testing
We chose from the final list of indicators those that we could accurately operationalize and measure from our NACRS data set. We created a standard profile that included an operational definition, numerator and denominator, specific data elements required for measurement, and data sources. This template has been established by accreditation organizations and facilitates consistency in communication and measurement of indicators for accountability and quality improvement.15
We used 1 year of data (April 1, 2003, to March 31, 2004) and calculated the rates of each of the indicators and the descriptive statistics (mean, median, and interquartile range) for the 174 EDs in Ontario. We excluded those indicators for which more than one quarter of Ontario EDs had <5 cases in 1 year.
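The feasibility screen and descriptive statistics can be sketched as follows. This is a simplified illustration under stated assumptions (per-ED numerator and denominator counts already extracted from NACRS; ED identifiers and counts are hypothetical), not the analysis code used in the study:

```python
# Illustrative sketch of the feasibility screen: an indicator is dropped if
# more than one quarter of EDs have <5 eligible cases in the year; otherwise
# per-ED rates (per 100 visits) are summarized with mean, median, and IQR.
from statistics import mean, median, quantiles

def summarize_indicator(cases_by_ed, events_by_ed, min_cases=5, max_small=0.25):
    """cases_by_ed / events_by_ed map an ED id to its denominator / numerator."""
    small = sum(1 for n in cases_by_ed.values() if n < min_cases)
    if small / len(cases_by_ed) > max_small:
        return None  # excluded: too many low-volume EDs for stable rates
    rates = [100.0 * events_by_ed[ed] / cases_by_ed[ed] for ed in cases_by_ed]
    q1, _, q3 = quantiles(rates, n=4)  # quartile cut points
    return {"mean": mean(rates), "median": median(rates), "iqr": (q1, q3)}

# Example: 4 EDs, each with adequate volume and a 10% rate
stats = summarize_indicator({"a": 100, "b": 200, "c": 150, "d": 120},
                            {"a": 10, "b": 20, "c": 15, "d": 12})
print(stats["median"])  # prints 10.0
```

In practice the summary would be run over all 174 Ontario EDs for each candidate indicator, with the `None` result marking the indicators excluded from Table 4.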
Figure 1 summarizes each step in the process of indicator development.
Selection of Clinical Conditions
Table 1 describes the number of pediatric ED visits by age, triage, and disposition, overall and for the 18 clinical conditions rated as important for indicator development at the first expert panel meeting. Overall, there were 1226849 visits (by 747091 of a population of ∼3128000 children in Ontario) in the 1 year analyzed. Almost half of all visits fall into the second lowest triage category, “semiurgent,” representing conditions that, by Canadian guidelines, “would benefit from intervention or reassurance within 1–2 hours.”16 Overall, only 4.4% of visits result in a hospital admission, with another 1.1% transferred to other EDs for additional care. Visits for the 18 clinical conditions chosen by the panel for indicator development total 465365 (38% of all pediatric ED visits). With the exception of urinary tract infections, adolescent mental health problems, intentional poisonings, and pharyngitis, males made the majority of visits across conditions. Table 1 illustrates that the panel chose conditions across a variety of severity levels and grouped a number of diagnoses into clusters, such as adolescent mental health problems (encompassing mood disorders and suicidal ideation), lacerations, trauma, and poisonings. Although the panel chose many common diagnoses, the 2 most frequent ones in children, otitis media and upper respiratory tract infections,11 were not considered suitable candidates for indicator development.
Selection of Condition-Specific Process-Outcome Links
Table 2 describes the number of panelists in strong agreement on each process-outcome link by age and condition. Some links were not rated by 1 or 2 panelists. On the basis of this first questionnaire, 4 conditions (lacerations, pharyngitis, and intentional and unintentional poisonings) were dropped because there was insufficient support for any of their outcomes to warrant further indicator development. From these ratings, 61 of the 200 pairs were selected to proceed to the next phase (shown by footnote “a”).
Clinical Indicator Identification
The second questionnaire presented 146 potential use, process, and outcome indicators across 14 clinical conditions for rating. After completion of this questionnaire, the face-to-face discussion, and a final rerating of these indicators, a final list of 76 indicators for 12 conditions met the predetermined criteria for consensus (representing 23% of all pediatric ED visits). At the panel discussion there was some modification of the wording of a number of the indicators, specification of the time interval for ED revisits by condition, and refinement of age categories. Only 1 indicator was felt to require risk adjustment: the rate of admission for urinary tract infections in children 3 months to 3 years of age. To address this, the indicator was limited to children without underlying renal disease. None of the indicators for trauma or cellulitis were on the final list. Table 3 summarizes this final list of indicators.
We assessed the possibility of measuring each indicator with the 2003–2004 data available from NACRS. Of the indicators that we could calculate, 5 were excluded because more than one quarter of Ontario hospitals had <5 cases, leaving the 19 indicators described in Table 3. For most of the indicators that we could not calculate, our data set did not include the actual indicator (eg, steroid prescriptions in asthmatics), and for others we have not yet validated the accuracy of the data in NACRS (such as urine and blood cultures). For the indicators related to minor head injury, the panel felt that we could not reliably define that cohort from administrative data, because the associated literature17,18 defines these children by clinical history. Apart from the fever cohort, which was defined by chief complaint, all of the clinical cohorts were enumerated from the main discharge diagnosis (the International Classification of Diseases, Tenth Revision, Canada, codes used to define these cohorts are available on request from the authors).
The descriptive statistics for the clinical indicators are presented in Table 4. These indicators represented 20% of all pediatric ED visits. All of the rates of unplanned return visits (for asthma, croup, gastroenteritis, fever, diabetes, and urinary tract infection) are fairly low (the largest range among hospitals was 0–7.0 per 100 for diabetes), although there is variation among hospitals. The rates of indicators relating to the use of diagnostic imaging (for ankle injury, asthma, bronchiolitis, and croup) were higher, with larger ranges across hospitals (eg, 9.3–30.9 for asthma; 20.0–64.0 for bronchiolitis).
We report the methodology and results of the development of a set of condition-specific indicators of the quality of ED care for children. We were guided by 2 expert panels and by data on current ED use to define both common conditions and process-outcome links. By combining extensive literature searches with expert panel review, we arrived at a large number of indicators (76) for 12 of these conditions across a number of age groups, representing a high proportion of all pediatric ED visits in Ontario. In keeping with the framework suggested by the Emergency Medical Services for Children Managed Care Task Force,4 we used a structured, reproducible process that relied on both scientific evidence and input from multiple provider types across a number of ED settings, assessed a range of clinical conditions, age groups, and patient acuity, and focused on both processes and outcomes. Our measures cover a number of conditions that were highly rated by another Emergency Medical Services Task Force expert panel as important research and quality management priorities for pediatric ED care.1 Many of our indicators can be measured with existing data, and we operationalized those measures using standard definitions to ensure reproducibility in jurisdictions with similar data sets. Although our current population-based data sets do not allow us to calculate all of the proposed indicators, including arguably some of the most useful process measures, we encourage other institutions or jurisdictions whose data sets include these elements to test the feasibility of these indicators in their populations. The indicators not currently measurable will become a focus of development for our research group.
Although the panel produced 76 indicators for 12 conditions, we foresee that only a few might be used at any one time for public reporting. A small number for each condition may be enough to flag institutions that are lagging behind peers and to identify the clinical conditions in need of review. However, a larger list allows both a more comprehensive review and a strategy of rotating measures in reports over time to discourage institutions from focusing on only a select number of conditions to the detriment of others. For quality improvement initiatives, a number of indicators for each condition may be useful. For example, if the asthma return-visit rate of an institution is found to be higher than that of its peers, a quality management initiative could use a number of the process indicators (use of steroids and written medication management plans) to focus on aspects of care that could guide improvement. We have reported mean and median values for all of the hospitals in Ontario, which other hospitals can use to compare their own performance. For subsequent reporting in the Ontario Hospital Report Card, we will initially report rates by peer group to allow tertiary care pediatric hospitals to compare their rates with each other and community hospitals of different sizes to compare with their peers.
Some of the proposed measures can be useful without benchmarking against other institutions. Indicators for which the direction and ideal rate are clear (the rate of steroid use in asthma should be close to 100%; the rate of skull radiographs in children >2 years with minor head injury, close to 0) are immediately interpretable. Others for which only the direction is clear (such as return visits) may still be more interpretable in comparison with other institutions or over time at 1 institution. Many of the measures, however, will require work to establish benchmarks. For some, this may require suites of measures, such as pairing the ankle radiograph rate with the rate of negative radiographs, as proposed by our panel, and a comparison with rates published in the validation of the Ottawa ankle rules.19–21 For others, defining the “right” rate will be complex and will likely start with a comparison with other institutions. For conditions such as bronchiolitis and asthma, some patients require radiograph evaluation, but institutions with rates higher than their peers would clearly need to assess the reasons for the difference. Our expert panel members all expressed interest in keeping these indicators despite the difficulties of defining a right rate. All of the indicators chosen were ones that they felt would be useful to benchmark against peers and would allow them to assess whether particular areas of their EDs required quality improvement.
Although the epidemiology of ED use by children in Ontario is similar to that in the United States,11,22 it may vary across other jurisdictions. Furthermore, although expert panel processes can be an invaluable source of data to complement available medical evidence, as well as to focus the interpretation of such evidence, the composition of a panel may have an important impact on the results of such a process.23–25 For our purposes, which will eventually include public reporting of these measures at the institution level, our panelists were chosen to represent important stakeholders in the results. Other jurisdictions wanting to initiate similar public reporting may need to embark on a similar process or use these measures as a starting point with relevant stakeholders.
As part of the consensus process, the 1-day final panel meeting, although more costly than the more typical mailed questionnaire-based consensus process, proved invaluable. A number of important themes emerged on which consensus of opinion evolved. The first was that, although mortality was initially rated as an important outcome for 5 clinical conditions (asthma, gastroenteritis, diabetes, trauma, and fever), it fell into the indeterminate category after the panel discussion and rerating. The panel articulated that, although important, and potentially the result of poor care in the ED, mortality was too rare an outcome to be a valid or feasible measure. The panel also acknowledged the difficulties in defining measures for use across different hospital settings. For instance, although trauma was initially on the list of conditions, with a number of outcomes rated as important, the panel ultimately decided that the issues in trauma management were so different for smaller hospitals than for larger ones, many of which are designated trauma centers, that developing indicators relevant across institutions was not feasible. Some of the refinement of other indicators also focused on making them applicable to all types of EDs. For example, the time-to-phototherapy indicator for neonatal jaundice was originally worded as time to phototherapy in the ED. The panelists from smaller hospitals made it clear that, in such institutions, phototherapy is conducted only on the pediatric inpatient units or nurseries. The panelists also helped refine some of the definitions of conditions. For example, although intentional poisonings were not on the final list, the panelists recommended including them with mood disorders as part of the adolescent mental health category.
Finally, the discussion led to recommendations that a similar panel process be initiated that concentrated on particular populations seen in tertiary care pediatric institutions needing that level of care, such as major trauma, sickle cell disease, and fever with neutropenia.
In this multiphase research design, we relied twice on expert panel meetings. Although we used rigorous consensus methods, with anonymous ratings and a panel chair who followed a strict process for managing the discussion, it is possible that members were unduly influenced by other panelists who were particularly forceful in their opinions or who had more experience or training in pediatric emergency medicine. The panel chair was attentive to that possibility and ensured that all of the panel members had an opportunity to express themselves. We also tried to ensure a diversity of opinion by choosing panelists from a number of different settings; however, as with all panels, the results depend on the composition of the panel. Although we used a systematic approach to review the literature, it is possible that we missed evidence that could have informed our choices of indicators. As new literature appears about the management of these conditions, these indicators may need to be refined. Finally, we used the NACRS data set to inform the panel about common conditions, and panelists and investigators were aware of the available data elements, which may have influenced the articulation or final rating of the indicators. We expressly did not want the panel to be limited by the data currently available in NACRS, and, in fact, most of the indicators are not currently measurable using NACRS.
Improving the quality of ED care for children requires tools to measure that care. This study represents the beginning of a systematic approach to defining meaningful measures of important processes and outcomes of care across a number of clinical conditions, age groups, and severities of illness. We have focused our indicator development on measures applicable to a wide range of EDs that care for children. Additional work will focus on developing benchmarks, improving data sources, updating the indicators as evidence changes, and continuing to develop indicators for children with special care needs and for care provided in tertiary centers. We hope that this research will provide health care providers, managers, and researchers with tools to measure performance and guide quality improvement, with the ultimate goal of improving the care that children receive in EDs.
This project was funded by the Ontario Hospital Report Research Collaborative. Dr Guttmann is funded by a Canadian Institute for Health Research Phase 2 Senior Research Fellowship. Dr Anderson holds the Chair in Health Management Strategies in the Faculty of Medicine, University of Toronto.
We thank the members of the expert panel for their time and contribution to the process of the development of these indicators.
- Accepted February 6, 2006.
- Address correspondence to Astrid Guttmann, MDCM, MSc, Institute for Clinical Evaluative Sciences, G Wing, Sunnybrook and Women's College Health Sciences Centre, 2075 Bayview Ave, Toronto, Ontario, Canada M4N 3M5. E-mail:
The authors have indicated they have no financial relationships relevant to this article to disclose.
- Flores G, Lee M, Bauchner H, Kastner B. Pediatricians' attitudes, beliefs, and practices regarding clinical practice guidelines: a national survey. Pediatrics. 2000;105:496–501
- Razzaq A, Lindsay P, Guttmann A, Schull M, Anderson G. Clinical Utilization and Outcomes, Hospital Report: Emergency Department. Hospital Report Research Collaborative ICES Executive Report. Toronto, Ontario, Canada: Institute for Clinical Evaluative Sciences; 2005
- Campbell SM, Braspenning J, Hutchinson A, Marshall MN. Research methods used in developing and applying quality indicators in primary care. BMJ. 2003;326:816–819
- Rowe BH, Spooner C, Ducharme FM, Bretzlaff JA, Bota GW. Early emergency department treatment of acute asthma with systemic corticosteroids [Cochrane review]. In: The Cochrane Library. Oxford, United Kingdom: Update Software; 2003
- Jones J, Hunter D. Consensus methods for medical and health services research. BMJ. 1995;311:376–380
- Canadian Council on Health Services Accreditation. A Guide for the Development and Use of Performance Indicators. Ottawa, Ontario, Canada: Canadian Council on Health Services Accreditation; 1996
- Canadian Association of Emergency Physicians. The Canadian Triage and Acuity Scale (CTAS) for Emergency Departments. Available at: www.caep.ca/002.policies/002-02.ctas.htm. Accessed May 12, 2006
- Schutzman SA, Barnes P, Duhaime AC, et al. Evaluation and management of children younger than two years old with apparently minor head trauma: proposed guidelines. Pediatrics. 2001;107:983–993
- Committee on Quality Improvement, American Academy of Pediatrics; Commission on Clinical Policies and Research, American Academy of Family Physicians. The management of minor closed head injury in children. Pediatrics. 1999;104:1407–1415
- McCaig LF, Burt CW. National Hospital Ambulatory Medical Care Survey: 2003 Emergency Department Summary. Advance Data From Vital and Health Statistics. No. 358. Hyattsville, MD: National Center for Health Statistics; 2005
- Copyright © 2006 by the American Academy of Pediatrics