BACKGROUND AND OBJECTIVE: Pediatrics has embraced technology-enhanced simulation (TES) as an educational modality, but its effectiveness for pediatric education remains unclear. The objective of this study was to describe the characteristics and evaluate the effectiveness of TES for pediatric education.
METHODS: This review adhered to PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) standards. A systematic search of Medline, Embase, CINAHL, ERIC, Web of Science, Scopus, key journals, and previous review bibliographies through May 2011 and an updated Medline search through October 2013 were conducted. Original research articles in any language evaluating the use of TES for educating health care providers at any stage, in which the content focused solely on patients 18 years or younger, were selected. Reviewers working in duplicate abstracted information on learners, clinical topic, instructional design, study quality, and outcomes. We coded skills (simulated setting) separately for time and nontime measures and similarly classified patient care behaviors and patient effects.
RESULTS: We identified 57 studies (3666 learners) using TES to teach pediatrics. Effect sizes (ESs) were pooled by using a random-effects model. Among studies comparing TES with no intervention, pooled ESs were large for outcomes of knowledge, nontime skills (eg, performance in simulated setting), behaviors with patients, and time to task completion (ES = 0.80–1.91). Studies comparing the use of high versus low physical realism simulators showed small to moderate effects favoring high physical realism (ES = 0.31–0.70).
CONCLUSIONS: TES for pediatric education is associated with large ESs in comparison with no intervention. Future research should include comparative studies that identify optimal instructional methods and incorporate pediatric-specific issues into educational interventions.
- CI — 95% confidence interval
- ES — effect size
- MERSQI — Medical Education Research Study Quality Instrument
- NRP — neonatal resuscitation program
- TES — technology-enhanced simulation
With changes in both training and practice environments, simulation has emerged as an integral tool in health professions education. Simulation permits mastery learning through deliberate practice1 of high-risk and/or low-frequency events or procedures without compromising patient safety.2 In particular, technology-enhanced simulation (TES), defined as an educational tool or device with which the learner physically interacts to mimic an aspect of clinical care,3 has been the subject of much research. TES may include computer-based virtual reality simulators, high-fidelity and static mannequins, plastic models, live animals, inert animal products, and human cadavers. Pediatrics has embraced simulation as a learning modality, starting with the call for a paradigm shift in medical education,4 with subsequent integration of TES into pediatric residency and fellowship training programs.5–8
The effectiveness of TES as an educational intervention has been measured with various outcomes, spanning Kirkpatrick’s9 4 levels of learning evaluation: reactions (eg, what participants thought and felt about the training), learning (eg, increased knowledge/skills), behavior (eg, performance in clinical setting), and results (eg, impact on real patients). A recent review and meta-analysis of TES revealed that simulation has large beneficial effects on knowledge, skills, and behavior and moderate beneficial effects on patient outcomes when compared with no intervention.3 When compared with nonsimulation instruction, TES was found to have a small to moderate positive impact on outcomes.3 However, this review did not isolate simulation for pediatric education. Although several narrative reviews have described the use of simulation in pediatrics,10–14 these reviews had limitations, including a lack of a systematic search for articles, an analysis of the quality of existing research, or a quantitative synthesis of outcomes. Furthermore, we cannot assume that the results from systematic reviews of TES in other fields3,15 can be extrapolated to the pediatric population. Pediatric patients vary from adults in size, physiology, and diseases treated; and the pediatric work environment and demographic characteristics of pediatric health care providers often differ from those of adult medicine. These differences could, in turn, influence the optimal methods of educating pediatric health care providers. A comprehensive review and synthesis of existing evidence in TES for pediatric education would provide educators guidance regarding the present uses of this instructional approach and identify gaps in the field that warrant attention in future research. 
The goal of this review was to describe the characteristics of pediatric-related TES studies and quantitatively evaluate the effectiveness of TES as an educational modality used for educating health professionals caring for neonates, children, and adolescents.
This review was planned, conducted, and reported in adherence with PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) standards of quality for reporting meta-analyses.16 Additional details on study design and execution have been published previously.3
We sought to answer 3 questions:
What are the characteristics of pediatric studies involving TES?
What is the effectiveness of TES for teaching pediatric content to health care professionals in comparison with no intervention or with other nonsimulation instruction?
What instructional design features are associated with improved learning outcomes for pediatric TES education?
We included comparative studies published in any language that (1) investigated the use of TES for training health care providers at any stage of training or practice, including physicians, nurses, paramedics, respiratory therapists, and emergency medical technicians and (2) focused solely on education for the care of the pediatric patient (≤18 years of age). Studies with adult educational content were not included. We included single-group pretest-posttest, 2-group nonrandomized, and randomized studies. All studies making comparison with no intervention (eg, control arm), an alternate simulation modality, or a nonsimulation instructional modality were included.
We searched Medline, Embase, CINAHL, PsycINFO, ERIC, Web of Science, and Scopus, with the last search date of May 11, 2011. This search sought studies of TES broadly and included terms such as “simulation,” “computer simulation,” “training,” “skill,” “mannequin” or “manikin,” and “assessment” among others; this strategy has been published in full.3 We also reviewed all articles published in 2 journals devoted to health professions simulation (Simulation in Healthcare and Clinical Simulation in Nursing) and the references of several key review articles on pediatric simulation.10–14
Study selection proceeded in 3 stages. First, to identify studies of simulation-based training broadly, 2 independent reviewers screened the titles and abstracts of all potentially eligible studies for inclusion, with chance-adjusted interrater agreement (intraclass correlation coefficient) of 0.69. Second, to identify studies focused on pediatric education, 3 reviewers screened all articles by using loose criteria (erring on the side of inclusion). Finally, from this subset, 2 reviewers identified articles meeting criteria for inclusion in the present review.
We extracted information from each study independently and in duplicate, with all conflicts resolved by consensus. We abstracted information on the clinical topic, pediatric age subset (ie, content related to the following age groups: neonatal = <1 month, pediatric = 1 month to <13 years, adolescent = 13 years to 18 years), training level of learners, instructional design features of simulation training (eg, mastery learning, distributed practice, and multiple learning strategies), study design, method of group assignment, outcomes type, and methodologic quality.3 Methodologic quality was graded by using the Medical Education Research Study Quality Instrument (MERSQI)17 and an adaptation of the Newcastle-Ottawa scale18 for cohort studies. Data were abstracted separately for learner reactions (satisfaction) and for learning outcomes of knowledge, skills, behaviors with patients, and direct effects on patients. Skills outcomes were further classified as time (time to complete the task) and nontime (process [eg, performance rating in a simulated setting] or product [eg, successful task completion or major errors]), whereas behaviors with real patients were similarly classified as time (to complete task) and nontime (process; eg, performance rating in real patient setting).
We conducted both quantitative and qualitative syntheses of the evidence. A standardized mean difference (Hedges’ g effect size [ES]) was calculated for each comparison by using standard techniques.19–21 We used random-effects meta-analysis to quantitatively pool these results, organized by comparison (ie, comparison with no intervention, with another form of instruction, or between high and low physical realism) and outcome. For studies making a comparison with no intervention we conducted subgroup analysis on the basis of key study design and instructional design features. We also performed sensitivity analyses excluding the results of studies with imprecise ES calculations.3 We did not perform subgroup analyses for studies with active comparison due to few eligible articles per analysis. For studies comparing different methods of TES, we identified the theme of high versus low physical realism and pooled the results of related studies by using meta-analysis. For the purposes of this study, we defined physical realism as the physical properties of the simulation mannequin and accoutrements.22 High physical realism simulators are those that provide physical findings, display vital signs, physiologically respond to interventions (via computer interface), and allow for procedures to be performed on them (eg, bag mask ventilation, intubation, intravenous insertion), whereas low physical realism simulators are static mannequins that are otherwise limited in these capabilities. Studies that could not be combined in a quantitative synthesis of results were analyzed and summarized in a narrative synthesis.
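As an illustration of the standardized mean difference used throughout this review, the following sketch computes Hedges' g and its sampling variance from the summary statistics of two groups. The data and function name are hypothetical, and this is not the authors' analysis code (the original analyses were run in SAS); it is a minimal rendering of the standard formula.

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference (Hedges' g) between two groups.

    Positive values favor group 1 (e.g., the simulation arm).
    Returns the effect size and its approximate sampling variance.
    """
    df = n1 + n2 - 2
    # Pooled standard deviation across the two groups
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    d = (m1 - m2) / sd_pooled          # Cohen's d
    j = 1 - 3 / (4 * df - 1)           # small-sample bias correction
    g = j * d
    # Approximate variance of g, used as the meta-analytic weight
    var_g = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
    return g, var_g

# Hypothetical post-test scores: simulation arm vs. no-intervention arm
g, var_g = hedges_g(82.0, 10.0, 30, 70.0, 12.0, 30)
```

By Cohen's classification, the resulting g of roughly 1.07 would be interpreted as a large effect.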
We quantified between-study inconsistency for analyses of ≥3 studies by using the I2 statistic,21 which estimates the percentage of variability not due to chance. I2 values >50% indicate large inconsistency or heterogeneity. We used SAS 9.3 (SAS Institute, Cary, NC) for all analyses. Statistical significance was defined by a 2-sided α of 0.05. Interpretations of clinical significance emphasized confidence intervals (CIs) in relation to Cohen’s ES classifications (>0.8 = large, 0.5–0.8 = moderate, 0.2–0.5 = small, and <0.2 = negligible).23
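The random-effects pooling and I² statistic described above can likewise be sketched in code. This is a hedged illustration using the DerSimonian-Laird estimator of between-study variance (a common choice; the review does not name its estimator, and its actual computations were performed in SAS 9.3) with hypothetical per-study effect sizes.

```python
def pool_random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling with I² heterogeneity.

    effects/variances: per-study Hedges' g values and their variances.
    Returns (pooled_es, tau2, i2_percent).
    """
    k = len(effects)
    w = [1 / v for v in variances]                       # fixed-effect weights
    sw, sw2 = sum(w), sum(x * x for x in w)
    fixed = sum(wi * gi for wi, gi in zip(w, effects)) / sw
    # Cochran's Q: weighted squared deviations from the fixed-effect mean
    q = sum(wi * (gi - fixed) ** 2 for wi, gi in zip(w, effects))
    df = k - 1
    tau2 = max(0.0, (q - df) / (sw - sw2 / sw))          # between-study variance
    # I²: percentage of variability not attributable to chance
    i2 = max(0.0, (q - df) / q * 100) if q > 0 else 0.0
    w_star = [1 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * gi for wi, gi in zip(w_star, effects)) / sum(w_star)
    return pooled, tau2, i2

# Hypothetical per-study effect sizes (Hedges' g) and their variances
pooled, tau2, i2 = pool_random_effects([0.9, 1.3, 1.6], [0.04, 0.05, 0.06])
```

In this invented example the studies agree in direction but differ in magnitude, so I² exceeds the 50% threshold for large inconsistency, mirroring the pattern reported in the Results.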
We conducted an updated search from May 1, 2011, to October 1, 2013, of Medline only, with the following search terms: ("pediatr*" OR "neonat*") AND "simulat*" AND ("educat*" OR "teach*" OR "learn*"). Two reviewers screened for inclusion the titles, abstracts, and full articles (when necessary) of all potentially eligible studies, with chance-adjusted interrater agreement (intraclass correlation coefficient) of 0.77. The reviewers then extracted information on study design, clinical topic, pediatric age, training level of learners, outcomes type, and key results. We included in our meta-analytic or narrative synthesis all newly identified studies that made comparisons with other simulation. All newly identified studies (including studies making comparisons with no intervention) are summarized in Supplemental Table 3.
As detailed in Fig 1, we identified 53 studies from our initial search that met our inclusion criteria. Thirty-nine studies compared TES with no intervention, 2 made comparisons with another instructional modality, and 14 made comparisons with another form of TES (2 studies reported 2 comparisons). Supplemental Table 2 provides detailed information on each study.
A total of 292 candidate articles were identified from our updated search in October 2013, of which 59 studies met inclusion criteria. Of these, 55 made comparisons with no intervention and 4 made comparisons with alternate simulation (see Supplemental Table 3 and the text below).
Fifty-seven studies (N = 3666 learners) in total were included in the meta-analysis: 53 studies from the initial search and 4 studies from the updated search (ie, those making comparisons with alternate simulation). Table 1 summarizes the key features of the studies included in our meta-analysis.
The learners in the 57 studies included postgraduate physician trainees (n = 1641), physicians in practice (n = 562), nurses or nursing students (n = 447), and medical students (n = 235). Forty-two (74%) studies included pediatric content and 17 (30%) included neonatal content in the educational intervention. Twenty-eight (49%) studies focused on resuscitation training, 10 (18%) on communication and teamwork skills, and 8 (14%) on various types of procedural skills training (eg, venous access, endoscopy). The 57 studies reported 76 distinct outcomes, including 35 nontime skills, 17 knowledge outcomes, 12 time skills/behaviors, and 9 outcomes in the context of actual patient care (Table 1).
The methodologic quality of the included studies is summarized in Supplemental Table 4. The mean (SD) Newcastle-Ottawa scale (maximum 6 points) and MERSQI (maximum 18 points) study quality scores were 2.6 (1.8) and 11.8 (2.0), respectively.
Synthesis: Comparison of Simulation Versus No Intervention
In summary, simulation training was associated with large pooled ESs for all but patient effects, but with substantial inconsistency from study to study (I2 range = 65%–98%). We discuss these results by outcomes below.
Figure 2 summarizes the meta-analysis comparing TES with no intervention for outcomes of time and nontime skills. Twenty-seven studies (N = 1027 participants) reported nontime skill outcomes, with a pooled ES of 1.23 (95% CI: 0.95–1.51; P < .001). We found large inconsistency between studies (I2 = 87%), but all of the individual ESs favored the simulation intervention, indicating that studies varied in the magnitude but not the direction of benefit. Subgroup analysis exploring this inconsistency found that studies with a blinded outcome assessment had a lower pooled ES (ES = 0.69) than did unblinded assessments (ES = 1.51; P < .01), whereas other study quality features (randomization and overall quality scores) showed no statistically significant interactions (Fig 3). Other subgroup analyses (without statistical tests of significance) revealed pooled ESs of approximately the same magnitude across learner groups, pediatric age subsets, and simulated practice setting.
We found 5 studies reporting time in a simulation setting (time skills) and 1 reporting time in actual patient care (time behaviors). Pooling these time outcomes together revealed an ES of 1.24 (95% CI: 0.36–2.12; P < .01) (see Fig 2).
Learner Behavior Outcomes (Nontime Behavior and Patient Effects)
Among studies reporting outcomes with real patients, we found 5 nontime behaviors (pooled ES: 0.80; 95% CI: 0.08–1.53; P = .03) and 4 patient effects (pooled ES: 0.32; 95% CI: −0.16 to 0.80; P = .20) (see Fig 2). Thirteen studies assessed knowledge outcomes (see Fig 2 for details).
Synthesis: Comparison of Simulation Versus Other Forms of Instruction
Two studies (N = 101 participants) compared TES with nonsimulation instructional modalities. A meta-analysis was not conducted because of the few studies. One randomized study24 compared a computerized training simulator with an instructional video for neonatal resuscitation booster training, with results showing a small effect favoring simulator training for nontime skill outcomes (N = 60; ES = 0.37) and negligible effect for knowledge (ES = 0.08). A second randomized study25 involving undergraduate nursing students compared a clinical hybrid experience (one-third simulation and two-thirds traditional clinical experience) versus a traditional clinical experience (with no simulation) and found a negligible association for knowledge outcomes favoring those in the traditional group (N = 41; ES = −0.12).
Synthesis: Comparison of Simulation Versus Other Types of TES
Eighteen studies (14 from the original search and 4 from the updated search; N = 1360 participants) compared TES with other types of simulation instructional modalities. When inductively searching for conceptual themes shared across studies, we identified 7 studies comparing high with low physical realism simulation as part of the educational intervention and conducted a meta-analysis pooling the results of these studies.26–32 We narratively discuss the other studies with nonrecurrent themes.33–43
Comparison of High Versus Low Physical Realism Simulation
Seven studies26–32 compared the use of high versus low physical realism simulators for TES as an educational modality (Fig 4). Meta-analysis of these studies revealed pooled effects of small magnitude favoring high physical realism for nontime skills (4 studies; ES = 0.49; P < .001) and learner reactions (3 studies; ES = 0.70; P < .01). Pooled effects likewise favored high physical realism for knowledge (2 studies; ES = 0.31; P = .32) and time skills (2 studies; ES = 0.31; P = .18), but neither finding was statistically significant.
Other Instructional Design Features
Instructor-led versus self-directed training: 2 randomized studies showed that video instruction followed by self-directed practice with a mannequin was more effective than instructor-led training and practice when measuring nontime skills (N = 44, ES = 0.096; N = 36, ES = 0.823).33,34
Expert modeling: a 2-group nonrandomized study comparing expert modeling versus self-directed hands-on practice among experienced transport teams showed that self-directed learning was associated with improved nontime skills (N = 24, ES = 0.36).35 However, another nonrandomized study with less experienced learners (nursing students) showed an association between expert modeling and improved knowledge acquisition and nontime skills (N = 16; ES = 0.907 for knowledge, ES =1.141 for nontime skills).36
Timing of instruction: a 2-group nonrandomized study comparing neonatal resuscitation program (NRP) training conducted over 1 day versus 2 days showed a favorable but non–statistically significant association between 1 day of NRP training and improved knowledge among medical students (N = 50, ES = 0.131).37
Team training: in a randomized study comparing standard NRP training with versus without the addition of team training during lectures and skills stations, interns in the team training group demonstrated improved teamwork skills (N = 32, ES = 1.199).38
Virtual reality training: a 2-group nonrandomized study comparing high-fidelity simulation versus virtual reality for teaching airway endoscopy skills to otolaryngology residents and pediatric surgery fellows showed an association between high-fidelity simulation and enhanced nontime skills (N = 36, ES = 0.543).39
Type of feedback: 1 randomized study compared the type of corrective feedback given to learners during cardiopulmonary resuscitation training. The combination of instructor and automated feedback (from the defibrillator) during training versus instructor feedback only demonstrated large effects favoring combined feedback for nontime skills immediately after the intervention (N = 69, ES = 1.454).40 A follow-up study made a similar comparison and assessed outcomes at 6 months, showing marginal effects in favor of automated feedback with an instructor when compared with instructor only for nontime skills (N = 40, ES = 0.04).41
Debriefing and repetitive practice: only 1 nonrandomized study looked at the impact of debriefing practices by comparing debriefing alone versus debriefing followed by repeated practice during pediatric emergency medicine scenarios. There was a positive association noted between debriefing with repeated practice and higher learner satisfaction (N = 115, ES = 0.377).42
Video-assisted debriefing: 1 randomized study compared the effectiveness of video-assisted debriefing to oral debriefing alone in the context of simulated neonatal resuscitation and found small or negligible effects favoring video-assisted debriefing for time skills (N = 30, ES = 0.12) and nontime skills (ES = 0.27).43
Our results indicate that, in comparison with no intervention, simulation training for pediatrics is associated with effects that are uniformly favorable but somewhat variable in magnitude. Although between-study inconsistency was high, subgroup analyses revealed generally similar effects across studies of different methodologic quality, learner group, and pediatric age. Among the few studies comparing different approaches to simulation education, high physical realism simulators were associated with small to moderate benefits when compared with low physical realism simulators. Among all included studies, only a small fraction included an interprofessional learner group, procedural skills training, or patient care–related outcomes.
Limitations and Strengths
We chose to include studies only if the educational content was pediatric-focused. We considered a less restrictive definition (eg, all studies enrolling pediatric health care providers) but felt that focusing only on pediatric content would allow for stronger inferences regarding simulation-based pediatric education. Our analysis revealed high inconsistency between studies, reflecting variation in instructional design, clinical topics, learner groups, and outcome measures. Many of the included studies had methodologic limitations or failed to clearly describe the context, instructional design, or outcomes; and these deficits limit the strength of our inferences. Only a small fraction of studies measured outcomes on real patients, thus limiting our ability to comment on translation of outcomes from the simulated environment to the real clinical environment.44
Although our meta-analysis of TES with high versus low physical realism revealed favorable effects for high physical realism, we found only 7 studies to include in this analysis. We were unable to draw definitive conclusions related to the optimal instructional design for pediatric simulation education because of a paucity of studies. Strengths of our study include the following: the exhaustive initial literature search; reproducible inclusion criteria encompassing a large range of learners; duplicate, independent review at all stages; rigorous coding of methodologic quality; and both quantitative and narrative synthesis of findings.
Integration With Previous Work
Previously published narrative reviews described the scope and use of simulation in pediatrics10,12 and pediatric subspecialty education11,13,14 but did not systematically characterize the studies or quantitatively synthesize the impact of simulation. As was found in previous meta-analyses of TES for health care professionals3 in general, our results showed large effects for knowledge and skills and small to moderate effects for learner behaviors and patient effects when compared with no intervention. These findings support the notion that, as for nonpediatric studies, simulation education can be used effectively for teaching pediatric content. The quality of research reported in our study parallels that reported in previous systematic reviews of the TES literature.3,15,18,45,46 Previous reviews using the MERSQI have reported average scores ranging from 9.6 to 12.3, compared with a mean MERSQI score of 11.8 in our study.3,15,17,18,45,46
Our finding that high physical realism simulators are weakly associated with improved outcomes when compared with low physical realism simulators differs from the results of other systematic reviews, which suggest that the degree of realism has a negligible impact on learning outcomes.47–50 These results suggest that the relationship between realism and learning is complex and nonlinear. In fact, the importance of realism likely depends on multiple factors, including the specific type or category of realism22 (eg, physical versus conceptual versus emotional), learner type (eg, novice versus experienced), learning objectives (eg, cognitive versus technical), and educational context (eg, assessment versus educational). A recent commentary on realism in simulation-based education suggests that the degree of “functional task alignment,” reflecting the accurate representation of key steps in a simulated task, may have greater impact on educational outcomes than overall realism.51 Realism is only useful to the extent that it fulfills a specific need in the functional design. For example, an enhanced physical feature such as capillary refill may have a beneficial effect for a scenario with medical students, where the key objective is the assessment of a child in septic shock. Alternatively, realistic capillary refill may have minimal impact on learning in a scenario where attending physicians are asked to manage a difficult airway. Future studies should test this hypothesis and explore the relative contributions of physical, conceptual, and emotional realism in different learning environments and with different levels of learners. To the degree that high realism is more expensive, such a refocusing of research and design efforts could lower the cost of simulation-based education.
We identified several implications for the field of pediatric simulation. First, our study indicates that TES is an effective educational modality for pediatrics and supports the continued implementation of TES into pediatric educational programs. In the 2-year period between our initial search and updated search, more studies were published than had been published before May 2011. Although this increase in published research clearly indicates enthusiasm for simulation-based education for pediatrics, it is unfortunate that so many of these new studies make comparisons with no intervention, and hence contribute little to advancing the field. Our efforts to identify clear recommendations for effective simulation instructional design were hampered by the paucity of studies comparing 1 method of TES with another. Future work should include studies comparing different forms of simulation-based education in an effort to identify the instructional design features associated with the greatest improvement in outcomes.
There are several possible explanations for the methodologic limitations noted in many of the studies from this review. These limitations may be related to the following: (1) lack of expertise and collaboration in simulation-based research; (2) lack of forethought when planning educational activities, resulting in simulation-based education that is never reported as a research product; and (3) local barriers to implementation of simulation research. To address these challenges, pediatric groups such as the International Pediatric Simulation Society (www.ipedsim.com) and the International Network for Simulation-based Pediatric Innovation, Research and Education (INSPIRE; www.inspiresim.com) have been established to increase research expertise, promote collaboration, and identify troubleshooting strategies for implementation of simulation-based research. Through the combined expertise of its members, the INSPIRE network is working toward establishing a set of guiding principles for conducting pediatric simulation research. These efforts will help to ensure that future studies are more methodologically sound and more likely to answer questions that will help advance the field of pediatrics.
Simulation has thus far been used predominantly to teach resuscitation and team skills in pediatrics. This important application of TES has been embraced by standardized resuscitation courses such as the Neonatal Resuscitation Program52 and the Pediatric Advanced Life Support53 course. Given the widespread use of simulation for this purpose, future studies should determine the relative importance and optimal use of variously sized pediatric simulators and consider the relative value of full-body simulators with built-in task-training capabilities (eg, chest tube insertion, needle decompression, surgical airway). Because health care providers struggle to maintain resuscitation skills after standardized courses, defining the optimal instructional design (eg, distributed learning versus annual recertification) for the simulation component of these courses becomes a high priority to ensure improved outcomes from critical illness in pediatric patients.54
Several topical gaps were identified in our study. Pediatric health care professionals must be skilled at performing procedures in pediatric patients of all ages, yet there was a paucity of studies examining procedural skills training. Although current clinical training environments for pediatric trainees afford little opportunity in performing certain life-saving procedures, objectives for training still require that these procedures be learned and mastered.55 Simulation researchers should conduct studies to identify the most effective methods of acquiring and retaining skills, while also taking into account how differences in patient age will affect provider confidence, complication rates, and success rates. Similarly, only a tiny fraction of studies involved an interprofessional group of participants. Identifying the ideal strategies for collaborative learning via simulation-based education will help to meet the needs of these diverse groups and ensure that interprofessional competencies are addressed in a structured manner.56
Last, no studies examined the impact of context (eg, simulation laboratory versus in situ simulation in a real clinical space) on learning outcomes. Exploring the impact of context will be important as more simulation research is being conducted to identify strategies to enhance patient safety.57
TES for pediatrics is associated with large favorable effects in comparison with no intervention. Unfortunately, the current literature does little to help identify the optimal method of delivering simulation-based education for pediatrics because of a paucity of comparative studies. Defining these optimal features for simulation-based education in pediatrics will be essential to ultimately help translate learning into improved provider performance and patient outcomes.
The authors thank Dr Ryan Brydges, Dr Stan Hamstra, Dr Rose Hatala, Dr Jason Szostek, Dr Amy Wang and Ms Patricia Erwin for their efforts in initial study selection and data abstraction.
- Accepted January 29, 2014.
- Address correspondence to Adam Cheng, MD, FRCPC, Department of Pediatrics, Alberta Children’s Hospital, 2888 Shaganappi Trail NW, Calgary, AB, Canada T3B 6A8. E-mail:
Dr Cheng was the principal investigator and conceptualized and designed the protocol, contributed to data collection, drafted the initial manuscript, and reviewed and revised the manuscript; he and Dr Cook had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis; Dr Lang contributed to designing the study protocol and data collection instruments, contributed to data collection, contributed to drafting the initial manuscript, and reviewed and revised the manuscript; Drs Starr and Pusic contributed to designing the study protocol and data collection instruments and reviewed and revised the manuscript; Dr Cook was the senior investigator and conceptualized and designed the protocol, contributed to data collection, carried out the statistical analysis, and reviewed and revised the manuscript; and all authors approved the final manuscript as submitted.
FINANCIAL DISCLOSURE: The authors have indicated they have no financial relationships relevant to this article to disclose.
FUNDING: No external funding.
POTENTIAL CONFLICT OF INTEREST: The authors have indicated they have no potential conflicts of interest to disclose.
- Higgins JPT, Green S. Cochrane Handbook for Systematic Reviews of Interventions 5.1.0. 2011. Available at: www.cochrane.org/resources/handbook/index.htm. Accessed February 28, 2013
- American Heart Association. Pediatric Advanced Life Support Provider Manual. Dallas, TX: American Heart Association; 2011
- ↵Wilhaus J, Palaganas J, Manos J, et al. Interprofessional Education and Healthcare Simulation Symposium. 2012. Available at: http://ssih.org/uploads/static_pages/ipe-final_compressed_1.pdf. Accessed December 1, 2013
- Copyright © 2014 by the American Academy of Pediatrics