Impact of Just-in-Time and Just-in-Place Simulation on Intern Success With Infant Lumbar Puncture
BACKGROUND AND OBJECTIVE: Simulation-based skill trainings are common; however, optimal instructional designs that improve outcomes are not well specified. We explored the impact of just-in-time and just-in-place training (JIPT) on interns’ infant lumbar puncture (LP) success.
METHODS: This prospective study enrolled pediatric and emergency medicine interns from 2009 to 2012 at 34 centers. Two distinct instructional design strategies were compared. Cohort A (2009–2010) completed simulation-based training at commencement of internship, receiving individually coached practice on the LP simulator until achieving a predefined mastery performance standard. Cohort B (2010–2012) had the same training plus JIPT sessions immediately before their first clinical LP. Main outcome was LP success, defined as obtaining fluid with first needle insertion and <1000 red blood cells per high-power field. Process measures included use of analgesia, early stylet removal, and overall attempts.
RESULTS: A total of 436 first infant LPs were analyzed. The LP success rate in cohort A was 35% (13/37), compared with 38% (152/399) in cohort B (95% confidence interval for difference [CI diff], −15% to +18%). Cohort B exhibited greater analgesia use (68% vs 19%; 95% CI diff, 33% to 59%), early stylet removal (69% vs 54%; 95% CI diff, 0% to 32%), and lower mean number of attempts (1.4 ± 0.6 vs 2.1 ± 1.6, P < .01) compared with cohort A.
CONCLUSIONS: Across multiple institutions, intern success rates with infant LP are poor. Despite improving process measures, adding JIPT to training bundles did not improve success rate. More research is needed on optimal instructional design strategies for infant LP.
- clinical skills
- clinical competence and standards
- competency-based education and methods
- educational measurement and methods
- medical education methods
- internship and residency methods
- pediatrics education
- practice (psychology)
- prospective studies
- outcome assessment (health care)
- patient simulation
- spinal puncture
What’s Known on This Subject:
Trainee success rates with infant lumbar puncture are poor. The model of just-in-time learning via simulation has produced clinical improvement for other medical skills such as cardiac compressions and central line dressing changes.
What This Study Adds:
This is the first study to evaluate the impact of just-in-time-and-place simulation-based learning on success with infant lumbar puncture. The intervention improved clinical behaviors associated with success without making a significant impact on success with the procedure.
Infant lumbar puncture (LP) is a required skill for pediatric residents.1 However, recent literature demonstrates that experience with the procedure before residency is minimal.2 Simulation can address this deficit in experience, safely augmenting other methods of learning procedural skills.3–10 However, few multicenter data exist characterizing the impact of simulation training at the patient level, especially in pediatrics.11–15
Our previous single-center study demonstrated that using simulation to train residents to a minimal standard could improve their clinical success with infant LP.16,17 However, similar educational programs tested in other environments have not shown any improvements in clinical LP performance.18,19 Furthermore, when we repeated our intervention in a multicenter setting, we failed to replicate our previous success.11 In the current study, conducted within the International Network of Simulation-Based Pediatric Innovation, Research, and Education (INSPIRE), we aimed to improve and standardize implementation of the intervention across multiple, varied settings.20 This work was accomplished through monthly meetings among site directors to review barriers and facilitators and an annual in-person meeting where feasible enhancements for the coming year were discussed and decided. Based on the literature and our own published qualitative assessments of intern perspectives on learning barriers and facilitators, the group decided that distributing training sessions throughout the year (distributed practice) would be a superior educational strategy.21,22
Distributing learning activities over time (distributed practice) is a key instructional design strategy promoting skill improvement that was not included in our previous trials.23 Simulation models for distributed practice sessions in the clinical setting have demonstrated clinical impact for other pediatric skills such as cardiac compressions and central line dressing changes.24,25 This education strategy can be subdivided into experiences that are not only apportioned across time but also offered “just in time,” or immediately before the clinical task or procedure being trained for, and “just in place,” when the learning experience occurs in the actual workplace. The described intervention occurred both just in place and just in time (JIPT).
Our hypothesis was that the addition of JIPT simulation immediately before infant LP attempts would have a greater impact on interns’ clinical LP success rate than a solitary training session.11
Study Design and Setting
This was a multi-institution educational prospective phased cohort study conducted over the course of 3 consecutive academic years, comparing 2 cohorts of interns at 34 academic medical centers (Table 1). All sites obtained necessary approvals from local institutional review boards, and requisite informed consent was obtained.
We enrolled incoming postgraduate year 1 trainees from pediatric or emergency medicine residency programs at participating study sites. Study sites are members of the INSPIRE network.20 Information about the number of interns enrolled in the study is available in Fig 1.
Intervention and Comparison
The initial baseline cohort (cohort A, 2009–2010) watched a video demonstrating the procedure and received a solitary training session at the start of residency incorporating practice on a simulator until a predefined mastery performance standard was achieved. The intervention cohort (cohort B1, 2010–2011 and cohort B2, 2011–2012) experienced the same training bundle plus the addition of JIPT immediately before clinical opportunities to perform infant LPs.
At intern year matriculation, participants completed an online self-administered 28-item questionnaire that collected baseline information on knowledge, attitudes, and experience with the infant LP procedure. Knowledge was assessed through multiple-choice questions developed and pilot tested on nonstudy senior residents, fellows, and faculty. Attitudes were assessed on a 4-point Likert scale of confidence (Supplemental Fig 1).
Cognitive prelearning is the phase in which a learner studies and acquires knowledge about a procedure. During orientation to residency, study participants viewed a 20-minute compilation of LP videos including procedure videos published by The New England Journal of Medicine and content developed by the study authors (production by Imaginehealth, New York, NY).26,27 The video provided an opportunity to view an expert performing the procedure on the simulator (expert modeling) and covered content including indications, contraindications, complications, necessary equipment, key steps, and pediatric-specific elements of the procedure.
After watching the procedure video, participants attended a hands-on training session at the start of residency. Trained facilitators guided participants in 1-on-1 repetitive practice sessions until participants achieved a predefined level of mastery of the procedure on an infant LP simulator (BabyStap; Laerdal Medical, Stavanger, Norway).11,16,17 Mastery was defined as independent performance of all items on a subcomponent skills checklist (Supplemental Fig 2). Development and validation of the skill checklist have been previously described.11,16,28,29 Sessions ranged from ∼20 to 60 minutes. For cohort B2, this session differed slightly in that facilitator expert modeling of the procedure was added to the start of the session. The number of facilitators varied by site depending on how many interns were enrolled, and no data were collected on the experience level of the trainers. All facilitators participated in a standardized 30-minute “train-the-trainer” session before the study, during which they viewed the video, received a didactic on mastery learning, and practiced completing forms.
Just-in-Time and Just-in-Place Training (Intervention Cohorts B1, 2010–2011, and B2, 2011–2012)
Participants in the intervention cohorts, in addition to partaking in the training bundle already described, also had the opportunity, immediately before each infant LP, for individually coached practice with a simulator to refresh skills before an attempt in the clinical setting. The simulators were made available in all clinical areas where infant LPs are routinely performed (eg, NICU, emergency department, inpatient unit). JIPT was expected to be brief (5–10 minutes) and to occur with clinical supervisors who would be supervising the LP. The supervisors had previously completed a separate training module (prerecorded PowerPoint module) to orient them to the educational framework. Learners received feedback and were encouraged to practice repeatedly until they could complete the procedure independently, without errors or interruptions, implying readiness to perform clinically with supervision. Participants and supervisors completed similar data collection forms after the JIPT and after the clinical infant LP attempt (Supplemental Figs 3 and 4). Actual decisions to perform clinical procedures were based on local standard of care practices and made by clinical supervisors at each site. If either the participant or the supervisor felt uncomfortable after the refresher training, our study guidelines strongly discouraged proceeding with the procedure. In cohort B2, participants who performed an infant LP without JIPT (ie, those who did not follow the study protocol) were still asked to report their results via online self-report. Only the first infant LP of participants was included in the final database for analysis.
Analysis and Sample Size
Our primary outcome was interns’ first-attempt infant LP success rate. A successful LP was conservatively defined as obtaining cerebrospinal fluid (CSF) with <1000 red blood cells per high-power field on the first attempt (without previous attempts by other providers). Because red blood cell count was not always immediately available when forms were completed, this was often a missing data point. We therefore decided a priori that when cell count was not available, the LP would be coded as a success if, and only if, interns described first-attempt CSF as clear. Given the established success rate of 34% in our previous study (cohort A),11 we needed 182 LPs in the intervention cohort to detect a 10% absolute difference in success rates with 80% power at the .05 significance level. Allowing for dropouts and missing data, we planned to enroll ≥250 interns per cohort. When this number was not reached in the first year of enrollment for the intervention cohort (B1), we continued the protocol for a second year (B2). We collected the following secondary outcomes: known modifiable factors linked to success and promoted in our educational bundle (use of early stylet removal technique and use of analgesia) and patient-centered variables (attending supervision, family presence, and overall number of attempts). Potential confounders that were collected include previous experience of participants, patient age, location of the procedure, and holder for the procedure. Outcome assessors were masked, and all participant data were indexed via a confidential study identification number for each participant.
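The paper does not state which formula underlies the 182-LP target. As a hedged illustration, the following stdlib-only Python sketch reproduces a standard two-proportion power calculation (normal approximation with Cohen's arcsine effect size h) for detecting a 10% absolute improvement over the 34% baseline; the function name and method are assumptions, and the slightly different answer it gives simply reflects a different approximation than the authors used.

```python
from statistics import NormalDist
import math

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group sample size for comparing two proportions,
    using Cohen's effect size h and the normal approximation."""
    h = 2 * (math.asin(math.sqrt(p2)) - math.asin(math.sqrt(p1)))  # effect size
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    return math.ceil((z_alpha + z_beta) ** 2 / h ** 2)

# Baseline success 34%; detect a 10% absolute improvement (to 44%):
n = n_per_group(0.34, 0.44)
print(n)  # close to the paper's 182; exact value depends on the method chosen
```

Small discrepancies from the reported 182 are expected, since exact or continuity-corrected methods give slightly different sample sizes than this approximation.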
We analyzed participant characteristics by using descriptive statistics, and we compared the primary outcome across groups by using Fisher’s exact test. We compared knowledge and confidence levels between groups by using an independent 2-sided t test and χ2 test. We analyzed other secondary variables by using either a χ2 test for proportions or an independent 2-sided t test for continuous variables. We carried out an exploratory regression analysis to ensure that the effect of the intervention on LP success was not masked by the influence of covariates. For variables where there was a statistically significant difference between the control group and intervention group, we determined their potential impact as covariates in the following way. First, given the large number of variables, we inspected the absolute differences and retained those that were clinically significant (ie, absolute difference of 20%). Next, for each clinically significant variable, we used the Cochran–Mantel–Haenszel (CMH) method to combine multiple contingency tables to examine whether these variables had an impact on infant LP success rates.30 We also assessed for site variability by using the CMH with site as the stratification variable and the Breslow–Day test for heterogeneity. We conducted all analyses by using Stata 13 (StataCorp, College Station, TX).
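The analyses above were run in Stata; as a hedged sketch only, the primary-outcome comparison by Fisher's exact test can be reproduced from the counts reported in the Results (cohort A, 13/37 successes; cohort B, 152/399) with SciPy. The table layout below is an assumption about how the 2 × 2 table was arranged, though Fisher's exact test is invariant to row order.

```python
from scipy.stats import fisher_exact

# 2x2 table: rows = cohorts, columns = (successes, failures)
table = [
    [13, 37 - 13],     # cohort A: 13 successes of 37 first LPs
    [152, 399 - 152],  # cohort B: 152 successes of 399 first LPs
]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(p_value > 0.05)  # consistent with the reported nonsignificant difference
```

This confirms in passing why the paper reports a confidence interval for the difference rather than a P value for the primary comparison: the interval conveys both direction and precision of the null result.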
A total of 1319 participants were enrolled: 104 interns in 2009 to 2010, 578 interns in 2010 to 2011, and 637 interns in 2011 to 2012. They reported 507 first LPs (51, 158, and 298 LPs, respectively). Of these 507 LPs, 436 LPs were identifiable as success or failure based on the predefined protocol of the study (Fig 1).
Baseline characteristics were similar between all cohorts for the majority of variables (Table 2). Of note, there appeared to be a decline in previous infant LP clinical experiences and a corresponding increase in infant LP training and simulator experiences among interns at the start of residency between cohorts B1 and B2.
Infant LP success rate in cohort A was not significantly different from that in cohort B (35% vs 38%; absolute difference, 3%; 95% confidence interval for difference [CI diff], −15% to +18%). In addition, as displayed in Table 3, the 2 successive cohorts in the JIPT group did not show a statistically significant difference in LP success rate: 42% (cohort B1) vs 36% (cohort B2) (95% CI diff, −4% to +17%).
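The interval arithmetic for the primary comparison can be checked with a simple Wald (normal-approximation) interval for the difference of two proportions, using the counts from the abstract (13/37 vs 152/399). This stdlib-only Python sketch is an assumption about method; the paper's slightly different bounds (−15% to +18%) suggest the authors used another interval method (eg, an exact or score interval), but the substance, an interval straddling zero, is the same.

```python
from statistics import NormalDist
import math

def wald_ci_diff(x1: int, n1: int, x2: int, n2: int, conf: float = 0.95):
    """Wald confidence interval for p2 - p1 (normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    diff = p2 - p1
    return diff - z * se, diff + z * se

lo, hi = wald_ci_diff(13, 37, 152, 399)  # cohort A vs cohort B
print(f"{lo:+.0%} to {hi:+.0%}")  # an interval containing 0: no significant difference
```

Because the interval contains zero, the 3% absolute difference in success rates is consistent with chance, matching the paper's conclusion.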
Process Outcomes and Known Mediator Variables
Our JIPT educational intervention was associated with greater use of analgesia (68% vs 19%; 95% CI diff, 33% to 59%), greater use of early stylet removal (69% vs 54%; 95% CI diff, 0% to 32%), and lower mean number of attempts (1.4 ± 0.6 vs 2.1 ± 1.6, P < .01) (Table 4). Although all these variables are predictors of success in the literature, only early stylet removal was associated with success in our study across all 3 cohorts (P = .04).31,32 Age of patients was also significantly associated with success (P < .001), but no difference was seen between cohorts. All the aforementioned variables were added in a stepwise fashion to a logistic regression model predicting infant LP success and did not uncover either a main effect for the study intervention or any significant interactions.
Other Process Measures
Family member presence was associated with success (P = .03) and was significantly greater in cohort B than in cohort A (39% vs 16%; 95% CI diff, 7% to 32%). The type of holder for the infant LP was not associated with success and did not differ between the cohorts. Attending supervision was also similar between groups.
Potential Confounder Variables
There was a statistically significant difference between cohort A and cohort B for the following variables: previous infant LP simulator experience, infant LP knowledge, infant LPs performed in the emergency department, and infant LPs performed in the NICU. After adjusting for site with CMH testing, we found no association at the .05 significance level between these variables and infant LP success. The specific locations for infant LP performance within each hospital (emergency department, NICU) were not associated with success (P = .16). Variability in success rates between hospital sites was low (P = .65, CMH, Breslow–Day test). An average of 7 clinical supervisors was reported per site, with a range of 2 to 18. Attending physicians made up the largest share of supervisors (43%), followed by residents (26%) and fellows (23%).
The impact of protocol adherence was assessed for cohort B2 (2011–2012); no difference was found in infant LP success rates between those who completed a JIPT session and those who did not (Table 5). Clinical supervisors from all sites reported the following areas as most needing practice by interns: needle insertion or advancement (41%), preparation (37%), and positioning the infant (12%), with other issues occurring <5% of the time. The majority of participants reported spending between 5 and 10 minutes doing the JIPT (55%), with 34% reporting <5 minutes and 5% reporting >10 minutes (6.4% had missing data). A χ2 test comparing average time on task between sites was not significant (P = .51).
This study is the largest prospective multicenter investigation into the optimal simulation-based instructional design for infant LP success. It is also one of the few pediatric studies that compares 2 different simulation designs with each other.23 We previously demonstrated improved infant LP clinical success among pediatric trainees after a single simulation-based intervention (71% intervention group vs 27% control group).16 However, in a subsequent multicenter study, this clinical impact could not be replicated in the intervention group (pooled 34% success rate).11 With the addition of JIPT, the current study still found no statistically discernible improvement in infant LP success rate. More specifically, the success rate (38%) for the intervention group based on data collected over a 2-year period was not statistically significantly different from the 35% success rate found in the previous year. We noted improvement in several process measures, such as the number of attempts needed and other behaviors previously shown to be associated with success (early stylet removal technique and use of analgesia).31
Low infant LP success rates among trainees are an intransigent problem. In this study, we report lower infant LP success rates (ranging from 34% to 42%) than previous descriptive reports of novice success rates (45%–63%).32,33 These differences may reflect variations in definitions of success or populations studied. We set a high bar for success: a single, atraumatic attempt.27 Although smaller studies have documented success using only laboratory results and not number of attempts,32 we chose this patient-centered definition because success on the first attempt causes the least discomfort to patients. Other than our aforementioned single-center study, no research has demonstrated an intervention leading to improved clinical success with the infant LP procedure. The education strategy we used of training to a minimum passing standard (mastery learning) is a well-supported strategy for skill training.23 Gaies et al13 performed a randomized trial of an infant LP mastery learning intervention for pediatric residents at the start of their pediatric emergency medicine rotation, but they did not find any significant difference in clinical success (70% for participants and 62% for controls). The high success rates reported in their study probably resulted from the more liberal definition of success, which included any CSF sample that was suitable for culture, regardless of number of attempts. A more recent single-center study found that 62% (13/21) of interns were successful on their first infant LP attempt after a mastery learning intervention; however, conclusions were limited by a lack of comparison group.34 Other studies using similar educational frameworks have also demonstrated improved confidence or skill in a simulated environment but were unable to reliably show a clinical improvement with actual patients.35–38
Even though our intervention was unable to globally improve clinical success rates, we did notice improvement in specific physician behaviors (Kirkpatrick level 3).39,40 There was a notable increase in the use of preprocedural analgesia and the early stylet removal technique, both of which were emphasized in our intervention, because they have been associated in previous studies with improved procedural success.31,41 The JIPT group also had improvement in patient-centered outcomes, such as reduced number of overall attempts and increased family presence.
A program for procedural learning that emphasizes multiple spaced practice sessions with a coach and JIPT has the potential to be effective if we can determine the optimal timing and frequency of both learning episodes and assessments. For example, Sutton et al42 studied hospital providers learning cardiac compressions and found that retention of skills improved with each additional refresher. Notable differences in time to achieve skill perfection were seen between providers who completed >2 refreshers per month and those who completed <2.24
Field studies are needed to elucidate why our success rates are so low. We explored some of the barriers and facilitators to the implementation of the workplace JIPT intervention in separate qualitative studies.22,43 Interns cited barriers to successful workplace training interventions such as workplace busyness and lack of instructor support. In contrast, interns commonly mentioned that improved realism of the training model would improve the training experience.22 Although there is an increasing trend toward more didactic and simulation education during medical school, the number of clinical infant LP procedural experiences is not sufficient to prepare incoming interns to perform these procedures.2 In addition, confidence in ability to perform the infant LP procedure was consistently poor throughout all cohorts in this study and previous investigations.2,11,16
There are a number of limitations to our study. Iterative changes in our educational intervention (eg, addition of expert modeling to the mastery learning and the JIPT itself) were based on annual reflection, qualitative review, and consensus among the education and clinical experts in this group. Therefore, our ability to identify the impact of specific changes was limited. Another important limitation of our study is the reliance on self-report to document the primary outcome. However, we also required supervisors to submit a form for every LP they supervised to substantiate the interns’ self-report forms. Use of historical controls may confound our results because each year brought unique learners, educators, and work environments. To mitigate this risk, we carefully tracked these confounders to assess for secular trends that could bias our results. We assessed for volunteer bias by concurrently surveying participants in the 2011 to 2012 cohort who, for any reason, did not complete a JIPT (Table 5). The rate of LPs performed appears low among the JIPT cohort (39% of participants), and although the study protocol cannot verify whether the rates reported in this study reflect actual performance, the rate is similar to those in other studies using comparable protocols to track procedures.44
The asynchronous and infrequent occurrence of infant LPs limited the usefulness of the clinical encounter as a trigger for training refreshers. Cognitive science literature supports an interval of practice that is roughly one-third of the retention time.45 With the minimal clinical experiences reported, we do not yet know the retention period for infant LP skills among novice providers who have been trained. Furthermore, because we collected outcomes only from the first infant LP, we had no way to know whether skills improved after several encounters. Simulation may help shorten the time to achieve clinical success, but data from multiple procedures per trainee are needed to determine how many attempts are required before a sustained improvement in clinical success rates is seen.12 The use of multiple diverse trainers in teaching is an inherent limitation to any multicenter educational study that we sought to mitigate through the use of standardized training protocols and faculty development. Although this method inevitably increases variability, it also improves the generalizability of our findings. Our analysis for site-level variance did not reveal any impact on our primary outcome measure. In addition, the number of sites in our network increased with each year of the study, adding some site variability between the cohorts. The increase was needed because of the limited number of procedures being performed by interns. Our ability to show small incremental improvements in clinical outcomes along the way was therefore limited. Finally, as noted by Kirkpatrick and Kirkpatrick,46 the leap from individual-level to patient-level impact may be stifled by a culture in which it is difficult to enact change rather than by an ineffective intervention. Because a version of this intervention was effective in a single center, we may not have done enough to focus on issues of implementation before iterating the design and participants.
In a large field study of >1000 pediatric and emergency medicine interns, we tested the hypothesis that JIPT in addition to our preexisting training bundle would improve infant LP success rates. Despite improvements in physician behaviors associated with success (analgesia use and early stylet removal technique), our intervention did not have a significant impact on intern first infant LP clinical success. The reduction of overall attempts and increase in family presence may also be considered independently beneficial to patients.
The INSPIRE LP investigators include the following:
Michael Holder (Akron Children’s); Glenn R. Stryjewski (AI Dupont); Kathleen Ostrom (CHLA); Lara Kothari (Children’s Hospital of Boston); Pavan Zaveri, Berry Seelbach, Dewesh Agrawal (Children’s National Medical Center); Joshua Rocker (Cohen Children’s Medical Center of New York); Kiran Hebbar (Emory University); Maybelle Kou (Inova-Fairfax); Julie Lindower, Glenda Rabe (University of Iowa Children’s Hospital); Audrey Paul, Christopher Strother (Mount Sinai Medical Center); Eric Weinberg, Nikhil Shah (Weill Cornell); Kevin Ching, Kelly Cleary (NYU); Noel Zuckerbraun and Brett McAninch (Children’s Hospital of Pittsburgh); Amanda Pratt (Rutgers–Robert Wood Johnson Medical School); Jennifer Reid, Steve Cico (Seattle Children’s Hospital); James Gerard (Cardinal–Glennon SLU); Matei Petrescu (Tulane Hospital for Children); Laura Haubner (University of South Florida); Geetanjali Srivastava (University of Texas Southwestern); Denis Oriot (CHU Poitiers); Grace Arteaga (Mayo Clinic); Daniel Lemke (Baylor); Wendy Van Ittersum (Cleveland); Alisa McQueen (Comer); Stephen M. Blumberg (Jacobi); Sandra Arnold, Peggy O’Cain (LeBonheur Children’s Hospital); and Melissa Cercone (Wisconsin).
We acknowledge the following people for their support in this project:
Anup Agrawal, MD, Yale University School of Medicine, for help with manuscript preparation. Adam Cheng, MD, University of Calgary, Alberta Children’s Hospital, for manuscript review.
Keven Cabrera, Columbia University College of Physicians and Surgeons; Amanda Krantz and Marisa Torch, New York University Langone Medical Center; and Karen Owen, Jonathan Berken, Kevin Pearson, and Laura Santry, Yale University School of Medicine, for project coordination during various phases of the project.
Collaboration with the Pediatric Emergency Medicine Collaborative Research Committee (PEM-CRC) of the American Academy of Pediatrics (years 2010–2011) provided expertise and resources from the data center at Baylor College of Medicine, and we specifically acknowledge Charles G. Macias, MD, MPH, and Jennifer Jones, MS, for their support.
The authors also acknowledge the contributions of members of the International Network for Simulation-Based Pediatric Innovation, Research, and Education (INSPIRE) who have helped to shape this project and the Society for Simulation in Healthcare and the International Pediatric Simulation Society for providing INSPIRE with space at their annual meetings.
We also acknowledge the many faculty educators and interns who participated in making this educational intervention possible.
- Accepted February 24, 2015.
- Address correspondence to David Kessler, MD, MSc, Department of Pediatrics, Columbia University Medical Center, 3959 Broadway, CHN-1-116, New York, NY 10032. E-mail:
Dr Kessler conceptualized and designed the study, contributed to data acquisition and enrollment of study subjects, carried out the initial analyses, and drafted the initial manuscript; Dr Pusic contributed substantially to the study design, data analysis and interpretation, and critical review and revision of the manuscript; Drs Chang, Fein, Grossman, Mehta, and White contributed substantially to the conception and design of the study, data acquisition and enrollment of study subjects, and drafting or critical review and revision of the manuscript; Mr Jang and Travis Whitfill contributed substantially to the analysis and interpretation of the data and are responsible for the integrity of the data and accuracy of the data analysis, and they contributed substantially to drafting of the manuscript and revised it critically for intellectual content; Dr Auerbach contributed substantially to the conception and design of this study, contributed substantially to the analysis and interpretation of the data, takes responsibility for the integrity of the data and accuracy of the data analysis, contributed to the data acquisition and enrollment of study subjects, and contributed substantially to drafting of the manuscript and revising it critically for intellectual content; and all authors approved the final manuscript as submitted.
FINANCIAL DISCLOSURE: The authors have indicated they have no financial relationships relevant to this article to disclose.
FUNDING: Rbaby (a nonprofit foundation) supported various phases of this study.
POTENTIAL CONFLICT OF INTEREST: The authors have indicated they have no potential conflicts of interest to disclose.
- ↵Accreditation Council for Graduate Medical Education. ACGME program requirements for graduate medical education in pediatrics. Available at: www.neonatologyresearch.com/wpcontent/uploads/2011/07/ACGMEPediatricRequirements7_2011-2.pdf. Accessed August 13, 2013
- Gaies MG, Morris SA, Hafler JP, et al
- ↵Auerbach MA, Chang T, Fein D, et al. A comprehensive infant lumbar puncture novice procedural skills training package: An INSPIRE simulation-based procedural skills training package. MedEdPORTAL; 2014. Available at: www.mededportal.org/publication/9724
- ↵International Network for Simulation-Based Pediatric Innovation, Research, and Education (INSPIRE). Available at: www.InspireSim.com. Accessed November 1, 2012
- ↵Auerbach M, Chang T, Krantz A, et al. Infant lumbar puncture: POISE pediatric procedure video. Available at: https://www.mededportal.org/publication/8339. Accessed January 14, 2013
- Zuckerbraun N, McAninch B, Ching K, Auerbach M, Kessler D
- Armitage P, Berry G, Matthews JNS
- Baxter AL, Fisher RG, Burke BL, Goldblatt SS, Isaacman DJ, Lawson ML
- Glatstein MM, Zucker-Toledano M, Arik A, Scolnik D, Oren A, Reif S
- Barsuk JH, Cohen ER, Caprio T, McGaghie WC, Simuni T, Wayne DB
- Kirkpatrick D
- Rohrer D, Pashler H
- Kirkpatrick DL, Kirkpatrick JD
- Copyright © 2015 by the American Academy of Pediatrics