Objective. Expert clinical teachers in medicine use teaching scripts. The aim of this study was to determine whether pediatricians also use common components of teaching scripts.
Methods. Seventy-three pediatric clerkship directors identified anticipated errors and teaching points in response to two short vignettes. We performed a content analysis of the responses and analyzed the results by faculty rank and receipt of teaching awards.
Results. More than 87% of respondents identified at least one of three anticipated learner errors, and more than 80% identified at least one of three to four teaching points. Teaching points related directly to anticipated errors in 60% of responses. Level of experience and receipt of teaching awards had no impact on response content.
Conclusions. Consistent with findings on the use of teaching scripts, pediatric educators achieved high congruence on anticipated errors and teaching points for two teaching vignettes. These findings suggest that the differences associated with developing teaching expertise may lie in other script components.

Key words: medical education, teaching scripts, clinical teaching, pediatrics.
We have all known excellent teachers, but what makes them excel is not always clear. Traditionally, studies of effective teachers have focused on behavioral characteristics (eg, clear and organized)1 and/or teacher roles (eg, supervisor).2 Recently, several researchers have begun to identify the cognitive processes that distinguish experienced from novice teachers. Shulman and others3,4 recognize that experienced teachers have an extensive and interrelated knowledge of teaching organized in the form of “scripts.” These teaching scripts help teachers anticipate learners' actions and enable them to respond quickly during an instructional episode. Just as movie scripts contain detailed information about dialogue, character traits, and staging, teachers' internalized scripts contain detailed information about the learner, the goals of a session, specific teaching points for given topics, and effective educational strategies keyed to different learner levels.3,4
In his seminal study of expert physician educators, Irby5 elaborated on the use of teaching scripts in medical education. The internists observed in his study combined their extensive knowledge base of patients and diseases with a broad knowledge of the learner and teaching methods. For these physicians, teaching scripts were activated in response to a simple stimulus (eg, a patient presenting with diabetic ketoacidosis). These expert teachers then used knowledge about the learner (level, strengths, and weaknesses) to anticipate errors (eg, inadequate history of diet), enabling them to adapt their internalized, disease-specific lesson plan (impact of diet on diabetes) to this teaching episode. As with other forms of expertise, the automaticity of the stimulus-activation process reduces the clinician's cognitive workload,6 allowing the teacher to continually adapt his or her instruction to specific learners.
Irby's pioneering work in medical education points to several unexplored issues. First, Irby's subjects were internists preselected as expert teachers, raising the question of whether clinicians in other specialties, or of varying levels of teaching experience, use teaching scripts. Second, the stimulus used by Irby was a single case of diabetic ketoacidosis, leaving open the question of whether scripts are generalizable to other conditions. This study sought to build on Irby's work by examining the generalizability of teaching scripts in teachers of another medical specialty, across medical conditions, and at varying levels of teacher expertise/experience. Consistent with the initial stages and processes of clinical teaching,7 this study focused on the use of scripts to identify common learner errors (needs assessment) and teaching points (objectives) in response to common clinical scenarios.
A cross-sectional study of 80 academic pediatricians attending the annual meeting of the Council on Medical Student Education in Pediatrics was conducted. These individuals are pediatric clerkship directors who have a broad range of experience as educators and are responsible for predoctoral education.
A specially designed questionnaire was used to obtain demographic data and elements of clinicians' teaching scripts. The first part of the questionnaire elicited years of experience as a teacher, faculty rank, number of faculty development programs attended, and recognition as an outstanding teacher. The second part of the questionnaire was designed to evoke two of the clinical teaching script components essential to the first steps in a successful clinical teaching interaction: identification of errors common to learners at a specific level of training (needs assessment) and critical teaching points (objectives). Respondents were given 4 minutes to write down the common errors and teaching points for two common clinical vignettes. Response time was limited deliberately to provide an advantage to those whose teacher knowledge was organized as scripts (more efficient recall in response to a stimulus).
Both vignettes were developed around commonly presenting pediatric problems identified as important in the third-year clerkship: asthma and gastroenteritis/dehydration. The vignettes provided basic information about the setting in which the interaction with the learner occurred, the learner's level of training, and the time of academic year to provide the context for the teaching encounter. For example, the asthma vignette read, “It is July and you are about to hear Becky (a third-year medical student) present the history of a 3-year-old child admitted to your floor from the ER during an acute exacerbation of asthma.” Subjects then responded to two questions: “What are the common errors you expect in the history?” and “What teaching points will you make?”
Coding and Analysis
Questionnaires were coded with a 2-digit identification number, and responses to the vignettes were transcribed with an identification number to blind reviewers to demographic data. Following standard methods of content analysis,8 we independently reviewed the transcriptions to determine coding categories. We then met to develop a final coding book with response categories consistent with standard medical concepts of history, physical examination, psychomotor/procedural skills, etc. All responses were then coded independently by us, with interrater agreement of 91%; coding differences were subsequently resolved by mutual agreement. Coded data were analyzed by faculty rank, length of teaching experience, and receipt of teaching awards. For analysis, novices were defined as participants with fewer than 5 years' teaching experience; experts were defined as participants with more than 10 years' teaching experience or who were recognized through receipt of a teaching award. ANOVA was used to test for differences in teaching scripts by rank, years of experience, and receipt of teaching awards (SPSS 8.0 for Windows).
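For readers unfamiliar with these statistics, the two quantities underlying this analysis can be sketched in a few lines of Python. This is an illustrative reconstruction only, not the authors' SPSS analysis; the function names and the sample data are hypothetical.

```python
from itertools import chain

def percent_agreement(rater1, rater2):
    """Simple interrater agreement: share of items coded identically
    by two raters (the study reports 91% agreement)."""
    assert len(rater1) == len(rater2)
    return sum(a == b for a, b in zip(rater1, rater2)) / len(rater1)

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA: ratio of between-group to
    within-group mean squares, as used to compare response content
    across rank/experience/award groups."""
    values = list(chain.from_iterable(groups))
    n, k = len(values), len(groups)
    grand_mean = sum(values) / n
    # Between-group sum of squares, weighted by group size
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares
    ss_within = sum((x - sum(g) / len(g)) ** 2
                    for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical example: codes assigned by two raters to four responses
print(percent_agreement([1, 1, 2, 3], [1, 1, 2, 4]))  # 0.75

# Hypothetical example: number of teaching points by experience group
print(one_way_anova_f([[1, 2], [3, 4]]))  # 8.0
```

In practice the F statistic would be compared against the F distribution with (k − 1, n − k) degrees of freedom to obtain a P value; statistical packages such as SPSS perform that step automatically.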
Of the 80 pediatric clerkship directors, 73 (91%) completed the questionnaire. Years of teaching experience ranged from 1 to 40 years. Twenty percent of respondents had 5 or fewer years of teaching experience, 26% had between 6 and 10 years, and 54% had more than 10 years' teaching experience. Fifty-two percent (38) of the clerkship directors were assistant professors or instructors, 25% (18) were associate professors, and 23% (17) were full professors at the time they completed the survey. Fifty-six percent of the respondents had received at least one teaching award. Clerkship directors had attended 1.6 ± 2.2 faculty development programs (range: 0–10). The number of teaching awards received or faculty development programs attended did not vary by rank.
The first study question focused on the existence of two critical components of teaching scripts (common errors and teaching points) across cases among pediatric educators. More than 87% of the “common error” responses fell into three case-specific categories for each vignette. Common errors expected for a July M3 student presenting a case of asthma included a disorganized or incomplete history of acute illness; failure to obtain family, social, or environmental history; and difficulty in assessing severity of illness. Errors expected for M3 students presenting a case of gastroenteritis included a disorganized and/or incomplete history, failure to characterize details of the disease, and difficulty in assessing degree of dehydration.
Congruity of the teaching points educators would make was high for both cases, focusing on three to four teaching points per case. Eighty-five percent of the educators identified the asthma teaching points as 1) how to organize/structure a presentation, 2) what pertinent history to include, and 3) how to assess severity of asthma. Eighty percent of educators noted four teaching points for the gastroenteritis vignette: 1) how to obtain a directed history, 2) how to assess degree of dehydration, 3) understanding fluid management, and 4) discussion of the differential diagnosis. Sixty percent of the teaching points were linked directly to the common errors identified by each respondent. Responses were also analyzed by level of experience; there were no significant differences in the content or distribution of responses by faculty rank or receipt of teaching awards.
This study verifies Irby's findings that clinical educators use at least some components of teaching scripts. The high degree of congruity for expected common errors and teaching points suggests the existence of case-specific teaching scripts. The congruity of responses about expected common errors highlights the clinicians' considerable knowledge base about the abilities and weaknesses of third-year medical students. This knowledge interconnects with their knowledge of medicine, yielding a common set of teaching points by case among pediatric clerkship directors.
Irby's study concentrated on a small sample of expert teachers of internal medicine. He identified the use of an extensive knowledge base, an understanding of learners' needs to identify common errors and teaching points in response to a brief stimulus, and use of a range of strategies to teach about the chosen point. Our study demonstrates that pediatric clerkship directors, regardless of their level of experience, possess clinically specific scripts for teaching about two common pediatric problems. In this study, both expert and novice teachers easily identified common errors and teaching points in response to a stimulus. They also understood at least one general principle of education: tying the teaching points to the expected errors.
Studies investigating how scripts develop in medical teachers are not available. However, in a study of kindergarten through grade 12 teachers, it has been shown that 5 to 6 years of experience are required for the development of scripts.3,4 Although 20% of participants in this study reported fewer than 5 years of experience, they had in fact been immersed in the study and management of these common pediatric conditions during their medical school and residency training. Adding the 3 years of residency as “teaching” experience to the self-reported years as faculty increases the total to more than 5 years for all participants. A second limitation of this study relates to the study population selected. Although the participants, as clerkship directors, may spend more time reflecting on teaching and learning than does the average pediatrician, the commonality of responses despite the variability in years of experience, faculty rank, and teaching awards emphasizes the pervasiveness of scripts in a diverse group of pediatric educators.
In his study of six expert teachers, Irby was impressed by the variety of scripts in response to the same stimulus. This appears at odds with our finding of high levels of congruity. When looking at individual responses, the specific points that an educator would make vary (ie, not every teacher makes the exact same two to three points). However, when the responses are examined as a group, the number of distinct teaching points is actually very small. This congruity may have been enhanced by the use of extremely common pediatric problems with relatively standard diagnostic and therapeutic approaches used by most practitioners. The diversity of scripts might increase if the stimulus were an undifferentiated or uncommon pediatric problem. Response time for each vignette also was limited in an attempt to identify the high-priority issues for each teacher. This may have limited the diversity of responses as well, but it highlights those items that come to teachers automatically (a component of expertise).
The presence and commonality of the teaching scripts in pediatric educators with a broad range of experience leave us with additional questions. Will the differences between experts and novices emerge in other components of teaching scripts? Are the differences between expert and novice teachers to be found in the number and type of teaching strategies, resources, or principles of education used or in the adaptability of the teacher to the individual learner? Our study did not address these components of teacher knowledge. When subsequent research elucidates the details of these teaching scripts, the degree to which the incorporation of teaching scripts into faculty development programs accelerates clinical teacher expertise can be explored.
In summary, teaching scripts appear to be common among clinical teachers independent of specialty, case, or level of teacher experience. Pediatric educators demonstrate a remarkable degree of congruity about common learner errors and teaching points for frequently occurring pediatric problems. Factors responsible for the differences between expert and novice teachers require continued investigation, especially in the selection of teaching points and the adaptability of teaching methods for different levels of learners.
- ↵Grossman PL. The Making of A Teacher: Teacher Knowledge and Teacher Education. New York, NY: Teachers College Press; 1990
- ↵Ormrod JE. Human Learning. Englewood Cliffs, NJ: Merrill Publishing; 1995:264–269
- ↵Patton MQ. Qualitative Evaluation and Research Methods. Newbury Park, CA: Sage Publications; 1990
- Copyright © 1999 American Academy of Pediatrics