BACKGROUND: Providing and learning from feedback are essential components of medical education, yet they are typically described as resistant to change. Given a decade of change in the clinical context in which feedback occurs, the authors asked if, and how, perceptions of feedback and feedback behaviors might have changed in response to contextual affordances.
METHODS: In 2017, the authors conducted a follow-up ethnographic study on 2 general pediatric floors at the same children’s hospital where another ethnographic study on a general pediatric floor was conducted in 2007. Data sources included (1) 21 and 34 hours of observation in 2007 and 2017, respectively, (2) 35 and 25 interviews with general pediatric attending physicians and residents in 2007 and 2017, respectively, and (3) a review of 120 program documents spanning 2007 to 2017. Data were coded and organized around 3 recommendations for feedback that were derived from 2007 data and served as standards for assessing change in 2017.
RESULTS: Data revealed progress in achieving each recommendation. Compared with 2007, participants in 2017 more clearly distinguished between feedback and evaluation; residents were more aware of in-the-moment feedback, and they had shifted their orientation from evaluation and grades to feedback and learning. Explanations for progress in achieving recommendations, which were derived from the data, pointed to institutional and national influences, namely, the pediatric milestones.
CONCLUSIONS: On the basis of follow-up ethnographic data, changes in the clinical context of pediatric education may afford positive change in perceptions of feedback and feedback behavior; explanations for these changes point to influences within and beyond the institution.
What’s Known on This Subject:
Feedback, which is an essential component of medical education, is typically described as resistant to change. Clinical contexts in which feedback plays out may constrain feedback behavior and the perceptions that underlie that behavior.
What This Study Adds:
The authors used ethnographic data from 2007 to 2017 to show that changes in the context of pediatric education afforded positive change in the perceptions of feedback and feedback behavior. Data-derived explanations for change point to institutional and national influences.
Providing and learning from feedback, which are essential components of medical education, are emblematic of the resistance to change delineated in Bloom’s1 description of the structure and ideology of medical education. This intransigence is evidenced by the substantial increase in the number of articles that seek to improve feedback.2 Researchers in 4 recent articles review efforts to improve feedback by employing curricular approaches to change feedback behaviors.2–5 Others suggest that targeting feedback behaviors is shortsighted because context constrains feedback behavior.6–8 They imply that medical education must adapt to context to reduce its constraints on feedback. But if context constrains feedback behavior, can it not also afford or facilitate change?
The veracity of Bloom’s1 claim, as it is applied to the structure and ideology of pediatric education, seems debatable. Pediatrics is an area that leads change in graduate medical education. Pediatric program directors and the American Board of Pediatrics were among the first to develop and implement milestones assessment in a competency-based framework.9,10 Pediatric educators have spearheaded educational innovations at a national level11 and built infrastructure for educational scholarship.12 The scope of these changes points to the possibility that the context in which pediatric education unfolds has afforded change in feedback behavior and in the perceptions that drive that change.
In 2007, members of the research team conducted an ethnographic study on a general pediatric inpatient floor in a free-standing children’s hospital to understand learning as it played out in a real-life clinical context. Among other findings,13–15 they reported on feedback at a national meeting and made 3 data-driven recommendations: (1) distinguish between evaluation and feedback, (2) increase awareness of in-the-moment feedback embedded in clinical context, and (3) shift the emphasis from performance-oriented evaluation to learning-oriented feedback.16 In 2017, we (the authors) sought to describe and analyze if and how changes in the pediatric context afforded changes in the perceptions of feedback and feedback behavior over time. To that end, we repeated an ethnographic study by using the 2007 data as a baseline and the 2007 recommendations as standards for assessing change over the course of 10 years. We recognize that assessment is typically associated with individual learners and evaluation with programs. However, participants used these terms interchangeably; we chose to use their words.
Ethnography is a tradition in qualitative research that is well positioned to study learning contexts by virtue of its close, prolonged observation in natural settings, interviews with individual actors in the setting, and review of documents.17 With its anthropological roots, ethnography is the study of social interactions, behaviors, and perceptions that occur in groups and communities, making it an ideal methodology for studying change in learning contexts.18
Members of the research team conducted ethnographic studies in 2007 and 2017 at The Children’s Hospital of Philadelphia and targeted general pediatrics because of that division’s leadership in pediatric education (Fig 1). The first study focused on how education plays out in the authentic clinical learning environment on the general pediatric floors, whereas the follow-up study took place on 2 general pediatric inpatient floors with a focus on feedback.
The first ethnographic study was approved by The Children’s Hospital of Philadelphia Institutional Review Board in 2007; the follow-up was approved in 2017. In 2007, written, informed consent was obtained for the interviews, and verbal consent was obtained for the observation. In 2017, verbal consent was obtained for both the interviews and observation. Confidentiality was ensured by the removal of personal identifiers from observational notes and interview transcripts.
For the interviews, D. B. purposefully sampled interns, senior residents, and attending physicians in general pediatrics who had completed rotations on the general pediatric floors targeted for observation (Fig 1). In 2007, 35 participants were interviewed; all were part of the observation phase. In 2017, 25 participants were interviewed; of these, 9 participants were also observed. Three of the attending physicians interviewed in 2017 had held positions as program directors, and 4 had also participated as attending physicians in 2007.
The interview guides for 2007 and 2017 consisted of parallel versions of interview questions. In 2007, interns were asked to reflect on their learning, attending physicians were asked to reflect on their teaching, and senior residents were asked to reflect on both teaching and learning.14,15 They also were asked to comment on their feedback experiences in the context of the general pediatric floor that D. B. observed, both as providers and recipients of feedback. The same question about context-specific feedback was part of the interview guide in 2017. Other questions in the 2017 interview guide (not included in 2007) asked participants to distinguish between feedback and evaluation and describe experiences with feedback in nonmedical contexts. Interview questions were derived from the literature and reviewed by the research team.
From February 2007 to April 2007, D. B. observed on a general pediatrics floor (n = 21 hours). In February 2017 and March 2017, D. B. observed on 2 general pediatrics floors (n = 34 hours). In both 2007 and 2017, she observed the team (defined in Fig 1) during morning rounds. D. B. followed standard ethnographic technique, in which the observer interacts with participants closely enough to gain an insider’s perspective but does not engage in the activities under study. She took notes during rounds and at the end of rounds, and she routinely checked her understanding of observed feedback with residents to whom feedback was directed. She transcribed notes from her observations into an electronic document.
D. B. collected and reviewed a total of 120 documents for evidence of contextual changes. Documents included official departmental reports, faculty development curricula, residency lecture schedules, and evaluation forms.
Data collection and analysis were overlapping processes. Data in the form of interview transcripts and observation notes from 2007 and 2017 (a total of 102 documents) were managed with ATLAS.ti; program documents were summarized in a spreadsheet. D. B. inductively coded interview transcripts and notes from her observations by applying codes (ie, words or phrases describing important concepts) to segments of data. Coding occurred in an iterative fashion; codes were revised as patterns in incoming data became more apparent. Although informed by codes applied to the 2007 data set, which focused on learning, the 2017 investigation focused on feedback, and its data yielded more feedback-specific codes.
The research team met periodically to review the codes; as the codes stabilized, D. B. created a final code list and applied those codes to the data. In the later stages of analysis, she clustered coded data to create cohesive categories and used the recommendations for feedback from 2007 as a lens to describe and analyze changes in perceptions of feedback and feedback behavior. We reviewed clusters of coded data and looked for evidence to confirm or refute the achievement of recommendations and to discuss potential interpretations.
To illuminate qualitative data, we tallied the frequency of application of codes that were central to the study (eg, in-the-moment feedback and summative evaluation) and the number of observed patient presentations on morning rounds. To shed light on change over time, we compared the percentages of code application in 2007 and 2017.
We attended to data quality by using the following criteria: credibility, dependability, and confirmability.19–21 To enhance credibility, we triangulated data sources (intern, senior resident, attending physician, and program director perspectives) and data collection methods (interviews, observations, and document review). We enhanced dependability through iterative data collection and with a single interviewer and observer. Finally, our team approach and peer examination enhanced confirmability, as did D. B.’s routine member checks with residents whom she observed. E. M., B. R., and R. T. S. brought a chief resident, program director, and medical educator perspective, respectively.
Taken together, data from interviews, observations, and document review indicated progress in achieving the 2007 recommendations. We divided our results into 2 parts: (1) evidence of progress in achieving each of the 3 recommendations, and (2) explanations for this progress as derived from the data.
Progress in Achieving Recommendations
Recommendation 1: Distinguish Between Feedback and Evaluation
In 2007, residents and attending physicians often conflated feedback and evaluation. For example, 1 intern described evaluation when asked about feedback. “You give feedback for the specific rotation. It’s sort of fill out the questionnaire, which is okay. But I think after filling out so many of these forms, I think toward the end, they just sort of check boxes.”
In contrast, residents and attending physicians in 2017 did not even use the word “evaluation” when responding to questions about feedback. When asked to distinguish feedback from evaluation, they reported (in order of frequency) that feedback was actionable or helpful, conversational, constructive, and timely. In contrast, evaluation was broad strokes, grade oriented, formal, and documented. A 2017 intern made this distinction, “I find that evaluations hit the main points of competency that are required, but I always found feedback to be a lot more engaging, rewarding, and helpful than evaluation.”
Although not intended to be evaluation, formal feedback was also understood as “what administration wants us to do.” Formal feedback typically occurred in the form of Feedback Friday, a time set aside on Fridays for attending physicians and senior residents to give feedback to interns. Despite its established place in the residency curriculum, the actual occurrence of Feedback Friday was routinely described as hit or miss.
Recommendation 2: Increase Awareness of in-the-Moment Feedback
Compared with Feedback Friday, in-the-moment feedback was more nuanced. A 2017 intern distinguished between formal feedback and in-the-moment feedback:
“If I’m honest, the most productive feedback that I have received has been in-the-moment, timely feedback with senior residents. ... The Feedback Friday thing can be helpful, but it hasn’t always been. It can be something we check off at the end of the week.”
In-the-moment feedback was also called real-time feedback, just-in-time feedback, or informal feedback. Regardless of what it was called, in-the-moment feedback, as observed and confirmed by residents, typically played out like this:
The intern presents an infant admitted with dehydration and a rule-out of sepsis. She describes physical examination findings, lists laboratory results and current medications, and then adds,
[Intern] “Mom has a difficult time breastfeeding. ... I want mom to see lactation, but if she is not able to breastfeed, then we can do [intravenous (IV)] fluid or use formula.”
[Attending] “So what do you see as her #1 problem?”
[Attending] “Is she rehydrated? Before you talk about maintaining hydration, you need to rehydrate. ... I’d much rather use formula than IV fluids.”
[Intern] “But this mom is all natural; she took a lot of supplements and doesn’t want to use formula.”
[Attending] “I see where you are going. Not to be disrespectful, but the [infant] got dehydrated naturally. You could push back and say you want to start formula. Remember, IV fluids are totally not natural either.”
Residents recognized in-the-moment feedback as overlapping with teaching; either way, it was information they could use to improve. One 2017 senior resident talked about in-the-moment feedback this way, “By having someone agree or disagree or fine tune your plan, that is a way of giving that person some feedback. It’s not just teaching about management, it’s a form of feedback.” Interns also recognized in-the-moment feedback, although it was sometimes hidden. One shared her process of recognizing feedback.
“If, at the end of the week, I feel like I didn't get feedback, I’m like, ‘Wait, no... that day on rounds, somebody made a comment about how I interacted with nursing and kept them updated.’ I can get feedback from anyone. I think you just have to be a little more cognizant of organic feedback to recognize it.”
In-the-moment feedback was more frequently spoken of in 2017 compared with 2007. In 2007, it was mentioned in interviews by 33%, 13%, and 33% of interns, senior residents, and attending physicians, respectively. But in 2017, in-the-moment feedback was mentioned by 90%, 100%, and 67% of interns, senior residents, and attending physicians, respectively. This increased awareness was supported by observational data. There was substantially more in-the-moment feedback observed in 2017: 83% (126 of 156) vs 44% (33 of 75) of observed patient presentations in 2017 and 2007, respectively.
Although residents talked about in-the-moment feedback more frequently, and observational data pointed to its increased occurrence in 2017, attending physicians voiced a different perspective. They routinely said, in 2017, that residents did not recognize in-the-moment feedback. “I don’t think residents notice it until someone says, ‘How did you feel about that feedback?’”
Recommendation 3: Shift Emphasis From Performance-Oriented Evaluation to Learning-Oriented Feedback
In 2007, residents and attending physicians recognized evaluation and formal feedback as what was supposed to happen, but no one said that either was useful for learning. A 2007 intern conflated evaluation and feedback and questioned the utility of the latter. “I’d say maybe 40% of the attendings give us the written evaluation of feedback. Less actually tell us the feedback before it comes up on the computer. ... Sometimes, you don’t get to know the attending, and it’s not that meaningful.”
In 2017, residents, particularly interns, consistently reported little regard for evaluation and grading but positive regard for in-the-moment feedback as implicit education. Interns associated evaluation with being a medical student. One said, “Evaluation is more of a grade, and at this point in my career, it is much further back in my mind. I care much less about an evaluation compared to when I was a medical student.”
For interns, in-the-moment feedback was useful because “it’s timely, it’s based on observation, it’s based on that day’s occurrences, and it’s regular.” Senior residents highlighted that in-the-moment feedback is embedded in clinical work. “I try to give feedback along the way. If an intern does something good, I’ll say, ‘good pick up.’” Attending physicians highlighted the timeliness of in-the-moment feedback. “I do think some things need to be addressed just in time. And if someone does something excellently on rounds, I will say right there, ‘Now that was a fantastic presentation. Everyone, please take note.’”
Only in 2017, when residents were more aware of in-the-moment feedback, did they mention an unintended consequence of Feedback Friday: it could blur the recognition of in-the-moment feedback. “We really emphasize Feedback Friday as a thing we should be doing, and for good reason. But maybe it is to the detriment of recognizing how much feedback we are getting throughout the week.”
Data-Derived Explanations for Progress in Achieving Recommendations
In 2017, residents and attending physicians offered clues that helped explain the progress made in achieving the 2007 feedback recommendations. In 2007, the only reference to feedback training was that it did not occur. By 2017, training was offered to attending physicians across the institution, and there were focused efforts within the division of general pediatrics to improve feedback. One attending recalled, “There was a lot of communication in this new academic year. ... There was an intentional effort to rally faculty so that feedback was a more active part of our time on service.”
Efforts to train attending physicians about feedback were matched with efforts in the residency program to teach residents how to recognize and give feedback. One 2017 senior resident said, “The big thing we talk about a lot is what counts as feedback.” As early as 2009, residents-as-teachers sessions were introduced into intern orientation. A 2017 intern remarked, “Feedback in the moment is something we are taught to do better. I try to do it with my [medical] students when they present a patient.” And from June 2016 to January 2017, chief residents led an initiative to encourage attending physicians to give goal-directed feedback to interns.
Feedback training was accompanied by the leadership’s recognition of feedback. Feedback was named as a departmental priority in 2016 and included in the 2017 annual operating plan, thus reinforcing its importance within the institution.
In 2010, the Accreditation Council for Graduate Medical Education changed its annual resident survey to ask about the quality of feedback and not merely the presence or absence of it, thus indicating an evolved understanding of feedback at a national level. In 2014, the Accreditation Council for Graduate Medical Education mandated the reporting of pediatric milestone data, representing a shift to understanding learning as a developmental progression. The documents that were reviewed reflected that shift. For example, attending physicians in 2007 were asked to rate resident performance by using a normative scale (at, above, or below residents at the same level), whereas attending physicians in 2017 were asked to rate resident performance by using a criterion scale with behavioral anchors ranging from preintern to postresidency.
No participant explicitly identified these influences in the 2017 interviews, suggesting that national influence on feedback was subtle and did not rise to the level of awareness. However, attending physicians who served as program directors routinely used a developmental framework to give feedback. “I always try to phrase it in the developmental sense: ‘This is where you are at now, and this is where you want to be.’”
Changes in a pediatric context over the course of 10 years afforded positive change in the perceptions of feedback and feedback behaviors and progress in achieving data-driven recommendations. Compared with their 2007 counterparts, residents and attending physicians in 2017 more clearly distinguished between feedback and evaluation; residents were more aware of in-the-moment feedback and had shifted their orientation from evaluation and grades to learning and feedback. These findings challenge the notion that medical education is resistant to change and offer hope that pediatrics is cultivating a learning context in which feedback is differentiated from evaluation and is perceived as useful for learning.22,23 The findings also call for a broader understanding of feedback that aligns with residents’ perceptions of feedback as information they can use for improvement.3
Although feedback training and departmental leadership seemed to influence positive change, they did so synergistically with subtle, national changes, namely, the pediatric milestones (Fig 2). In contrast to a norm-referenced assessment framework, the milestones are a developmental assessment framework that describes a continuum of development toward mastery. The change to a developmental framework may help residents use feedback as a roadmap for their learning trajectory.10,24,25 Other disciplines can learn from our experience in pediatrics and dovetail milestone implementation with teaching to give feedback by using a developmental framework. Helping attending physicians across disciplines recognize the subtleties of feedback is an essential part of these efforts to change the context in which feedback plays out.
Despite the positive changes we report, there is room for improvement. Well-intended efforts, such as Feedback Friday, may have unintended consequences.26 It may be worthwhile to emphasize in-the-moment feedback as part of teaching in clinical contexts because this type of implicit learning is particularly powerful and pervasive.14,27–30 In-the-moment feedback itself may not be time intensive, but time should be protected for reviewing what was learned from in-the-moment feedback and situating that learning in residents’ developmental trajectory. Although residents were aware of in-the-moment feedback, whether this awareness is maintained when it comes time to formally evaluate attending physicians on their teaching and/or feedback performance is unclear.
As our findings suggest, the significance of contextual change comes into focus over time and may not be identifiable in cross-sectional studies or outside ethnography. Although we derived our findings from an ethnographic database spanning 10 years, we acknowledge our limitations. We observed morning rounds on general pediatric floors at 1 institution. Our findings may not apply to other pediatric institutions, other specialties, or other aspects of clinical training. We observed in-the-moment feedback but did not observe formal feedback. In the 2017 interviews, we asked participants to distinguish between feedback and evaluation and in so doing may have biased their responses. Finally, we limited our discussion of contextual change to those identified by residents and attending physicians; future researchers might explore evolving social norms or other accreditation mandates, which could also influence change.
By using follow-up ethnography to study contextual change, we report progress in achieving the data-driven recommendations made in 2007. Changes in the contexts in which pediatric education plays out may afford positive change in the perceptions of feedback and feedback behavior.
We thank Gail Slap, MD, MS, and Dan Schumacher, MD, MEd, for their insightful comments on the article. We especially appreciate the residents and general pediatric attending physicians who contributed to this study.
- Accepted October 20, 2017.
- Address correspondence to Dorene F. Balmer, PhD, Department of Pediatrics, The Children’s Hospital of Philadelphia, 34th St and Civic Center Blvd, 9NW72 Main Building, Philadelphia, PA 19104. E-mail:
FINANCIAL DISCLOSURE: The authors have indicated they have no financial relationships relevant to this article to disclose.
FUNDING: No external funding.
POTENTIAL CONFLICT OF INTEREST: The authors have indicated they have no potential conflicts of interest to disclose.
- Copyright © 2018 by the American Academy of Pediatrics