Participants in the second colloquium of the Residency Review and Redesign in Pediatrics (R3P) Project considered 3 primary questions: What is a “good doctor”? How do we make one? and How do we know when we have made one? Experts from other countries and other medical specialties helped participants wrestle with these most basic questions. Participants emerged with a better understanding of the utility of the different types of evaluation needed to determine resident competence. It was clear that the complexity of the task requires faculty education and development. Most important, it requires the ongoing commitment of all of pediatrics as we seek to link education directly to better health outcomes for children, adolescents, and young adults.
- Keywords: organizational innovation; program development
Uncertainties surrounding the evolving health care needs of children, the pediatric workforce, and financing structures that permeated the first colloquium of the Residency Review and Redesign in Pediatrics (R3P) Project carried over into the second colloquium, entitled “The Theory and Practice of GME and Certification.”1 In addition, the transformation in medical education that began with the Accreditation Council for Graduate Medical Education (ACGME) Outcome Project2 heralded another uncertainty: “Will this transformation create better doctors?” The ACGME has begun to redefine the “good doctor” by expanding the requirements for learning and assessment from the traditional focus on patient care and medical knowledge to include the application of knowledge and 4 additional competencies: interpersonal and communication skills; professionalism; practice-based learning and improvement; and systems-based practice. The latter 2 competencies embrace, respectively, reflection on practice with the intent of improvement and the ability to function within the context of the larger health care–delivery system. This transformation, along with the evolving and emerging health needs of children and adolescents, has called current models of training and assessment into question. The pediatric community is not alone in this self-study; the American Board of Internal Medicine and the American Board of Family Medicine are raising similar questions.3–6 In this article, we report on the theory and practice of graduate medical education (GME) by guiding the reader through the activities and content of the second R3P Project colloquium.
The goals of the colloquium were articulated by Dr Helena Davies from the United Kingdom, the keynote speaker and an expert in workplace-based assessments. She challenged the group with 3 questions:
What is a good doctor? Does the definition change with time and context? Will the good doctor of today have the same attributes and skills as the good doctor of tomorrow?
How do we make a good doctor? Do current training paradigms prepare pediatricians to meet the challenges of our patients? Will we be able to adapt them to meet the emerging needs of tomorrow's children? Is training designed with the end in mind, that is, are we adjusting our training requirements to achieve desired outcomes? Are we clear about what those outcomes are or will be?
How do we know when we have succeeded in making a good doctor? There is no gold standard for the good doctor. Assessing competence in the domains set forth by the ACGME without such a standard fits the definition of a “wicked problem,” which was highlighted in the first colloquium.7 In a world of closely monitored clinical activities and promotion policies that place research at the pinnacle, how do we capture the time and interest of faculty for teaching and assessing learners?
Participants engaged in a whirlwind of team simulations, in both small and large groups, aimed at addressing these questions. A critical group-process technique used in this colloquium was the “trade-show booth.” Topic experts engaged a small group of participants in brief interactive learning sessions and then opened the discussion to possible implications for the overall project. Group participants rotated among trade-show booths to facilitate learning about different ways to achieve educational outcomes. Major themes that emerged inspired team simulations and small-group discussions. The finale charged each group with taking the content of the colloquium and applying it to various training scenarios, ranging from the current situation to an array of possibilities.
The content of the 5 trade-show booths is summarized below.
Clinical Skills Assessment
Clinical skills assessment (CSA) is now a mandatory part of Step 2 of the United States Medical Licensing Examination (USMLE). It uses standardized patients to test the ability of a medical student to gather information, perform a physical examination, and communicate findings to patients and colleagues. Video clips were used to highlight the critical need for such a tool and to relate it to “Good Medical Practice: USA,” a document developed by the National Alliance for Physician Competence.8 CSA has been demonstrated to be reliable and valid; however, our expert, Dr Ann Jobe, was quick to remind us that CSA should be complemented with skills testing in the workplace, connecting us to the content of our next trade-show booth.
Workplace-Based Assessment
Lessons about workplace-based assessment from the United Kingdom were presented by Dr Helena Davies. Both CSA and workplace-based assessment provide information about progress and enhance performance through feedback. Both occur in the context of clinical practice and help assess skills within the framework of Miller's pyramid for assessing clinical competence.9 At the base of the pyramid is simple knowledge. The next levels of the pyramid are “knows how” and “shows how,” with “does” at the top. An example of an assessment method for “does” would be an audit of immunization rates within a practice, as compared with a quiz that measures knowledge of recommended immunization schedules. In addition to pointing out the value of workplace-based, real-time assessment, Dr Davies noted that methodologies must be practical. The utility of any assessment method depends on reliability, validity, cost, feasibility/acceptability, and educational impact, but the most reliable and valid assessment tool will be useless if cost and/or feasibility prevent implementation.10 Validity (measuring what one wants to measure), however, is a challenge; there is no gold standard for the measurement of the good doctor.11 Dr Davies cautioned against judging outcomes exclusively on the basis of individual performance.12 Patient outcomes derive from the interaction of many health care professionals. Concerns about reliability (reproducibility) in workplace assessment can be addressed by wide sampling.13
Portfolio Learning and Assessment
Dr Carol Carraccio described a learning portfolio as a physical or electronic repository of such items as learner self-assessments, individualized learning plans, documentation of progress toward learning objectives, performance evaluations, documentation of bidirectional feedback between supervisors and residents, and tracking of formal resident and supervisor responses to important or “critical” positive and negative incidents during residency.14 It is ideal in that it captures, in real time, developmental progress along an educational continuum. It also provides a practical approach to measuring competence in the 6 broad ACGME domains. Assessment of competence is a complex process that requires multiple methods and assessors and ongoing feedback.13–15 Methods should target the task at hand. For example, to determine whether a resident can access, analyze, and apply best evidence to patient care, observation of a resident-led, evidence-based journal club would be appropriate. The assessors should be matched to the skill being assessed. Health care team members, including patients and parents, must assess resident communication skills. Web-based portfolios provide a feasible solution to the volume of assessments and diversity of assessors. They also provide a practical approach to ongoing feedback through threaded discussions that facilitate self-assessment and reflection, particularly during guided review of the portfolio with a mentor. The active engagement of the learner is a requisite for both outcomes-based education and portfolio assessment and makes the portfolio approach attractive. Engaging learners in portfolio learning and assessment enables them to direct their professional development. The ultimate vision of a portfolio will be realized when pediatric learners open their portfolios on entrance to medical school and close them at the end of their professional careers.
Maintenance of Certification
Ongoing self-assessment and learning to improve practice and ensure quality of care are the essence of maintenance of certification (MOC). A study by Mangione-Smith et al,16 which demonstrated that, on the basis of quality indicators, only 46% of children receive appropriate care, reminds us of the quality gaps in current practice. Dr Paul Miles described the American Board of Pediatrics MOC requirements in 4 domains: professional standing; lifelong learning and self-assessment; cognitive expertise; and performance in practice. He emphasized the importance of integrating “medical education with the delivery of quality care so that students see and participate in ongoing assessment and improvement of care that is safe, timely, effective, efficient, patient-centered, and equitable.”1,17 Key messages included the importance of patient outcome measurement in evaluation of training experiences and pediatric practice and the role of MOC in encouraging lifelong learning, self-assessment, and, ultimately, better outcomes for patients and families.
Faculty Development
For all assessment systems, the evaluator must be able to evaluate performance and provide feedback, hence the need for intensive faculty development. Dr Eric Holmboe emphasized that evaluation should be something you do “with” trainees, not “to” them, and that assessment is most beneficial when it is incorporated into daily teaching and linked to feedback. The elements of an effective evaluation system include (1) clear purpose, (2) clear definition of what is to be evaluated, (3) appropriate training of evaluators, (4) timeliness, (5) transparency, and (6) reliable processes to disseminate and collect evaluations. A practical approach to faculty development involves joint ventures between residents and faculty, because both are called on to assess junior colleagues.
BRIDGING THE TRADE-SHOW BOOTHS
Participants emerged from the trade shows with a sense that (1) national standards of CSA are necessary but not sufficient to ensure good medical practice, (2) formal assessments must be paired with workplace assessments, (3) trainees must be proactive in driving their learning and should view residency as only one phase in a continuum of learning and self-assessment, (4) portfolio learning and assessment are practical ways of documenting professional development across the continuum of education and instilling the habits needed to maintain certification, and (5) faculty development must be addressed at every step.
Armed with knowledge provided by experts, teams reached consensus around major themes. First, R3P participants were better able to evaluate various aspects of assessment. Assessment represents a partnership, and completion of assessment is a responsibility held jointly by faculty and residents. A comprehensive picture of competence as it develops over time requires multiple assessment tools, each tailored to fit the task, and multiple assessors. The latter should include learner self-assessment, despite its deficiencies.18 According to Holmboe et al, “a portfolio isn't a portfolio unless it contains substantial evidence of self-assessment and reflection on the part of the trainee.”19 The use of multiple tools and evaluators is illustrated in Table 1. Learners would be better served if tools were more standardized across programs and across the continuum of learning. Portfolios hold real promise in that regard. Tools that assess teams are needed to complement assessment of individuals. The goal should be to start with the end in mind, mapping back from the skills required in practice and for MOC to determine the skills and assessments needed during GME.
Second, participants recognized the need for partnering with others. The list of potential partners included internal and family medicine, which have both initiated a similar process; the Association of Pediatric Program Directors, the Committee on Medical Student Education in Pediatrics, the Association of Medical School Pediatric Department Chairs, and the National Association of Children's Hospitals and Related Institutions; specialty societies; accrediting/credentialing bodies; and medical organizations that transcend specialties such as the American Medical Association, the Association of American Medical Colleges, the Institute for Health Care Improvement, the National Board of Medical Examiners, and the Alliance for Physician Competence. Learners, faculty, practicing pediatricians, and allied health professionals are also key stakeholders.
Third, the threads regarding promotion of innovation in residency training that emerged during colloquium II focused on the educational continuum. A foundation in generic aspects of competencies such as professionalism and communication would be established during medical school. Residency training would consist of a core that is both time and competency based; that is, it would require a minimum period for maturation, with progression to the next phase based on demonstrated competence. Residency would allow for flexibility in experiential learning based on career path: once core training was complete, the remainder of training would be individualized. The early years of practice would provide further structured learning to fill gaps and maintain needed skills. In designing innovations, the pediatric community must engage research networks in ongoing evaluation of these new approaches to link education and patient care outcomes. Portfolios have the potential to facilitate development throughout the structured educational continuum and subsequent professional practice. The ACGME learning portfolio will be disseminated nationally to all GME programs that wish to use this assessment system.20
Fourth, participants recognized that competition for limited resources will continue. The changing landscape of health care delivery requires a blueprint for the needs of children and adolescents and a focus of education and training on these needs. Two issues remain: for faculty, the critical need for faculty development, and for learners, the balance between education and service.
In addition, participants acknowledged the importance of seeing residency as part of an educational continuum. To improve pediatric residency education, residency educators must reach back to medical school and forward to the transition into clinical practice. Viewing undergraduate and graduate medical education as “silos” facilitates neither efficient nor effective learning. The philosophy that structured learning ends with residency is antithetical to the concept of an educational continuum. Rather, the graduating resident needs to be thought of as a “journeyman” on the way to mastery. The slope of the learning curve during the transition years into practice mimics the steepness of the curve as one transitions from medical student to intern. Gaps in clinical skills and medical knowledge must be identified and addressed. Care must be both family centered and efficient. Content areas such as practice management and the business of medicine are more relevant during the early years of practice than during training. Furthermore, principles of practice-based learning and improvement learned in residency may be better developed during practice. Communication skills need to be strengthened to foster the rapport needed to provide true continuity of care within a medical home, delivering care that is accessible, continuous, comprehensive, family centered, coordinated, compassionate, and culturally effective.21 It is critical to understand both the microsystems and macrosystems of care. Microsystems are “units of people—including the patient—and processes that deliver care at the front line, such as the pediatric unit, the physician's office, or nursing provided in the patient's home.”22 Macrosystems “provide the microsystems with the resources, support processes, and structure to deliver care.”23 Finally, judicious use of consultants and referrals must be incorporated.
These themes raise the unanswered and controversial question of timing of certification. Should the American Board of Pediatrics continue to grant certification after successful completion of residency training and a passing score on the certification examination, or should the certificate be awarded later, with evidence that competence has been achieved in the context of practice?
After 2 intense days, participants came full circle to answer the 3 questions: What is a good doctor? How do we make one? and How do we know when we have made one? First, we must measure knowledge, skills, and attitudes. Second, we must address medical education as a developmental process that begins in medical school and continues throughout one's career—a journey rather than a destination. Third, we must use appropriate tools to assess performance. An array of tools aligned with the task at hand, involving a variety of assessors (including the learner), will provide the palette of evaluations needed to make a useful determination. Faculty development is needed at every step. Finally, answers are never final. The quest for answers must be a preoccupation of all educators in an era of outcomes-based education and training.
We thank our keynote speaker, Helena Davies, MBChB, MD, the leader of the research and quality assurance arm of the National Foundation Assessment Program in the United Kingdom. We are also grateful to Ann Jobe, MD, MSN, executive director of the Clinical Skills Evaluation Collaboration, a collaborative initiative between the Educational Commission for Foreign Medical Graduates and the National Board of Medical Examiners; Eric Holmboe, MD, vice-president for evaluation and quality research at the American Board of Internal Medicine and director of the Evaluation of Clinical Competence Faculty Development course at the American Board of Internal Medicine; and Paul Miles, MD, vice-president and director of quality improvement and assessment programs in pediatric practice at the American Board of Pediatrics. We are grateful to Robert Hilliard, MD, EdD, professor of pediatrics at the Hospital for Sick Children (Toronto, Ontario, Canada) and a member of the Royal College of Physicians and Surgeons of Canada, for helping bring closure to the colloquium by providing reflections on medical education and performance assessment in Canada.
- Accepted September 22, 2008.
- Address correspondence to Carol Carraccio, MD, MA, University of Maryland, Department of Pediatrics, Room N5W56, 22 S Greene St, Baltimore, MD 21201. E-mail:
The authors have indicated they have no financial relationships relevant to this article to disclose.
- ↵American Board of Pediatrics. R3P Project colloquium II: theory and practice of GME and certification. Available at: www.innovationlabs.com/r3p_public/rtr2. Accessed April 17, 2008
- ↵Accreditation Council for Graduate Medical Education. Outcome Project: enhancing residency education through outcomes assessment. Available at: www.acgme.org/Outcome. Accessed March 31, 2008
- ↵David AK. Preparing the Personal Physician for Practice (P4): residency training in family medicine for the future. J Am Board Fam Med. 2007;20(4):332–341; discussion 329–331
- ↵National Alliance for Physician Competence. Good medical practice: USA. https://gmpusa.org/default.asp. Accessed April 17, 2008
- ↵Norcini JJ. Work based assessment. BMJ. 2003;326(7392):753–755
- ↵Holmboe ES, Davis MH, Carraccio C. Portfolios. In: Holmboe ES, Hawkins RE, eds. Practical Guide to the Evaluation of Clinical Competence. Philadelphia, PA: Mosby Elsevier; 2008:86–101
- ↵Accreditation Council for Graduate Medical Education. ACGME Learning Portfolio. Available at: www.acgme.org/acWebsite/portfolio/learn_cbpac.asp. Accessed September 1, 2008
- Copyright © 2009 by the American Academy of Pediatrics