Pediatrics
May 2017, Volume 139, Issue 5

Inpatient Hospital Factors and Resident Time With Patients and Families

  1. Lauren Ann Destino, MDa,
  2. Melissa Valentine, PhDb,
  3. Farnoosh H. Sheikhi, MSc,
  4. Amy J. Starmer, MD, MPHd,
  5. Christopher P. Landrigan, MD, MPHe,f,g, and
  6. Lee Sanders, MDh,i
  a. Divisions of Pediatric Hospital Medicine and
  h. General Pediatrics, Department of Pediatrics, School of Medicine and
  b. Departments of Management Science and Engineering,
  c. General Medical Disciplines, Stanford University, Stanford, California;
  d. Division of General Pediatrics, Department of Medicine, Harvard Medical School, Boston, Massachusetts;
  e. Division of General Pediatrics, Department of Medicine, Boston Children’s Hospital, Harvard Medical School, Boston, Massachusetts;
  f. Division of Sleep Medicine and
  g. Center for Patient Safety Research and Practice, Division of General Medicine, Department of Medicine, Brigham and Women’s Hospital, Boston, Massachusetts; and
  i. Center for Health Policy, Primary Care Outcomes Research Center, Stanford, California
  1. Dr Destino conceptualized and designed the study, designed the data collection instruments, and drafted the initial manuscript; Dr Valentine helped design the study and reviewed and revised the manuscript; Dr Sheikhi carried out the initial analyses and reviewed and revised the manuscript; Drs Starmer and Landrigan designed the data collection instruments, coordinated and supervised data collection across all 9 study sites, and critically reviewed the manuscript; Dr Sanders conceptualized and designed the study, reviewed the data collection instruments, and reviewed and revised the manuscript; and all authors approved the final manuscript as submitted.

Abstract

OBJECTIVES: To define hospital factors associated with proportion of time spent by pediatric residents in direct patient care.

METHODS: We assessed 6222 hours of time-motion observations from a representative sample of 483 pediatric resident physicians delivering inpatient care across 9 pediatric institutions. The primary outcome was percentage of direct patient care time (DPCT) during a single observation session (710 sessions). We used one-way analysis of variance to assess differences in the mean percentage of DPCT between hospitals. We used intraclass correlation coefficient analysis to determine within- versus between-hospital variation. We compared hospital characteristics of observation sessions with ≥12% DPCT to characteristics of sessions with <12% DPCT (12% is the DPCT in recent resident trainee time-motion studies). We conducted mixed-effects regression analysis to allow for clustering of sessions within hospitals and accounted for correlation of responses across hospitals.

RESULTS: Mean proportion of physician DPCT was 13.2% (SD = 8.6; range, 0.2%–49.5%). DPCT was significantly different between hospitals (P < .001). The intraclass correlation coefficient was 0.25, indicating more within-hospital than between-hospital variation. Observation sessions with ≥12% DPCT were more likely to occur at hospitals with Magnet designation (odds ratio [OR] = 3.45, P = .006), lower medical complexity (OR = 2.57, P = .04), and higher patient-to-trainee ratios (OR = 2.48, P = .05).

CONCLUSIONS: On average, trainees spend <8 minutes per hour in DPCT. Variation exists in DPCT between hospitals. A less complex case mix, increased patient volume, and Magnet designation were independently associated with increased DPCT.

  • Abbreviations:
    ACGME
    Accreditation Council for Graduate Medical Education
    DPCT
    direct patient care time
    EHR
    electronic health care record
    I-PASS
    illness severity, patient summary, action list, situational awareness and contingency planning, synthesis by receiver
    LOS
    length of stay
    OR
    odds ratio
    RA
    research assistant
  • What’s Known on This Subject:

    Resident time in direct patient care has declined despite being an essential element of physician training.

    What This Study Adds:

    Resident proportion of time in direct patient care across 9 hospitals was 13.2%, with significant variation within and between hospitals. Magnet status, low patient complexity, and increased patient numbers per trainee were associated with more time in direct patient care.

    The introduction to the Accreditation Council for Graduate Medical Education (ACGME) common program requirements for residency training states, “For the resident, the essential learning activity is interaction with patients under the guidance and supervision of faculty members. . .”1 A physician must build relationships with patients and families, gather information to make diagnoses, and share information, including recommendations, plans, and instructions.2 Good communication benefits patients and physicians by improving treatment adherence, illness understanding, and overall satisfaction for patients, and is associated with less stress, improved job satisfaction, and decreased malpractice claims for physicians.3,4

    Despite the importance of patient and family interaction, time spent by residents in direct patient care has declined from 36% in studies (time-motion and activity logs) before the 2003 ACGME duty hour requirements to 12% in time-motion studies after 2011.5–7 Although the increased number of handoffs as a result of duty hours8 could be a factor by displacing direct patient care time (DPCT), this relationship has not been directly studied. Some authors have suggested that demands placed on residents by the ubiquity of computers may be contributing to this decline.4,7,9,10 However, studies looking at time spent with patients before and after implementation of an electronic health care record (EHR) have not necessarily demonstrated decreased DPCT.11,12 Other studies have analyzed the impact of restructuring inpatient resident teams to optimize DPCT, with mixed results.13,14 In general, time-motion studies involving trainees have been limited to ≤2 sites, with little data from pediatrics.

    The objective of this study was to examine hospital-level factors associated with the proportion of resident DPCT in the inpatient setting. We hypothesized that variation in DPCT would be greater at the hospital level than at the observation level. Based on a number of factors that have been proposed to affect physician workload15 and the authors’ knowledge and experience, we also hypothesized that the following hospital characteristics would be associated with increased DPCT: lack of a comprehensive EHR, increased number of patients relative to trainees, less complex patients, shorter lengths of stay (LOS), localization (by patient unit) of patients on the team, no patient cap, and no in-house attending overnight.

    Methods

    Design

    We performed a cross-hospital comparison of time-motion observations of inpatient activities of pediatric house staff, collected as part of the I-PASS (illness severity; patient summary; action list; situational awareness and contingency planning; synthesis by receiver) handoff study.16

    Sample

    We examined 6222 hours of activities from 710 observation sessions for 483 physicians across 9 institutions between January 2011 and June 2013. Observation sessions occurred on the I-PASS study teams within each institution.

    Procedure and Data Sources

    Research assistants (RAs) shadowed a single intern or supervising resident, recording every observed activity during a continuous observation session lasting ∼8 to 12 hours. If the trainee handed off to a subsequent trainee during the observation session, the RA would continue the session with that subsequent trainee. Observations occurred only at the hospital; no activities away from the hospital were observed. Observation sessions were performed at various times of the day, night, and week to obtain representative coverage of all 24 hours, weekdays, and weekends, divided between interns and residents (postgraduate year 2 and greater). RAs were all trained by the same study investigator using a standard document describing the details of each activity in the time-motion database. Activities were recorded by using an adapted Microsoft Access database (Microsoft Corporation, Redmond, WA)16,17 that included 12 major categories of activities (Fig 1), including subdomains detailing 114 minor activities. Using this database, the RA could quickly start and end observations of individual activities as they occurred during the session.
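
    To make the structure of these observations concrete, the sketch below (in R, the software used for the analysis; all field names and values are hypothetical, not the actual Access schema) shows what one session’s activity records might look like: each row has a start time, an end time, one of the 12 major categories, and a minor activity.

        # Hypothetical activity records for one observation session
        # (illustration only; the real data were stored in an adapted
        # Microsoft Access database with its own field names).
        session_records <- data.frame(
          session_id     = "H03-S041",
          trainee_role   = "intern",
          major_category = c("Patient/family contact", "Documentation"),
          minor_activity = c("Physical exam", "Progress note"),
          start_time     = as.POSIXct(c("2012-03-05 08:12", "2012-03-05 08:31")),
          end_time       = as.POSIXct(c("2012-03-05 08:27", "2012-03-05 08:55"))
        )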

    FIGURE 1

    Time-motion database.

    The Institutional Review Board at Stanford University approved this secondary analysis. The original time-motion data were gathered after institutional review board approval at all 9 hospitals.

    Measures

    The primary outcome was the proportion of time all residents spent in DPCT (in person) within an observation session. This proportion was computed by summing the total minutes spent directly with patients during the observation session and then dividing this value by the total minutes observed across all activities, including DPCT. We defined DPCT (patient/family contact) as any of the following activities described in the study RA’s codebook: obtaining a patient history, casual conversation, physical exam, explaining a plan, educating a patient/family, obtaining consent or advanced directives, procedures (intravenous line insertion, phlebotomy, and other), unspecified (trainee with patient/family but RA outside the room), and other.
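
    As an illustration of this calculation (a minimal sketch, not the study’s analysis code), the session-level proportion can be computed from activity records in the hypothetical format shown earlier: DPCT minutes divided by total observed minutes.

        # Proportion of observed time spent in direct patient care for one
        # session: minutes in patient/family-contact activities divided by
        # total observed minutes across all activities (including DPCT).
        dpct_proportion <- function(records) {
          minutes <- as.numeric(difftime(records$end_time, records$start_time,
                                         units = "mins"))
          is_dpct <- records$major_category == "Patient/family contact"
          sum(minutes[is_dpct]) / sum(minutes)
        }
        dpct_proportion(session_records)  # 15 DPCT min / 39 observed min, ~0.38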

    Independent variables assessed included hospital-level characteristics that have been proposed to affect physician workload.15 At each hospital, demographic data from all patients on the study teams over the study period were collected. However, these data were not specific enough to assign individual patients to specific observation sessions with accuracy (the I-PASS study period occurred over ∼1 year at each site, but the sessions were at random times within this period). Thus, the mean patient LOS and percentage of patients with complexity (percentage of total patients on the study teams with complex chronic conditions) were obtained from all patients on the I-PASS study teams at the site.16 To assess the comprehensiveness of the EHR,17 a modified version of the American Hospital Association Information Technology supplement was sent electronically to chief medical information officers at each hospital in October 2013. To obtain other hospital-level characteristics not already available from I-PASS data, each site’s chief resident was sent an electronic survey in June 2014. Because the patient census during an observation session was not available, the patient-to-trainee ratio was assessed by dividing the total number of patients seen on the study team over the entire 12-month study period (from I-PASS data16) by the number of trainees on the team on a typical weekday (from the chief resident survey). This calculation was done because some hospitals had more patients, but also more residents, on a typical weekday. Other hospital-level factors, obtained from the chief resident survey, included the presence of an admitting team, patient localization (patients mostly on 1 unit versus multiple units), system for increased patient volume (either a cap on the team’s total patient number or adding more providers when the team’s patient numbers exceed a set number), and whether a team attending was in house past 6 pm. Hospital structure (free-standing children’s hospital versus non–free-standing) was obtained from the Children’s Hospital Association (www.childrenshospitals.org), and Magnet status, a nursing designation for superior nursing and quality patient care, was obtained from www.nursecredentialing.org/Magnet/FindaMagnetFacility.
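
    As a worked example of the patient-to-trainee ratio (using assumed numbers, not data from any study site) and its dichotomization at the >200 cut point used in the analysis:

        # Hypothetical hospital-level values, for illustration only
        patients_on_team_12mo <- 1300  # study-team patients over the 12-month period
        trainees_typical_day  <- 5     # trainees on the team on a typical weekday
        ratio      <- patients_on_team_12mo / trainees_typical_day  # 260 patients per trainee
        high_ratio <- ratio > 200                                   # TRUE: "high ratio" group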

    Statistical Analysis

    Because each observation session may have included multiple physicians (observed in succession, not in parallel), we considered the overall average of their percentage of DPCT for each unique session. We used one-way analysis of variance to compare differences in the mean percentage of DPCT between hospitals. We used intraclass correlation coefficient analysis to determine within-hospital versus between-hospital variation. This analysis measures the similarity of values (observation sessions) within a cluster (individual hospitals). More variation exists between hospitals, compared with variation within hospitals, when the intraclass correlation coefficient is higher (closer to 1).18 We used descriptive statistics to describe hospital characteristics and assess bivariate relationships between sessions with ≥12% DPCT and those with <12% DPCT. We chose 12% a priori based on the most recent time-motion studies assessing inpatient trainees.5–7 We used the χ2 test to analyze differences between the 2 groups (≥12% vs <12%). We conducted 2 mixed-effects regression models assessing the association between significant hospital characteristics (bivariate analysis, P < .05) and the percentage of DPCT, with a random intercept corresponding to each hospital. In the first model, we considered patient characteristics, with fixed effects for patient complexity, LOS, and patient-to-trainee ratio. Because these characteristics were continuous variables, we dichotomized them based on their distribution across the 9 hospitals. In the second model, we considered hospital/team characteristics. Such models allowed for clustering of observational sessions within hospitals and accounted for correlation of responses across hospitals. The significant covariates from the bivariate analysis (P < .05) were subsequently tested for their impact on the percentage of DPCT with a stepwise mixed-effects logistic regression analysis to obtain odds ratio (OR) estimates. In examining the assumptions of regression, signs of multicollinearity were assessed by examining the correlation among covariates. The final regression models were adjusted for day versus night and weekday versus weekend.

    All tests were two-sided and a P value ≤.05 was considered statistically significant. We performed analyses using R statistical software, version 0.98.50 (www.r-project.org).
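
    A simplified sketch of this analytic sequence is shown below. It is illustrative only, not the authors’ code: it assumes a session-level data frame named sessions with the variables noted in the comments (0/1 indicators for the dichotomized covariates), and it uses the lme4 package as one common way to fit random-intercept models in R.

        library(lme4)

        # Assumed columns in `sessions`: pct_dpct (percentage of DPCT in the
        # session), high_dpct (1 if >=12% DPCT), hospital (factor, 9 levels),
        # and 0/1 covariates magnet, low_complexity, high_ratio, day_shift,
        # weekday.

        # One-way ANOVA of mean percentage of DPCT between hospitals
        summary(aov(pct_dpct ~ hospital, data = sessions))

        # Intraclass correlation coefficient from a random-intercept model
        icc_fit <- lmer(pct_dpct ~ 1 + (1 | hospital), data = sessions)
        vc  <- as.data.frame(VarCorr(icc_fit))
        icc <- vc$vcov[1] / sum(vc$vcov)  # between-hospital share of total variance

        # Screen covariates for multicollinearity before modeling
        cor(sessions[, c("magnet", "low_complexity", "high_ratio")])

        # Mixed-effects logistic regression for >=12% DPCT with a random
        # intercept per hospital, adjusted for day/night and weekday/weekend
        logit_fit <- glmer(high_dpct ~ magnet + low_complexity + high_ratio +
                             day_shift + weekday + (1 | hospital),
                           data = sessions, family = binomial)
        exp(fixef(logit_fit))  # odds ratio estimates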

    Results

    The overall mean proportion of DPCT per observational session was 13.2% (SD = 8.6; range, 0.2%–49.5%). The hospital-by-hospital comparison of the percentage of DPCT is shown in Fig 2. There was a significant difference in the percentage of DPCT between hospitals (F = 25.7, P < .001). The average DPCT at the lowest site was ∼5.5 minutes per hour, whereas the highest was ∼14.5 minutes per hour. The intraclass correlation coefficient, adjusted for resident level (resident versus intern and day versus night were associated with DPCT in a previous analysis of a subset of the I-PASS time-motion data19) and the time of day of the observation session, was 0.25 (95% confidence interval, 0.12–0.54), indicating more variation within than between hospitals.

    FIGURE 2

    Percentage of DPCT by hospital.

    Figures 3 and 4 show the distribution of the hospital characteristics. The hospital characteristics, patient-related factors, team characteristics, and their association with the percentage of DPCT are shown in Table 1. All the predictors were significantly associated with the percentage of DPCT except for having an admitting team, the presence of an attending physician after 6 pm, and a comprehensive EHR. Observation sessions with a high percentage of DPCT (≥12%) were more likely to occur at hospitals with less patient complexity (<50% of patients with complex chronic conditions), shorter LOS (average LOS <4 days), and increased patient-to-trainee ratios (patient-to-trainee ratio >200). Observation sessions with ≥12% DPCT were also more likely at non–free-standing children’s hospitals, hospitals where patients are not localized, hospitals with Magnet designation, and hospitals with a system to manage increased patient volume. The time of day (day versus night) and the day of week (weekday versus weekend) had no association with high DPCT, although the time of day was nearly significant (P = .08).

    FIGURE 3

    Distribution of patient factors by hospital. Solid lines indicate hospitals with higher patient-to-trainee ratios (>200), longer LOS (>4 days), and more patients with complex chronic conditions (>50%). aA complex chronic condition was defined to be present for each patient whose condition could be classified as belonging to 1 of the 3 commonly published categories based on International Classification of Diseases, Ninth Revision, diagnostic and procedural codes: a complex chronic condition, neurologic impairment, or a condition for which technological assistance was required.

    FIGURE 4

    Distribution of team and hospital characteristics.

    TABLE 1

    Distribution of Hospital Characteristics Between Sessions With High and Low DPCT (N = 710 Sessions)

    Table 2 presents the final mixed-effect models analyzing the association between predictor variables and ≥12% DPCT. Although these models performed well, there was collinearity between free-standing children’s hospital and patients localized to 1 unit (r2 = 0.8) and LOS and patient complexity <50% (r2 = 0.7). Therefore, the localization variable and LOS were dropped from the analysis to increase parameter estimate precision. The final model indicated that observation sessions with ≥12% DPCT were more likely at Magnet hospitals (OR = 3.45, P = .006) and in settings with lower child medical complexity (OR = 2.57, P = .04) and higher patient-to-trainee ratios (OR = 2.48, P = .05). There were no statistically significant differences in the team-level factors. Of note, the hospital with the highest DPCT met all 3 hospital characteristics (Magnet, low complexity, and high patient volumes relative to trainees).

    TABLE 2

    Mixed-Effect Models of Patient Care Time and Team/Hospital and Patient Characteristics

    Magnet status remained significant after adjusting for time of day (day versus night) and day of the week (weekday versus weekend). However, patient-to-trainee ratio and medical complexity were only borderline significant after adjustment (P = .06 and P = .059, respectively).

    Discussion

    Pediatric physician trainees in hospital-based settings spend ∼13% of their time, <8 minutes per hour in aggregate, in DPCT. This percentage of DPCT is similar to that reported in recent time-motion studies of physician trainees. However, we did find that DPCT varies significantly between hospitals, a difference of ∼9 minutes per hour between the hospitals with the lowest and highest DPCT. Additionally, there is more variation within hospitals than between hospitals, even after adjusting for time of day and resident training level. Among the 9 hospitals, observation sessions with greater DPCT were more likely at hospitals with Magnet designation, lower patient medical complexity, and higher patient volumes relative to the number of trainees. To our knowledge, this is the largest and most comprehensive time-motion study to assess hospital-level characteristics associated with the amount of time spent by physicians in direct communication with patients and families.

    Although quantity and quality of time do not necessarily correlate, our main results suggest several hypotheses and attendant opportunities to improve DPCT. The direct association between increased patient volume and increased DPCT may result simply from greater exposure: more admission histories and physicals, more daily visits, and more discharge discussions.7 Similarly, the association between lower medical complexity and increased DPCT may result from the “newness” of patients to the hospital system: less previous information in the medical record requires more DPCT to gather essential information. In contrast, complex patients require more care coordination,20 and thus more interprofessional communication, documentation, and scheduling may occur relative to direct family communication. In fact, in an analysis of a subset of these time-motion data, trainees spent the most time in interprofessional communication relative to all other categories.19 Also, given the relative frequency of hospitalizations, families of complex pediatric patients may not be present as frequently as families of children hospitalized for a one-time illness.

    Magnet status has been associated with better patient experience and increased quality.21–23 Significant interprofessional resources are needed to support the nursing structure because the Magnet model includes “exemplary professional practice,” stressing efficient patient services, highly educated nursing staff, and strong interprofessional teams.24 These resources may also benefit physicians by freeing up time for patient/family interactions. Although Magnet is a nursing designation, the focus on patient-centered outcomes may alter the entire hospital culture. In addition, a qualitative study evaluating physician perceptions of Magnet nurses indicated that those nurses may be more likely to call a physician about patient issues.25 These calls could lead to more DPCT because physicians need to address issues directly with the family. Of note, although we found no hospital-level association between a comprehensive EHR and DPCT, previous studies have demonstrated variation in the size and direction of this association.10,11,26–32

    Importantly, the intraclass correlation coefficient revealed more within-hospital than between-hospital DPCT variation. This finding aligns with previous studies demonstrating that different teams and units in the same hospital varied significantly on organizational outcomes, such as norms, teamwork, collaboration, coordination, and safety culture.33–36 This result also suggests a number of hypotheses about the microcosms of hospital care, with implications for inpatient care and training. Perhaps, because trainee workload varies day to day and resident to resident, so, too, do opportunities for meaningful patient interaction.37 Also, some residents, teams, or team leaders (residents and/or attending supervisors) may value and reward DPCT more than others. Additionally, although residents may value DPCT, other competing interests, such as scholarly pursuits, education (learning and teaching), or personal obligations, may take some away from the bedside more than others. Additional research and future interventions designed to improve doctor-patient communication should consider these microcosms of care, including the scope and intensity of work across different team-care cultures.38

    Our study has a number of limitations. First, our analysis was performed at the level of the RA observation session. Within each session, >1 trainee was often observed, and we did not assess individual physician factors that may have contributed to variation. Also, between and within each session at a particular hospital, the number of patients, number of admissions and discharges, and patient complexity almost certainly varied, but we cannot account for this variation at the observation session level. Although the number of hospitals analyzed exceeds that of previous studies, the total number is still small, limiting the power of our associations. In assessing a number of hospital-level factors, there was a lack of standardization in definitions (eg, patient complexity and patient localization), and the chief resident and chief medical information officer surveys were conducted retrospectively. Also, there may be additional hospital-level factors not assessed, confounding our results. Observations did not occur at the same time of year, and there may be variations in patient numbers and in how trainees spend their time depending on when they were observed within the academic calendar. We also did not account for multitasking, which has been found to be high in some time-motion studies of providers and may increase or decrease DPCT depending on the bias of the RA.39 Finally, because experiential learning appears to be valued by the ACGME and learners alike, we did not comprehensively analyze education time between sites, which may provide additional learning.1,40

    Because pediatric resident trainees are often at large free-standing children’s hospitals with increasing patient complexity, the ACGME’s “essential learning activity” may be lost to other tasks, such as coordination of care. Improving the interprofessional team within the nursing structure may also benefit physicians, enabling time to be reallocated to patient interactions, which can ultimately enhance patient care. Other providers, such as care coordinators and/or scribes, may also free up resident time for more patient interactions. Thinking through the full 3-year resident experience to ensure frequent rotation experiences with robust opportunities for meaningful DPCT will be essential. Many tertiary care children’s hospitals offer rotations at alternative sites with more and less complex patients, so although one inpatient experience may contain little DPCT, another may contain much more. Although we did not find an association between localization of patients and DPCT, patient localization does not mean physicians work in the space where the patients are located. Changing the environment to allow for colocation of patients and providers may encourage more DPCT: a nurse’s question may lead to an in-person patient assessment rather than only a review of EHR data, a resident may choose to enter the room with the consultant to ensure all team members have a shared mental model, or a family member may reach out about a question he or she keeps forgetting to ask on rounds. Finally, because Magnet status may alter hospital culture, promoting the importance of patient and family interaction within the training program may alter resident culture.

    Conclusions

    We observed an overall low but variable percentage of DPCT, equivalent to <8 minutes per hour of physician time on average and ranging from ∼5.5 to ∼14.5 minutes per hour by site. Hospital-level factors, including lower patient complexity, higher numbers of patients per trainee, and a hospital’s Magnet status, are significantly associated with increased DPCT. Because DPCT is critical to improving patient care41 and physician training, future efforts should explore causes of within- and between-hospital variation, especially modifiable factors associated with increased DPCT.

    Acknowledgments

    We thank the Academic Pediatric Association’s Research Scholar’s Program as well as the I-PASS Study Group for their support. The I-PASS Study Group: Boston Children’s Hospital (primary hospital): April D. Allen, MPA, MA (currently at Heller School for Social Policy and Management, Brandeis University), Elizabeth L. Noble, BA, and Lisa L. Tse, BA; Boston Children’s Hospital/Harvard Medical School (primary hospital): Angela M. Feraco, MD, Christopher P. Landrigan, MD, MPH, Theodore C. Sectish, MD, and Amy J. Starmer, MD, MPH; Benioff Children’s Hospital/University of California San Francisco School of Medicine: Glenn Rosenbluth, MD, and Daniel C. West, MD; Brigham and Women’s Hospital (Data Coordinating Center): Anuj K. Dalal, MD, FHM, Carol A. Keohane, MS, RN, Stuart Lipsitz, PhD, Jeffrey M. Rothschild, MD, MPH, Matt F. Wien, BS, Catherine S. Yoon, MS, and Katherine R. Zigmont, BSN, RN; Cincinnati Children’s Hospital Medical Center/University of Cincinnati College of Medicine: Javier Gonzalez del Rey, MD, MEd, Jennifer K. O’Toole, MD, MEd, and Lauren G. Solan, MD; Doernbecher Children’s Hospital/Oregon Health and Science University: Megan Aylor, MD, Amy J. Starmer, MD, MPH, Windy Stevenson, MD, and Tamara Wagner, MD; Hospital for Sick Children/University of Toronto: Zia Bismilla, MD, FRCPC, MEd, Maitreya Coffey, MD, FRCPC, and Sanjay Mahant, MD, MSc, FRCPC; Lucile Packard Children’s Hospital Stanford/Stanford University: Rebecca L. Blankenburg, MD, MPH, Lauren A. Destino, MD, Jennifer L. Everhart, MD, Madelyn Kahana, MD (currently at Albert Einstein College of Medicine), Shilpa J. Patel, MD (currently at Kapi’olani Medical Center for Women and Children/University of Hawaii John A. Burns School of Medicine), and Paul J. Sharek, MD, MPH; Primary Children’s Hospital/Intermountain Healthcare/University of Utah School of Medicine: James F. Bale, Jr, MD, Jaime Blank Spackman, MSHS, CCRP, Rajendu Srivastava, MD, FRCPC, MPH, and Adam Stevenson, MD; St. Louis Children’s Hospital/Washington University School of Medicine: Kevin Barton, MD, Kathleen Berchelmann, MD, F. Sessions Cole, MD, Christine Hrach, MD, Kyle S. Schultz, MD, Michael P. Turmelle, MD, and Andrew J. White, MD; St. Christopher’s Hospital for Children/Drexel University College of Medicine: Sharon Calaman, MD, Bronwyn D. Carlson, MD, Robert S. McGregor, MD (currently at Akron Children’s Hospital/Northeast Ohio Medical University), Vahideh Nilforoshan, MD, and Nancy D. Spector, MD; Walter Reed National Military Medical Center/Uniformed Services University of the Health Sciences: Jennifer H. Hepps, MD, Joseph O. Lopreiato, MD, MPH, and Clifton E. Yu, MD.

    Footnotes

      • Accepted February 14, 2017.
    • Address correspondence to Lauren Ann Destino, MD, Department of Pediatrics, Stanford University, 300 Pasteur Dr, MC 5776, Palo Alto, CA 94304. E-mail: ldestino@stanford.edu
    • FINANCIAL DISCLOSURE: Dr Starmer reported receiving honoraria and travel reimbursement from multiple academic and professional organizations for delivering lectures on handoffs and patient safety. Dr Landrigan has served as a paid consultant to Virgin Pulse to help develop a Sleep and Health Program and is supported in part by the Children’s Hospital Association for his work as an Executive Council member of the Pediatric Research in Inpatient Settings network. In addition, Dr Landrigan has received monetary awards, honoraria, and travel reimbursement from multiple academic and professional organizations for teaching and consulting on sleep deprivation, physician performance, handoffs, and safety and has served as an expert witness in cases regarding patient safety and sleep deprivation. Drs Landrigan and Starmer have consulted with and hold equity in the I-PASS Institute, which seeks to train institutions in the best handoff practices and aid in their implementation. The other authors have indicated they have no financial relationships relevant to this article to disclose.

    • FUNDING: This study involved the secondary analysis of data collected during the I-PASS (illness severity; patient summary; action list; situational awareness and contingency planning; synthesis by receiver) study, which was supported by grant R18AE000029 from the US Department of Health and Human Services, Office of the Assistant Secretary for Planning and Evaluation. The content is solely the responsibility of the authors and does not necessarily represent the official views of the federal government. Additional funding for the I-PASS Study was provided by: Oregon Comparative Effectiveness Research K12 Program, grant 1K12HS019456 from the Agency for Healthcare Research and Quality; the Medical Research Foundation of Oregon; and the Physician Services Incorporated Foundation (of Ontario, Canada).

    • POTENTIAL CONFLICT OF INTEREST: The authors have indicated they have no potential conflicts of interest to disclose.

    References