OBJECTIVE: A Standardized Clinical Assessment and Management Plan (SCAMP) standardizes the care of patients with a predefined diagnosis while actively inviting and collecting data on clinician deviations (DEVs) from its recommendations. For 3 different pediatric cardiac diagnoses managed by SCAMPs, we determined the frequency of, types of, and reasons for DEVs, which are considered to be a valuable source of information and innovation.
METHODS: DEVs were collected as part of SCAMP implementation. DEVs were reviewed by the SCAMP committee chairperson and by a separate protocol deviation committee; they were characterized as either justifiable (J), possibly justifiable (PJ), or not justifiable (NJ).
RESULTS: We analyzed 415 patients, 484 clinic encounters, and 216 DEVs. Eighty-six (39.8%) of the DEVs were J, 21 (9.7%) were PJ, and 109 (50.4%) were NJ. The percentage of NJ DEVs relative to the number of opportunities for DEV was 4.1%. J and PJ DEVs were mostly due to management of unrelated conditions (11% overall) or special circumstances (22% overall). NJ DEVs primarily involved follow-up intervals (66%) and deleted tests (24%). The reason for deviating from SCAMP recommendations was not given for 31% of DEVs, even though such information was requested.
CONCLUSIONS: The overall low rate of NJ DEV suggests that practitioners generally accept SCAMP recommendations, but improved capture of practitioners' reasons for deviating from those recommendations is needed. This analysis revealed multiple opportunities for improving patient care, suggesting that this process can be useful in both promulgating sound practice and evolving improved approaches to patient management.
- AS —
- aortic valve stenosis
- ASO —
- arterial switch operation
- CPG —
- clinical practice guideline
- DEV —
- clinician deviation
- HCM —
- hypertrophic cardiomyopathy
- J —
- justifiable
- NJ —
- not justifiable
- PJ —
- possibly justifiable
- SCAMP —
- Standardized Clinical Assessment and Management Plan
What's Known on This Subject:
Adherence to guidelines has generally been shown to improve patient care and reduce the cost of care. Current understanding of the varying reasons why clinicians deviate from guidelines is based on surveys and retrospective reviews.
What This Study Adds:
We examined clinician deviations from guidelines in a prospective fashion and attempted to categorize those deviations. Better elucidation of clinician reasoning behind deviations may inform care improvement and help define strategies to eliminate unjustifiable deviations.
Clinical practice guidelines (CPGs) are intended to improve the quality of health care by standardizing care and promoting evidence-based medicine.1–4 Indeed, a variety of studies show that clinician adherence to guidelines can not only improve patient outcomes5–7 but also reduce the cost of care.8–10 Understanding the reasons why clinicians deviate from guidelines could be useful to both learn from deviations and prevent future unjustified ones; however, we are unaware of any prospective studies that examine this reasoning at the point of clinical decision-making.
A Standardized Clinical Assessment and Management Plan (SCAMP) is a quality improvement initiative that guides clinical decision-making and gathers relevant clinical data to inform future improvement of the SCAMP.11–13 Each SCAMP is developed by a multidisciplinary committee of physician and nursing experts for a particular medical condition. Similar in some ways to a CPG, a SCAMP provides a guideline that standardizes the assessment and management of patients with a specific disorder.
Unlike a CPG, however, SCAMP guidelines are flexible and are accompanied by a targeted data collection process that focuses attention on those parts of the SCAMP where the greatest uncertainty about care is likely to reside. Acknowledging this uncertainty inherent in its management recommendations, a SCAMP also actively invites and collects data on knowledge and experience-based clinician deviations (DEVs), which are recognized as a rich source of information and innovation.14 Patterns of DEVs and, in some cases, an individual DEV can reveal areas in which the SCAMP is deficient and therefore amenable to revision. A SCAMP thus permits learning from DEVs and values the preeminence of clinical acumen, which is hypothesized to improve acceptance relative to other forms of guidelines.15 Based on periodic review of DEVs, collected data, and updates in the medical literature, a SCAMP undergoes iterative and progressive modification of its care-delivery algorithm through a process of continuous improvement.
A fundamental component of the SCAMP improvement process is therefore a review of actual clinician practice relative to SCAMP protocol recommendations. We reviewed clinician management of patients with 3 different diagnoses covered by SCAMPs (hypertrophic cardiomyopathy [HCM], aortic valve stenosis [AS], and patients with d-loop transposition of the great arteries status-post arterial switch operation [ASO]) to determine (1) the frequency with which practitioners deviated from SCAMP recommendations, (2) the types of DEVs, and (3) the reasons that practitioners deviated. This analysis was undertaken to assess practitioner adherence to SCAMPs and to uncover opportunities to learn from deviations to improve the SCAMPs themselves.
Based on consultation with the Children’s Hospital Boston Committee on Clinical Investigation, a SCAMP is considered a quality improvement initiative and therefore is exempt from human studies research regulations. The review of DEVs was undertaken as an integral part of the SCAMP process of quality improvement.
Opportunities for DEVs in SCAMPs
As previously mentioned, a SCAMP relies on knowledge-based DEVs from its recommendations to inform and improve the SCAMP care plan. Supplemental Information provides an example of a SCAMP data form, which is used by the clinician at the point of care to collect targeted information, provide management recommendations, and capture reasons for DEVs. All cardiology providers in our department were expected to use SCAMPs in the management of all patients who fit inclusion criteria for the above 3 diagnoses.
To place the number of DEVs into context, we determined the number of opportunities for DEV. SCAMP evaluation and management recommendations vary not only with the primary diagnosis but also in relation to the patient’s age and other factors (eg, recommended testing at age 1–3 weeks after ASO differs from that for a 12-year-old); hence, 18 different age/follow-up period–specific SCAMP encounter forms were used for the 3 diagnoses noted. The number of opportunities for DEV for each SCAMP was calculated as 1 opportunity (each) for follow-up interval, added testing, assessment of severity, and exercise restrictions + 1 opportunity for each recommended test deleted (from 1 to 8, depending on the SCAMP). The number of opportunities for DEV ranged from 2 to 11 among SCAMP encounters. The total number of opportunities for DEV = Σ (the number of opportunities for DEV for each SCAMP × the number of clinic visits for that SCAMP) for all 18 SCAMP encounters.
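The tally above can be sketched as follows. This is a minimal illustration only: the per-form numbers below are hypothetical, not the study’s actual 18 encounter forms, and the number of "fixed" DEV categories is parameterized because not every form necessarily includes all four.

```python
# Illustrative sketch of the opportunity count described above.
# The form definitions below are hypothetical examples, not study data.

def opportunities_per_form(fixed_categories: int, deletable_tests: int) -> int:
    """Opportunities for DEV on one encounter form: 1 opportunity per
    applicable category (follow-up interval, added testing, assessment of
    severity, exercise restrictions) plus 1 per recommended test that
    could be deleted."""
    return fixed_categories + deletable_tests

def total_opportunities(forms) -> int:
    """Sum over all forms: per-form opportunities x clinic visits using it."""
    return sum(
        opportunities_per_form(fixed, tests) * visits
        for fixed, tests, visits in forms
    )

# (fixed categories, deletable tests, clinic visits) -- hypothetical values
forms = [(4, 1, 10), (4, 4, 5), (2, 8, 2)]
print(total_opportunities(forms))  # 5*10 + 8*5 + 10*2 = 110
```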
Overview of DEV Analysis
The process used to analyze SCAMP DEVs is summarized in Fig 1.
As part of each SCAMP, practitioners are asked to indicate on the form at the time of the clinic visit when and why they intentionally deviate from SCAMP recommendations regarding tests ordered, follow-up visit intervals, and other specified management elements. These forms were inspected by administrative personnel in the SCAMP organization, and DEVs and the reasons given for them were recorded. In some cases, DEVs not identified by the practitioner were discovered and also recorded. Because the reason for deviating from the protocol was often not provided (see Results), additional information about the patient was sometimes obtained from the medical record in an effort to deduce the reason for the DEV. Information collected on DEVs was intentionally not linked to individual providers to avoid concern that such information could be used for monitoring or criticism of patient care.
DEVs were reviewed by the chairperson of the committee (the SCAMP Protocol Committee) that formulated the SCAMP in question. Each DEV was characterized as justifiable (J), possibly justifiable (PJ), or not justifiable (NJ). A DEV was determined to be J or PJ if it was thought to be generally consistent with sound clinical practice; J and PJ DEVs occurred for 1 of 4 reasons: (1) a testing or clinic follow-up DEV was done to attend to an unrelated condition (eg, prolonged QTc in an ASO patient); (2) special circumstances required DEV (eg, a patient with findings atypical for the condition or when clinic visits occurred at nonspecified intervals due to developments not anticipated in the SCAMP protocol); (3) constraints due to insurance coverage; or (4) what we termed “expert expiation” (ie, when, in the judgment of the chairperson, the underlying rationale for the specific SCAMP prescription was not so robust that it would preclude a reasonable person from pursuing the DEV). In some cases the DEV brought to light an issue that required SCAMP modification, and the chairperson concluded that the needed modification was consistent with the DEV.
NJ DEVs were categorized as follows: (1) provider disagreement with an arguably arbitrary SCAMP specification (eg, follow-up interval); (2) provider disagreement with a SCAMP specification with strong medical justification (“gold standard” specifications [eg, obtaining an echocardiogram to evaluate possible HCM]); or (3) when family preference drove the decision to deviate from the protocol. DEVs were also considered to be NJ when the reason for DEV was not given by the clinician.
The DEV, assignment of justifiability, and reasons for same as determined by the SCAMP Protocol Committee chairperson were subsequently reviewed by the SCAMP Protocol Deviation Committee, which was composed of 5 experienced and clinically active cardiologists from our Department of Cardiology. This committee reviewed the assignment by the chairperson and, using the same criteria, confirmed that assignment by general consensus or provided an alternative one. The Protocol Deviation Committee’s assignment was final.
Types of DEVs Analyzed
The following types of DEVs were analyzed: clinic follow-up intervals (too long or too short), tests added, tests deleted (depending on the SCAMP: chest radiograph, electrocardiogram, echocardiogram, graded exercise test, stress echocardiogram, Holter electrocardiogram, cardiac MRI, lipid panel, cardiac catheterization, or genetic testing), practitioner assessment of AS, and recommendations for exercise restriction.
SCAMP encounters during the interval from March 2009 to December 2010 were used for analysis and are summarized in Table 1. Among all encounters there were a total of 2644 opportunities for DEV. For all 3 diagnoses combined, there were 216 DEVs; of these, 86 (39.8%) were J, 21 (9.7%) were PJ, and 109 (50.4%) were NJ. The total number of DEVs and the number of NJ DEVs expressed as a percentage of the total opportunities for DEV were 8.2% and 4.1%, respectively.
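The rates above follow directly from the raw counts reported in this paragraph; a quick arithmetic check:

```python
# Reproducing the reported rates from the raw counts given in the text.
total_opportunities = 2644
dev_counts = {"J": 86, "PJ": 21, "NJ": 109}

total_devs = sum(dev_counts.values())  # 216 DEVs across all 3 diagnoses

dev_rate = 100 * total_devs / total_opportunities        # all DEVs per opportunity
nj_rate = 100 * dev_counts["NJ"] / total_opportunities   # NJ DEVs per opportunity
print(f"{dev_rate:.1f}% {nj_rate:.1f}%")  # 8.2% 4.1%
```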
Figure 2 summarizes the reasons that J and PJ DEVs were categorized as such. Figure 3 summarizes the nature of the NJ DEVs; only 4 were DEVs from “gold standard” recommendations. Note the relatively high percentage of DEVs for which the practitioner did not provide an explanation (31% overall). Figure 4 summarizes the types of NJ DEVs for the 3 conditions combined. Most (66%) were for follow-up interval deviations, the vast majority of which (62 of 72, or 86%) were for earlier than recommended follow-up.
CPGs often meet resistance from clinicians for multiple reasons.16–19 Guidelines often fail to fit within the culture of medicine insofar as clinicians may find them to be unnecessarily rigid, especially when idiosyncratic patient care is required. Clinicians may fear that CPGs represent a means of cost containment rather than quality improvement.20 Some even believe that guidelines inhibit both training and innovation in medical care. Lastly, it has been purported that CPGs are causing a shift in the professional focus of clinicians from autonomy to accountability,21 which can be a source of clinician dissatisfaction.22,23 By permitting and learning from knowledge-based DEVs, SCAMPs were designed to value clinical acumen and overcome many of these barriers to adoption and adherence.
Our analysis suggests that although practitioners appear to generally accept the SCAMP approach to management, they do not uniformly follow SCAMP recommendations. Indeed, DEVs occurred in 8.2% of opportunities, about half of which were J or PJ. Widespread adoption of SCAMPs is indicated by the low rate of NJ DEVs relative to the opportunities for DEV (4.1%). Because some DEVs judged as NJ might have been J or PJ had the rationale for the DEV been provided by the clinician, 4.1% actually represents the upper bound for NJ DEVs. Whether this reflects a respect for the “authority” of the SCAMP or that the protocols mirror the existing local management approach cannot be ascertained from our analysis. This high level of acceptance, as demonstrated by actual practice, is consistent with a survey indicating a generally positive view of SCAMPs among clinicians at our institution.15
On the other hand, the fact that clinicians inconsistently provided a reason for why they intentionally deviated from the SCAMP (Fig 3) indicates a deficiency in the SCAMP process, which aims to evolve better care by learning from variability in practice. Indeed, some of the DEVs represented learning opportunities to revise and improve the SCAMP. It is not known why providers were inconsistent in reporting their rationale for DEVs; insufficient emphasis on the utility of this information by the SCAMP organizers and the time burden involved may have been contributing factors. Emphasizing the importance of this information and making it faster and easier to provide it (eg, through prepopulated checklists) may yield greater provider input.
There was considerable variability in the reasons for J or PJ DEVs among the 3 diagnoses (Fig 2); no pattern is ascertainable except that J DEVs occurred most commonly for conditions requiring evaluation unrelated to the SCAMP disease and for DEVs related to special circumstances. Interestingly, a distinction can be made as to whether a DEV, regardless of being J, was or was not necessary. Necessary DEVs could be considered those arising from unrelated conditions, special circumstances, or insurance constraints, which together comprised ∼84% of J DEVs. The remainder fell under the category of “expert expiation” and, although J, were potentially not necessary. Thus, the vast majority of J DEVs were mandated by circumstances that the SCAMP could not have anticipated.
NJ DEVs, when a reason for them was identified, were almost always related to provider disagreement with an arguably somewhat arbitrary SCAMP provision or to family preferences. We decided to regard the latter as NJ DEVs, considering that parental preference can often be strongly influenced by an informed discussion with the clinician, and because a strong case can be made for providers sometimes resisting patient requests, especially if they are medically unhelpful.24 We recognize, however, that a cogent case could be made to consider such DEVs as J or PJ. There were few “gold standard” DEVs; in no case did it seem to the committee that a patient’s welfare was endangered due to such a DEV.
DEV analysis has revealed multiple opportunities for SCAMP and patient care improvements thus far. First, the substantial majority of NJ DEVs were for clinic follow-up intervals, with patients usually seen sooner than specified. This high rate of early follow-up was not apparent before the DEV analysis. Given that clinic visits entail expense and patient inconvenience, this discovery suggests that additional exploration of the reasons for these DEVs may yield strategies for providing more efficient follow-up of these patients. Second, we discovered that clinicians often omitted certain SCAMP-suggested tests (eg, lipid panels in ASO), indicating that either retention of these tests in the SCAMP needs to be reconsidered or practitioners need to be better informed of their usefulness. On the other hand, our DEV analysis also revealed the need to consider adding other tests and to reconsider exercise recommendations for some conditions (AS). These and other items were communicated to the chairperson of the committee responsible for development and review of the SCAMP protocol, and will be considered when revision of the SCAMP is undertaken. Last, as inferred earlier, ways should be sought to increase provider feedback regarding intentional noncompliance, perhaps by making the provider–SCAMP interface more efficient and by better informing providers of the utility of providing such information.
There were limitations and potential biases resulting from having the SCAMP Protocol Committee chairperson be the first person to adjudicate whether a DEV was J; however, we thought that these were outweighed by the benefit of having the DEVs reviewed by the individual most experienced with the disease and the SCAMP itself. Deciding whether a DEV was J, PJ, or NJ was also necessarily somewhat subjective. To maximize consistency, we developed rules that guided the determination of justifiability. Also, multiple experienced and clinically active physicians analyzed the DEVs and a general (although not always unanimous) consensus was sought when considering each DEV. The process evolved somewhat over the course of analyzing the 3 diagnoses, but committee members thought that a reasonable degree of consistency and interoperator reliability was achieved across DEVs from all 3 diagnoses.
One of the principles underpinning SCAMPs is that the recommendations should be followed, absent a reason given by the clinician for the DEV. Using “expert expiation” as a reason for considering some DEVs J or PJ is inconsistent with this, since it is the reviewer, not the clinician, who supplied the rationale for the DEV. We took this approach as it draws attention to, and loosely quantifies, elements in the SCAMP protocol that require revision. We acknowledge, however, that fewer DEVs (15%) were assigned NJ status than would have been the case without such “expiation.”
Because decisions to deviate within a SCAMP encounter for a single patient are not necessarily independent events, using the ratio of DEVs to opportunities for DEV is an imprecise way to think about how often practitioners deviate. However, we think it is important to put the number of DEVs into this context. We also note that because a DEV can lead to additional DEVs (eg, when one nonindicated test leads to another), and because no DEVs examined here could eliminate the opportunity for additional DEVs, this ratio represents the upper bound of possible independent DEVs relative to opportunities for DEV.
Finally, we demonstrate here how DEV analysis can reveal opportunities for protocol refinement, but it is too early to know the ultimate utility of such knowledge.
Our analysis of how, how often, and why clinicians deviate from SCAMPs revealed that clinician practice is highly, albeit not perfectly, congruent with SCAMP recommendations. Although our system needs a better way of capturing practitioners’ reasons for deviating from SCAMP recommendations, practitioners appear to be generally accepting of SCAMPs. Furthermore, we believe that this effort represents the first attempt to prospectively and systematically gather the rationale behind clinician deviation from standardized care pathways. This yields important information that is more relevant and less biased than data that surveys and retrospective analyses can provide. The utility of this analysis in identifying multiple opportunities to improve SCAMPs indicates that approaches like this may be quite useful in both promulgating sound practice and evolving improved approaches to patient care.
We would like to acknowledge the critical assistance of Steven D. Colan, MD, Rahul H. Rathod, MD, Ashley Renaud, RN, and Lynne S. Patkin, MBA.
- Accepted March 1, 2012.
- Address correspondence to Thomas J. Kulik, MD, Department of Cardiology, 300 Longwood Ave, Boston, MA 02115. E-mail:
FINANCIAL DISCLOSURE: The authors have indicated they have no financial relationships relevant to this article to disclose.
FUNDING: This work was supported by grants from the Boston Children’s Heart Foundation, the Program for Patient Safety and Quality at Children’s Hospital Boston, and the Children’s Hospital Provider–Payer Quality Initiative of Massachusetts.
- Agency for Health Care Policy and Research, ed. Using Clinical Practice Guidelines to Evaluate Quality of Care: Vol. 1, Issue No. 1. Rockville, MD: Agency for Health Care Policy and Research; 1995. Publication No. 95-0045
- Merritt TA, Palmer D, Bergman DA, Shiono PH
- Marrie TJ, Lau CY, Wheeler SL, Wong CJ, Vandervoort MK, Feagan BG
- Friedman KG, Kane DA, Rathod RH, et al
- Bohmer RMJ
- Flores G, Lee M, Bauchner H, Kastner B
- Bergman DA
- Edwards N, Kornacki MJ, Silversin J
- Copyright © 2012 by the American Academy of Pediatrics