OBJECTIVE: To evaluate whether systematically developed clinical decision supports provide a usability benefit or reduce cognitive workload.
METHODS: Seven surgeons at a pediatric hospital at different levels of training (3 residents, 3 fellows, and 1 attending) were randomized to use either a historical control (ad hoc developed order set) or a systematically developed order set for postoperative management of appendicitis in children. After a washout period, they were crossed over to the other order set. Participants were videorecorded and completed post-task surveys, including the System Usability Scale and the National Aeronautics and Space Administration–Task Load Index.
RESULTS: Participants unanimously preferred using systematically developed order sets. These order sets resulted in higher usability scores (75 ± 10 vs 60 ± 19; P < .05) and lower cognitive workload scores (37.7 ± 15 vs 52.2 ± 12; P < .05), with comparable amounts of time spent, mouse clicks, and free text entry. Orders generated were more likely to conform to established clinical guidelines.
CONCLUSIONS: Systematically designed order sets provide a reduction in cognitive workload and order variation in the context of improved system usability and improved guideline adherence. The systematically designed order set did not, however, reduce time spent, mouse clicks, or free text entry.
- clinical decision support
- computer order entry
- cognitive workload
- health information technology
- order sets
- system usability
- CPOE — computerized provider order entry
- NASA-TLX — National Aeronautics and Space Administration–Task Load Index
- SUS — System Usability Scale
What’s Known on This Subject:
Computerized provider order entry (CPOE) has been recognized to enhance the efficiency, safety, and quality of medical work. Yet vendors and organizations have not determined best practices for customizations, resulting in systems that have poor usability and unintended consequences of use.
What This Study Adds:
This study demonstrated that systematically developed order sets reduce cognitive workload and order variation in the context of improved system usability and guideline adherence. The concept of cognitive workload reduction is novel in the setting of computer order entry.
Clinical decision support1 is a process for enhancing health-related decision-making by providing pertinent evidence-based clinical knowledge at the time such knowledge is to be used. An example of clinical decision support is order sets in systems that support computerized provider order entry (CPOE). Order sets contain groups of orders, with comments that explain when certain orders should be used (eg, postoperative admission orders for patients after appendectomy).
CPOE has been recognized to enhance the efficiency, safety, and quality of medical work.2 However, according to a recent Agency for Healthcare Research and Quality report, vendors and organizations have not determined best practices for customizations, resulting in systems that have poor usability and unintended consequences of use.3 Dissatisfaction may lead to low usage rates.4,5 The approach to implementation of CPOE systems likely is critical,6 as studies have shown increased, unchanged, and reduced mortality with these systems.7–9
CPOE was implemented at our institution in 2003, making Seattle Children’s Hospital an early adopter of this approach.10 CPOE was implemented in conjunction with order sets. Historically, an order set could be created by request. However, there was not a robust method for order set development or periodic maintenance of content. This variability in development resulted in a lack of standard appearance between order sets, including at the order level (eg, orders for incentive spirometry were specified in 6 different ways between order sets). It also had the unintended consequence that ordering providers would ignore the intended use of order sets, instead searching for the order set in which an order was written the way they desired. We hypothesized that this increased cognitive load for ordering providers by increasing search time and switching between order sets. Furthermore, nonapplicable prechecked orders may introduce potential errors and increase confusion among staff interpreting these orders. The end result would be workarounds and calls for clarification similar to those reported in other studies.11
At Seattle Children’s Hospital, physician informaticians applied software design methods to create standards for order set content and development. We applied modular programming techniques by creating subroutines for groups of orders specified together (eg, pain management with acetaminophen, ibuprofen, oxycodone, and morphine) or as small order sets (eg, peripheral intravenous order set with intravenous line and flush orders). We also created a consistent appearance for medications (name, dosing, route, and frequency), especially weight-based dosing (with maximum dose). In addition, an order reference manual was created to support orders and order sequence being specified identically in different order sets. We then added a clinical owner and last modified date. A target age range for each order set was also added, allowing unnecessary orders to be removed (eg, infant diet orders in an order set intended for children and adolescents). We provided comments related to evidence-based ordering practice as notes adjacent to the orders and created rules for when orders could be preselected.
In April 2009, this process was applied to order sets for postoperative appendicitis to support a Division of General Surgery clinical guideline of care. The resultant order sets were reviewed and approved by a multidisciplinary team with clinician, nursing, and pharmacy representatives; subroutines for pain management and nutrition were preapproved by our respective pain service and nutrition teams. The order sets were then sent to our Information Services Department to be incorporated into our Clinical Information System.
This study compares the previously (ad hoc) created order set with the systematically designed order sets for postoperative management of perforated and nonperforated appendicitis in children. The goal was to evaluate whether the systematically developed order set differed from the ad hoc order set with regard to usability and cognitive workload. We also examined whether the new order sets would result in greater efficiency and reduced variability in order entry.
This study was conducted at a 250-bed regional children’s hospital with ∼14 000 admissions per year, with ∼250 admissions for appendicitis. Our institutional review board allowed the study to be conducted as nonhuman subject research. Participants were surgeons selected at the onset of surgical ward months in July and August 2009. Trainees were invited to participate as they were coming on to the General Surgery service at the beginning of the academic year (when they were not yet familiar with the ad hoc order sets). The surgeons included 6 males and 1 female. One participant was of Latino/Hispanic origin; all others were white. Additional demographic data were not collected; however, the trainees appeared to be of representative age. Although all trainees invited did agree to participate, some were unable to do so because of their clinical duties.
This study was a randomized crossover trial. The participants were given 2 clinical scenarios for children: 1 with perforated appendicitis and 1 with nonperforated appendicitis, and instructions to complete the order set (Appendix 1). Participants were block-randomized (in groups of 4) to use either the ad hoc or systematically developed order sets, with a washout period of at least 4 hours, followed by use of the other order sets, on a test patient. We collected the data in a prospective nonblinded fashion. Participants’ mouse click counts, keystrokes, and usage patterns were observed, and participants were videotaped and encouraged to articulate their thought processes as per usability testing paradigms. Information was also collected on level of training, time to enter orders, and number of free text orders entered.
Order set usability was assessed after the completion of each order set using an adapted validated usability tool (System Usability Scale [SUS]; John Brooke, Redhatch Consulting Ltd).12 The SUS is a 10-item questionnaire scored on a 5-point Likert scale, providing a global view of subjective assessments of usability. Cognitive workload was assessed by using the electronic version of the National Aeronautics and Space Administration–Task Load Index (NASA-TLX).13 The NASA-TLX is a validated subjective cognitive workload assessment tool for human/machine systems. It derives an overall workload score from a weighted average of ratings on 6 subscales: mental demands, physical demands, temporal demands, own performance, effort, and frustration. Both the SUS and NASA-TLX were scored immediately after completion of an order set. Participants were not blinded, although they were not informed in advance about what was being studied. Additional survey questions assessed participant preferences and beliefs by using a Likert scale (Appendix 2).
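For readers unfamiliar with how these two instruments are scored, the arithmetic can be sketched as follows: the SUS converts ten 1–5 Likert ratings to a 0–100 score (odd items contribute rating − 1, even items contribute 5 − rating, and the raw sum is multiplied by 2.5), and the NASA-TLX overall score is a weighted average of six 0–100 subscale ratings, with weights tallied from 15 pairwise subscale comparisons. The function names and input layout below are illustrative, not taken from the study’s analysis code.

```python
def sus_score(responses):
    """SUS: ten Likert ratings (1-5) in questionnaire order -> 0-100 score."""
    odd = sum(r - 1 for r in responses[0::2])   # items 1,3,5,7,9: rating - 1
    even = sum(5 - r for r in responses[1::2])  # items 2,4,6,8,10: 5 - rating
    return (odd + even) * 2.5                   # 0-40 raw sum scaled to 0-100

def tlx_score(ratings, weights):
    """NASA-TLX: weighted average of six 0-100 subscale ratings.

    weights: win counts from the 15 pairwise comparisons (they sum to 15).
    """
    assert sum(weights.values()) == 15
    return sum(ratings[s] * weights[s] for s in ratings) / 15
```

A neutral respondent (all 3s) scores 50 on the SUS, which is why mean scores of 75 vs 60 represent a meaningful usability gap on this instrument.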
After the observations were completed, orders were reviewed for each completed order set, and variation was defined as deviation from an expected standard set of orders based on the guideline and clinical scenario (Table 1). Deviation from the standard was defined as not ordering a standard medication or ordering a nonstandard medication. A medication dose deviating by more than 10% from the expected weight-based dose, incorrectly specified intravenous fluid rates, or clinical orders not conforming to the guideline were counted as incorrect orders.
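As a concrete illustration of the 10% dose rule, a review script could flag doses as follows. This is a minimal sketch; the function name, the optional maximum-dose cap, and the exact comparison are our assumptions, not the study’s actual review procedure.

```python
def dose_within_tolerance(ordered_mg, weight_kg, mg_per_kg, max_mg=None, tolerance=0.10):
    """Return True if the ordered dose is within 10% of the expected weight-based dose."""
    expected = weight_kg * mg_per_kg
    if max_mg is not None:
        expected = min(expected, max_mg)  # apply a maximum-dose cap where one exists
    return abs(ordered_mg - expected) <= tolerance * expected
```

For example, for a hypothetical 20 kg child and a medication dosed at 15 mg/kg, the expected dose is 300 mg, so an order of 250 mg (a 17% deviation) would be counted as incorrect.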
We conducted a pilot study with a convenience sample of 7 participants and performed a power calculation to determine how many participants would be required to detect a 20% difference in usability or cognitive workload. The calculation indicated that no additional participants needed to be recruited. The SUS, NASA-TLX, and the differences in ordering were evaluated by using paired t tests.
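The paired t statistic behind these comparisons reduces to the mean within-participant difference divided by its standard error. A minimal self-contained sketch (computing only the t statistic; the p-value would come from a t-distribution table or a statistics package, and the sample numbers below are illustrative, not the study’s data):

```python
from math import sqrt

def paired_t(x, y):
    """t statistic for paired samples (H0: mean within-pair difference is zero)."""
    d = [a - b for a, b in zip(x, y)]                # within-participant differences
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)  # sample variance of differences
    return mean / sqrt(var / n)                      # mean difference / standard error
```

With 7 participants there are 6 degrees of freedom, so |t| > 2.447 corresponds to a two-sided P < .05.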
Seven physicians (3 fellows in pediatric surgery, 3 surgical residents, and 1 general surgical attending [Dr Avansino]) completed a set of postoperative orders for both perforated and nonperforated appendicitis using both the ad hoc and systematically designed order sets.
The systematically designed order set was noted to be easier to use (75 ± 10 vs 60 ± 19; P < .05), with decreased cognitive workload (37.7 ± 15 vs 52.2 ± 12; P < .05) (Table 2). There was reduced variation in orders, with greater adherence to guideline recommendations for perforated appendicitis (0.17 ± 0.19 vs 0.44 ± 0.34; P = .003) compared with the ad hoc designed order set. All providers preferred the systematically designed order sets.
When examining secondary outcome measures, time spent and click counts did not vary dramatically, although there was a trend toward improvement for the test patient with nonperforated appendicitis in both time spent (316 ± 94 seconds vs 229 ± 74 seconds) and click counts (97 ± 32 vs 68 ± 38). There was a significant reduction in the number of free text orders required to complete the systematically designed nonperforated order set that was not seen in the perforated appendicitis order set (Table 2).
Participant commentary revealed unintended consequences of the systematically created order sets. When large numbers of orders were prechecked, some participants accustomed to using adult order sets thought that everything needed would be chosen for them. This was problematic as our order sets require provider input because of weight-based dosing. One participant concluded that although the ad hoc order set was content-poor, it might decrease mistakes because it forced more cognitive effort. Some of the trainees were not familiar with the clinical guideline and assumed that the order set was correct, which was not true for the ad hoc order set. Click counts were higher than expected, not because of selecting orders, but because of individual clicks used to navigate order sets 1 line at a time.
Vendors and organizations do not have a clear understanding of how to customize CPOE, resulting in poor usability and unintended consequences of use.3,14 Using software engineering techniques, including modular software design and formal usability testing,15 can improve usability, adherence to clinical guidelines, and patient safety. Our findings suggest that systematically developed order sets improve adherence to clinical guidelines compared with ad hoc designed order sets. Order sets made internally and externally consistent through systematic design are easier to use and impose a lower cognitive workload, allowing health care providers to focus their mental energies on patient care rather than on the mechanics of ordering.
Our approach to delivering quality care is based on clinical standard work, which is an approach aimed at reducing provider- and system-based variation, with a goal of reducing care variation to only patient-specific factors. This goal is accomplished by hardwiring agreed clinical recommendations into clinical decision supports and monitoring clinical measures, as well as by simultaneously measuring where patients vary from these care standards (as piloted in other disciplines).16 We anticipate that hardwiring clinical standards into clinical decision supports such as order sets will result in improved safety (because orders that are available better respect patient factors such as age and weight, and drug–drug/drug–allergy interactions can be identified and remedied), improved effectiveness (by hardwiring evidence-based clinical recommendation), improved efficiency (by avoiding orders not related to the diagnosis in question), and improved equity (because it is simpler to treat patients in the same manner using the order sets). Thus, according to the Institute of Medicine’s definition, facilitating clinical standard work by hardwiring evidence-based practice in clinical decision supports should improve the quality of care.
Enabling clinical decision support through order sets has advantages in that the clinicians do not have to stop what they are doing to find the guideline (physical demand), and they do not have to interpret the guideline to successfully execute it (mental demand), supporting clinician belief that they can receive up-to-date information more quickly.17 This reduces time spent ordering correctly, leading to improved satisfaction and adherence to guidelines.18,19 Prechecking orders reduces the number of steps for the end user and limits the mental energy spent interpreting a blank box (performance and efficacy). However, for the reduction in cognitive workload to be effective, the end user must have confidence in the correct implementation of the guideline. Poorly designed decision support may require the end user to make corrections to ensure patient safety, requiring more mental effort.
In this study, we made several observations. First, prechecking orders enables close guideline adherence because users are unlikely to deselect these orders. Our direct observations were consistent with analytic data from our institution showing that, for most prechecked orders, individuals tend to leave the default selection in place. Unchecking an order requires cognitive processing: the user must evaluate the prechecked order, deselect it, and enter an alternative where indicated. Second, although prechecking orders acts as a forcing function, it also has the potential for unintended consequences of use. For instance, if an antibiotic for perforated appendicitis is prechecked in an effort to comply with the guideline, it would be incorrect for a patient with an allergy to that antibiotic. Thus, careful consideration must be given to which orders should be preselected. One of our order set standardization rules is that orders should be prechecked only if they are likely to be carried out >95% of the time and will not cause harm to the other <5% of patients. To reduce cognitive workload and improve usability, the order set presents all weight-based dosing for a specific medication and presents an alternative medication for patients with an allergy (Fig 1).
Creating standard order sets can have many unintended consequences.20 We are concerned about the potential of these systems to limit resident training, creating an environment of “cookbook medicine” resulting from prechecked orders. It can be argued that this method impairs the resident’s ability to actively learn the concepts important for caring for patients with clinical diagnoses such as perforated appendicitis. Conversely, this argument supports standard order set development in clinical standard work. Such work allows trainees to focus on complex patients instead of trivial details for routine patients (eg, whether a specific provider uses antibiotics for 5 or 7 days). By reducing cognitive workload, the trainee has liberated mental capacity to focus on the nuances associated with deviations from the normal course of care, leading to increased learning. The gap in learning through trial and error can be filled by involving trainees in the development of clinical standard work, where they can have focused time to review the evidence and to offer input into its implementation. They will better understand the process and methods for developing robust clinical practice, and such involvement provides an academic opportunity for propagating this work. If involvement in the development of clinical standard work is not possible for the trainee, education surrounding the standard process and the literature behind it can be presented in an educational setting. The development of an order set and other implementation tools to support clinical standard work only helps to fortify this educational opportunity for our trainees. Further research is needed to determine whether reducing cognitive workload through clinical decision support decreases education or simply frees the mind to perform other important work.
Surgical trainees were new to CPOE at our hospital (because this approach is not used elsewhere in the University system except at the Veterans Administration), which may have led to longer completion times. A greater influence on time may be realized with increased familiarity with the local implementation of CPOE. Implementation choices when developing order sets influence users’ perceptions of clinical decision supports, so this approach may not be completely generalizable to other institutions unless their development methods are comparably rigorous. Consistency between order set development efforts at affiliated institutions may address trainee human/computer interaction issues, freeing up time to focus on learning medicine instead of the idiosyncrasies of ordering between institutions. We also did not find a reduction in the number of mouse clicks using the systematic order set because our users primarily used clicking for navigation, not ordering. While completing the order set, participants would click the down arrow on the order set window as opposed to scrolling, resulting in a large number of clicks. Thus, click count was a poor measure of clinician work. We reanalyzed the data with only the 6 trainees and found that the measured cognitive workload reduction remained intact (P = .023) whereas usability trended toward significance (P = .067). Because the trainees in general were less familiar with the department’s clinical guidelines, the inclusion of an attending would likely tend to overestimate compliance with the guidelines. This is counterbalanced by the possibility that the participants may have been more careful than usual with ordering, as seen when 1 fellow examined the surgical pocket guide before ordering.
The research team was not blinded. Because this randomized crossover trial was conducted as nonhuman subject research, we did not have to obtain informed consent in advance, so the participants did not know what was being studied. However, participating surgeons could not be truly blinded, as they were interacting with the order sets.
We are limited by what can be implemented in our clinical information system. There was no support for conditional ordering (eg, performing the next series of tasks for the care of a patient based on successful completion of previous orders), and we could not preselect orders based on patient demographic factors (eg, we could not choose a medication and have the clinical information system determine the correct dosing). In our aforementioned medication example, the medication and the correct weight-based dose could then be completed based on the patient’s current weight and allergy profile. These are areas for future system development: carefully designing systems to better support clinicians’ needs of improving guideline adherence, reducing cognitive workload, improving system usability, and maintaining patient safety.
This study demonstrated that systematically designed order sets, using a combination of standard order set development and a clinical guideline of care, can reduce variation, improve system usability, and reduce cognitive workload. This method serves as a first step in defining best practice for customizing CPOE to improve efficiency, quality, and safety in health care delivery. As providers are incentivized to use CPOE to meet meaningful use criteria, we encourage hospitals to consider creating systematic processes to ensure that ordering through order sets is correct and understandable.
Special thanks to John HT Waldhausen, MD, Professor of Surgery, University of Washington, and Mark Del Beccaro, MD, Pediatrician-in-Chief and Vice Chair for Clinical Affairs, Seattle Children’s Hospital, for their review of the manuscript.
- Accepted May 3, 2012.
- Address correspondence to Jeffrey Avansino, MD, Division of General and Thoracic Surgery, Seattle Children’s Hospital, PO Box 50010, Seattle, WA 98105. E-mail:
Dr Avansino and Leu provided study concept and design, acquisition and interpretation of data; analysis; drafting of the manuscript; critical revision of the manuscript for important intellectual content; administrative, technical, or material support; and study supervision. Dr Leu was responsible for the statistical analysis.
This work was presented by Dr Avansino at the Academic Surgical Congress; February 1, 2011; Huntington Beach, CA.
FINANCIAL DISCLOSURE: The authors have indicated they have no financial relationships relevant to this article to disclose.
FUNDING: No external funding.
- ↵Osheroff JA, Pifer EA, Teich JM, et al. Improving outcomes with clinical decision support: an implementer’s guide. Available at: www.himss.org/content/cdsw/2005/introduction_2005.pdf. Accessed March 15, 2011
- ↵McDonnell C, Werner K, Wendel L. Electronic Health Record Usability: Vendor Practices and Perspectives. AHRQ Publication No. 09(10)-0091-3-EF. Rockville, MD: Agency for Healthcare Research and Quality; May 2010
- Menachemi N, Brooks RG, Schwalenstocker E, Simpson L
- Han YY, Carcillo JA, Venkataraman ST, et al
- Del Beccaro MA, Jeffries HE, Eisenberg MA, Harry ED
- Longhurst CA, Parast L, Sandborg CI, et al
- Teufel RJ II, Kazley AS, Basco WT Jr
- ↵Brooke J. SUS: a “quick and dirty” usability scale. In: Jordan PW, Thomas B, Weerdmeester BA, et al, eds. Usability Evaluation in Industry. London, UK: Taylor and Francis, 1996. Available at: http://hell.meiert.org/core/pdf/sus.pdf. Accessed September 7, 2011
- ↵Navy Center for Applied Research in Artificial Intelligence. NASA-TLX for Windows. Available at: www.nrl.navy.mil/aic/ide/NASATLX.php. Accessed September 7, 2011
- ↵Physician Order Entry Team. Types of Unintended Consequences of CPOE. Oregon Health & Science University. Available at: www.ohsu.edu/academic/dmice/research/cpoe/unintended_consequences.php. Accessed September 7, 2011
- ↵Armijo D, McDonnell C, Werner K. Electronic Health Record Usability: Interface Design Considerations. AHRQ Publication No. 09(10)-0091-2-EF. Rockville, MD: Agency for Healthcare Research and Quality; October 2009
- Copyright © 2012 by the American Academy of Pediatrics