Objectives. The Vermont Oxford Network is a group of health professionals who are committed to improving the quality and safety of medical care for newborn infants and their families. Neonatal Intensive Care Quality Improvement Collaborative Year 2000 (NIC/Q 2000) was the second in a series of multiorganization improvement collaboratives organized and administered by the Vermont Oxford Network. The objectives of this collaborative were to make measurable improvements in the quality and safety of neonatal intensive care, to develop new tools and resources for improvement specific to the neonatal intensive care unit setting, to evaluate improvement progress, and to disseminate the learning.
Methods. The 34 centers that participated in NIC/Q 2000 learned and applied 4 key habits for improvement: the habit for change, the habit for evidence-based practice, the habit for systems thinking, and the habit for collaborative learning. A plan-do-study-act (PDSA) method of rapid-cycle improvement was an integral part of the habit for change. Multidisciplinary teams from the participating centers worked closely together in face-to-face meetings, conference calls, and dedicated e-mail listservs under the guidance of trained facilitators and expert faculty. Focus groups formed around specific improvement topics used critical appraisal of the published literature, detailed process analysis, benchmarking, and round-robin site visits to identify potentially better practices (PBPs).
Results. The focus groups developed a total of 51 PBPs. Each focus group developed a “resource kit” summarizing its work. Many of these PBPs have been tested and implemented at the participating centers using rapid-cycle improvement. The PBPs and descriptions of individual center PDSA cycles are available to participants on NICQ.org, the dedicated Internet site for the collaborative.
Conclusions. Collaborative quality improvement based on the 4 key habits can assist multidisciplinary neonatal intensive care unit teams in identifying, testing, and successfully implementing change.
KEY POINTS OF ARTICLE
Information on outcomes is necessary but not sufficient to drive practice changes.
Teams need specific tools, resources, and skills for improvement.
Collaboration, both among disciplines within a unit and between teams from different units, is critical for making improvements.
It is important to evaluate the strength and quality of the evidence for proposed practice changes.
Because change depends on local context, it is better to speak of “potentially better practices” (PBPs) that can be tested for effectiveness rather than to assert that a given practice is a universally applicable “best practice.”
The social process within a clinical improvement collaborative aids the spreading of ideas.
APPLYING LESSONS LEARNED TO PRACTICE
Clinical practice can be insular, and collaboration can encourage evidence-based practice.
Large multiorganization collaboratives should have a common database with standard indicators to evaluate improvement progress.
The plan-do-study-act (PDSA) cycle can be easily learned as an effective process for making local change in practice.
Expect to expend considerable effort in gaining consensus on PBPs.
Allow time for relatively unstructured social interaction as a necessary ingredient in successful collaborative learning.
The Vermont Oxford Network is a voluntary group of health professionals who are committed to improving the quality and safety of medical care for newborn infants and their families through a coordinated program of research, education, and quality improvement.1 To support this program, the network maintains a database for infants who weighed 401 to 1500 g at birth and were born at or transferred to 1 of the >380 participating neonatal intensive care units (NICUs) within 28 days of birth. The network database has been in operation for >10 years and now enrolls more than half of the very low birth weight infants born in the United States annually. The database is used to provide members with confidential quarterly and annual comparative performance reports for use in internal audit and quality improvement, as well as to provide core data for network research.
Information is not enough to induce improvement. Although accurate and reliable performance information is necessary for clinical improvement, it alone is not sufficient. In addition to information, multidisciplinary teams of health professionals require knowledge, skills, and tools specific to quality improvement in their clinical field and a forum for collaboration. The Vermont Oxford Network created the Evidence-Based Quality Improvement Collaborative for Neonatology, Neonatal Intensive Care Quality Improvement Collaborative Year 2000 (NIC/Q 2000), in an attempt to foster these needed resources and use them to achieve measurable improvements in the quality and safety of neonatal intensive care. NIC/Q 2000 is the second in a series of network collaboratives. The first NIC/Q collaborative was reported previously.2,3 This article provides an overview of the NIC/Q 2000 collaborative. The subsequent articles included in this collection describe the work performed by multi-institutional focus groups and by individual centers that participated in the collaborative.
The rate of change in health care today makes it difficult for clinical and administrative leaders to keep pace with best practices. This is especially true in the care of very low birth weight infants. Innovations occur frequently in centers around the world, but the process of formal trials and publication is slower than the pace of development. Furthermore, as O’Connor et al4 noted, the traditional communications channels of publication and presentation often give no details about the fine structure of care. The practices borrowed from one center may not be realizable in another without knowledge of this fine structure.
Similarly, managers of systems such as NICUs often develop innovative solutions to such challenges as dealing with staff shortages or facilitating multidisciplinary teamwork, but others have no reliable way to learn about solutions. Furthermore, many clinical issues have a strong tie to managerial processes. For example, clinicians may examine the evidence and decide to administer surfactant in the delivery room, but this requires the manager to work out a process with the pharmacy to ensure that the surfactant is stocked and available. Knowing how others have successfully dealt with such practical issues would accelerate implementation of evidence-based practice.
Multiorganization improvement collaboratives are 1 mechanism for sharing good practices.5 Collaboratives typically involve multidisciplinary teams from many organizations coming together with a focus on a specific topic. Teams share details of practice, often involving site visits and exchange of internal data, and then use this knowledge to make changes in the local setting.
Several studies have shown the benefits of this approach. The Institute for Healthcare Improvement has successfully applied this approach to topics as diverse as adult critical care, waits and delays, service enhancement, and patient safety.6 For example, Flamm et al7 reported that 18 of 28 participating organizations in an improvement collaborative achieved caesarean section rate reductions of 10% to 30%. O’Connor et al8 reported a 24% decrease (P < .05) in mortality rates for coronary artery bypass surgery at 5 cardiac centers in northern New England involved in an improvement collaborative. Green and Plsek9 provided successful improvement case reports from a Minnesota-based collaborative in which organizations made improvements in a variety of areas over a 2-year period while also addressing issues in leadership and organizational culture. Kerr et al10 described how these same collaborative improvement methods have been applied to cancer networks in the United Kingdom.
One criticism of these studies is that only pre- and postintervention data for the teams in the collaborative are provided, with no comparison group. The lack of experimental controls leaves many health care professionals skeptical about the value of such efforts. In an evaluation of the Vermont Oxford Network’s first improvement collaborative, NIC/Q, the 10 participating NICUs were compared with a prospectively chosen group of comparison units that were not part of the collaborative.2 The study showed significant reductions in the rate of nosocomial infections (22.0% to 16.6%, P = .007, in 6 centers) and in the rate of supplemental oxygen at 36 weeks’ adjusted gestational age (43.5% to 31.5%, P = .03, in 4 centers) for the intervention group, whereas neither of these outcomes changed over the course of the study at the 66 centers in the comparison group. The cost of NICU care at the 10 sites decreased significantly over the course of the collaborative when compared with a comparison group of nonparticipating sites, a finding consistent with the adage that better quality can result in reduced cost.3
The NIC/Q 2000 collaborative was the second in a series of collaboratives organized by the Vermont Oxford Network. Network staff recruited faculty with an array of clinical, quality improvement, and group process expertise to plan the NIC/Q 2000 effort and provide on-going support. In addition to permanent project faculty, guest experts were invited to specific meetings of the collaborative to support groups working on particular topics.
The 34 centers in the NIC/Q 2000 collaborative were a self-selected subset of the network. Written descriptions of the proposed collaborative were mailed to all centers in the network, and the initiative was discussed at the Network Annual Meeting in late 1997. Participating centers agreed to provide financial support to the effort in the form of an annual project fee while also funding all travel expenses for teams and all internal staff effort. The overall project was approved by the Committee for Human Research at the University of Vermont. Each site was responsible for determining whether local Institutional Review Board approval was necessary for participation. The 34 project sites and their key personnel and project faculty and staff are listed in the Appendix. They include 9 of the 10 sites in the first NIC/Q collaborative plus 25 new sites.
The first meeting of the collaborative was held via a 2-way interactive video conference in May 1998. Before the conference, participating centers received materials explaining the project in detail, advising on the formation of a multidisciplinary team to guide the effort in their center, and instructing on a model for improvement. The conference allowed the teams to “meet” each other and provided training on basic improvement methods. On the basis of this information, local multidisciplinary teams were asked to conduct a cycle of improvement in their unit and to prepare a poster presentation to share at the first face-to-face meeting of the collaborative. This initial video conference established a pattern of action orientation that became an enduring feature of the collaborative.
The video conference also introduced an Organizational Culture Survey that was administered at all participating centers. The details of the survey are provided in a companion article.11 The survey is readministered periodically and serves as a constant reminder of the importance of working on organizational culture in tandem with specific improvement projects.
The first face-to-face gathering of the collaborative involved approximately 150 neonatologists, neonatal nurse practitioners, nurses, respiratory therapists, quality improvement coaches, and other associated personnel in teams from each of the 34 centers, meeting for 2.5 days. During the meeting, project faculty trained teams in methods of quality improvement and evidence-based practice. The meeting included didactic sessions as well as large- and small-group exercises.
A key outcome of this first meeting was the formation of 6 multicenter, multidisciplinary focus groups to embark on collaborative exploration for better practices in selected topic areas of their own choosing. The initial meeting established these 2 main strands of effort in NIC/Q 2000: the local improvement process and the multicenter focus group process. Collaborative improvement is about sharing knowledge of better practices and making lasting change in concert with that knowledge. Although sharing of knowledge can be global, change is a local phenomenon.
Setting the Stage for Improvement: The 4 Key Habits
O’Brien et al12 have pointed out that efforts to improve health care must be integrated within a larger framework of strategic, cultural, technical, and structural factors. Experience in the initial, 10-center NIC/Q improvement collaborative led to a similar conclusion: the potential for success in a specific improvement project depends heavily on larger organizational factors. Furthermore, the organizations that are most likely to succeed in using improvement methods are those that could demonstrate the key habits illustrated in Fig 1.5
Habit for Change
No matter how much we know, improvement comes about only when we do something differently. Quality improvement requires continual change, and organizations that more naturally embrace change will be more successful than those for whom change is a cumbersome process.
Habit for Evidence-Based Practice
A significant proportion of the care delivered daily is not consistent with what is known to be most effective. Clinical improvement is largely about continuous efforts to bring the daily practice of health care more in line with the knowledge of what works. Organizations that are practiced at efficiently searching, interpreting, and applying the evidence will be most successful at improvement.
Habit for Systems Thinking
Good health care depends on the complex coordination of many factors and the efforts of many people. Clinicians and health care organizations that begin from this premise and reject the more restrictive, discipline-based view will be most successful in routinely improving care. Capra13 pointed out that complex systems can be described in 3 dimensions: structure (physical and hierarchical characteristics), process (the sequence of events that takes place), and pattern (the nature of the interactions among the parts of the system). In complex systems, sustainable improvement work must affect all 3 of these dimensions.
Habit for Collaborative Learning
Much of the knowledge of what makes for good care is currently locked away in undocumented innovation and unexamined variation in practice. The only way to get at this knowledge is through collaborative learning. This includes collaborative learning among different disciplines within an organization, as well as among individuals from different organizations. Improvement-oriented individuals and organizations start from the premise that it is better to be open and curious than defensive.
These 4 habits were introduced at the first face-to-face meeting of the collaborative, and explicit development of the habits has been a major focus of NIC/Q 2000. The habit for change is continually reinforced through the use of a model for improvement that stresses rapid testing of change.14 The habit for evidence-based practice is encouraged through emphasis on efficient literature review using such tools as the critically appraised topic (CAT).15 Centers practice the habit for systems thinking through the use of multidisciplinary teams and a predisposition toward viewing the system as a whole when proposing changes. The habit for collaborative learning is built into the infrastructure of the effort and reinforced through meetings, site visits, conference calls, Internet listservs, and a dedicated web site.
The emphasis on establishing durable habits for improvement in participating centers distinguishes this collaborative effort from others that are shorter and focus on specific predetermined topics. The long-term goal is to fundamentally change the practice of neonatology and the management of NICUs, making continuous improvement a natural feature of the working culture.
Local Improvement Process
Multidisciplinary Leadership Team
Because of the constraints of meeting space and travel expenses, the teams that attended meetings of the NIC/Q 2000 collaborative were composed of only 3 to 5 members. NICUs are complex organizations, often involving dozens of medical professionals and hundreds of nurses and other support staff in a 24-hours-per-day, 7-days-per-week operation. It was imperative, therefore, that the small team attending collaborative meetings be supported by a larger, multidisciplinary leadership team (MDLT) within the center. The local leadership teams had 8 to 20 members: typically several neonatologists, neonatal nurse practitioners, nurses, respiratory therapists, managers, and unit support staff and, sometimes, representatives from other departments in the hospital, such as pharmacy or radiology. The local MDLT is a semipermanent team whose roles and responsibilities include:
Overseeing local efforts within the center associated with the collaborative
Deciding on specific aims for improvement to pursue locally within the framework of the larger collaborative
Chartering local task teams to achieve improvements
Monitoring the progress of local task teams, allocating needed resources, and working to eliminate barriers to improvements
Establishing local communications processes that engage all personnel, including administration, in the improvement effort
Recognizing that NICUs are a complex social system, project staff coached centers to consider existing patterns of authority and opinion leadership in selecting members for the MDLT. The goal was to establish a leadership team that did not need to look often beyond itself to initiate change or overcome barriers. Centers were also encouraged to designate their MDLT as a formal quality improvement team to qualify as a peer-review committee under tort laws.
Aims for Improvement
In keeping with the comprehensive nature of NIC/Q 2000, centers were encouraged to develop aims for improvement in each of 3 areas: clinical, operational, and organizational culture. From October 1998 through December 2000, centers worked on many improvement aims, roughly balanced among these 3 areas.
Local Improvement Teams
Each MDLT chartered short-term task teams to carry out the improvement process for a specific aim. Several such teams might be active at any time. These task teams typically have fewer members than the MDLT (3–7 vs 8–20). Consistent with the habit for collaborative learning, the task teams were also multidisciplinary, but members were selected on the basis of involvement in the topic rather than a need to represent all parties in the NICU.
Rapid-Cycle Model for Improvement
Teams used the model for improvement depicted in Fig 2.14 This model is widely used in improvement collaboratives and leads a team to develop focused aims, simple measures, and ideas for change that can be tested through cycles known as PDSA cycles. Each time a change is planned and conducted, there is potential for learning through study of the impact of the change on the outcome selected. This organizational learning over multiple PDSA cycles systematically builds progress toward the improvement aim.
Berwick16 distinguished between rigorous, large-sample measurement for judgment and more practical measurement for improvement. Alemi et al17 pointed out that improvement teams that collected only the data they needed to determine whether they were making progress reached their improvement goals faster than those that relied on more formal measurement. For the NIC/Q 2000 Collaborative, the indicators in the Network Database and the results from the periodic Organizational Culture Survey were the main indicators of overall improvement. In addition, individual centers participated in hospital-specific standardized satisfaction surveys and may have had a variety of other organizational performance indicators. Although such measures are good for determining overall success, they are neither timely nor sensitive enough to guide improvement cycles.
Therefore, participants in NIC/Q 2000 were coached on techniques for simple, practical measurement consistent with the rapid-cycle improvement model. Measurement evolves as cycles build toward the aim. For instance, an overall improvement aim might be to reduce the rate of nosocomial infection by 50% for infants with birth weight <1000 g. Initial PDSA cycles might focus on handwashing. Early cycles involve collecting baseline data through chart reviews or simple data sheets to determine what proportion of caregivers washed their hands before touching an infant. Measurement in the next few cycles typically focuses on completion of activities: Was the handwashing protocol developed on schedule? Did a high percentage of the professional staff attend the in-service training?
As changes in practice are tested, the next evolution of measurement relies on process indicators: What proportion of the professional staff adhered to the handwashing protocol? Finally, as changes are fully implemented over time, teams can measure change in the outcome stated in the overall aim, the nosocomial infection rate. Understanding of the natural evolution of measurement helped local teams in NIC/Q 2000 maintain enthusiasm and sustain focus on their improvement aims because they could see evidence of progress early on.
Change and PDSA Cycles
Teams were encouraged to use a wide variety of resources in scouting for ideas for changes that might lead to improvement. The literature is an obvious first source of ideas for better practices, and the Institute for Healthcare Improvement (http://www.ihi.org/) produces catalogues of useful change concepts. The work of the 10 centers in the previous NIC/Q collaborative, site visits and telephone calls to better performing centers, Internet listserv dialogue among the 34 participating centers, and local creative thinking all were tapped as sources of ideas for change.
The impact of change in a complex system such as a NICU is unpredictable. An idea for change that has worked well in 1 organization may fail when implemented by another, due to a myriad of factors associated with the structure, process, and pattern within the system. Therefore, changes are tested and gradually implemented. These tests of change are conducted in PDSA cycles.
PDSA cycles build knowledge about the system in the local NICU, help manage the inherent risk of change, and encourage participation in improvement as individuals see initial signs of success in the early cycles. Participating centers completed worksheets for each improvement cycle, briefly recording their PDSA steps. The worksheets were submitted to the network, and summary reports on center-specific improvement aims and cycles were distributed to all participants before each meeting of the collaborative.
Local Improvement Efforts to Date in NIC/Q 2000
Time was allocated at each of the twice-annual meetings of the NIC/Q 2000 collaborative in 1999 and 2000 to review local improvement work. Faculty reviewed improvement cycle worksheets and coached centers one-on-one, as well as provided additional general training on improvement methods. During the first 18 months of the collaborative, participants reported 1096 improvement cycles; the number reported varied widely from center to center, ranging from <10 cycles to >100 cycles (Fig 3). Examples of local improvement efforts from several of the centers in NIC/Q 2000 appear in the series of companion reports.
Local improvement efforts are the focal point for establishing the 4 key habits for improvement. As teams embark on many cycles of change, use multidisciplinary teams and a systems viewpoint, incorporate evidence as appropriate, and share what they have learned, these habits become ingrained in the unit’s culture. The ultimate measure of success will be a comparison of the performance of these 34 centers and a comparison group on a variety of indicators.
In March 2000, the teams at each center rated their performance using a 5-point scale similar to that used by the Institute for Healthcare Improvement to evaluate participants in their Breakthrough Series.18 The results of this self-assessment are shown in Fig 4. Most centers rated their performance as “moderate improvement” or as “significant progress and real improvement.”
Focus Group Process
During the past decade, there have been dramatic changes in the field of neonatology. The introduction of surfactant therapy for the prevention and treatment of respiratory distress syndrome, the availability of new modes of assisted ventilation for high-risk infants, and other new technologies have changed the way neonatology is practiced.19 With these changes, it was not always clear which practice was “best.” Evidence of this is seen in the wide variations in outcome and process indicators in the network database.20 Centers are clearly delivering care to very low birth weight infants differently and are getting different outcomes.
The indicators in the database are not fine-grained enough to show why there are differences or which differences matter. Although there have been trials on many topics, construction of the evidence is simply not keeping pace with the evolution of the field. Furthermore, there is no systematic process for capturing the knowledge generated as care is delivered in the various centers. Therefore, the second major strand of effort in NIC/Q 2000 involved the 6 multiorganization focus groups working to uncover better practices on specific topics. The objective of the focus groups was to study, in fine detail, current variations in practice to uncover PBPs that could be shared.
Formation and Group Process in the Focus Groups
The 6 multicenter focus groups were established at the first NIC/Q 2000 collaborative meeting in October 1998 to explore PBPs in the following areas:
Chronic lung disease
Intraventricular hemorrhage and brain injury
Nosocomial infection
Nutrition practice and necrotizing enterocolitis
Family-centered care
These 6 topic areas emerged through a large-group consensus process. First, individual center teams reviewed their performance on clinical indicators from the network database and the results of the Organizational Culture Survey. On the basis of these data, each center then prioritized 3 to 6 topics to explore to identify better practices. Project faculty constructed an affinity diagram from these individual responses and identified the 6 topics as major themes with 3 or more centers expressing strong interest in each.
Once topics were established, faculty served as facilitators to each of the 6 groups and led them in the formulation of improvement aims. Participants from centers in the initial NIC/Q group of 10 were asked to serve as focus group leaders because of familiarity with the improvement process. Expert guest faculty were invited to individual meetings to support the focus group work.
Working in multicenter, multidisciplinary focus groups has clearly enhanced the habit for collaborative learning. Differences in practice and outcomes were evident in the first meeting of the groups. Having faculty skilled in group process as facilitators was essential in channeling this diversity into curiosity for learning, rather than defensiveness or critical argument.
The work of the focus groups was conducted through semistructured work time at each of the twice-yearly meetings of the collaborative, via conference calls every 4 to 6 weeks, and through dedicated Internet listservs established for each group. The Vermont Oxford Network provided administrative and scheduling support to the focus groups.
The cohesiveness of the groups was evidenced by their self-scheduled group meetings in addition to meetings on the formal agenda; by their group nicknames such as FBI (fight bacterial infection), Brainy Bunch, We Are Family, CARE, Got Milk (nutrition), and RELI (reduce lung injury); and by the friendly competition among groups in skits and songs at every NIC/Q 2000 meeting. These actions were evidence of bonds of friendship and trust in a complex social network through which information about PBPs of care could flow.21
The focus groups were self-directed and approached work in slightly different ways. The details of each group’s work, along with preliminary outcomes, are described in the companion reports.
Evidence Reviews and Critical Appraisals
The principles of evidence-based medicine and critical appraisal of published evidence were addressed in didactic sessions and exercises at each meeting of the collaborative. Using this approach, the focus groups conducted literature searches to identify better practice ideas. Participants summarized findings using the CAT worksheet,15 which focuses thinking both on critical appraisal of the evidence and on its practical significance.
Internal Assessments and Development of Benchmarking Questions
Process benchmarking is a tool of quality improvement that involves a detailed search for better practices through site visits to superior performers.22 The starting point is a comprehensive assessment of internal practice that sensitizes one to differences during site visits. When multiple organizations conduct these internal assessments together, they begin the process of learning from each other.
During the first several months, focus groups developed internal self-assessment tools. Typically, this process began with brainstorming about causal factors related to the selected topic, eg, asking which factors contributed to nosocomial infections and which practices were related to those factors. Through group discussion, these factors were refined into questions and data collection tools. Each center in the group then used the self-assessment tools internally, typically involving many members of the center who were not part of the initial discussion. The results of the internal assessments were shared within the focus groups to identify variations in practice, and this sharing led to additional refinement of the tools. Eventually, a concise set of questions that could be posed to benchmark sites was developed.
The next step in process benchmarking is identification of and visits to best performing sites. Sites were identified in several ways. First, on the basis of criteria developed by each focus group, Vermont Oxford Network staff queried the network database to get an initial list of best performers. Typically, centers that were performing in the top quartile of the network for 3 years in a row on the indicators specified by the focus groups were studied. Honoring confidentiality agreements within the network, staff contacted these best performing sites to determine their willingness to be contacted by the focus groups. A Collaborative Learning Directory that included self-reported areas of excellence submitted by Vermont Oxford Network member hospitals was developed. The directory was distributed to all NIC/Q 2000 participants. Finally, when network data and self-reports did not produce suitable leads, other sources of information were used to identify best performers. Before visits, the focus groups posed the self-assessment questions to the benchmark sites, shared responses from their own internal assessments, and developed agendas for visits. The previsit sharing of responses to the benchmarking questions was key to optimizing the value of the site visits.
Site visits were typically 1 day in length and involved a multiorganizational, multidisciplinary team. After an initial plenary meeting for introductions and orientation to the site, members fanned out to explore specific areas of practice in detail. The group typically shared initial findings in a closing meeting and then followed up with a detailed written report several weeks later. The reports outlined preliminary theories about the practices observed that seemed correlated with better outcomes.
There were many creative variations on the benchmarking visit process. Some visits were conducted through an extended conference call or video link-up with the site. In some cases, there was enough information in the response to the benchmarking questions that a site visit was not necessary. Additional details were gathered in a simple telephone conversation or e-mail exchange. Centers within the focus groups also visited each other.
Identification of PBPs
The end product of this work was the identification of PBPs, based on a synthesis of knowledge from evidence reviews, observations made during the benchmarking process, opinions of experts engaged by the focus groups, and reasonable causal theories. The PBPs were recorded in a standard format developed specifically for this project (Fig 5) and were disseminated in printed versions and on the collaborative web site, NICQ.org.
One of the criticisms of collaborative improvement is that there is often no explicit tie between suggested change concepts and evidence-based practice. An important feature of the focus groups was the rating of the strength and quality of the evidence for all proposed PBPs using a rating scale adapted from Muir Gray.23 The phrase “potentially better practices” is used to highlight that it is unknown whether these practices will result in improvement at a center until they are tested locally.
The PBPs were generated by the teams themselves. In most other improvement collaboratives, teams receive a list of change concepts developed by an expert panel. The Vermont Oxford Network process is clearly much slower, but it may be the only viable approach in rapidly changing fields such as neonatology, where there is neither strong evidence nor easy consensus among experts. In future collaboratives, we will explore a hybrid approach involving experts to jumpstart the focus groups.
Linking Focus Group Work to Local Improvement Cycles
The exploratory work of the focus groups took place from late 1998 through early 2000. By mid-1999, several PBPs had already emerged. As the groups reached higher levels of certainty and agreement about various practices, centers within the focus groups chartered task teams to conduct local PDSA cycles to test the practices. These cycles brought more clarity to each practice and provided additional evidence of its effectiveness.
The April 2000 meeting of the NIC/Q 2000 collaborative highlighted the preliminary findings of the focus groups, with each sharing work to date, giving recommendations for 2 or 3 PBPs that others might wish to test, and presenting early results from local PDSA cycles conducted by members of the focus group. All centers were encouraged to select a few PBPs from outside their focus group to test over the next several months, to spread the process of collaborative learning.
At the September 2000 meeting, each focus group gave a more extensive report of its findings and provided a resource kit that included benchmarking questions and self-assessment tools to enable other centers to assess quickly their own current performance. For each PBP, the materials included:
A description of the practice and the rationale for it
Supporting CATs, benchmarking reports, literature, and so forth, with explicit grading of the evidence
Examples of locally developed measures to aid in conducting improvement cycles
Helpful hints and tools to aid in testing and implementation
Case studies from several centers in the focus group that had implemented the practice
Names and contact details for obtaining more information
These resource kits were designed to accelerate the local learning and improvement process in all 34 centers in the collaborative. In the period after the September 2000 meeting, each center was encouraged to review its practices and the PBPs of all of the focus groups and to test and implement as many of the PBPs as it deemed appropriate.
Spreading the Focus Group Work and PBPs
The 6 focus groups developed a total of 51 PBPs. At the September 2000 meeting of the collaborative, each participating center was asked to indicate which of the PBPs were already in place and which would be worked on in the following year. Centers chose a broad range of PBPs for action. In a series of conference calls open to all interested participants, specific PBPs were discussed. At the April 2001 meeting of the collaborative, each center presented a case study documenting its experience with a PBP developed by a focus group in which it had not participated.
A great deal has been learned about the collaborative improvement process and the development of the 4 key habits for improvement within a health care organization.
Collaborative Improvement Process Generates Enthusiasm and Change Through a Combination of Collaboration and Healthy Competition
Quality improvement is an organic process involving evolution, growth, surprise, disappointment, and encouragement. In NIC/Q 2000, the process was highly self-directed. The Vermont Oxford Network and faculty facilitated the efforts and provided educational input but did not dictate detailed aims or specific changes. Results emerged through collaborative interactions within the collective group and the participating centers. Consistent with the research of Axelrod and Hamilton24 on a variety of complex systems, this emergence was fostered by a healthy balance of collaboration and competition. Feedback and sharing of performance data with individual centers encouraged improvement, while being in a trusting relationship within the larger group supported healthy inquiry to identify ways to further improve. Friendly competition among the focus groups kept all on schedule and encouraged thoroughness.
Collaboratives Can Be a Vehicle for Encouraging Evidence-Based Practice
O’Connor et al4 noted that the insular nature of clinical practice is a barrier to quality improvement. It is easy to remain in outmoded patterns of practice. Improvement collaboratives, such as NIC/Q 2000, create the conditions for reflection and critical inquiry about clinical practice, in concert with trusted and respected colleagues. Several centers in the focus groups reported surprise when internal process analysis revealed that practice was not as imagined. Furthermore, a collaborative provides more people to assess critically the mountains of evidence. The CAT worksheet provided an efficient means for sharing evidence reviews.
Rapid-Cycle PDSA Improvement Process Can Be Easily Learned and Used to Make Local Change
The inertia of the status quo is a strong force in many health care organizations. That, coupled with the mental model of randomized controlled trials as the only method of building knowledge for improvement, results in a painstakingly slow pace of change in many organizations. Alemi et al,17 who studied improvement teams across many organizations, found that it was possible to accelerate greatly the pace of improvement. Teams in NIC/Q 2000 were able to conduct initial rapid-cycle PDSA improvement projects based on a 2-page description and 30 minutes of instruction during the initial 2-way interactive video conference. With follow-up coaching and teaching at subsequent meetings of the collaborative, all progressed substantially in developing the habit for change in their centers.
Identifying and Gaining Consensus on PBPs Requires Considerable Time and Energy but Also Has Considerable Benefits
Project staff and faculty underestimated by almost a factor of 2 the amount of time and effort needed to reach consensus on PBPs. A lack of standardized, detailed measurements was 1 reason for the amount of time needed. In most cases, data simply did not exist to allow one to correlate a particular practice with consistently better outcomes. However, given the complex nature of NICUs and the complex medical conditions of very low birth weight infants, the hope for simple cause-and-effect correlations might be in vain. It has always been a part of the art of medicine to synthesize a variety of inputs into a concrete plan of action. This synthesis can occur rapidly in the mind of an individual or among colleagues in close practice who have learned to think alike. It is a much slower process in a larger group with more diverse colleagues.
Many participants reported that the process was challenging but educational and exposed the fallacies of long-held beliefs and mental models. O’Connor et al8 reported a similar finding in their collaborative improvement work in cardiovascular surgery. They noted that the improvement work had become the informal continuing medical education process for the group. Such thinking increases the conviction of the need for change, a critical factor often cited by experts in organizational change.25
Collaborative Learning Is an Important Process in a Rapidly Evolving Field
In a field such as neonatology, with exceedingly complex patients, there are few simple explanations or obvious best practices. Complex systems scientists26 and organizational learning theorists27 point out the importance of timely feedback in an ever-changing environment. Although professional publication and conferencing are important, they are not timely enough to meet the need for rapid access to a wide network of trusted colleagues. Ongoing improvement collaboratives, such as NIC/Q 2000, are 1 way to meet that need.
Social Process Within the Collaborative Is Important
Research into the diffusion of innovation makes clear the importance of social networks in the spread of ideas.21 Although the focus of NIC/Q 2000 was on the exchange of technical information leading to better organizational outcomes, the social networks and friendships that emerged have aided this process. Every meeting of the collaborative featured an evening social event, which proved useful in breaking down barriers among different professional groups and organizations. The use of professional facilitators to work with each focus group and be available to coach centers one-on-one further aided the exchange of information.
The social process has its downside as well. It may be difficult for new centers to join the collaborative and feel part of the process. Some of this difficulty was seen when the 25 new centers joined the 9 original centers in the transition to NIC/Q 2000. The same issue may also have a negative impact on plans to spread PBPs to other centers in the network that were not in the collaborative.
Having Reliable Data Across Many Organizations Allows for More Rigorous Evaluation of the Collaborative Improvement Process
A continuing criticism of collaborative improvement methods is the lack of formal evaluation and controlled comparisons. Berwick16 argued that improvement methods are fundamentally different from treatment modalities and should not be subjected to the same standards of evidence. However, more evidence should be provided for the effectiveness of these methods. Collaboratives organized around existing performance databases have a powerful advantage in their ability to demonstrate improvement. An important feature of NIC/Q 2000, for example, was the use of standard indicators in the network database to conduct rigorous pre- and postintervention studies of organizations in the collaborative and make comparisons to an appropriate control group of centers not in the collaborative.
Next Steps: NIC/Q 2000+
The original commitment from centers in the NIC/Q 2000 collaborative was for a total of 2 years. Thirty-one of the 34 participating centers requested that we extend the collaborative for an additional year. During this additional year, called NIC/Q 2000+, there were 2 large group meetings in April and September 2001. Major emphases were spreading the PBPs developed by focus groups to centers outside those focus groups, applying the collaborative improvement model to reduce medical errors and enhance patient safety, and focusing in more detail on systems thinking and change.
In October 2000, the Vermont Oxford Network launched a new Internet site, NICQ.org, to support the activities of participants in the collaborative. The site includes a resource section with improvement information developed and submitted by participants, a classroom with slide presentations by project faculty from meetings of the collaborative, contact lists for all project participants, and a section on medical errors and patient safety. The site is secure and easily searchable. Users can submit comments on any document on the site, for viewing by other users. It is hoped that NICQ.org will provide the advanced tools and resources that health professionals in neonatal intensive care need to practice effectively the 4 key habits of improvement.
Focus on Safety
The Institute of Medicine has focused national attention on the issue of medical errors and patient safety.28 This issue is as important in neonatal intensive care as in other clinical settings. To apply the 4 key habits of clinical improvement to reducing medical errors and enhancing patient safety in the NICU, didactic sessions and learning exercises related to improving patient safety were included as key elements in the NIC/Q 2000+ meeting agendas. For the April 2001 meeting of the collaborative, centers prepared case studies showing how they had applied the improvement model to improve patient safety. These case studies were presented and discussed at the meeting. In addition, collaborative participants used the NICQ.org Internet site to submit and view anonymous reports of medical errors, adverse events, and near-miss errors. More than 600 errors and near-miss errors were submitted to NICQ.org by April 2001, and the number of reports is continually growing. The reports represent a broad range of events of many different types. Participants also can post linked comments for any error report submitted to the site. Anonymous, voluntary error reporting on NICQ.org has facilitated collaborative learning about medical errors in the NICU.
Focus on Systems Thinking and Change
NICUs are part of larger hospital systems, and these systems have many interactions with the NICU. Therefore, when changes are made in the NICU, thought has to be given to the impact of that change on the overall system. Conversely, when changes are occurring in the larger hospital system or community, thought needs to be given to the impact on the NICU. In addition, mental models are important factors in the way people interpret the systems in which they work.
At its first meeting, the collaborative developed and ratified a set of guidelines for confidentiality and publication to promote data sharing among institutions while protecting confidentiality, and to facilitate the publication of high-quality research based on work performed by the collaborative. A publications committee was elected to review all presentations and publications. This supplement includes the first major publications of the NIC/Q 2000 project.
Plans for Evaluation
The reports in the current supplement include descriptions and case studies of the work performed by participants in the collaborative as of December 2000. At the conclusion of the final year of the project, additional statistical analyses will be performed. Trends over time in key outcome measures will be analyzed. Comparison of outcomes at centers in the collaborative to a comparison group of network hospitals that did not participate in the collaborative will be done. These analyses will be conducted using information in the Vermont Oxford Network Database.
- Horbar JD. The Vermont Oxford Network: evidence-based quality improvement for neonatology. Pediatrics. 1999;103:350–359
- Horbar JD, Rogowski J, Plsek PE, et al. Collaborative quality improvement for neonatal intensive care. Pediatrics. 2001;107:14–22
- Rogowski J, Horbar JD, Plsek PE, et al. Economic implications of neonatal intensive care unit collaborative quality improvement. Pediatrics. 2001;107:23–29
- Plsek PE. Quality improvement methods in clinical medicine. Pediatrics. 1999;103:203–214
- Kilo CM. Improving care through collaboration. Pediatrics. 1999;103:384–393
- Kerr D, Bevan H, Gowland B, Penny J, Berwick D. Redesigning cancer care. BMJ. 2002;324:164–166
- Baker GR, King H, MacDonald JL, Horbar JD. Using organizational assessment surveys for improvement in neonatal intensive care. Pediatrics. 2003;111(suppl):e419–e425
- Capra F. The Web of Life: The New Scientific Understanding of Living Systems. New York, NY: Anchor Books; 1996
- Langley GJ, Nolan KM, Nolan TW, Norman CL, Provost LP. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. San Francisco, CA: Jossey-Bass; 1996
- Sackett DL, Straus SE, Richardson WS, Rosenberg W, Haynes RB. Evidence-Based Medicine: How to Practice and Teach EBM. 2nd ed. Edinburgh, United Kingdom: Churchill Livingstone; 2000
- Rainey TG, Kabcenell A, Berwick DM, Roessner J. Reducing Costs and Improving Outcomes in Adult Intensive Care. Institute for Healthcare Improvement Breakthrough Series Guides. Boston, MA: Institute for Healthcare Improvement; 1998:156–157
- Horbar JD, ed. Vermont Oxford Network 2000 Database Summary. Burlington, VT: Vermont Oxford Network; 2001
- Rogers EM. Diffusion of Innovations. 4th ed. New York, NY: Free Press; 1995
- Camp RC. Benchmarking: The Search for Industry Best Practices That Lead to Superior Performance. Milwaukee, WI: ASQC Quality Press; 1989
- Muir Gray JA. Evidence-Based Health Care: How to Make Health Policy and Management Decisions. London, England: Churchill Livingstone; 1997
- Axelrod R, Hamilton WD. The evolution of cooperation. Science. 1981;211:1390–1396
- von Bertalanffy L. General Systems Theory: Foundations, Development, and Applications. Rev ed. New York, NY: George Braziller; 1968
- Weick KE. Sensemaking in Organizations. Thousand Oaks, CA: Sage; 1995
- Kohn L, Corrigan J, Donaldson M, eds. To Err Is Human: Building a Safer Health Care System. Washington, DC: National Academy Press; 2000
- Copyright © 2003 by the American Academy of Pediatrics