July 2014, VOLUME 134 / ISSUE 1

Demonstrating the Learning Health System Through Practical Use Cases

  Amy P. Abernethy, MD, PhD
  Center for Learning Health Care, Duke Clinical Research Institute, Duke University School of Medicine, Durham, North Carolina
  • health information technology
  • electronic health record
  • Abbreviations:
    LHS — learning health system
    QI — quality improvement
    The study by Forrest et al1 on the evaluation of the effectiveness of anti-tumor necrosis factor (TNF)-α for Crohn disease has as its subtitle “Research in a Pediatric Learning Health System.” A quick review of the article and its abstract suggests that this is an exploration of anti-TNF-α; in reality, it is an exploration of the practical applicability of a learning health system (LHS) model to meaningfully generate generalizable knowledge. In techno-speak we would call this article a “use case,” a term derived from software and systems engineering; this is an attempt to develop, demonstrate, and refine the methods and application of LHS principles, data, and analytic techniques within the context of a practical real-world example.

    First, what is an LHS? According to the Institute of Medicine, an LHS is an organization in which “science, informatics, incentives, and culture are aligned for continuous improvement and innovation—with best practices seamlessly embedded in the delivery process and new knowledge captured as an integral by-product of the delivery experience.”2 Practically, the LHS is an environment in which the results and outcomes of clinical decisions and care for patients inform best practices and new research directions simultaneously. In an LHS, the care of an individual patient is informed by the care of similar patients before her or him, and her or his care is reinvested into a system of continuously aggregating data to support future discovery.

    Use cases serve many purposes. They take abstract concepts and make them tangible and realistic. They provide opportunities to define relationships between roles and systems to achieve a goal, including understanding key substrate, hidden barriers to success, analytic methods, prejudices, and unexpected enthusiasts. They provide the opportunity to demonstrate what is possible, and generate useful evidence along the way.

    LHS frameworks are mostly conceptual, with few practical published examples. Geisinger Health System redesigned outpatient management of chronically ill patients by using an advanced medical home model that incorporated predictive models, patient navigators, and partnerships with primary care and post–acute care facilities such that health care utilization was reduced.3 Group Health in Seattle described its rapid learning health system model: exemplar projects centered on opioid prescribing and the patient-centered medical home.4 Methodologic features of these data-driven models are continuous quality improvement (QI), iterative adjustment of clinical operations, and real-time reporting.

    To date, LHS descriptions have focused on iterative process improvement and implementation of currently available knowledge (through QI), with few examples of how the LHS model facilitates discovery, clinical research, and generation of new knowledge. This is where Forrest et al’s article on anti-TNF-α for Crohn disease is positioned. These researchers take the possibilities of an LHS to a new level: a use case for how the LHS model can contribute generalizable knowledge.

    To accomplish this task, the authors needed the core elements of an LHS to be in place: reliable analyzable data as substrate, organized teamwork, structured processes, meaningful questions to be addressed, clinical subject matter expertise, and analytic expertise. ImproveCareNow is a fundamental component; it is a trustworthy longitudinal registry with clinically meaningful objective and subjective end points. The authors needed a compelling use case with comparators. Anti-TNF-α is potentially beneficial but costly, of uncertain toxicity, and of unproven benefit in children; it was one of the Institute of Medicine’s top 25 comparative effectiveness research priorities. Published studies provide practical benchmarks against which the authors compared their results.

    Most critically, the authors needed an analytic approach. This approach is really the most novel aspect of the study. The authors used a sequence of nonrandomized trials, each individually drawn from the background ImproveCareNow data set. To accomplish this, they needed the following: (1) a methodologically and clinically appropriate statistical approach, (2) a practical data collection framework applicable in busy practices (eg, Web portal, case report form), (3) an LHS data set that incorporates reliable outcome measures despite the indistinct end points of the illness (eg, Short Pediatric Crohn’s Disease Activity Index plus the Physician Global Assessment rating), (4) longitudinal data with a planned deidentification schema, (5) an expectation of data quality, including processes for data review and audit, (6) a practical analytic framework that reflects real-life clinical settings (eg, liberalizing the assessment framework to 16 weeks to accommodate the variation in visit frequency, which was at the discretion of clinicians and patients), and (7) a plan for managing missing data. There are many prerequisites before analysis can occur. The sheer number of authors on the article is itself a testament to the importance of working together to accomplish this type of work.

    Pediatric gastroenterologists will be heartened by the authors’ assertion that, within this LHS comparative effectiveness research study, anti-TNF-α for Crohn disease is an effective treatment. On a much more generalizable scale, proponents of the LHS conceptual model will be heartened by Forrest et al’s compelling example of how carefully curated data collected to support QI can be used simultaneously to generate new generalizable knowledge, and the practical description of an analytic framework to get there.

    Of course, there are still limitations. I found myself wondering: “Why was only 65% of the data set included?” “Is their handling of missing data appropriate?” “Did the authors miss a critical confounder?” All that being said, I could spend my day lost in the forest for the trees; only time, reanalyses, and smarter people than me will be able to tell. These authors have made an important and meaningful contribution to the story of how we better approximate clinical care and research through data, and for that I am grateful. We needed the use case so that we can study it, learn from it, advance it, and take the next step.


      • Accepted April 22, 2014.
    • Address correspondence to Amy P. Abernethy, MD PhD, Duke University Medical Center, Box 3436, Durham, NC 27710. E-mail: amy.abernethy{at}
    • Opinions expressed in these commentaries are those of the author and not necessarily those of the American Academy of Pediatrics or its Committees.

    • FINANCIAL DISCLOSURE: Dr Abernethy has received research funding from the National Institute of Nursing Research, National Cancer Institute, Agency for Healthcare Research and Quality, DARA, Glaxo Smith Kline, Celgene, Helsinn, Dendreon, Kanglaite, Bristol Myers Squibb, and Pfizer; these funds are all distributed to Duke University Medical Center to support research including salary support for Dr Abernethy. Pending industry-funded projects include: Genentech and Insys. Since 2012, she has had consulting agreements with or received honoraria from (>$5000 annually) Bristol Myers Squibb and ACORN Research. She has corporate leadership responsibility in Athenahealth (health information technology company; director), Advoset (an education company; owner), and Orange Leaf Associates LLC (an information technology development company; owner).

    • FUNDING: No external funding.

    • POTENTIAL CONFLICT OF INTEREST: The author has indicated she has no potential conflicts of interest to disclose.

    • COMPANION PAPER: A companion to this article can be found on page 37, online at

