OBJECTIVES: To assess the degree to which a national sample of pediatric practices could implement American Academy of Pediatrics (AAP) recommendations for developmental screening and referrals, and to identify factors that contributed to the successes and shortcomings of these efforts.
BACKGROUND: In 2006, the AAP released a policy statement on developmental surveillance and screening that included an algorithm to aid practices in implementation. Simultaneously, the AAP launched a 9-month pilot project in which 17 diverse practices sought to implement the policy statement's recommendations.
METHODS: Quantitative data from chart reviews were used to calculate rates of screening and referral. Qualitative data on practices' implementation efforts were collected through semistructured telephone interviews and inductively analyzed to generate key themes.
RESULTS: Nearly all practices selected parent-completed screening instruments. Instrument selection was frequently driven by concerns regarding clinic flow. At the project's conclusion, practices reported screening more than 85% of patients presenting at recommended screening ages. They achieved this by dividing responsibilities among staff and actively monitoring implementation. Despite these efforts, many practices struggled during busy periods and times of staff turnover. Most practices were unable or unwilling to adhere to 3 specific AAP recommendations: to implement a 30-month visit; to administer a screen after surveillance suggested concern; and to submit simultaneous referrals both to medical subspecialists and local early-intervention programs. Overall, practices reported referring only 61% of children with failed screens. Many practices also struggled to track their referrals. Those that did track referrals found that many families did not follow through with recommended referrals.
CONCLUSIONS: A diverse sample of practices successfully implemented developmental screening as recommended by the AAP. Practices were less successful in placing referrals and tracking those referrals. More attention needs to be paid to the referral process, and many practices may require separate implementation systems for screening and referrals.
In July 2006, the American Academy of Pediatrics (AAP) released a revised policy statement on developmental surveillance and screening for children from birth to 3 years of age.1 This policy statement recommended that primary care providers conduct both developmental surveillance at all well-child visits, and structured developmental screening using a standardized instrument at 9, 18, and 30 (or 24) months of age. It also recommended that children judged to be at risk for developmental delays be referred for detailed developmental and medical evaluations and for early-intervention services. These recommendations were driven by evidence that (1) early identification of developmental disorders can lead to improved child and family outcomes2,3; (2) primary care providers tend to underdetect developmental disorders among infants and young children4–6; and (3) adoption of standardized screening instruments to improve rates of early identification is feasible in pediatric primary care settings.7,8
Considerable evidence shows that the publication of clinical guidelines is, by itself, insufficient to ensure their widespread adoption.9–15 Therefore, the AAP took several steps to accelerate the uptake of the policy statement's recommendations. First, it developed a clinical algorithm intended to guide primary care practices in implementing the policy statement's recommendations during well-child visits. Second, it paired the release of the policy statement with an implementation project to assess the feasibility of implementing the policy statement in a variety of practice settings. In this project, the Developmental Surveillance and Screening Policy Implementation Pilot (D-PIP), staff members at 17 pediatric primary care practices sought to implement the policy statement over a 9-month period.
This study had 2 main objectives. The first objective was to assess, quantitatively, the degree to which practices participating in the D-PIP could implement the AAP recommendations for developmental screening and referrals. The second objective was to identify, qualitatively, the factors that staff at participating practices felt contributed to the successes or shortcomings of their efforts.
Site Selection and Training
In March 2006, 54 pediatric primary care practices responded to a request for applications from the AAP to participate in a quality-improvement pilot focused on developmental surveillance and screening. Practices provided information on their location, size, practice type, practice setting, and patient population (including age distribution, ethnic makeup, and payer mix). Each site was required to propose a 3-member project team (a pediatrician leader, a clinic or office staff member, and a third individual left to the discretion of the practice) and to express their commitment to practice change. On the basis of this information, study investigators selected 17 practices from 15 states in an effort to maximize diversity in practice types, practice settings, and patient populations. Table 1 lists the characteristics of selected practices.
All 17 practice teams participated in a 1-day orientation workshop that provided an introduction to the policy statement's recommendations. The orientation reviewed new terminology, available screening instruments, approaches to practice change, data-collection tools, communication with payers, and collaboration with community-based programs. Particular emphasis was placed on the use of standardized developmental screening instruments and the need for practice change. Project organizers were careful, however, not to endorse any specific screen or recommend any specific approaches to office-based implementation of developmental surveillance and screening.
This project received institutional review board (IRB) approval from both the AAP and the Johns Hopkins School of Medicine. For practices affiliated with institutions with their own IRBs, approvals were also obtained from those institution-specific IRBs. Each practice received a total of $1800 in remuneration for participation in both the quantitative and qualitative components of this study.
Quantitative assessment of each practice's implementation of screening and referrals was performed by using an interrupted time-series design. Practices reported their baseline surveillance, screening, and referral practices and chose 1 or more screening instruments. Once project implementation began, they were asked to review a specified number of patient charts per month and report these results to the AAP for compilation and analysis. A 9-month implementation period was chosen because it was the longest period possible within the constraints imposed by project funding sources. To minimize reporting burdens, practices were asked to report data in the aggregate; they did not provide information on specific children (such as age, ethnicity, or gender). At 3 practices, some providers chose not to implement developmental screening; these practices were instructed to review charts only for patients of participating providers.
At orientation, practices were provided specific instructions on the number of charts to be reviewed and data elements to be collected. Potential strategies for data collection were also reviewed, and standard monthly reporting forms were provided for uniform data reporting. AAP project staff members monitored these data-collection efforts and were available throughout the project to answer practices' questions regarding data collection and reporting.
During the first (July 2006) and last (March 2007) months of the project, practices were asked to report on the first 30 well-child visits for children aged 8 to 36 months. In intervening months, practices were asked to report on the first 10 well-child visits among children aged 8 to 36 months. The practices were instructed to review charts for all children within the target age range rather than select only children who presented at the recommended screening ages (9, 18, and 24/30 months).
The data elements requested included the number of children who presented at screening ages, the number with documentation of structured developmental screening, and the number referred to any source for further evaluation or services.
Rates of screening, failed screens, and referrals were calculated across sites on a monthly basis. Because practices reported data in the aggregate, stratified analyses could not be conducted for subgroups of children; stratified analyses were conducted only for subgroups of practices that used particular screening instruments.
One site had multiple months of missing or implausible data that could not be resolved despite the efforts of clinic and AAP project staff. This practice's data were ultimately excluded from quantitative analyses.
A longitudinal qualitative study was also conducted to characterize the experiences of practices in implementing the policy statement's recommendations. Qualitative research is appropriate when little previous research has been conducted on a topic, the topic is complex, and there is a need to capture the viewpoints of multiple stakeholders with different views.16,17 Each of these was true for this study.
Each practice nominated 3 individuals for study participation—1 provider, 1 clinical support staff (eg, nurse, social worker), and 1 practice support staff (eg, office manager, billing specialist)—who were most intimately involved with developmental screening. This resulted in 51 respondents for the first set of semistructured interviews. These respondents were often, but not always, the same individuals who attended the orientation workshop. Forty-four of the 51 respondents also completed a second semistructured interview. Because some first-round respondents had diminished involvement in the D-PIP over time or had left their practices, 6 new respondents completed only a second-round interview (1 provider, 2 clinical support staff, and 3 practice support staff).
The first set of semistructured interviews was conducted 4 to 5 months into the D-PIP implementation period. The second set of interviews was conducted shortly after the project's conclusion, which was approximately 5 months after the first set of interviews. All interviews were conducted by telephone by Dr King and audiorecorded.
Each interview was transcribed verbatim by a research assistant and reviewed for accuracy. Transcripts were then used to create a codebook that contained 119 codes. All codes were developed inductively by 2 research assistants and reviewed by 2 study investigators (Drs Tandon and King) for clarity and lack of overlap with other codes. These 119 codes were then organized by investigators into 10 categories that reflected main areas of investigation (eg, screening, referrals). Each interview was dually coded by 2 different research assistants until the κ coefficient, a measure of agreement between coders, was consistently >0.70, indicating good-to-excellent interrater reliability of the coding process.18 Coded text segments were then placed into matrices to facilitate identification of emerging themes and patterns.
Screening and Referral Practices Before the D-PIP
Staff from 9 of the 17 practices reported that they conducted “structured developmental screening” before the D-PIP. Narrative descriptions of such screening, however, combined with clarifying questions during qualitative interviews, revealed that none of these 9 practices had been implementing developmental screening as defined in the AAP policy statement. Five practices were using a structured instrument (such as the Denver II) in an unstructured manner. One practice used “screening” to describe its use of a clinic-specific checklist of milestones. Three practices used a structured instrument in its intended fashion, but only for some patients, typically those for whom surveillance had already raised concerns for possible developmental delays.
Selection of Screening Instruments
Fifteen practices selected 1 or both of 2 parent-completed screening instruments: the Ages & Stages Questionnaires (ASQ)19 or the Parents' Evaluation of Developmental Status (PEDS).20 At 1 practice, some providers used the Denver II,21 whereas others used the Prescreening Developmental Questionnaire (PDQ).22 The remaining practice initially used the Denver II but later transitioned to the Bayley Infant Neurodevelopmental Screener (BINS).23 Some practices also chose to use a structured instrument for “surveillance,” separate from their “screening” activities. In these practices, parent responses on structured instruments were used primarily to inform the provider's clinical assessment rather than as the main indicator of the need for a developmental referral.
Some practices used combinations of instruments. For example, 1 clinic administered the full Denver II for surveillance at all visits and the ASQ for screening at designated screening visits. Another practice conducted 2-stage screening, using the PEDS to screen all children and the ASQ to screen children with concerning results on the PEDS before making final decisions about referrals.
Rates of Structured Developmental Screening
Practices screened a high proportion of children at the target ages (Fig 1). During the 9-month implementation period, monthly screening rates across practices increased from 68% to 86% of children who presented for recommended screening visits (9, 18, and 24/30 months). During the last 4 months of the project, practices consistently screened more than 85% of all target children. Over the entire project period, 80% of the children seen at screening ages underwent structured developmental screening. Considerable variability was seen among cumulative screening rates, however, which ranged from 33% to 100% across the practices.
Screening rates were also calculated separately for practices that used the 2 most commonly used screens (ASQ and PEDS). Overall, screening rates were not significantly different between the 2 instruments (80% vs 77%; P = .10).
Rates of Failed Screens
Fourteen percent of all children screened had a failed screen, which suggested a risk for developmental delays (Fig 2). As with screening, considerable variability was seen between practices, with cumulative failed-screen rates that ranged from 5% to 53% across practices. In contrast to screening, dramatic differences in rates of failed screens were seen between instruments, with twice as many children having a concerning result on the PEDS as on the ASQ (22% vs 11%; P < .001).
Referral Rates Among Children With Failed Screens
The AAP policy statement recommends that every child with a failed screen be referred for further evaluation. As shown in Fig 3, however, the practices fell short of this target. Monthly referral rates among children with failed screens ranged from a high of 78% in September 2006 to a low of 48% in January 2007, averaging 61% over the entire study. Once again, substantial variability was seen between practices, with practice-specific referral rates that ranged from 27% to 100% of children with failed screens. In contrast to screening rates, referral rates did not increase over time; in fact, they were noticeably lower in later months than they were early in the project period.
Subgroup analyses according to instrument revealed that most of this decrease in referral rates occurred among practices that used the PEDS. During the last 4 months of the project, fewer than one third of the children with a failing result on the PEDS were referred to any source. Over the entire study, referral rates for children with a failed PEDS were far lower than those for a failed ASQ (43% vs 72%; P < .001).
Several themes emerged from analysis of the qualitative data. These themes are described below. Illustrative quotations for each theme, as well as the number of practices that reported each theme, are provided in Tables 2 and 3.
Considerations in Selecting Screening Instruments
The factor most commonly cited in selecting screening instruments was concern about clinic flow (Table 2). Eight of the 17 practices expressed concern that routine administration of a developmental screening instrument would slow the flow of patients through the clinic. Most of these practices chose the PEDS, which comprises 10 items on a single page. Some practices concerned with clinic flow did, however, select the longer 30-item ASQ and found it feasible to implement in their busy clinic settings.
Several practices selected the ASQ because it was used by a local outreach program or their state's early-intervention program, and they felt it would be advantageous to be aligned with these local programs. Some clinics chose the ASQ because they felt that its skill-based items better supported their efforts to teach trainees and parents about typical child development than the PEDS, which contains items based on parent concerns.
Need for Practice-Wide Systems to Implement Screening
For nearly all practices, implementation of developmental screening required the creation of an office-wide implementation system. Although each clinic created a system tailored to its own needs, several features of these systems were common across sites. First, most clinics divided responsibilities among staff at multiple levels. For example, screening instruments might be distributed by front desk staff, scored by nurses or nursing assistants, and reviewed with the family by a provider. Then, if a referral was needed, it might be placed by a social worker or referral coordinator.
Second, implementation systems tended to evolve over time. Many clinics found early on that screening instruments were not being distributed consistently; this finding led them to examine and restructure their implementation systems. Importantly, most clinics identified the need for change by reviewing systematically collected data on rates of screen distribution and completion. Although many clinics bemoaned the time and effort necessary to collect these data, most acknowledged that such data were critical to the effectiveness of their quality-improvement efforts.
Common Challenges to Implementation
Despite their successes, practices identified several common challenges in implementing developmental screening. Practices struggled, particularly at the outset, with distributing screening instruments to children at screening ages but not to other children. The strategies that practices developed to address this issue were as numerous as the practices themselves. They ranged from typing notes in a computerized appointment system to paper-clipping screening instruments to charts at the beginning of the day. One practice, remarkably, even faxed screening instruments to parents in advance of the visit. Another practice opted to distribute screening instruments to children at all visits between 6 and 36 months of age, not just the 3 visits targeted by the AAP, because they felt that this would substantially increase the likelihood of children undergoing screening on multiple occasions before the age of 3.
Many clinics found that it was more difficult to screen consistently when they were busy. Although busy periods occurred sporadically at all sites, they were seen simultaneously across multiple sites with the onset of the winter viral season. This phenomenon was reflected not only in qualitative interviews but also in the quantitative data, which showed a noticeable dip in screening rates during November 2006. Ongoing data-collection and monitoring efforts helped practices recognize these challenges and adjust their implementation systems accordingly. As a result, screening rates quickly returned to their previous levels.
Finally, many clinics grappled with the challenges posed by staff turnover. When clinics lost staff members, they often saw screening rates decrease while they were short-staffed or while a new staff member was being trained. This was especially true when the new staff member was in a leadership position, such as a nursing manager or office manager, because these individuals often played an integral role in the ongoing monitoring of developmental screening efforts.
All these implementation challenges were notable for the degree to which they were shared across participating practices. No major differences were seen between practice types or patient populations, or between practices that used parent-completed questionnaires and those that used provider-administered screens.
Deviations From the AAP Algorithm
Practices generally adhered to the steps described in the algorithm included in the AAP policy statement on developmental surveillance and screening. However, there were 3 points at which practices tended to deviate from this algorithm.
First, the AAP algorithm suggests that the third screening occur at a 30-month well-child visit. No practice, however, consistently implemented a 30-month visit unless such a visit was already routine in their practice before the D-PIP. The most common reason cited for not implementing a 30-month visit was the lack of expected insurance payment for the visit. It should be noted, however, that none of the 3 practices that routinely performed 30-month visits before the D-PIP reported difficulties collecting insurance payments for these visits.
A second common deviation from the AAP algorithm involved provider actions when developmental surveillance at nonscreening visits raised concerns about possible delays. In these circumstances, the algorithm calls for administration of a standardized screening instrument before deciding whether to refer. Most providers, however, did not take this additional step. Instead, they tended to refer children for further evaluation based solely on their surveillance.
Finally, the AAP algorithm recommends that children with failed screens be simultaneously scheduled for developmental/medical evaluations and referred for early-intervention services. No clinic adhered strictly to this recommendation. Instead, providers tended to stratify their referrals. The manner in which practices stratified their referrals, however, varied widely. For example, 2 practices referred children suspected of having isolated speech-language delays directly to a speech therapist but referred children thought to have more severe or global delays to public early-intervention programs. Three practices referred children thought to have mild delays to public early-intervention programs, but they sent those suspected of having more severe delays to medical subspecialists such as developmental pediatricians or pediatric neurologists. One practice referred all children younger than 1 year to medical subspecialists but referred older children primarily to early intervention.
Lessons Learned From Referral-Tracking Efforts
Although not explicitly addressed in the AAP policy statement, 9 of the 17 practices attempted to track the outcomes of their referrals (Table 3). These practices found that such referral tracking required a clinic-wide implementation system distinct from their system for developmental screening. Without a separate tracking system, information about which children had been referred and where they had been referred could be found only in individual charts, making clinic-wide tracking nearly impossible. Of the 9 practices that attempted to track referrals, only 6 succeeded in putting a system in place. These clinics found referral tracking to be a time- and labor-intensive effort that was difficult to maintain over the long term.
Practices were not asked to report the number of children further evaluated or ultimately identified as having delays. Quantitative data on these outcomes, therefore, are not available. In qualitative interviews, however, respondents described learning much from their referral-tracking efforts. Many found that a large number of families never followed through with the recommended referrals. Perhaps more important was the realization among several respondents that families often did not understand the reason for their referral.
All practices that tracked referrals found that they had better communication with local referral resources and received more consistent feedback about the children they referred. Finally, 3 practices reported that their tracking efforts led them to conclude that a larger number of children were being identified and awarded services as a result of their developmental surveillance and screening efforts.
By the end of the 9-month D-PIP implementation period, nearly all participating practices had successfully implemented the AAP's recommendations on developmental surveillance and screening. They did so by selecting parent-completed screening instruments and developing practice-wide systems for implementation. These systems engaged staff at multiple levels, monitored performance through ongoing data collection, and used such data to drive changes in implementation over time. In these respects, the experiences of D-PIP practices were similar to those of several recent projects that have aimed to improve the quality of pediatric preventive care.24–26
At the same time, this project revealed that many clinics chose not to implement certain AAP recommendations: de novo implementation of a 30-month well-child visit; routine screening when surveillance had already suggested delays; and dual referral of all children, no matter what the concern, to both medical subspecialists and early-intervention programs.
Practices largely succeeded in implementing both of the 2 most commonly used screens: the ASQ and the PEDS. However, children at practices that used the PEDS “failed” their screens twice as often as children at practices that used the ASQ. At the same time, practices that used the PEDS referred a far lower proportion of patients with failed screens for further evaluation.
There are a number of possible explanations for these findings. Clinics that used the PEDS may have had higher proportions of children with developmental delays. If this is true, the lower referral rates among these practices are concerning; they suggest that providers opted not to refer many children who might have benefited from further evaluation. Alternatively, providers who used the PEDS may have been less likely to believe a failed screening result and more likely, therefore, not to refer a child with a failed screen. More studies are needed to determine whether this finding is replicated in other settings and, if so, what factors underlie it.
Among the policy statement's recommendations, the greatest departures from previous practice were recommendations regarding routine administration of structured screening instruments. It is understandable, therefore, that implementation of screening was addressed in the greatest detail, both within the policy statement and by the D-PIP practices. The results of this study strongly suggest, however, that effective developmental screening requires 2 distinct implementation systems: one for screening and another for referrals.
Implementation systems for screening govern the administration of screening instruments and ongoing monitoring of the screening process. In this study, similar to a number of previous studies,27–29 we found this to be feasible within the pediatric primary care setting. Implementation systems for referrals, however, address an entirely separate set of tasks: placing referrals, tracking families' follow-through, communicating with specialists and early-intervention programs about the outcomes of completed referrals, and ensuring timely primary care follow-up. Most referral-related tasks occur after the clinic visit and are often handled by different clinic staff members than those who are most intimately involved with screening. This aspect of developmental surveillance and screening has received far less attention in recent studies than implementation of screening instruments. In this study, practices that did not create a separate implementation system for referrals could not reliably determine how many children completed their recommended referrals or were successfully connected to needed services.
This study has several important limitations. First, although selected for their geographic and demographic diversity, the 17 practices that participated in the D-PIP were not typical of all primary care practices with regard to developmental surveillance and screening. Instead, their interests in developmental screening and quality improvement were so strong that they volunteered to participate in a time- and labor-intensive project with little tangible reward. The degree to which their experiences would be replicated in less motivated practices remains to be seen.
Second, the chart reviews that generated the quantitative data for this project were conducted by staff at each participating practice. Although all practices used the same data-collection forms and received the same instructions for their use, no external verification was conducted to confirm that items were being interpreted in the same way or that data were being collected in the same manner across all practices. To date, however, screening rates reported by other recent studies of developmental screening have ranged from 53% to 81%,7,28,29 which overlaps with average screening rates among the D-PIP sites.
Finally, although it enhanced the feasibility of the project for participating practices, the small number of charts reviewed each month limited the precision of monthly estimates for rates of screening, failed screens, and referrals. Confidence intervals around these monthly estimates were quite wide, with substantial overlap between months and across subgroups of instruments. However, the cumulative rates that reflected the entire 9-month project period were much more precise. This facilitated the detection of statistically significant differences in rates of failed screens and rates of referrals between screening instruments.
Despite these limitations, this study had a number of important strengths. As noted previously, the D-PIP practices represented a range of geographic regions, practice types, and patient populations. This diversity adds credibility to the findings that some themes, such as the challenges of capturing children at screening ages and tracking referrals, were consistent across a wide range of primary care settings.
We intentionally collected qualitative data from across categories of clinic staff. Although providers typically attract the most attention in discussions of developmental screening, this study clearly shows that ancillary staff assume most day-to-day responsibility for implementation. Thus, ongoing support for such ancillary staff is critical to the continued success of these efforts.
The ultimate goal of developmental surveillance and screening is to improve outcomes for children with developmental disorders.1 To date, however, studies have failed to document a direct link between routine screening30,31 and improved child outcomes. In fact, the US Preventive Services Task Force recently found insufficient evidence to recommend in favor of widespread screening of young children for speech-language delays.31 Our findings suggest that shortcomings in referral processes may partially account for this gap in evidence. Future studies on the potential benefits of developmental screening, therefore, should include robust referral systems that incorporate the findings from this study. Specifically, such referral systems should provide better explanations to families of the reasons for developmental referrals, as well as better monitoring of referral outcomes.
We identified a number of points at which an algorithm for an office-based process could not be consistently implemented in real-world practice settings. This highlights the utility of combining a policy statement with both a detailed algorithm and efforts to test the feasibility of its implementation. It also emphasizes the need for better mechanisms to revise policy statements as new knowledge becomes available. Currently, AAP policy statements “automatically expire 5 years after publication unless reaffirmed, revised, or retired at or before that time.”32 Without a mechanism for disseminating new knowledge in a timely fashion, many practices that choose to implement developmental screening will not have the opportunity to avoid the common missteps identified by the D-PIP practices. A remaining challenge for the AAP and other organizations that develop clinical recommendations is the creation of efficient mechanisms for information dissemination that maximize the opportunity for translation and adoption of new knowledge.
Major support for the D-PIP was provided by the AAP, the Centers for Disease Control and Prevention/National Center on Birth Defects and Developmental Disabilities (cooperative agreement U59/CCU521266), and the Maternal and Child Health Bureau (HRSA-04-056: CFDA 93.110). Additional support for the qualitative study was provided by the Commonwealth Fund (grant 20070181). Dr King also received support from the Johns Hopkins Clinical Research Scholars Program (funded by grants K12RR017627 and KL2RR025006 from the National Center for Research Resources).
We are deeply indebted to the staff of the 17 D-PIP practices who gave so generously of their time and wisdom for this project. These 17 practices are: Alexandria-Lake Ridge Pediatrics (Alexandria, VA); Boys Town Pediatrics (Omaha, NE); Charter Oak Health Center at Connecticut Children's Medical Center (Hartford, CT); Children's Clinic (Muskogee, OK); Children's Clinic La Jolla (La Jolla, CA); Children's Hospital of Pittsburgh Primary Care Center (Pittsburgh, PA); Hospital of Saint Raphael Pediatric Primary Care Center (New Haven, CT); Kids Clinic (Lawrenceville, GA); Marshall University Pediatrics (Huntington, WV); Midland Community Health Care Services (Midland, TX); New Ulm Medical Center (New Ulm, MN); North Arlington Pediatrics (Arlington Heights, IL); Ohio Pediatrics, Inc (Huber Heights, OH); South Valley Pediatrics (Hamilton, MT); Children's Clinic, Serving Children & Their Families (Long Beach, CA); Wishard Primary Care Center (Indianapolis, IN); and Ypsilanti Health Center (Ypsilanti, MI).
Our research assistants (Jenna Rossoff, Chantel Priolo, Katherine Thorpe, Jennifer Dein, Samantha Ohmer, Sarah Mahon, Michael Levin, and Arthika Chandramohan) were invaluable in transcribing interviews and organizing the enormous amount of data generated by this project. We thank the other members of the AAP's policy revision committee (Paul Biondich, Carl Cooley, John Duby, Joseph Hagan, and Lynn Wegner) for their important contributions in crafting the revised developmental surveillance and screening policy statement. Finally, we thank Ginny Chanda and Holly Griffin in the offices of the Council on Children With Disabilities and the Medical Home Surveillance and Screening Initiatives at the AAP for their enormous logistic support throughout this project.
- Accepted August 4, 2009.
- Address correspondence to Tracy M. King, MD, MPH, Johns Hopkins School of Medicine, Division of General Pediatrics and Adolescent Medicine, 200 N Wolfe St, Room 2072, Baltimore, MD 21287. E-mail:
The contents of this article are solely the responsibility of the authors and do not necessarily represent the official views of the AAP, Centers for Disease Control and Prevention, Maternal and Child Health Bureau, Commonwealth Fund, or National Center for Research Resources, National Institutes of Health.
FINANCIAL DISCLOSURE: The authors have indicated they have no financial relationships relevant to this article to disclose.
- AAP = American Academy of Pediatrics
- D-PIP = Developmental Surveillance and Screening Policy Implementation Pilot
- ASQ = Ages & Stages Questionnaires
- PEDS = Parents' Evaluation of Developmental Status
- 1.↵American Academy of Pediatrics, Council on Children With Disabilities, Section on Developmental Behavioral Pediatrics, Bright Futures Steering Committee, and Medical Home Initiatives for Children With Special Needs. Identifying infants and young children with developmental disorders in the medical home: an algorithm for developmental surveillance and screening [published correction appears in Pediatrics. 2006;118(4):1808–1809]. Pediatrics. 2006;118(1):405–420
- 32.↵American Academy of Pediatrics. AAP policy. Available at: http://aappolicy.aappublications.org. Accessed July 28, 2009
- Copyright © 2010 by the American Academy of Pediatrics