Pediatrics
February 2005, Volume 115, Issue 2

The Assessment of Attention-Deficit/Hyperactivity Disorder in Rural Primary Care: The Portability of the American Academy of Pediatrics Guidelines to the “Real World”

Jodi Polaha, PhD*; Stephanie L. Cooper, PhD*; Tawnya Meadows, PhD*; Christopher J. Kratochvil, MD

*Department of Psychology, Munroe-Meyer Institute; Department of Psychiatry, University of Nebraska Medical Center, Omaha, Nebraska

Abstract

Objective. To examine the implementation of a protocol for the assessment of attention-deficit/hyperactivity disorder (ADHD) in rural pediatric practices. The protocol was designed to provide an efficient means for pediatricians to learn and use the ADHD guidelines put forth by the American Academy of Pediatrics (AAP).

Methods. Primary care staff (physicians, nurses, etc) from 2 rural pediatric practices were trained to use the ADHD-assessment protocol. Medical records for 101 patients were reviewed from 1 to 2 years before the introduction of the protocol and for 86 patients during the subsequent 2 to 3 years to assess compliance with the AAP guidelines. In addition, 34% of the scales scored by the staff were rescored to check for scoring accuracy.

Results. Before the availability of the AAP guidelines and the implementation of the assessment protocol, neither primary care site was consistently collecting the comprehensive information that is now recommended for an ADHD assessment. Parent and/or teacher rating scales were collected for only 0% to 21% of assessments across sites. After staff received brief training and supporting materials, medical records reflected significant improvement in the ascertainment of clinically necessary ADHD information, with parent and teacher rating scales present 88% to 100% of the time. Staff demonstrated an ability to score rating scales with a high degree of accuracy. The integrity of protocol collection and management was maintained 2 to 3 years after training.

Conclusions. An efficient system for conducting ADHD assessments according to AAP guidelines in rural pediatrics clinics can be initiated and maintained with integrity. Additional research is needed to determine if this system improves diagnostic decision-making and patient outcomes.

  • attention-deficit/hyperactivity disorder
  • assessment
  • guidelines
  • pediatric

Physicians have generally been identified as the gatekeepers of behavioral health services, and in pediatric primary care, attention-deficit/hyperactivity disorder (ADHD) has become an important focus. Pediatricians rate behavior problems, including ADHD, as the most common presenting concern.1 Specialists in developmental and behavioral pediatrics estimate that referrals for ADHD comprise 50% to 75% of their practices.2 Moreover, a recent study of 2 national surveys indicated that primary care diagnostic assessment services for children with ADHD increased threefold between 1989 and 1996.3

To assist physicians in meeting this growing clinical demand, the American Academy of Pediatrics (AAP) has made a substantial effort to develop best-practice guidelines for ADHD. In particular, a series of articles was published to describe specific empirically supported recommendations for assessment and treatment.4 These guidelines were established by a panel of experts across a variety of relevant disciplines and underwent an extensive peer-review process both within the AAP and by outside organizations. The published guidelines provide recommendations, detail their application, and describe the strength of evidence for their use.

With regard to assessment, the AAP provided 6 guidelines: (1) physicians should conduct an evaluation for school-aged children presenting with any of the core ADHD symptoms, academic underachievement, or behavior problems; (2) criteria published in the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV)5 should be used; (3) physicians should systematically collect information from parents regarding core symptoms (using empirically supported ADHD-specific rating scales such as the Conners' Parent Rating Scale)6; (4) similar information should be obtained from the child's classroom teacher, augmented with school records documenting academic performance, behavioral issues, and assessments conducted by the school; (5) physicians should evaluate for coexisting conditions, with a specific emphasis on those that are often comorbid with ADHD (ie, oppositional defiant disorder [ODD], anxiety disorders, mood disorders, and learning disabilities [LDs]); and (6) physicians should not use diagnostic tools that lack strong empirical support, such as continuous performance tests.

Although carefully composed and specific guidelines can lay out a thorough and effective protocol for ADHD evaluations, it is unknown to what extent they can be put into practice in a “real-world” high-volume pediatrics practice. In particular, the recommendation that physicians have rating scales completed by parents as well as teachers may be a stumbling block in busy practices in which there is no precedent for the systematic collection of such information. Moreover, although primary care physicians have training and experience in interpreting medical records and reports from specialists, they generally have limited training in the interpretation of the myriad of empirically supported rating scales for ADHD.

Although no recent data exist regarding the use of rating scales for the assessment of ADHD in primary care, survey data collected more than 10 years before the publication of the guidelines showed that pediatricians reported collecting rating scale data from parents and teachers 58% and 62% of the time, respectively, and used DSM-III criteria <20% of the time when conducting an assessment.7 These data were limited, however, in that they were self-reported.

More recently, research has shown that primary care physicians struggle to effectively provide needed services for children with ADHD. Results from a large epidemiologic study of children demonstrated that of those with a diagnosis of ADHD, only 12.5% were treated with a stimulant medication during the previous year.8 Likewise, results from 2 large national surveys showed that only 50% of children in “real-world” primary care practice settings received care commensurate with guidelines published by the American Academy of Child and Adolescent Psychiatry.3 Finally, in the Multimodal Treatment Study of Children With ADHD, children with ADHD who received “treatment as usual” in community-based care had significantly poorer outcomes than children who received medication therapy according to the carefully designed study protocol.9

The present study was designed to evaluate an ADHD-assessment protocol, developed by behavioral health professionals in collaboration with primary care staff, based on AAP guidelines. Specifically, the present study examined the methods used for ADHD assessment in pediatric clinics before and after the staff was trained to use the protocol. This study sought to determine whether, given an efficient system for collecting and interpreting relevant information, practices could do so independently, accurately, and over both the short and long term. In addition to providing empirical support for the assessment protocol, this article describes the pragmatics of its use in a pediatric primary care setting.

METHODS

The following methods were reviewed and approved by the University of Nebraska Medical Center Institutional Review Board.

Participants

Primary Care Staff Participants

Primary care staff included physicians, nurses, and support staff at 2 pediatric primary care practices in midsized communities (population ∼20 000) in rural Nebraska. Each is the only pediatric clinic within a 50-mile radius; thus, both have a reputation for working with behavioral concerns. These clinics were approached to participate in the development of an ADHD-assessment protocol because they were already partners with the Munroe-Meyer Institute (University of Nebraska Medical Center) in an integrated behavioral health clinic program. These were the only 2 clinics approached for participation.

Clinic 1 was a stand-alone private practice employing 3 full-time physicians, 2 part-time physicians, a nurse practitioner, a physician assistant, and ∼20 nursing/support staff. Clinic 2 was a hospital-based practice employing 3 full-time physicians and ∼15 nursing/support staff. Three of the 5 physicians were male, and 2 were female. All but 1 were early to midcareer in practice. None had specialized training in the area of ADHD. Nurses and support staff were trained to collect, organize, and score rating scales for the assessment protocol. Physicians were trained to interpret rating scales and use them to conduct assessments for ADHD.

Patient Participants

This was an archival study, examining the medical records of children seen for an ADHD evaluation as patients in 1 of these 2 primary care offices. Using billing history at both sites (Current Procedural Terminology codes), ADHD consultations were identified for at least 1 year before protocol initiation. Only those visits occurring solely for the purpose of a formal ADHD assessment were included. Because of changes in the billing records at clinic 1, not all years were accessible. Thus, for clinic 1 this process identified medical records for 76 children referred from December 1998 through April 2000 (68% male). For clinic 2 this process identified records for 25 children referred in 2001 (72% male).

Consults occurring after the protocol was initiated were identified in the same manner. For clinic 1, records were identified for 54 children attending evaluations from September 2000 to December 2000 and all of 2002 (61% male). For clinic 2, records were identified for the entire years 2002 and 2003 (78% male). In sum, across both clinics, 101 charts were identified as containing an assessment for ADHD from the time before the introduction of the protocol, and 86 were identified as containing an assessment for ADHD from the years after.

Measures

Table 1 depicts the measures that made up the protocol for assessment that was introduced at each of the primary care clinics. This protocol meets AAP guidelines as follows: (1) it is focused on gaining data via well-established, empirically supported rating scales; (2) it recruits information from both parent and teacher, allowing for the demonstration of symptoms and functioning across multiple settings, a DSM-IV5 criterion; (3) it includes the use of a broadband measure, the Child Behavior Checklist/Teacher Report Form (CBCL/TRF)10 or the Behavior Assessment System for Children (BASC),11 to help physicians screen for alternate explanations for the symptoms of ADHD (eg, a high score on depression items, which could explain either poor concentration or psychomotor agitation), as required in DSM-IV criteria; and (4) it uses a measure (ADHD-IV)12 that assesses the DSM-IV criteria directly, allowing the physicians to systematically examine criteria when making the diagnosis. Although the AAP guidelines do not specify taking a multiple-measure approach, such a strategy has long been recommended,13 allowing for a broader range of information and providing the best overview of the child's functioning.

TABLE 1.

Protocol Instruments

For the purposes of the present study, research staff supplied all scoring manuals, computerized scoring software, and protocols for the first 25 patients. Subsequently, the practices purchased materials independently. The total “start-up” cost for the first 25 patients using the protocol described in this article is approximately $600. The cost for every 25 evaluations thereafter (if only 1 teacher per assessment) is approximately $83. One of the primary care clinics had a policy that parents would be charged $5 for each additional teacher packet or misplaced packet.
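As a rough arithmetic check on these figures (a sketch only; the article does not itemize the component costs), the ongoing materials cost works out to approximately

$$\frac{\$83}{25\ \text{evaluations}} \approx \$3.32\ \text{per evaluation},$$

which is consistent with the estimate of approximately $3.50 per patient (including materials and copying) given in the Discussion.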

Procedures

Development of Efficient Protocol for Primary Care

A system for recruiting, scoring, and using rating scales was established. Significant effort was devoted to developing a method that was compatible with ongoing practice in the primary care setting to ensure efficiency and long-term maintenance. This system, developed in collaboration with staff at both practices, was as follows (a minimal illustrative sketch of the packet-tracking logic follows the list):

  1. The patient is referred for evaluation for ADHD (by physician, school, self, etc).

  2. Parents of patients are provided with a packet including a set of forms/rating scales to be completed by the parent, a set to be completed by the child's teacher(s), return envelopes, the clinic's release-of-information form, and a cover letter explaining the forms and clinic procedures. At this time, parents are told that the evaluation in the primary care setting will be scheduled when the release of information is returned along with completed rating scales from home and school. When a child has ≥2 teachers, 2 sets of packets are provided, and parents are instructed to give them to the teachers who know the child best.

  3. Returned packets are stored in a central file until both parent and teacher information is present. Staff do not initiate any additional reminders or updates to parents regarding packet status.

  4. The packet is scored and the summary sheet is completed by the staff/nurse once both the parent and teacher packets have been returned.

  5. Completed packets are placed in the child's medical record.

  6. Parents are contacted by office staff to schedule an evaluation.

  7. The parent and patient attend evaluation. The physician reviews materials in the chart immediately before the appointment.
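The workflow above is an office procedure rather than software, but its central gating rule (no evaluation is scheduled until both parent and teacher packets have been returned, scored, and filed) can be summarized in a minimal sketch. The class, field names, and example below are hypothetical illustrations, not part of the protocol materials.

```python
from dataclasses import dataclass

@dataclass
class AssessmentPacket:
    """Tracks one child's ADHD-assessment packet through the office workflow."""
    patient_id: str
    parent_forms_returned: bool = False   # step 3: parent rating scales received
    teacher_forms_returned: bool = False  # step 3: teacher rating scales received
    release_signed: bool = False          # release-of-information form returned
    scored: bool = False                  # step 4: staff/nurse scoring and summary sheet done

    def ready_to_score(self) -> bool:
        # Step 4: scoring happens only once both packets are complete.
        return self.parent_forms_returned and self.teacher_forms_returned

    def ready_to_schedule(self) -> bool:
        # Steps 5-6: the evaluation is scheduled only after the release is
        # returned and the packet has been scored and placed in the chart.
        return self.ready_to_score() and self.release_signed and self.scored


# Example: a packet missing teacher forms stays in the central file (step 3).
packet = AssessmentPacket("patient-001", parent_forms_returned=True, release_signed=True)
assert not packet.ready_to_schedule()

packet.teacher_forms_returned = True
packet.scored = True
assert packet.ready_to_schedule()  # office staff may now call to schedule (step 6)
```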

Primary Care Staff Training

A 90-minute training session was conducted (by J.P.) at a staff meeting attended by the front-desk and nursing staff before initiation of the project. At both clinics, front-desk staff members were designated to distribute packets, track incoming packets, and schedule appointments. Staff were provided with a brief rationale for the use of the packets and were assisted in developing their own system for creating and storing packets, instructing/scheduling patients, and tracking, scoring, and filing packets. No prior system for any of these activities was in place, and although suggestions and feedback were provided (by J.P.), flexibility was emphasized so that each office could develop a strategy that best “fit” its ongoing practice. Thus, for example, at clinic 1, a single designated front-desk staff member completed all packet scoring. At clinic 2, nurses were identified to complete scoring and did so for those patients who were seen by their assigned physician.

In addition, research staff (licensed psychologists) trained designated staff at each clinic to score each rating scale according to manual instructions. Staff members were also trained to complete the summary form, which provided an easy-to-complete, 1-page overview of the t score and significance level for each scale (Fig 1). A notebook was provided to staff at each practice containing complete step-by-step instructions for scoring packets and transferring scores to the summary form. Research staff observed the scoring of the first 2 to 3 complete packets to answer questions and check for scoring accuracy. No other formal training was provided; however, behavioral health professionals were on site ∼1 to 2 days weekly as part of an independent, ongoing outpatient treatment clinic and answered questions on an informal basis.

Fig 1.

Summary form used with protocol.

In a 2-hour session for physicians only, research staff described the use and interpretation of each of the measures included in the protocol. The meaning of t scores and significance scores was discussed. In anticipation of the need for a brief and simple summary of the data, the summary form was developed, allowing the clinicians to efficiently review the outcomes of the completed scales. In training, several examples of completed summary forms were presented, showing profiles that would suggest the various ADHD subtypes, profiles consistent with concurrent ODD and LDs, and profiles indicating only the presence of ODD or suggesting that attention problems might better be explained by learning problems. In addition, the use of significant scores on subscales from the broadband rating scales (eg, CBCL10 or BASC11) was discussed as a mechanism for screening for other problems that might cause difficulty with concentration (eg, anxiety, depression) or as support for the ADHD-specific rating scales.
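For context, the t scores that appear on the summary form follow the standard normative metric used by published rating scales; the formula below is a general psychometric convention rather than a scale-specific calculation, and each instrument's manual supplies its own age- and gender-based norms and cutoffs.

$$t = 50 + 10 \cdot \frac{x - \mu_{\text{norm}}}{\sigma_{\text{norm}}}$$

Here, $x$ is the child's raw score, and $\mu_{\text{norm}}$ and $\sigma_{\text{norm}}$ are the mean and standard deviation of the normative sample. Scores of roughly ≥65 (about 1.5 SD above the mean) are commonly flagged as clinically significant on such scales, although the specific cutoffs marked on the summary form were those designated in each scale's manual, consistent with the scoring training described above.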

Research staff emphasized to the physicians that these rating scales were not independently diagnostic but should be used in conjunction with a thorough clinical interview. The DSM-IV5 criteria for ADHD, ODD, and LDs were reviewed, and physicians were encouraged to obtain additional information during the interview such as history of symptoms, recent family dynamics, comorbid conditions, etc, to ensure that they were adhering to the criteria. This was the only formal training provided. As discussed above, behavioral health professionals were on site ∼1 to 2 days weekly as part of an independent, ongoing outpatient treatment clinic and answered questions on an informal basis.

Data Collection

Research assistants were trained to review medical charts and code the presence or absence of materials relevant to evaluation. This included the presence of the protocol materials described above as well as the t scores from the subscales on each. Coders also noted the presence or absence of additional materials, including other rating scales, psychoeducational reports from the past 3 years, anecdotal reports from the school or behavioral health professionals, and clinic-specific forms. Assistants also coded the diagnosis made by the physician (verbatim) as well as his or her recommendations for treatment. Twenty-two percent of all charts (N = 41) were recoded by a second assistant. Overall, coders agreed on the presence or absence of materials in medical charts 96.2% of the time.

In addition to coding materials in medical records, research assistants rescored 34% of the postprotocol packets at each site to assess the accuracy of primary care staff members' scoring. Packets were photocopied or otherwise altered so that the assistant was blind to the original scores. Research assistants also completed a summary form based on the scores they generated, and the summary forms completed by the primary care staff and by the research assistant were then compared (by J.P.). Where discrepancies were observed, the source of the error was determined. Only errors committed by the staff were recorded, along with how each error was made.
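As a minimal sketch of how the agreement statistics described above could be computed (the function, variable names, and example values below are hypothetical and are not the study's data), each subscale on the summary form contributes 2 comparisons: one for the t score itself and one for its significance notation.

```python
def percent_correct(staff_entries, rescored_entries):
    """Percent of summary-form entries in which the staff value matches the blind rescoring."""
    matches = sum(1 for s, r in zip(staff_entries, rescored_entries) if s == r)
    return 100.0 * matches / len(staff_entries)

# Hypothetical summary-form entries for one subscale across 4 rescored packets.
staff_t      = [62, 71, 55, 68]
rescored_t   = [62, 70, 55, 68]               # one minor data-entry discrepancy
staff_sig    = [False, True, False, True]     # asterisk (significance) notations
rescored_sig = [False, True, False, True]

print(percent_correct(staff_t, rescored_t))      # 75.0 - t-score agreement
print(percent_correct(staff_sig, rescored_sig))  # 100.0 - significance agreement can stay
                                                 # high even when a t score is off slightly
```

Chart-coding agreement (eg, the 96.2% figure reported for presence/absence coding) could be computed in the same way, with each chart item coded as present or absent.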

RESULTS

Postprotocol Assessment

Tables 2 and 3 depict the percent of charts including measures initiated as part of the AAP-recommended protocol for clinics 1 and 2, respectively. All of the rating scales in the protocol showed a significant increase that was maintained over time subsequent to training. Across clinics and years, collection rates ranged from 88% to 100% for all rating scales.

TABLE 2.

Percent of Protocol Measures in Medical Records Before and After Training Across Years at Clinic 1

TABLE 3.

Percent of Protocol Measures in Medical Records Before and After Training Across Years at Clinic 2

Ratings and Forms Collected Before Protocol Training

Both clinics had initiated some data collection as part of their ADHD-assessment practice before training using the protocol described in this article. Clinic 1 was collecting standardized measures such as the CLAM14 and Attention-Deficit Disorder Evaluation Scale (ADDES)15 and had developed its own set of checklists and interview forms for ADHD-assessment purposes. Clinic 2 was conducting a continuous performance task with patients, the Test of Variables of Attention (TOVA),16 as part of its standard ADHD assessment. Results show, however, that the practices were not consistently recruiting this information. Medical records from clinic 1 contained rating scale information (CLAM and/or ADDES)14,15 ∼20% of the time, with clinic-developed checklists and forms present in 26% to 43% of the records reviewed. Records from clinic 2 contained results from the TOVA16 29% of the time.

Subsequent to training, both clinics discontinued use of these various measures altogether (<1% of postprotocol charts included the ADDES, CLAM, or TOVA).14–16 Clinic 1 did continue to use its self-developed checklists, although less so. Physicians at clinic 1 increased their use of their self-developed structured parent interview in the context of the recommended protocol from 33% preprotocol to 86% postprotocol. A summary of these data is shown in Table 4.

TABLE 4.

Assessment Measures Present in Medical Charts Before Introduction of Protocol

Recruitment of School Records

Both practices attempted to ascertain school records as part of their evaluation both before and after protocol training. Results showed no increase in the practices' collection of school records after the protocol was initiated (Table 5).

TABLE 5.

Presence of School Record Data in Medical Charts Preprotocol and Postprotocol

Accuracy of Nurse/Staff Scoring of Protocols

As described above, 34% (N = 29) of the protocol packets were selected randomly and rescored by a research assistant to check the primary care staff's scoring accuracy. The research assistant also completed a summary form that was compared with the original, staff-completed summary form. There was only 1 case (3%) in which the data were present but the summary form had not been completed; in this case, scores were compared on the rating scales themselves. Only errors made by the primary care staff were recorded. Two “percent correct” scores were calculated for each subscale appearing on the form: 1 for the t score and 1 for the significance of that score as designated by asterisks on the summary form.

Protocols rescored by research assistants blind to original scores showed high agreement with those originally scored by nurses or office staff across clinics and rating scales. Table 6 depicts agreement scores for the t score and significance of each rating scale, averaged across each of its subscales. Errors made on the CBCL/TRF10 or BASC11 were generally due to minor data-entry mistakes (eg, entering a “2” instead of a “0”). Errors on the other rating scales were typically due to incorrect addition or the use of incorrect age norms when converting raw to t scores. On the CPRS/CTRS,6 1 error occurred due to the use of norms for the wrong gender.

TABLE 6.

Average Percent Correct on Summary Form for Each Scale in Protocol

Overall, scoring errors were minor and did not always result in a change in the significance of the score as noted on the summary form. This is an important point, because physicians used the significance notations (rather than the t score itself) as a primary guide in interpreting the summary form. As can be seen in Table 6, the percent of scores accurately notated as “significant” or “nonsignificant” on the summary form was generally higher than the percent of accurate t scores for the same subscale. Clinic 2 accuracy on the ADHD-IV Teacher form was noticeably lower than accuracy on the other forms (42% accurate). A majority of these errors occurred when converting raw scores to t scores, and the errors were minor, as reflected by the accuracy of the level of significance noted on the summary form.

DISCUSSION

Results from the present study suggest that rural pediatric primary care practices are able to implement and longitudinally maintain a protocol for the assessment of ADHD that is in line with AAP guidelines. Data show that, initially, the clinics under study were attempting to collect a variety of information, including self-developed checklists and forms, parent and teacher rating scales, school records, and results from clinic-administered continuous performance tasks. Although the empirically supported rating scales were in line with AAP guidelines, a majority of the measures being collected were not. Regardless, the 2 clinics were collecting this information for only 20% to 30% of the assessments conducted.

Subsequent to the introduction of the AAP-consistent protocol, the development of an efficient, “primary care-friendly” system for collecting the data, and a short training session regarding its interpretation, both clinics increased their collection rates dramatically (88%–100% of rating scales collected across home and school). Moreover, clinic 1 maintained this level of integrity 3 years after the protocol was initiated, and clinic 2 did so during its second year. In addition, data show that both clinics were scoring the various rating scales with a high degree of accuracy.

It is noteworthy that the system can be set up with limited expense. Altogether, the cost of starting the system (materials for the first 25 packets and a 5-hour consultation/training with behavioral health professionals) is estimated at $1200. Beyond these initial start-up costs, it is estimated that assessment using this protocol would cost approximately $3.50 per patient, including materials/copy costs. This cost does not include support-staff time for organizing and scoring materials; however, staff at neither of the participating practices considered this indirect cost excessive.

Although no formal measures of satisfaction were obtained, physicians reported liking the protocol and feeling more confident about their diagnoses when using it. At the time of this writing, clinic 1 is in its fourth year and clinic 2 is in its third year of use. Other pediatric clinics collaborating with the research staff have also requested assistance in initiating such a protocol. Moreover, clinic 2 has begun using data from this assessment protocol to serve as a “baseline” comparison after medication is initiated. Specifically, clinic 2 added a side effects rating scale17 to the assessment process and requires families to get parent and teacher ratings on that scale and the ADHD-IV at their 30-day medication follow-up appointment. A similar, staff-developed system is in place for collecting, scoring, and summarizing scores across appointments for physicians' ease of use.

Whether results from the current study can be generalized to other pediatric practices is unknown. On one hand, it is well documented that rural practices are typically very busy and have been described as “overwhelmed” by the high demand for time-consuming specialty services in these health care–shortage areas.18 Thus, if these practices were able to add this protocol to their ongoing caseload, it seems other practices could do so as well. On the other hand, this study was limited by the small number of participating practices and the restricted geographic area studied. Both practices were located in small communities in rural Nebraska, where there is a significant shortage of behavioral health services. Therefore, because of limited resources in the community, physicians in these practices may have been more willing to invest time and resources in conducting the protocol, whereas physicians in urban locations may find it more time- and/or cost-efficient to refer families to a psychologist for assessment.

An additional limitation may be that, as part of an outreach program, both practices had behavioral health professionals providing outpatient treatment services in their primary care settings 1 to 2 days per week. Indeed, the colocation and cooperative arrangement with behavioral health providers is indicative of prior interest in behavioral health on the part of these practices. Thus, these real-world clinics may be best-case scenarios for outcomes with the protocol described in this article.

Another benefit of colocated behavioral health specialists in these primary care sites was readily available assistance with the ADHD protocol. Although the behavioral health professionals did not initiate any formal mechanism for assisting the physicians or staff with the ADHD-assessment protocol, they were regularly on site to answer questions about the protocol and did provide informal assistance on an as-needed basis. No records were maintained regarding these interactions, so it is not known to what extent they occurred or whether outcomes were influenced by this support.

This archival study was descriptive in nature; thus, it remains unknown whether this protocol improved physicians' diagnostic accuracy or the child's outcome. Indeed, the protocol presented in this study is a “best-practice” method, but its empirical support is derived from well-controlled laboratory studies and research in traditional behavioral health settings. Thus, translational research examining not just the utility but also the effectiveness of these measures in rural primary care is needed.

Similarly, research is needed to determine which or how many measures yield the optimal amount of information for making a diagnosis. The current study made use of a multimeasure protocol designed to capture the best information from each scale (ie, broadband screening10,11 as well as empirically6 and conceptually driven12 measures designed to assess ADHD). The Eyberg Child Behavior Inventory (ECBI)19 was included because it is an important part of ongoing work in the integrated behavioral health clinics, but it has some overlap with other measures administered. Indeed, this protocol could have been simplified by eliminating the ECBI19 and possibly other measures as well. Although a multimeasure assessment has traditionally been considered a best practice,13 in 2002 the AAP published the ADHD Toolkit,17 providing physicians with a single measure for assessing symptoms of ADHD across home and school. The use of this single measure would certainly be more efficient than the protocol described in this article; however, its effectiveness is unknown.

It is noteworthy that the initiation of the protocol did not increase the recruitment of school records. Although the interpretation of results from psychoeducational testing was discussed in protocol training, no specific instructions were provided regarding how to recruit that information or what information to request. Moreover, these data were not required to be present before an evaluation could be scheduled, as was the case for the rating scales. Previous research using self-report data found that although a vast majority of pediatricians “prefer” to receive information related to academic testing, only ∼24% “usually” or “always” receive it.20 These data are commensurate with the collection rate observed in the present sample.

Although the AAP guidelines make reference to the use of school records, no specific information about their utility is included. For children with suspected ADHD, academic underachievement, or behavior problems, such records are often lengthy and may be too time-consuming for the busy pediatrician to review in detail. Future research needs to examine the utility of the primary care physician obtaining and reviewing school records, identifying the specific forms of information included therein that may be helpful, and developing a system for assisting physicians in recruiting, reviewing, and understanding that information as it relates to the assessment process.

CONCLUSIONS

An efficient system for conducting ADHD assessments according to AAP guidelines in rural pediatrics clinics can be initiated and maintained with integrity. In this way, these physicians can provide a service that may not otherwise be available in their communities. The present study demonstrated that an empirically supported assessment protocol can be transported to the “real-world” setting; however, additional research should examine whether such practices improve diagnostic decision-making and patient outcomes.

Acknowledgments

This work was supported by National Institute of Mental Health grant 5K23MH06612701A1.

The following behavioral pediatric trainees, supported by a grant from the Nebraska Healthcare Cash Fund, contributed to the collection of data for this study: Jody Lieske, Amy Walters, Jessica Mack, Tammi Beckman, Brenda Surber, Asha King, and Stephanie Cole. In addition, Drs Andrea Lund and Rachel Valleley helped organize and train these students. We are especially grateful to staff at the Children and Adolescent Clinic, P.C. (Hastings) and the Columbus Community Hospital Pediatrics department for their participation in this study.

Footnotes

    • Accepted September 28, 2004.
  • Address correspondence to Jodi Polaha, PhD, Department of Psychology, University of Nebraska, 985450 Nebraska Medical Center, Omaha, NE 68105. E-mail: jpolaha{at}unmc.edu
  • Conflict of interest: Dr Kratochvil received grant support from or was a consultant and/or member of the speaker's bureau of Eli Lilly, GlaxoSmithKline, Forest, Cephalon, Novartis, McNeil, Organon, and AstraZeneca.

Abbreviations: ADHD, attention-deficit/hyperactivity disorder; AAP, American Academy of Pediatrics; DSM, Diagnostic and Statistical Manual of Mental Disorders; ODD, oppositional defiant disorder; LD, learning disability; CBCL/TRF, Child Behavior Checklist/Teacher Report Form; ECBI, Eyberg Child Behavior Inventory; BASC, Behavior Assessment System for Children; TOVA, Test of Variables of Attention; ADDES, Attention-Deficit Disorder Evaluation Scale; CPRS-R:S, Conners' Parent Rating Scale–Revised: Short Form; CTRS-R:S, Conners' Teacher Rating Scale–Revised: Short Form

REFERENCES