BACKGROUND AND OBJECTIVE: Behavioral disorders are highly comorbid with childhood learning disabilities (LDs), and accurate identification of LDs is vital for guiding appropriate interventions. However, comprehensive assessment of academic skills is difficult to conduct within the context of a primary care visit, making informant-report screening of academic skills a useful alternative. The current study evaluated the clinical utility of a parent-reported screening measure in identifying children with learning difficulties.
METHODS: Participants included 440 children (66.7% male), ages 5.25 to 17.83 years (mean = 10.32 years, SD = 3.06 years), referred for neuropsychological assessment. Academic difficulties were screened by parent report using the Colorado Learning Difficulties Questionnaire (CLDQ). Reading and math skills were assessed via individually administered academic achievement measures. Sensitivity, specificity, classification accuracy, and conditional probabilities were calculated to evaluate the efficacy of the CLDQ in predicting academic impairment.
RESULTS: Correlations between the CLDQ reading scale and reading achievement measures ranged from −0.35 to −0.65, and correlations between the CLDQ math scale and math achievement measures ranged from −0.24 to −0.46 (all P < .01). Sensitivity was good for both reading and math scales, whereas specificity was low. Taking into account the high base rate of reading and math LDs within our sample, the conditional probability of true negatives (96.2% reading, 85.1% math) was higher than that of true positives (40.5% reading, 37.9% math).
CONCLUSIONS: Overall, the CLDQ may more accurately predict children without LDs than children with LDs. As such, the absence of parent-reported difficulties may be adequate to rule out an overt LD, whereas elevated scores likely indicate the need for more comprehensive assessment.
- ADHD — attention-deficit/hyperactivity disorder
- CLDQ — Colorado Learning Difficulties Questionnaire
- LD — learning disability
- WIAT-III — Wechsler Individual Achievement Test, Third Edition
- WJ-III — Woodcock-Johnson Tests of Achievement, Third Edition
What’s Known on This Subject:
Caregiver behavioral symptom ratings are frequently used to assist in diagnosing childhood behavioral disorders. Although behavioral disorders are highly comorbid with learning disabilities (LDs), little work has examined the utility of caregiver ratings of learning concerns for screening of comorbid LD.
What This Study Adds:
The validity of a time- and cost-efficient caregiver rating of academic concerns (Colorado Learning Difficulties Questionnaire) was examined. The screening measure accurately predicted children without LD, suggesting that the absence of parent-reported difficulties may be adequate to rule out overt LD.
Estimates of the lifetime prevalence of learning disability (LD) in children fall at approximately 10%,1 and increasing evidence suggests that learning problems are also frequently comorbid with other common childhood disorders. Close to half of children with attention-deficit/hyperactivity disorder (ADHD) have cooccurring learning disabilities,2–4 putting them at substantially greater risk of poor academic outcomes. Speech and language disorders,4,5 mood and behavioral disorders,4,6–10 motor coordination disorders,11 and a variety of developmental medical conditions4,12–14 also demonstrate increased association with LDs. The appropriate identification of LDs (either as primary or cooccurring conditions) is critical to accurately guide both medical treatment and behavioral interventions. Specifically, although medication treatment alone can improve core symptoms of ADHD in children, it does not consistently improve quality of life, with improvement less likely in children with comorbid diagnoses.15 Without careful screening for LDs, treatment of a variety of childhood conditions may therefore not only be ineffective but also more costly (for a review, see ref 16).
Although community-based clinicians have been increasingly successful at acquiring parent and teacher behavior ratings17 to assist in diagnosis of behavioral disorders such as ADHD, current diagnostic approaches to LDs are not as easily transferred to the community provider setting.18 Specifically, a formal diagnosis of LD requires comprehensive assessment of cognitive and academic skills and/or careful assessment of the student’s response to targeted and empirically based intervention.19 As such, it is difficult to adequately conduct such assessment within the context of a primary care visit. Nevertheless, accurate assessment and more targeted treatment of behavioral disorders may be achieved by careful screening of potentially cooccurring learning difficulties via informant behavioral reports. Whereas use of teacher reports of behavior by pediatricians is increasing (eg, 67% of pediatricians report that they typically obtain teacher reports when diagnosing ADHD),17 it remains far easier for pediatricians to obtain behavioral ratings from parents/primary caregivers. Therefore, the use of a parent/caregiver report measure for identification of behaviors likely to indicate the possibility of LD has the potential to strengthen in-office screening of behavioral and academic functioning.
The Colorado Learning Difficulties Questionnaire (CLDQ)20 is a parent-report measure designed to screen for academic and behavioral difficulties in a school-aged population. Two subscales were designed to screen for learning difficulties, defined as behaviors likely to indicate the possibility of LD, in the areas of reading and math skills. Initial development and validation suggest that the CLDQ can identify youth at risk of LDs via parent report. The initial validation study of the CLDQ20 included a largely community-based sample recruited from local schools, although a small subsample was also recruited from clinics assessing for LDs. As such, the validity of the CLDQ and its value as a tool for screening for learning difficulties within a primarily clinically referred population remain unproven. The CLDQ, however, shows promise for use as an academic screening measure that could be concurrently administered with other behavioral checklists designed to be completed by parents within the context of a well-child office visit. Although many providers likely ask about academic functioning as a component of their assessment, the use of this type of rating scale has the benefit of standardizing practice and offering a normative comparison, both of which potentially increase the power of such screening in determining when referral for further assessment is appropriate.
The purpose of the current study was to examine the validity of the CLDQ as a screener for LD in a clinically referred sample. It was hypothesized that the CLDQ math and reading scales could provide a time- and cost-efficient screening for potential learning difficulties within this sample. In addition, we hypothesized that the academic subscales of the CLDQ would demonstrate adequate sensitivity and specificity for discriminating children with learning problems within a mixed clinical sample of school-age children referred for neuropsychological assessment.
For the study, deidentified patient records were accessed from the clinical database of a large outpatient neuropsychology clinic specializing in assessment of youth with developmental and medical disorders. Although patients are referred for a variety of medical concerns (eg, medical treatment, prematurity, neurogenetic disorders, etc), the largest proportion are seen secondary to concerns regarding symptoms of ADHD and none are referred for primary concern of LD. Data are routinely entered into this database by department clinicians via the electronic health record and are securely maintained by the hospital’s information systems department. As part of routine practice, parents of children referred to the clinic complete a set of behavioral and academic rating scales (including basic demographic data and the CLDQ20) through a secure online survey engine before assessment. All questionnaire data were subsequently entered into the clinical database. After approval from the hospital’s institutional review board, the clinical database was queried and deidentified data were extracted for any child for whom parent ratings on the CLDQ and academic achievement testing data on the measures of interest were available. Additional data extracted from the clinical data set included age, gender, race/ethnicity, and measures of cognitive ability, if available.
The final sample included 440 children (66.7% male), ages 5.25 to 17.83 years (mean = 10.32, SD = 3.06), from fairly diverse racial and ethnic backgrounds (58.0% white, 23.6% African American, 2.8% Asian or Asian Indian, 1.8% Hispanic, and 14.1% multiracial, other, or unknown). The highest levels of parent education included high school (14.0%), some college (15.8%), bachelor’s degree (36.0%), or graduate degree (33.6%). Although this was a clinically referred sample, verbal IQ estimates were generally within the average range (mean = 99.69, SD = 12.70) and did not significantly differ from the population mean of 100 (t = −0.46, P = .65). No scores fell below 77 (range: 77–142). Therefore, difficulties with reading and math were considered to be likely to reflect specific LDs rather than global intellectual disability.
The CLDQ is a 20-item parent-report rating scale that provides a brief screening of the child’s functioning within several domains; for the current study, the 11 items from the reading and math domains were used. Parents are asked to respond to each item (eg, “has/had difficulty learning letter names”; see Appendix for item content) on a 5-point Likert-type scale ranging from a rating of 1 (never/not at all) to 5 (always/a great deal). Higher scores indicate greater levels of perceived academic difficulty. Willcutt et al20 found that the reading scale showed adequate convergent and discriminant validity for identifying reading difficulties and for distinguishing reading from other types of academic deficits. The original 3-item CLDQ math scale was significantly correlated with math achievement in the initial validation sample, but 2 additional items (“difficulty learning early math facts” and “difficulty with math word problems”) were found to improve reliability and predictive power. In the initial validation sample, the mean parent rating was 1.79 (SD = 0.94) for reading scale items and 1.73 (SD = 0.88) for the math scale items. In the current sample, internal consistency was strong (reading scale α = 0.91; math scale α = 0.91).
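The internal-consistency statistic reported above (Cronbach's α) can be reproduced from item-level ratings via the standard formula α = k/(k − 1) × (1 − Σ item variances / variance of total scores). A minimal Python sketch; the ratings below are invented for illustration and are not study data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a set of scale items.

    items: list of k lists, each holding one item's ratings across
    the same n respondents (e.g., 1-5 Likert values).
    """
    k = len(items)
    n = len(items[0])

    def variance(xs):
        # Sample variance (n - 1 denominator).
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Total score per respondent (sum across items).
    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var_sum = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Hypothetical ratings: 3 items x 5 respondents.
ratings = [
    [1, 2, 4, 3, 5],
    [1, 3, 4, 3, 4],
    [2, 2, 5, 3, 5],
]
print(round(cronbach_alpha(ratings), 2))  # → 0.95
```

When items measuring the same construct covary strongly, α approaches 1; values of 0.90 or higher, as observed here for both CLDQ scales, indicate strong internal consistency.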
Measures of single-word reading, reading decoding, reading fluency, and reading comprehension were obtained from commonly used, age norm–referenced, individually administered tests of academic achievement. Specifically, because data were obtained from clinician-selected and -administered batteries, some children were administered subtests from the Woodcock-Johnson Tests of Achievement, Third Edition21 (WJ-III), which included ≥1 of the following reading subtests: Letter-Word Identification, Word Attack, Reading Fluency, and Passage Comprehension. Other children were administered subtests from the Wechsler Individual Achievement Test, Third Edition22 (WIAT-III), including Word Reading, Pseudoword Decoding, Oral Reading Fluency, and/or Reading Comprehension. Raw scores were converted to age-normed standard scores (mean = 100, SD = 15). Strong internal consistency has been shown for the WJ-III reading subtests (Letter-Word Identification, α = 0.88–0.99; Word Attack, α = 0.88–0.94; Reading Fluency, α = 0.87–0.94; Passage Comprehension, α = 0.73–0.86) and WIAT-III subtests (Word Reading, α = 0.97; Pseudoword Decoding, α = 0.97; Oral Reading Fluency, α = 0.93; Reading Comprehension, α = 0.86).
Measures of basic calculation skills, math fluency, and problem-solving were obtained from the same standardized, individually administered academic achievement batteries. Specifically, children were administered subtests from either the WJ-III (Calculation, Math Fluency, and Applied Problems) or the WIAT-III (Numerical Operations, Math Fluency, and Math Problem-Solving). Strong internal consistency was found for all math subtests (WJ-III: Calculation, α = 0.80–0.97; Math Fluency, α = 0.66–0.91; Applied Problems, α = 0.91–0.95; WIAT-III: Problem-Solving, α = 0.91; Numerical Operations, α = 0.93; Math Fluency, α = 0.84–0.90). Table 1 displays the number of individuals to whom subtests of the WJ-III and the WIAT-III were administered, as well as the sample's mean scores across academic subtests.
Method of Analysis
Subtests measuring similar constructs across measures were combined to create a single variable assessing each academic area: single-word reading, reading decoding, reading fluency, reading comprehension, basic math calculation, math fluency, and problem-solving. A cut point was identified on each of the composited academic measures, corresponding to a standard score of ≤85 (ie, ≥1 SD below the mean), to minimize false negatives in use as a screening measure. Internal consistency (Cronbach’s α) was calculated for both the reading and math CLDQ scales. One-sample t tests were conducted comparing the observed CLDQ scale means with those of the normative sample. As with the achievement measures, an a priori cut score was set for the CLDQ reading scale mean (2.67) and math scale mean (2.60), corresponding to ≥1 SD above the validation sample mean20 for each scale. Sensitivity and specificity values of these CLDQ cut scores for identifying LDs were calculated for each of the composite academic measures (see Fig 1, Table 2). Base rates of LD on word reading and basic math calculation were calculated because these were the most commonly administered measures for initial screening of academic performance in this clinically referred sample. Subsequently, conditional probabilities of CLDQ prediction accuracy were examined further for these core measures.
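The sensitivity, specificity, and classification-accuracy calculations described above reduce to simple ratios over a 2 × 2 table crossing screening outcome with achievement-defined impairment. A minimal sketch; the cell counts below are hypothetical, not the study's Table 2 values:

```python
def screening_stats(tp, fp, fn, tn):
    """Classification statistics from a 2 x 2 screening table.

    tp: screen positive and impaired (true positives)
    fp: screen positive and not impaired (false positives)
    fn: screen negative and impaired (false negatives)
    tn: screen negative and not impaired (true negatives)
    """
    return {
        "sensitivity": tp / (tp + fn),   # proportion of impaired children flagged
        "specificity": tn / (tn + fp),   # proportion of unimpaired children cleared
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts for illustration only.
stats = screening_stats(tp=90, fp=120, fn=22, tn=208)
print({k: round(v, 2) for k, v in stats.items()})
```

With counts shaped like these, sensitivity is high while specificity is modest, the same pattern of few missed cases but many false alarms that the study reports for the CLDQ cut scores.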
Mean parent ratings for CLDQ reading scale items in this clinical sample (n = 432; mean = 2.77, SD = 1.20) were significantly higher than ratings on reading items in the original community sample (t = 17.02, P < .001), suggesting that the present sample showed greater parent-reported impairment. Mean ratings for math scale items were also higher in our sample (n = 420; mean = 2.88, SD = 1.17) than in the community sample (t = 20.09, P < .001). Pooled responses to each of the 6 reading and 5 math CLDQ questions were significantly correlated with the standardized scores from achievement tests hypothesized to assess the same academic construct; for example, the item “reads slowly” was significantly correlated with performance on reading fluency measures (r = −0.46, P < .001), such that greater parent concern was associated with lower reading fluency scores. Correlations between CLDQ reading items and reading achievement measures ranged from r = −0.14 (P = .016) to r = −0.63 (P < .001), and the overall reading scale mean correlated well with each reading achievement measure (correlations ranged from −0.35 to −0.65, all P < .001). Correlations between CLDQ math items and math achievement measures ranged from −0.16 (P = .047) to −0.45 (P < .001), and total math scale correlations with math achievement measures ranged from −0.24 (P = .002) to −0.46 (P < .001).
For the purposes of the current study, we considered reading or math standard scores of ≤85 (ie, ≥1 SD below the mean, approximately the lowest 16% of scores) to indicate the presence of an LD. In our sample, 37.5% of those tested on reading achievement measures scored ≤85 on at least 1 reading subtest, and 44.0% of those tested on math achievement measures scored ≤85 on at least 1 math subtest. Receiver operating characteristic analysis (see Fig 1) indicated that mean reading and math ratings on the CLDQ effectively discriminated children with LD, as indicated by achievement scores of ≤85, from those without LD. Specifically, the area under the curve, which indexes the accuracy of the CLDQ in classifying children as impaired or not impaired on academic achievement measures, showed generally good accuracy (areas under the curve ranged from 0.71 to 0.86 for reading achievement measures and from 0.62 to 0.71 for math measures).
Sensitivity, specificity, and classification accuracy were examined for CLDQ reading and math score cut points corresponding to ≥1 SD above the mean scale scores of the original community-based validation sample.20 As shown in Table 2, sensitivity was generally acceptable for both reading and math scales, but specificity was low. Because the most commonly administered screening tests in this clinical sample were measures of single-word reading and basic math calculation, conditional probabilities of CLDQ prediction accuracy were examined further for these core measures. Taking into account the sample base rate of word reading difficulty (25.5%), the conditional probability of a true-negative CLDQ rating was 96.2%, whereas the conditional probability of a true-positive CLDQ reading scale rating was 40.5%. Considering the sample base rate of math calculation difficulty (28.6%), the conditional probability of a true-negative CLDQ math scale rating was 85.1%, whereas the conditional probability of a true-positive CLDQ rating was 37.9%. These findings suggest greater confidence can be assigned to ratings that fall below the ≥1-SD cut point (eg, a CLDQ mean reading score <2.67) in “ruling out” LD than for “ruling in” LD with mean ratings above the cut point.
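The conditional probabilities above follow from Bayes' rule applied to sensitivity, specificity, and the sample base rate. A sketch in Python; the sensitivity and specificity values below are assumptions chosen for illustration (not the study's Table 2 figures), and only the 25.5% word-reading base rate comes from the text:

```python
def predictive_values(sensitivity, specificity, base_rate):
    """Positive and negative predictive values via Bayes' rule.

    base_rate: prior probability of impairment in the sample.
    Returns (PPV, NPV): the probability that a positive screen is a
    true positive, and that a negative screen is a true negative.
    """
    # Overall probability of a positive screen.
    p_pos = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
    ppv = sensitivity * base_rate / p_pos
    npv = specificity * (1 - base_rate) / (1 - p_pos)
    return ppv, npv

# Illustrative high-sensitivity, low-specificity screener at the
# reported word-reading base rate of 25.5%.
ppv, npv = predictive_values(sensitivity=0.85, specificity=0.55, base_rate=0.255)
print(round(ppv, 3), round(npv, 3))  # → 0.393 0.915
```

Even with assumed inputs, the asymmetry matches the study's pattern: the negative predictive value far exceeds the positive predictive value, so a below-cutoff rating "rules out" LD far more reliably than an elevated rating "rules it in."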
Post hoc analyses examining classification accuracy revealed few differences in age, gender, verbal ability, or racial background. Significant between-group differences in age and IQ were found only for accurate prediction on reading fluency: those correctly classified by the CLDQ were older (t = −2.37, P = .019) and had higher verbal IQ (t = −1.99, P = .049). Gender differences were found only on calculation, where males were more likely to be accurately identified (χ2 = 8.68, P = .003).
We sought to examine the validity of the CLDQ,20 a parent-report screening measure for learning problems, in a clinical setting. By using a cutoff score of ≥1 SD above the community sample mean, the CLDQ showed good sensitivity but low specificity in identifying children with LDs as indicated by scores falling ≥1 SD below the standardization sample mean on at least 1 individually administered reading or math subtest. Of note, when the high base rates of LD in the clinically referred sample are considered, the CLDQ appears better able to predict children without LD than children with LD. Given the clinical nature of the sample, it is important to note that learning problems may be related to a number of factors, including attention problems, mental health concerns, and chronic illness, among others. As a result, the base rate of LDs (as defined by performance falling ≥1 SD below the mean on a reading or math subtest) was high in our sample, relative to typical community prevalence rates.
The present data suggest that the CLDQ may be useful to pediatricians as a brief and time-effective parent-report screener of learning difficulties in reading and/or math. In particular, the absence of parent-reported difficulty with acquisition of skills on the CLDQ may be an adequate means for ruling out an overt LD. Although parent report of academic difficulties on the CLDQ may not map directly onto performance on age-normed measures of academic knowledge, acquiring such information during routine pediatrician visits could help to identify those children most in need of additional testing through their schools and/or inform clinical referral for more comprehensive neuropsychological testing and diagnostic clarification. The administration of screening measures of other possible behavioral difficulties, such as attention or mental health concerns, could be particularly useful in combination with administration of the CLDQ.
There are several limitations of the study, primarily due to the clinical nature of the sample. First, because all of the children included in this study had been referred to a psychological assessment service before parents completed the CLDQ, these parents may have been more attuned to their children’s learning difficulties and other concerns than parents of children attending a routine check-up at a pediatrician’s office. Although this situation may limit generalizability somewhat, it is also likely that administering the CLDQ has the potential to draw parents’ attention to their children’s learning patterns by specifically asking about them. Second, because children were clinically referred and test measures were chosen accordingly, there was not a standard battery of tests administered to every child. Instead, children were administered different measures according to clinician judgment and clinical necessity. Consequently, children were determined to meet the cutoff for LD in this study based on measures combined from 2 different tests of academic achievement. However, given the strong psychometric properties and widespread use of both sets of tests in clinical settings, it is unlikely that this compositing negatively affected results.
In using this measure (see Appendix) for screening in clinical practice, these data suggest that mean CLDQ reading scale scores <1 SD above the community mean (ie, <2.67) are associated with a low probability of a reading LD. Similarly, mean CLDQ math scale scores <2.60 suggest a low probability of a math LD. Whereas a score above the 1-SD cut point on either scale does not in and of itself suggest the presence of LD, elevated scores should be considered within the context of other cooccurring conditions or alternative explanations for school-related problems and likely warrant further evaluation. Screening of these learning-related behaviors along with behaviors suggestive of ADHD or other behavioral disorders thus has the potential to help refine identification of comorbidities in school-age children seen in routine pediatric practice.
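Applied in practice, the decision rule described above is a simple threshold check on the scale mean. A minimal sketch; the function and the message wording are illustrative, not part of the published measure:

```python
# Cut scores from the text: a CLDQ scale mean of >= 2.67 (reading) or
# >= 2.60 (math) corresponds to >= 1 SD above the validation-sample mean.
CUTS = {"reading": 2.67, "math": 2.60}

def screen(scale, mean_rating):
    """Return a hedged screening interpretation for a CLDQ scale mean."""
    if mean_rating >= CUTS[scale]:
        return "elevated: consider referral for comprehensive assessment"
    return "not elevated: overt LD less likely on this scale"

print(screen("reading", 3.1))
print(screen("math", 1.8))
```

Consistent with the study's conclusions, the "not elevated" branch carries more diagnostic weight than the "elevated" branch, which only signals that further evaluation is warranted.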
- Accepted August 19, 2013.
- Address correspondence to Lisa A. Jacobson, PhD, Department of Neuropsychology, Kennedy Krieger Institute, 1750 East Fairmount Ave, Baltimore, MD 21231. E-mail:
Ms Patrick conducted initial data analysis, contributed to the initial manuscript, and reviewed and revised the manuscript; Mr McCurdy was responsible for acquisition of data, maintaining the clinic database, and drafting and revising the manuscript; Dr Chute contributed to analysis and interpretation of data and reviewed and revised the manuscript; Dr Mahone helped plan data acquisition methodology and reviewed and revised the manuscript; Dr Zabel conceptualized and designed the study, carried out data analysis, and reviewed and revised the manuscript; Dr Jacobson conceptualized and designed the study, conducted data analyses, contributed to the manuscript, and reviewed and revised the manuscript; and all authors approved the final manuscript as submitted.
FINANCIAL DISCLOSURE: The authors have indicated they have no financial relationships relevant to this article to disclose.
FUNDING: Supported by grant P30 HD 024061.
POTENTIAL CONFLICT OF INTEREST: The authors have indicated they have no potential conflicts of interest to disclose.
- Altarac M, Saroha E
- Larson K, Russ SA, Kahn RS, Halfon N
- Peterson RL, Pennington BF
- ↵Cortiella C. The State of Learning Disabilities. New York, NY: National Center for Learning Disabilities. Available at: http://illinoiscte.org/PDF/research_and_reports/state_of_learning_disabilities.pdf. Accessed April 7, 2013
- Barnes MA, Fuchs LS, Ewing-Cobbs L
- Vuijk PJ, Hartman E, Mombarg R, Scherder E, Visscher C
- Gillberg C, Coleman M
- Wolraich ML, Bard DE, Stein MT, Rushton JL, O’Connor KG
- American Psychiatric Association
- ↵Woodcock RW, McGrew KS, Mather N. Woodcock-Johnson Tests of Achievement, 3rd ed. Rolling Meadows, IL: Riverside Publishing; 2007
- Wechsler D
- Copyright © 2013 by the American Academy of Pediatrics