OBJECTIVE. Despite available depression treatments, only one fourth to one third of depressed adolescents receive care. The problem of underdiagnosis and underreferral might be redressed if assessment of suicidality and depression became a more formal part of routine pediatric care. The purpose of this study was to explore the feasibility and acceptability of implementing adolescent depression screening in clinical practice.
METHODS. In this study we implemented a 2-stage adolescent identification protocol, a first-stage pen-and-paper screen and a second-stage computerized assessment, into a busy primary care pediatric practice. Providers tracked the number of eligible patients screened at both health maintenance and urgent care visits and provided survey responses regarding the burden that screening placed on the practice and the effect on patient/parent-provider relationships.
RESULTS. Seventy-nine percent of adolescent patients presenting for health maintenance visits were screened, as were the majority of patients presenting for any type of visit. The average completion time for the paper screen was 4.6 minutes. Providers perceived that patients and parents expressed more satisfaction than dissatisfaction with the screening procedures and that the increased time burden was manageable. All providers wished to continue using the paper screen at the conclusion of the protocol.
CONCLUSIONS. Instituting universal systematic depression screening in a practice with a standardized screening instrument met with little resistance by patients and parents and was well perceived and accepted by providers.
Adolescent depression, with a point prevalence of 3% to 9% and a cumulative prevalence of 20% by the end of the teenage years,1–4 may result in severe morbidity and mortality. Depression often permeates an adolescent's social life, family relationships, and academic performance.5 Depressed adolescents face increased hospitalizations, recurrent depressive episodes, psychosocial impairment, alcohol abuse, antisocial behaviors, and suicide.6–9 However, despite available treatments,10–12 studies report that only one fourth to one third of adolescents with depression receive treatment.13,14
The problem of underdiagnosis and underreferral might be redressed if assessment of suicidality and depression became a more formal part of routine pediatric care. However, pediatric providers are asked to engage in many competing aspects of preventive health15; managed care plans may make it difficult for pediatricians to access behavioral health services and/or obtain reimbursement,16 and confidence in the ability to arrive at a psychiatric diagnosis may be low.17 Although a national survey found that 90% of pediatricians believed it was their responsibility to identify depression, a full 46% lacked confidence that they could recognize the disorder, and 56% of those surveyed reported that appointment times were too short to obtain an adequate psychiatric history.17 These figures, along with other data, suggest that pediatricians may be missing cases of depression.18–20
The use of appropriate screening instruments could improve detection rates and also reduce reliance on a skilled psychosocial interview.21,22 Screening instruments could be completed in the waiting room, creating a time-efficient manner of inquiring about the adolescent's psychiatric state.
The purpose of this study was to explore the feasibility and acceptability of implementing adolescent depression screening in clinical practice. The study objectives were to assess (1) the feasibility (ie, number of patients handed screens, number of patients completing screens, number of screens reviewed by providers) of universal adolescent depression screening with a standardized screening instrument during health maintenance visits and during urgent care visits, (2) the interest of pediatricians (ie, number of patients asked to complete the second-stage screen) in using an automated standardized assessment instrument to obtain additional diagnostic information at their discretion, (3) providers' perceptions of the burden of screening and assessment on them, their practice, and their patients and parents over time, (4) providers' perceptions of the usefulness of the screening and assessment, and (5) the extent to which practice routines must be adapted to meet the needs of screening.
We used a 2-stage adolescent depression-identification method. During the first stage a paper screen was completed by the adolescent, and the optional second stage involved a computerized follow-up assessment. We evaluated the feasibility and acceptability of each process in a general pediatric practice. The New York State Psychiatric Institute's institutional review board and Columbia University's institutional review board approved the study.
The study was conducted at 3 sites of 1 pediatric primary care practice in Rockland County, New York, identified as a potential research site through one of the many managed care plans in which the practice participates.
All providers who worked in the practice at least half-time were eligible and consented. Eleven clinicians (4 women, 7 men) participated: 8 physician partners, 1 nurse practitioner, and 2 part-time physicians. Their ages ranged from 35 to 66 years (mean: 48.9 years).
Eligible adolescent patients included those who (1) were aged 13 to 17 years, (2) were able to read and understand English, (3) were accompanied by an English-speaking literate parent or guardian, (4) were scheduled for a health maintenance or urgent care visit during standard weekday hours, (5) were not deemed too ill to complete a written form, and (6) had not completed a screen in the previous 2 weeks, the duration criterion for a major depressive episode (repeat visits after 2 weeks were eligible). Adolescents and parents were given information sheets at the same time as the screens. Documentation of written consent was waived by the New York State Psychiatric Institute's institutional review board and Columbia University Medical Center's institutional review board.
We used a self-completed screen, the adolescent present-state version of the Columbia Diagnostic Interview Schedule for Children (DISC) Depression Scale,23 since renamed the Columbia Depression Scale (CDS), a paper-and-pencil yes/no questionnaire (C. P. Lucas, MD, MPH, M. S. Gould, PhD, MPH, P.W.F., and D.S., unpublished data, 2006). The CDS consists of 22 items (21 of which are scored; the lifetime suicide-attempt question does not form part of the total CDS score) derived from stem items (ie, those asked of everybody) from the DISC,23 designed to provide a continuous measure of current depression. The test-retest reliability of the CDS was assessed as part of a larger investigation into the reliability of the present-state DISC,24 with an average retest interval of 7 days. Reliability was moderate; the intraclass correlation coefficient (ICC) was 0.68. Additional psychometric data on the CDS are available from 4 control schools that were part of a school study.25 Internal consistency was high (Cronbach's α = .87). The CDS correlated highly with the coadministered Beck Depression Inventory (ICC = 0.79). The area under the receiver operating characteristic curve was 0.89 (95% confidence limits: 0.82, 0.96) for the CDS, compared with 0.87 (95% confidence limits: 0.81, 0.93) for the Beck Depression Inventory (C. P. Lucas, MD, MPH, M. S. Gould, PhD, MPH, P.W.F., and D.S., unpublished data, 2006).
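For readers less familiar with these psychometric statistics, internal consistency can be computed directly from item-level responses. The short sketch below shows the standard Cronbach's α formula underlying the value reported for the CDS; the yes/no responses are invented for illustration and are not data from the study.

```python
# Sketch of the standard Cronbach's alpha computation for a yes/no scale.
# The responses below are invented for demonstration purposes only.

def cronbach_alpha(items):
    """items: one list per scale item, one 0/1 entry per respondent."""
    k = len(items)                      # number of items
    n = len(items[0])                   # number of respondents

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(variance(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / variance(totals))

# Hypothetical 1 = yes / 0 = no answers from 6 respondents on 4 items:
responses = [
    [1, 1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0, 1],
    [1, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 0, 0],
]
print(round(cronbach_alpha(responses), 3))  # -> 0.839
```

A real 22-item screen would supply 21 scored item lists; the formula is otherwise unchanged.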
The study also used the depression module of the youth-informant Columbia voice DISC-IV (an audio computer-assisted self-interviewing version of the present-state DISC), a highly structured, self-administered diagnostic interview for 9- to 17-year-olds based on the criteria contained in the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV),26 which allows the generation of a diagnostic report within minutes of completion of an interview. The DISC, in its many versions, has been used in multiple clinical and epidemiologic research projects all over the world and was recently implemented as a clinical tool in schools and juvenile justice facilities.27 The DISC uses a stem-contingent structure whereby stem items are clarified by contingent items to determine whether a DSM-IV criterion is met. Formal psychometric testing on the voice DISC and the interviewer-administered present-state DISC showed high acceptability, with >70% of those adolescent subjects who expressed a preference preferring self-administration to conventional interviewer administration.24 In that study of delinquent youth (n = 97), test-retest reliability of the voice DISC major depressive disorder (MDD) youth module was 0.31 (κ statistic). Test-retest reliability of the interviewer-administered DISC-IV in a clinical sample of 82 youth, aged 9 to 17 years, was 0.78 for diagnosis (κ statistic) and 0.81 (ICC) for the criterion scale score.24 Validity data from an earlier version of the DISC (2.3), which compared clinician-rated and DISC diagnoses, showed good agreement, with a κ statistic of 0.79 for major depression.23
Participating clinicians were educated regarding adolescent depression and instructed in the use of all study instruments. For the CDS, cutoff scores were taught and presented in a clinical context. Physicians were taught to incorporate the score along with a clinical assessment of all positive symptoms. A depression-education handbook provided sensitivity, specificity, and predictive values for varying screen cutoffs. In particular, the physicians were shown that a cutoff score of 7, with a sensitivity and specificity of 100% and 54%, respectively, would not miss any true cases of depression, although only 6% of patients with positive screens would have a diagnosis of MDD, whereas a cutoff of 12, with a sensitivity and specificity of 85% and 80%, respectively, would miss 15% of depression cases, with 12% of patients with positive screens having a diagnosis of MDD. The providers were taught that choosing a high cutoff value of 15, with a sensitivity and specificity of 69% and 93%, respectively, would lessen the burden of false-positive results for MDD, whereas choosing a lower cutoff would alert them to more patients with some depressive symptoms who might benefit from further inquiry, even if it would not yield as many true cases of MDD. The importance of reviewing and acting on the questions related to suicidality, independent of the depression score, was stressed.
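These predictive values follow directly from sensitivity, specificity, and prevalence via Bayes' rule. As an illustration only, the sketch below reproduces positive predictive values close to the handbook's figures; the 3% prevalence is an assumption taken from the low end of the point-prevalence range cited in the introduction, not a figure reported by the study.

```python
# Illustration of how positive predictive value (PPV) follows from
# sensitivity, specificity, and prevalence. The 3% prevalence is an
# assumption for demonstration, not a value from the study.

def ppv(sensitivity, specificity, prevalence):
    """Fraction of positive screens that are true cases (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

prev = 0.03  # assumed point prevalence of adolescent major depression
for cutoff, sens, spec in [(7, 1.00, 0.54), (12, 0.85, 0.80), (15, 0.69, 0.93)]:
    print(f"cutoff {cutoff}: PPV = {ppv(sens, spec, prev):.0%}")
# -> cutoff 7: PPV = 6%
# -> cutoff 12: PPV = 12%
# -> cutoff 15: PPV = 23%
```

Under this assumed prevalence the computed PPVs at cutoffs 7 and 12 match the 6% and 12% figures quoted in the handbook, which illustrates why a lower cutoff flags many more adolescents than it confirms.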
Sample DISC reports were reviewed, and providers were shown how the interview can help follow-up on a positive answer on the CDS. In keeping with the second goal of the study (to assess the interest of pediatricians in using an automated standardized assessment instrument to obtain additional diagnostic information at the providers' discretion), the decision to use a DISC interview as a means to elicit more information, instead of or as an adjunct to a more comprehensive clinical interview, was left to the providers. To facilitate outside referral, a list of mental health resources in the providers' area was provided, but no assurances of availability or participation in their patients' health plans were made. Nonclinical staff was educated on the technical administration of all study-related instruments.
The practice instituted a change in its standard of care so that all eligible adolescent patients would be offered a written depression screen, the CDS. Front-desk staff was responsible for identifying these adolescents and escorting them to a confidential space to complete and seal the screen. Once the screen had been attached to the chart, the pediatric provider assumed responsibility for reviewing and acting on the results.
Providers were asked to review the screens before the patients left the office. After reviewing the screen, the providers could, but were not obligated to (as explained above), request that their patients take a computerized interview to obtain a more detailed assessment of depression. Clinicians whose patients did not take the more detailed interview made disposition decisions on the basis of the screen along with their clinical assessment. Clinicians with results from both instruments made the disposition decisions after reviewing the DISC results as well.
For 11 weeks, providers were given daily tracking forms to anonymously record all eligible and noneligible adolescent patients and to note why they were or were not screened (see Fig 1). The providers were compensated for their tracking forms regardless of whether the patients were screened.
At the end of the study, a computer-generated anonymous list of all billed adolescent visits was provided to the researchers to check the accuracy of the tracking. The list was generated by a programmer who supports the practice's medical-manager system; he wrote a C program on the practice's Unix operating system that used date of birth on the date of visit and selected Current Procedural Terminology codes to extract the data from the billing and scheduling records.
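The study's actual extraction was a custom C program against a proprietary billing system, so the following is only a minimal sketch of the logic described (age 13 to 17 on the date of visit, plus a selected set of Current Procedural Terminology codes). The CPT codes and records below are hypothetical placeholders, not those used by the practice.

```python
# Sketch of the described billing-record extraction logic. The CPT codes
# and sample records are invented placeholders, not the study's actual data.
from datetime import date

VISIT_CPT_CODES = {"99213", "99214", "99384", "99394"}  # hypothetical selection

def age_on(visit_date, birth_date):
    """Whole years of age on the date of the visit."""
    years = visit_date.year - birth_date.year
    if (visit_date.month, visit_date.day) < (birth_date.month, birth_date.day):
        years -= 1  # birthday not yet reached in the visit year
    return years

def adolescent_visits(records):
    """Yield billed visits by 13- to 17-year-olds with a qualifying CPT code."""
    for rec in records:
        if rec["cpt"] in VISIT_CPT_CODES and 13 <= age_on(rec["visit"], rec["dob"]) <= 17:
            yield rec

records = [
    {"dob": date(1991, 5, 2), "visit": date(2005, 6, 1), "cpt": "99394"},  # age 14: kept
    {"dob": date(1995, 1, 1), "visit": date(2005, 6, 1), "cpt": "99213"},  # age 10: dropped
]
print(len(list(adolescent_visits(records))))  # -> 1
```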
The study evaluated provider feedback on the universal screening procedure and the optional assessment instruments in approximate 2-week intervals from the first day of screening at each office location through the end of 3 months. Additional questions regarding the providers' opinions and experiences were administered at the end of the 3 months and in the sixth month (see Fig 1).
A research assistant visited the practice locations for the first 10 weeks after the initial start-up week on a systematically sampled schedule (with minor protocol violations [day substitution, missed day, etc, secondary to snow days and illness]), stratified according to site, study week, and day of the week, and tracked the minutes needed to complete the screens. At additional visits, the screening and assessment procedure were observed with the researchers having no direct contact with the patients or patient information. At the end of the 3-month protocol, all instruments were left with the practice, but no additional data were collected. Research staff visibly withdrew until the sixth month, when final feasibility forms and personal debriefings occurred.
Primary analyses conducted to assess feasibility and practitioner perception of practicability consisted of descriptive comparisons, frequency distributions, and point estimates of means and rates over time.
Data were processed by using the Statistical Package for Social Sciences.28
Number of Participants
Overall, 1394 urgent care and health maintenance visits were generated from the billing records, with 893 visits reported by the providers (Fig 2). Of the 893 visits, 775 met study eligibility criteria for screening, and 94.7% (734 of 775) of the eligible reported patient visits were screened. Using 1394 as the true number of visits (not patients), 734 (52.7%) completed screens, 34 (2.4%) refused, 96 (6.9%) were deemed ineligible, 7 (0.5%) were eligible but not screened for administrative reasons, 486 (34.9%) were unaccounted for in the physician report, and 20 (1.4%) were reported on by the physicians with study eligibility status missing. Of the 98 DISC screens completed, 71 (72.4%) were administered at urgent care visits and 27 (27.6%) at health maintenance visits (see Fig 2).
Specifically, of the 231 health maintenance adolescent visits billed, 182 (78.8%) completed screens, 6 (2.6%) refused, 20 (8.6%) were deemed ineligible, 1 (0.4%) was eligible but not screened for administrative reasons, 20 (8.6%) were unaccounted for in the physician report, and 2 (0.9%) were reported on by the physicians with study eligibility status missing.
The average completion time for the 70 systematically sampled CDS forms, measured from receipt of the screen to its return to the front desk, was 4.6 minutes (SD: 2.2).
Provider Perception of Burden
Reported provider burden for the CDS was low. When asked whether the CDS consumed too much practice time, all practitioners at all time points responded with 1 or 2 on a Likert scale that ranged from 1 to 4, with 1 being “not at all” and 4 being “to an impossible extent.” Eight of the 11 providers marked 1 at least once (16 of 66 responses). When asked the same question about the DISC, reported provider burden was higher: as with the CDS, no provider ever responded “to an impossible extent” (4 on the Likert scale), but only 1 provider ever marked “not at all” (1 of 48 responses).
When asked to consider their patients' and parents' reactions to the new depression-identification procedure, providers perceived a positive response. On a Likert scale that ranged from 1 (none) to 4 (almost all), providers' biweekly responses over the 13 weeks indicated that they perceived more patients as expressing satisfaction (29 of 66 responses of many and almost all) than dissatisfaction (1 of 66 responses of many and almost all) with the depression-identification procedure.
Although 2 providers were unsure at the 2-week mark about their desire to continue use of the CDS, all practitioners, starting at 4 weeks and ending at 6 months, wanted to continue using the CDS in their practice. No practitioners initially refused use of the DISC, but interest decreased from week 2 to week 12 (5 yes, 5 no, 1 abstain) and remained divided at 6 months (7 yes, 4 no).
At week 13, 9 of 11 practitioners felt more comfortable assessing adolescent depression, and 8 of 11 felt more comfortable assessing suicidal behavior. Table 1 reports attitudes at a 6-month follow-up.
Debriefings at 6 Months
At the end of the study, 10 of the 11 providers stated that the CDS aided in identifying children in need of help by either opening up lines of communication or relying directly on screen scores. The 11th provider endorsed continued screening because it promoted the practice in the parents' eyes.
Only 7 providers wished to continue use of the DISC at 6 months. Reasons varied from its help in convincing skeptical families and teens of the need for referral to its ability to add new clinical information regarding suicidality and acuity.
Several practice issues across 3 staffing levels (front desk, nursing, and administrative staff) needed to be addressed to maintain implementation of the protocol. Because the front-desk staff included several workers who changed shifts and office sites frequently, they needed to be reoriented to the procedure and to how to respond to parent and patient queries and concerns. Once successfully oriented to the purpose of the protocol, the front-desk staff, with better knowledge of their office than the researchers, created their own method of checking the eligibility criteria, such as date of birth, and ensuring that the appropriate patients were flagged and given the screen. The nursing staff was originally omitted from the protocol orientation, which led to premature interruption of a patient who was completing a screen. Finally, the administrative staff needed to organize a system to file the new forms.
This study suggests that brief questionnaires can be implemented in a suburban pediatric practice's waiting room as part of the routine standard of care, at least at health maintenance visits, with the providers perceiving the identification procedure as both acceptable and useful. However, changes to the routine office procedure need to be learned and accepted by many different key players to create an efficient and acceptable procedure. Initially, we had mistakenly oriented only the pediatricians and office and practice managers, not realizing that practice redesign requires all staff to take on new roles and responsibilities. The very low refusal rate and the provider-reported parental and patient satisfaction suggest that parent and patient concerns regarding time, stigma, and confidentiality were not a major barrier.
Screening at all urgent care visits may be less practical. However, >70% of the DISCs were given at urgent care visits, further supporting the feasibility of some form of depression assessment at these urgent care visits. Patients who came in for specific problems and for scheduled short visits agreed to stay for longer visits, and some providers were clearly willing to spend more time evaluating their patients' mood by using an optional procedure.
The study results suggest that using a more intensive computerized interview as a follow-up may not be as useful or feasible on a regular basis, as noted by the drop-off in provider interest. Although not specifically studied, one may anticipate that the DISC's detailed report initially served as an educational tool for the providers, teaching them which specific questions need to be asked or confirming their clinical diagnoses, but represented less of an aid as their knowledge and comfort increased. The voice DISC, then, may serve a purpose in certain situations with certain providers (ie, when the diagnosis is unclear or the physician feels uncomfortable relying on clinical acumen alone) and may even serve as an educational tool for pediatricians.
Our design was unique in that the pediatric practice and providers assumed full responsibility for the administration of screens and the screening results with no additional mental health backup. A review of the pediatric literature29 demonstrated that there are limited studies of adolescent depression-identification practices in English-speaking pediatric primary care and no studies that clinically incorporated a specific adolescent depression self-report instrument into a general US (non–adolescent-specific) pediatric practice. Thus, although pediatric psychosocial screening has been studied in the United States, our feasibility/acceptability study remains unique in its description of the implementation of universal adolescent depression-specific instruments into a US pediatric general practice, allowing the front desk to administer the screen and empowering the providers to assume clinical responsibility for the results.
Unfortunately, the percentage of urgent care visits screened is somewhat unclear because of the discrepancy between provider-reported visits and the computer-generated visits. In an effort to capture “real-world” clinical data, we avoided direct contact between the researchers and patients. Thus, we were forced to rely on the clinicians to gather, document, and deidentify the research data. As a consequence, we have no physician reports on 486 urgent care patient visits noted by the computer list. These visits were gleaned from the computer billing records, and we were unable to determine their study eligibility. We do think, however, that these urgent care visits were more likely to have met the exclusion criteria (a repeat visit or a patient too sick to participate) and thus were more likely ineligible for screening rather than simply missed. Thus, the total number of patients may be captured more accurately by the computer records, but the number of eligible patients may be reflected more accurately by the provider reports.
In addition, we had to rely on the computer programmer from the medical billing software company to extract the data for us from the billing records through a workaround that did not completely fit the program's original design; we cannot be assured that this information is more accurate or more representative of the true patient population than the provider-research records.
Concerns have been raised in a number of studies in adult primary care that screening alone, even accompanied by provider education and guideline dissemination, does not improve patient outcomes and that only multilevel interventions30–34 along with depression screening will ultimately result in successful patient outcomes. However, we were not focused on patient outcomes in this study; we were trying to determine the providers' perceptions of utility versus burden. The purpose of this pilot study was to determine whether pediatricians in 1 large practice would adopt depression screening and find it feasible and useful or whether they would report being overwhelmed by the time burden or the burden of having to deal with newly identified depressive symptoms. Because we set out to study the feasibility of screening and requested from the institutional review boards a waiver of documentation of parental consent, we did not collect any information on the providers' actual patient-specific clinical decisions or dispositions after the initial screen. Thus, we can only assume that those patients referred to the DISC had high scores on the initial screens. In reality, those with obvious depression may have received a direct referral, bypassing the DISC. In addition, although providers gave anecdotal accounts of successful dispositions after screening, we did not collect any outcome data.
This study highlights how pediatric providers may positively experience an adolescent depression screening questionnaire at health maintenance visits that benefits their patients and practice and does not create undue burden or generate patient and/or parent refusal. The study also highlights that screening will require practice redesign and the reassignment of roles and responsibilities among the ancillary staff. More research will need to be done to examine the steps needed to maintain depression screening as part of routine clinical practice (including the financial implications) and, more importantly, to examine the clinical benefits of such screening by tracking mental health referrals and depression treatment after such procedures.
Dr Zuckerbrot was supported by a National Institute of Mental Health T-32 research fellowship (T-32 MH 16434-22).
We acknowledge the advice received from Mark Olfson, MD, MPH, and Chris Lucas, MD, MPH, when designing the initial protocol. We also thank Annalise Caron, PhD, for her thoughtful editing. Dr Zuckerbrot acknowledges the ongoing advice and support concerning this study received from Peter S. Jensen, MD, her primary research mentor.
- Accepted September 1, 2006.
- Address correspondence to Rachel A. Zuckerbrot, MD, 1051 Riverside Dr, Unit 78, New York, NY 10032. E-mail:
The authors have indicated they have no financial relationships relevant to this article to disclose.
- Garrison CZ, Addy CL, Jackson KL, McKeown RE, Waller JL. Major depressive disorder and dysthymia in young adolescents. Am J Epidemiol. 1992;135:792–802
- Shaffer D, Fisher P, Dulcan MK, et al. The NIMH Diagnostic Interview Schedule for Children Version 2.3 (DISC-2.3): description, acceptability, prevalence rates, and performance in the MECA Study. Methods for the Epidemiology of Child and Adolescent Mental Disorders Study. J Am Acad Child Adolesc Psychiatry. 1996;35:865–877
- Burns BJ, Costello EJ, Angold A, et al. Children's mental health service use across service sectors. Health Aff (Millwood). 1995;14:147–159
- Kramer T, Garralda ME. Psychiatric disorders in adolescents in primary care. Br J Psychiatry. 1998;173:508–513
- Smith MS, Mitchell J, McCauley EA, Calderon R. Screening for anxiety and depression in an adolescent clinic. Pediatrics. 1990;85:262–266
- Shaffer D, Fisher P, Lucas CP, Dulcan MK, Schwab-Stone ME. NIMH Diagnostic Interview Schedule for Children Version IV (NIMH DISC-IV): description, differences from previous versions, and reliability of some common diagnoses. J Am Acad Child Adolesc Psychiatry. 2000;39:28–38
- Lucas CP. The use of structured diagnostic interviews in clinical child psychiatric practice. In: First MB, ed. Structured Evaluation in Clinical Practice. Review of Psychiatry, Vol 22. Washington, DC: American Psychiatric Press; 2003:75–102
- American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 4th ed. Washington, DC: American Psychiatric Association; 1994
- Shaffer D, Fisher P, Lucas C. The Diagnostic Interview Schedule for Children (DISC). In: Hilsenroth MJ, Segal DL, Hersen M, eds. Comprehensive Handbook of Psychological Assessment. Vol 2. New York, NY: Wiley; 2003:256–270
- SPSS [computer program]. Version 11.5.0. Chicago, IL: SPSS Inc; 2002
- Pignone M, Gaynes BN, Rushton JL, et al. Screening for Depression: Systematic Evidence Review Number 6. AHRQ publication 02-S002. Available at: www.ncbi.nlm.nih.gov/books/bv.fcgi?rid=hstat3.chapter.1996. Accessed August 18, 2004
- Copyright © 2007 by the American Academy of Pediatrics