Integration of symptom ratings from multiple informants in ADHD diagnosis

A psychometric model with clinical utility

Michelle M. Martel, Ulrich Schimmack, Molly Nikolas, Joel Nigg

Research output: Contribution to journal › Article

14 Citations (Scopus)

Abstract

The Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition explicitly requires that attention-deficit/hyperactivity disorder (ADHD) symptoms be apparent across settings, taking into account reports from multiple informants. Yet it provides no guidelines for how information from different raters should be combined in ADHD diagnosis. We examined the validity of different approaches using structural equation modeling (SEM) for multiple-informant data. Participants were 725 children, 6 to 17 years old, and their primary caregivers and teachers, recruited from the community; all completed a thorough research-based diagnostic assessment, including a clinician-administered diagnostic interview, parent and teacher standardized rating scales, and cognitive testing. A best-estimate ADHD diagnosis was generated by a diagnostic team. An SEM model demonstrated convergent validity among raters. We found relatively weak symptom-specific agreement among raters, suggesting that a general average scoring algorithm is preferable to symptom-specific scoring algorithms such as the "or" and "and" algorithms. Finally, to illustrate the validity of this approach, we show that averaging makes it possible to reduce the number of items from 18 to 8 without a significant decrease in validity. In conclusion, information from multiple raters increases the validity of ADHD diagnosis, and averaging appears to be the optimal way to integrate information from multiple raters.
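The "or", "and", and averaging algorithms contrasted in the abstract can be sketched as follows. This is an illustrative toy example, not the authors' code: binary symptom endorsements are assumed for simplicity, whereas the study used ordinal rating scales, but the combination logic is the same.

```python
# Three ways to combine parent and teacher ratings of the same
# symptom list into a single score (toy binary example).

def or_rule(parent, teacher):
    """Symptom counts if EITHER informant endorses it (lenient)."""
    return [max(p, t) for p, t in zip(parent, teacher)]

def and_rule(parent, teacher):
    """Symptom counts only if BOTH informants endorse it (strict)."""
    return [min(p, t) for p, t in zip(parent, teacher)]

def average_rule(parent, teacher):
    """General averaging: mean of the two ratings per symptom."""
    return [(p + t) / 2 for p, t in zip(parent, teacher)]

# Hypothetical endorsement profiles over six symptoms.
parent  = [1, 1, 0, 1, 0, 0]
teacher = [1, 0, 0, 1, 1, 0]

print(sum(or_rule(parent, teacher)))       # 4 symptoms counted
print(sum(and_rule(parent, teacher)))      # 2 symptoms counted
print(sum(average_rule(parent, teacher)))  # 3.0 (continuous score)
```

The "or" rule inflates counts when informants disagree, the "and" rule deflates them; averaging retains the disagreement as a graded score, which is why weak symptom-specific agreement favors it.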

Original language: English (US)
Pages (from-to): 1060-1071
Number of pages: 12
Journal: Psychological Assessment
Volume: 27
Issue number: 3
ISSN: 1040-3590
Publisher: American Psychological Association Inc.
State: Published - Sep 1 2015

Keywords

  • ADHD
  • Assessment
  • Diagnosis
  • Structural equation modeling

ASJC Scopus subject areas

  • Psychiatry and Mental Health
  • Clinical Psychology

Cite this

Martel, M. M., Schimmack, U., Nikolas, M., & Nigg, J. (2015). Integration of symptom ratings from multiple informants in ADHD diagnosis: A psychometric model with clinical utility. Psychological Assessment, 27(3), 1060-1071.
