Adjusting our lens: Can developmental differences in diagnostic reasoning be harnessed to improve health professional and trainee assessment?

Research output: Contribution to journal › Article

15 Citations (Scopus)

Abstract

Objectives: Research in cognition has yielded considerable understanding of the diagnostic reasoning process and its evolution during clinical training. This study sought to determine whether or not this literature could be used to improve the assessment of trainees' diagnostic skill by manipulating testing conditions that encourage different modes of reasoning.

Methods: The authors developed an online, vignette-based instrument with two sets of testing instructions. The "first impression" condition encouraged nonanalytic responses while the "directed search" condition prompted structured analytic responses. Subjects encountered six cases under the first impression condition and then six cases under the directed search condition. Each condition had three straightforward (simple) and three ambiguous (complex) cases. Subjects were stratified by clinical experience: novice (third- and fourth-year medical students), intermediate (postgraduate year [PGY] 1 and 2 residents), and experienced (PGY 3 residents and faculty). Two investigators scored the exams independently. Mean diagnostic accuracies were calculated for each group. Differences in diagnostic accuracy and reliability of the examination as a function of the predictor variables were assessed.

Results: The examination was completed by 115 subjects. Diagnostic accuracy was significantly associated with the independent variables of case complexity, clinical experience, and testing condition. Overall, mean diagnostic accuracy and the extent to which the test consistently discriminated between subjects (i.e., yielded reliable scores) was higher when participants were given directed search instructions than when they were given first impression instructions. In addition, the pattern of reliability was found to depend on experience: simple cases offered the best reliability for discriminating between novices, complex cases offered the best reliability for discriminating between intermediate residents, and neither type of case discriminated well between experienced practitioners.

Conclusions: These results yield concrete guidance regarding test construction for the purpose of diagnostic skill assessment. The instruction strategy and complexity of cases selected should depend on the experience level and breadth of experience of the subjects one is attempting to assess.
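As a rough illustration of the scoring and reliability analysis described in the abstract (a sketch, not the authors' actual analysis code): given per-subject, per-case correctness scores, mean diagnostic accuracy by experience group and a simple internal-consistency estimate (Cronbach's alpha) for each six-case block could be computed along these lines. All column names, the simulated data, and the choice of Cronbach's alpha as the reliability index are assumptions made for the example.

import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # Cronbach's alpha for a subjects-by-items matrix of 0/1 case scores.
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical data layout: one row per subject, an experience-group label,
# and six 0/1 case scores per testing condition (column names are assumptions).
rng = np.random.default_rng(0)
n_subjects = 115
df = pd.DataFrame({"group": rng.choice(["novice", "intermediate", "experienced"], size=n_subjects)})
for condition in ("first_impression", "directed_search"):
    for case in range(1, 7):
        df[f"{condition}_case{case}"] = rng.integers(0, 2, size=n_subjects)

for condition in ("first_impression", "directed_search"):
    cols = [c for c in df.columns if c.startswith(condition)]
    accuracy_by_group = df.groupby("group")[cols].mean().mean(axis=1)  # mean diagnostic accuracy per group
    alpha = cronbach_alpha(df[cols])                                   # reliability of the six-case block
    print(f"{condition}: Cronbach's alpha = {alpha:.2f}")
    print(accuracy_by_group.round(2))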

Original language: English (US)
Journal: Academic Emergency Medicine
Volume: 18
Issue number: 10 SUPPL. 2
DOIs: 10.1111/j.1553-2712.2011.01182.x
State: Published - Oct 2011

ASJC Scopus subject areas

  • Emergency Medicine

Cite this

@article{4e8d200f8724417e93d622c0e9bed76d,
title = "Adjusting our lens: Can developmental differences in diagnostic reasoning be harnessed to improve health professional and trainee assessment?",
author = "Ilgen, {Jonathan S.} and Judith Bowen and Lalena Yarris and Fu, {Rongwei (Rochelle)} and Lowe, {Robert (Bob)} and Kevin Eva",
year = "2011",
month = "10",
doi = "10.1111/j.1553-2712.2011.01182.x",
language = "English (US)",
volume = "18",
journal = "Academic Emergency Medicine",
issn = "1069-6563",
publisher = "Wiley-Blackwell",
number = "10 SUPPL. 2",

}

TY - JOUR

T1 - Adjusting our lens

T2 - Can developmental differences in diagnostic reasoning be harnessed to improve health professional and trainee assessment?

AU - Ilgen, Jonathan S.

AU - Bowen, Judith

AU - Yarris, Lalena

AU - Fu, Rongwei (Rochelle)

AU - Lowe, Robert (Bob)

AU - Eva, Kevin

PY - 2011/10

Y1 - 2011/10

UR - http://www.scopus.com/inward/record.url?scp=80054122225&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=80054122225&partnerID=8YFLogxK

U2 - 10.1111/j.1553-2712.2011.01182.x

DO - 10.1111/j.1553-2712.2011.01182.x

M3 - Article

C2 - 21999563

AN - SCOPUS:80054122225

VL - 18

JO - Academic Emergency Medicine

JF - Academic Emergency Medicine

SN - 1069-6563

IS - 10 SUPPL. 2

ER -