Understanding Inter-rater Disagreement: A Mixed Methods Approach

Emily M. Campbell, Dean F. Sittig, Wendy W. Chapman, Brian L. Hazlehurst, Aaron Cohen

Research output: Contribution to journal › Article › peer-review


Abstract

In an experiment investigating differences in cognitive skill between clinicians and laypersons, eight individuals in each group were asked to determine whether an explicit concept appeared in an ambulatory encounter note (a simple task) or whether the concept could be inferred from the same note (a complex task). Subjects answered questions, highlighted the text used to answer each question, and commented on their reasoning for selecting specific text. Quantitative results were mixed for expert vs. non-expert performance on the simple and complex tasks. Qualitative analysis revealed that ambiguity in the data obscured quantifiable skill differences between the groups; it also offered new insight into what makes a concept identification task simple or complex. We present this case study to demonstrate the value of mixed-methods approaches to the design and evaluation of task-based performance studies. We discuss the results in terms of their implications for evaluating meaningful use of technologies.
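The abstract does not state which agreement statistic underpinned the quantitative comparison; as a minimal illustrative sketch (not the authors' method), Cohen's kappa is one standard way to quantify chance-corrected agreement between two raters making the kind of present/absent concept judgments described above. All rater names and judgments below are hypothetical.

from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items.

    rater_a, rater_b: equal-length sequences of categorical labels,
    e.g. "yes"/"no" judgments about whether a concept appears in a note.
    """
    assert len(rater_a) == len(rater_b), "raters must judge the same items"
    n = len(rater_a)

    # Observed proportion of items on which the two raters agree.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Agreement expected by chance, from each rater's marginal label rates.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(freq_a) | set(freq_b)
    p_e = sum((freq_a[lab] / n) * (freq_b[lab] / n) for lab in labels)

    return (p_o - p_e) / (1 - p_e)

# Hypothetical judgments: did the concept appear in each of eight notes?
clinician = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes"]
layperson = ["yes", "no", "no", "yes", "no", "yes", "yes", "yes"]
print(f"kappa = {cohen_kappa(clinician, layperson):.2f}")

For the eight hypothetical notes above, kappa comes out to about 0.47: the raters agree on 6 of 8 items, but only moderately more often than their marginal label rates would predict by chance.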

Original language: English (US)
Pages (from-to): 81-85
Number of pages: 5
Journal: AMIA Annual Symposium Proceedings
Volume: 2010
State: Published - 2010

ASJC Scopus subject areas

  • General Medicine

