Assessing the reliability, validity, and use of the Lasater Clinical Judgment Rubric: Three approaches

Katie Anne Adamson, Paula Gubrud, Stephanie Sideras, Kathie Lasater

    Research output: Contribution to journal › Article › peer-review


    Abstract

    The purpose of this article is to summarize the methods and findings from three different approaches examining the reliability and validity of data from the Lasater Clinical Judgment Rubric (LCJR) using human patient simulation. The first study, by Adamson, assessed the interrater reliability of data produced with the LCJR using intraclass correlation, ICC(2,1). Interrater reliability was calculated to be 0.889. The second study, by Gubrud-Howe, used the percent agreement strategy for assessing interrater reliability. Results ranged from 92% to 96%. The third study, by Sideras, used level of agreement for reliability analyses. Results ranged from 57% to 100%. Findings from each of these studies provided evidence supporting the validity of the LCJR for assessing clinical judgment during simulated patient care scenarios. This article provides extensive information about psychometrics and appropriate use of the LCJR and concludes with recommendations for further psychometric assessment and use of the LCJR.
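    The sketch below is not from the article; it is a minimal illustration, using hypothetical rater scores, of how the two reliability statistics named in the abstract, ICC(2,1) (two-way random effects, single rater, absolute agreement) and percent agreement, are typically computed.

```python
# Minimal sketch (not from the article): ICC(2,1) and percent agreement
# for a small set of hypothetical rater scores.
import numpy as np


def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: n_subjects x k_raters matrix of scores.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)

    # Mean squares from a two-way ANOVA without replication.
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # subjects (rows)
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # raters (columns)
    sse = np.sum((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                         # residual

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)


def percent_agreement(a: np.ndarray, b: np.ndarray) -> float:
    """Share of items on which two raters gave identical scores."""
    return float(np.mean(a == b))


# Hypothetical scores from two raters on eight simulated performances.
scores = np.array([
    [3, 3], [4, 4], [2, 3], [4, 4],
    [3, 3], [2, 2], [4, 3], [3, 3],
])

print(f"ICC(2,1):          {icc_2_1(scores):.3f}")
print(f"Percent agreement: {percent_agreement(scores[:, 0], scores[:, 1]):.0%}")
```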

    Original language: English (US)
    Pages (from-to): 66-73
    Number of pages: 8
    Journal: Journal of Nursing Education
    Volume: 51
    Issue number: 2
    DOIs
    State: Published - Feb 2012

    ASJC Scopus subject areas

    • General Nursing
    • Education

