Assessing the reliability, validity, and use of the Lasater Clinical Judgment Rubric: Three approaches

    Research output: Contribution to journal › Article

    45 Citations (Scopus)

    Abstract

    The purpose of this article is to summarize the methods and findings from three different approaches examining the reliability and validity of data from the Lasater Clinical Judgment Rubric (LCJR) using human patient simulation. The first study, by Adamson, assessed the interrater reliability of data produced with the LCJR using intraclass correlation, ICC(2,1); interrater reliability was calculated to be 0.889. The second study, by Gubrud-Howe, used the percent agreement strategy for assessing interrater reliability. Results ranged from 92% to 96%. The third study, by Sideras, used level of agreement for reliability analyses. Results ranged from 57% to 100%. Findings from each of these studies provided evidence supporting the validity of the LCJR for assessing clinical judgment during simulated patient care scenarios. This article provides extensive information about psychometrics and appropriate use of the LCJR and concludes with recommendations for further psychometric assessment and use of the LCJR.
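
    The abstract references two interrater reliability statistics, ICC(2,1) and percent agreement. The following Python sketch illustrates how such statistics are commonly computed; it is not taken from the article, and the rating data, rater counts, and score scale shown are hypothetical, chosen only to mimic LCJR-style scores.

    import numpy as np

    def icc_2_1(ratings):
        # ICC(2,1): two-way random effects, absolute agreement, single rater
        # (Shrout & Fleiss notation). `ratings` is an n_subjects x n_raters array.
        x = np.asarray(ratings, dtype=float)
        n, k = x.shape
        grand = x.mean()
        ms_rows = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)   # between-subject mean square
        ms_cols = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)   # between-rater mean square
        ss_err = np.sum((x - grand) ** 2) - (n - 1) * ms_rows - (k - 1) * ms_cols
        ms_err = ss_err / ((n - 1) * (k - 1))
        return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

    def percent_agreement(rater_a, rater_b):
        # Proportion of items on which two raters assign the identical score.
        a, b = np.asarray(rater_a), np.asarray(rater_b)
        return float(np.mean(a == b))

    # Hypothetical total LCJR scores for five students, each rated independently
    # by two raters (rows = students, columns = raters).
    totals = np.array([[32, 33], [28, 28], [40, 39], [25, 26], [36, 36]])
    print(round(icc_2_1(totals), 3))          # ~0.99 on this toy data

    # Hypothetical item-level ratings (1-4 scale) from two raters on one performance.
    print(percent_agreement([3, 4, 2, 3, 4, 3, 2, 4], [3, 4, 3, 3, 4, 3, 2, 4]))  # 0.875

    Percent agreement counts only exact matches and ignores chance agreement, whereas ICC(2,1) partitions variance between subjects, raters, and error, which is one reason the two statistics can tell different stories about the same rubric.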

    Original language: English (US)
    Pages (from-to): 66-73
    Number of pages: 8
    Journal: Journal of Nursing Education
    Volume: 51
    Issue number: 2
    DOIs: 10.3928/01484834-20111130-03
    State: Published - Feb 2012

    Fingerprint

    Reproducibility of Results
    Psychometrics
    Patient Simulation
    Patient Care
    Simulation
    Scenario
    Evidence

    ASJC Scopus subject areas

    • Nursing (all)
    • Education

    Cite this

    @article{26375c251afa43acb178f76d65ad69ae,
    title = "Assessing the reliability, validity, and use of the lasater clinical judgment rubric: Three approaches",
    abstract = "The purpose of this article is to summarize the methods and findings from three different approaches examining the reliability and validity of data from the Lasater Clinical Judgment Rubric (LCJR) using human patient simulation. The first study, by Adamson, assessed the interrater reliability of data produced using the LCJR using intraclass correlation (2,1). Interrater reliability was calculated to be 0.889. The second study, by Gubrud-Howe, used the percent agreement strategy for assessing interrater reliability. Results ranged from 92{\%} to 96{\%}. The third study, by Sideras, used level of agreement for reliability analyses. Results ranged from 57{\%} to 100{\%}. Findings from each of these studies provided evidence supporting the validity of the LCJR for assessing clinical judgment during simulated patient care scenarios. This article provides extensive information about psychometrics and appropriate use of the LCJR and concludes with recommendations for further psychometric assessment and use of the LCJR.",
    author = "Adamson, {Katie Anne} and Paula Gubrud-Howe and Stephanie Sideras and Kathie Lasater",
    year = "2012",
    month = "2",
    doi = "10.3928/01484834-20111130-03",
    language = "English (US)",
    volume = "51",
    pages = "66--73",
    journal = "The Journal of nursing education",
    issn = "0148-4834",
    publisher = "Slack Incorporated",
    number = "2",

    }

    TY - JOUR
    T1 - Assessing the reliability, validity, and use of the Lasater Clinical Judgment Rubric
    T2 - Three approaches
    AU - Adamson, Katie Anne
    AU - Gubrud-Howe, Paula
    AU - Sideras, Stephanie
    AU - Lasater, Kathie
    PY - 2012/2
    Y1 - 2012/2
    N2 - The purpose of this article is to summarize the methods and findings from three different approaches examining the reliability and validity of data from the Lasater Clinical Judgment Rubric (LCJR) using human patient simulation. The first study, by Adamson, assessed the interrater reliability of data produced using the LCJR using intraclass correlation (2,1). Interrater reliability was calculated to be 0.889. The second study, by Gubrud-Howe, used the percent agreement strategy for assessing interrater reliability. Results ranged from 92% to 96%. The third study, by Sideras, used level of agreement for reliability analyses. Results ranged from 57% to 100%. Findings from each of these studies provided evidence supporting the validity of the LCJR for assessing clinical judgment during simulated patient care scenarios. This article provides extensive information about psychometrics and appropriate use of the LCJR and concludes with recommendations for further psychometric assessment and use of the LCJR.
    AB - The purpose of this article is to summarize the methods and findings from three different approaches examining the reliability and validity of data from the Lasater Clinical Judgment Rubric (LCJR) using human patient simulation. The first study, by Adamson, assessed the interrater reliability of data produced using the LCJR using intraclass correlation (2,1). Interrater reliability was calculated to be 0.889. The second study, by Gubrud-Howe, used the percent agreement strategy for assessing interrater reliability. Results ranged from 92% to 96%. The third study, by Sideras, used level of agreement for reliability analyses. Results ranged from 57% to 100%. Findings from each of these studies provided evidence supporting the validity of the LCJR for assessing clinical judgment during simulated patient care scenarios. This article provides extensive information about psychometrics and appropriate use of the LCJR and concludes with recommendations for further psychometric assessment and use of the LCJR.
    UR - http://www.scopus.com/inward/record.url?scp=84857024523&partnerID=8YFLogxK
    UR - http://www.scopus.com/inward/citedby.url?scp=84857024523&partnerID=8YFLogxK
    U2 - 10.3928/01484834-20111130-03
    DO - 10.3928/01484834-20111130-03
    M3 - Article
    C2 - 22132718
    AN - SCOPUS:84857024523
    VL - 51
    SP - 66
    EP - 73
    JO - The Journal of nursing education
    JF - The Journal of nursing education
    SN - 0148-4834
    IS - 2
    ER -