Developing Validity Evidence for the Written Pediatric History and Physical Exam Evaluation Rubric

Marta A. King, Carrie Phillipi, Paula M. Buchanan, Linda O. Lewin

Research output: Contribution to journal › Article

1 Citation (Scopus)

Abstract

Objective: The written history and physical examination (H&P) is an underutilized source of medical trainee assessment. The authors describe the development of, and validity evidence for, the Pediatric History and Physical Exam Evaluation (P-HAPEE) rubric: a novel tool for evaluating written H&Ps.

Methods: Using an iterative process, the authors drafted, revised, and implemented the 10-item rubric at 3 academic institutions in 2014. Eighteen attending physicians and 5 senior residents each scored 10 third-year medical student H&Ps. Inter-rater reliability (IRR) was determined using intraclass correlation coefficients, Cronbach α was used to assess internal consistency, and Spearman rank-order correlations were used to determine relationships between rubric items. Raters provided a global assessment, recorded the time needed to review and score each H&P, and completed a rubric utility survey.

Results: The overall intraclass correlation was 0.85, indicating adequate IRR. Global assessment IRR was 0.89. IRR for low- and high-quality H&Ps was significantly greater than for medium-quality ones but did not differ on the basis of rater category (attending physician vs senior resident), note format (electronic health record vs nonelectronic), or student diagnostic accuracy. Cronbach α was 0.93. The highest correlation between an individual item and the total score was for assessment (0.84); the highest interitem correlation was between assessment and differential diagnosis (0.78). Mean time to review and score an H&P was 16.3 minutes; residents took significantly longer than attending physicians. All raters described rubric utility as “good” or “very good” and endorsed continued use.

Conclusions: The P-HAPEE rubric offers a novel, practical, reliable, and valid method for supervising physicians to assess pediatric written H&Ps.
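The sketch below illustrates, on synthetic data, how two of the statistics named in the abstract (Cronbach α and a Spearman rank-order correlation between rubric items) can be computed. It is not the authors' code: the rating scale, array shape, and item positions are assumptions made purely for the example.

```python
# A minimal, illustrative sketch (not the authors' code) of two statistics
# named in the abstract, run on synthetic data. The rating scale, array
# shape, and item positions are all assumptions made for the example.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical ratings: 100 scored H&Ps x 10 rubric items, each item 1-3
scores = rng.integers(1, 4, size=(100, 10)).astype(float)

# Cronbach's alpha: internal consistency of the 10-item rubric
k = scores.shape[1]
item_var = scores.var(axis=0, ddof=1)        # per-item variance
total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
alpha = (k / (k - 1)) * (1 - item_var.sum() / total_var)

# Spearman rank-order correlation between two items, e.g. the
# "assessment" and "differential diagnosis" items (positions assumed)
rho, p = stats.spearmanr(scores[:, 8], scores[:, 9])

# Inter-rater reliability (intraclass correlation) would additionally
# require a raters dimension and a two-way ANOVA decomposition, e.g.
# pingouin.intraclass_corr(data, targets=..., raters=..., ratings=...).

print(f"Cronbach alpha: {alpha:.2f}")
print(f"Spearman rho:   {rho:.2f} (p = {p:.3f})")
```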

Original language: English (US)
Pages (from-to): 68-73
Number of pages: 6
Journal: Academic Pediatrics
Volume: 17
Issue number: 1
DOIs: 10.1016/j.acap.2016.08.001
State: Published - Jan 1 2017

Keywords

  • assessment
  • clinical documentation
  • diagnostic reasoning
  • history
  • medical student
  • physical examination
  • undergraduate medical education

ASJC Scopus subject areas

  • Pediatrics, Perinatology, and Child Health

Cite this

King, M. A., Phillipi, C., Buchanan, P. M., & Lewin, L. O. (2017). Developing Validity Evidence for the Written Pediatric History and Physical Exam Evaluation Rubric. Academic Pediatrics, 17(1), 68-73. https://doi.org/10.1016/j.acap.2016.08.001