Leveling the field: Development of reliable scoring rubrics for quantitative and qualitative medical education research abstracts

Jaime Jordan, Laura R. Hopson, Caroline Molins, Suzanne K. Bentley, Nicole Deiorio, Sally A. Santen, Lalena M. Yarris, Wendy C. Coates, Michael A. Gisondi

Research output: Contribution to journal › Article › peer-review

Abstract

Background: Research abstracts are submitted for presentation at scientific conferences; however, the criteria used to judge abstracts vary widely. We sought to develop two rigorous abstract scoring rubrics for education research submissions reporting (1) quantitative data and (2) qualitative data and then to collect validity evidence to support score interpretation.

Methods: We used a modified Delphi method to achieve expert consensus on scoring rubric items and optimize content validity. Eight education research experts participated in two separate modified Delphi processes, one to generate items for quantitative research and one for qualitative research. Modifications were made between rounds based on item scores and expert feedback. Homogeneity of ratings in the Delphi process was calculated using Cronbach's alpha, with increasing homogeneity considered an indication of consensus. Rubrics were piloted by scoring abstracts from 22 quantitative publications from the AEM Education and Training “Critical Appraisal of Emergency Medicine Education Research” series (11 highlighted for excellent methodology and 11 that were not) and 10 qualitative publications (five highlighted for excellent methodology and five that were not). Intraclass correlation coefficient (ICC) estimates of reliability were calculated.

Results: Each rubric required three rounds of the modified Delphi process. The resulting quantitative rubric contained nine items: quality of objectives, appropriateness of methods, outcomes, data analysis, generalizability, importance to medical education, innovation, quality of writing, and strength of conclusions (Cronbach's α for the third round = 0.922; ICC for total scores during piloting = 0.893). The resulting qualitative rubric contained seven items: quality of study aims, general methods, data collection, sampling, data analysis, writing quality, and strength of conclusions (Cronbach's α for the third round = 0.913; ICC for total scores during piloting = 0.788).

Conclusion: We developed scoring rubrics to assess the quality of quantitative and qualitative medical education research abstracts and to aid in selection for presentation at scientific meetings. Both tools demonstrated high reliability.
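A note on the reliability statistics reported above: the abstract uses Cronbach's alpha for homogeneity of Delphi-round ratings and the intraclass correlation coefficient (ICC) for total rubric scores during piloting, but it does not state which ICC model the authors used. The sketch below (Python with NumPy, not from the paper) shows one common way to compute both statistics from a targets-by-raters score matrix, using ICC(2,1), the two-way random-effects, absolute-agreement, single-rater form, purely as an illustrative assumption; the function names and example data are hypothetical.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha, treating each rater as an 'item'.

    scores: (n_targets, n_raters) matrix, e.g. abstracts x expert raters.
    """
    k = scores.shape[1]
    rater_vars = scores.var(axis=0, ddof=1)     # variance of each rater's scores
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of per-target summed scores
    return (k / (k - 1)) * (1.0 - rater_vars.sum() / total_var)

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    Computed from the classic two-way ANOVA mean squares (Shrout & Fleiss).
    """
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()  # between targets
    ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()  # between raters
    ss_err = ((scores - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical example: five abstracts, each given a total rubric score by three raters.
ratings = np.array([
    [34, 36, 33],
    [28, 27, 30],
    [41, 40, 42],
    [22, 25, 23],
    [37, 35, 36],
], dtype=float)

print(f"Cronbach's alpha: {cronbach_alpha(ratings):.3f}")
print(f"ICC(2,1):         {icc_2_1(ratings):.3f}")
```

In practice, validated implementations such as pingouin.intraclass_corr in Python or the irr package in R report all six Shrout and Fleiss ICC forms with confidence intervals, which makes the chosen model explicit.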

Original language: English (US)
Article number: e10654
Journal: AEM Education and Training
Volume: 5
Issue number: 4
DOIs
State: Published - Oct 2021
Externally published: Yes

ASJC Scopus subject areas

  • Emergency Medicine
  • Education
