Trends in study methods used in undergraduate medical education research, 1969-2007

Amy Baernstein, Hillary K. Liss, Patricia (Patty) Carney, Joann G. Elmore

Research output: Contribution to journal › Article

52 Citations (Scopus)

Abstract

Context: Evidence-based medical education requires rigorous studies appraising educational efficacy.

Objectives: To assess trends over time in methods used to evaluate undergraduate medical education interventions and to identify whether participation of medical education departments or centers is associated with more rigorous methods.

Data Sources: The PubMed, Cochrane Controlled Trials Registry, Campbell Collaboration, and ERIC databases (January 1966-March 2007) were searched using terms equivalent to "students, medical" and "education, medical" crossed with all relevant study designs.

Study Selection: We selected publications in all languages from every fifth year, plus the most recent 12 months, that evaluated an educational intervention for undergraduate medical students. Four hundred seventy-two publications met criteria for review.

Data Extraction: Data were abstracted on number of participants; types of comparison groups; whether outcomes assessed were objective, subjective, and/or validated; timing of outcome assessments; funding; and participation of medical education departments and centers. Ten percent of publications were independently abstracted by 2 authors to assess validity of the data abstraction.

Results: The annual number of publications increased over time from 1 (1969-1970) to 147 (2006-2007). In the most recent year, there was a mean of 145 medical student participants; 9 (6%) recruited participants from multiple institutions; 80 (54%) used comparison groups; 37 (25%) used randomized control groups; 91 (62%) had objective outcomes; 23 (16%) had validated outcomes; 35 (24%) assessed an outcome more than 1 month later; 21 (14%) estimated statistical power; and 66 (45%) reported funding. In 2006-2007, medical education department or center participation, reported in 46 (31%) of the recent publications, was associated only with enrolling more medical student participants (P=.04); for all studies from 1969 to 2007, it was associated only with measuring an objective outcome (P=.048). Between 1969 and 2007, the percentage of publications reporting statistical power and funding increased; percentages did not change for other study features.

Conclusions: The annual number of published studies of undergraduate medical education interventions demonstrating methodological rigor has been increasing. However, considerable opportunities for improvement remain.

Original language: English (US)
Pages (from-to): 1038-1045
Number of pages: 8
Journal: Journal of the American Medical Association
Volume: 298
Issue number: 9
DOI: 10.1001/jama.298.9.1038
PMID: 17785648
State: Published - Sep 5 2007

ASJC Scopus subject areas

  • Medicine (all)

Cite this

Baernstein A, Liss HK, Carney P, Elmore JG. Trends in study methods used in undergraduate medical education research, 1969-2007. Journal of the American Medical Association. 2007;298(9):1038-1045. doi:10.1001/jama.298.9.1038
