Difference-in-differences and matching on outcomes: a tale of two unobservables

Stephan Lindner, Kenneth (John) McConnell

Research output: Contribution to journal › Article

1 Citation (Scopus)

Abstract

Difference-in-differences combined with matching on pre-treatment outcomes is a popular method for addressing non-parallel trends between a treatment and a control group. However, previous simulations suggest that this approach does not always eliminate or reduce bias, and it is not clear when and why. Using Medicaid claims data from Oregon, we systematically vary the distribution of two key unobservables—fixed effects and the random error term—to examine how they affect the bias of matching on pre-treatment outcome levels or trends combined with difference-in-differences. We find that in most scenarios, bias increases with the standard deviation of the error term because a higher standard deviation makes short-term fluctuations in outcomes more likely, and matching cannot easily distinguish between these short-term fluctuations and more structural outcome trends. The fixed-effect distribution may also create bias, but only when matching on pre-treatment outcome levels. A parallel-trends test on the matched sample does not reliably distinguish between successful and unsuccessful matching. Researchers using matching on pre-treatment outcomes to adjust for non-parallel trends should report estimates from both unadjusted and propensity-score-matching-adjusted difference-in-differences, compare results from matching on outcome levels and on outcome trends, and examine outcome changes around the start of the intervention to assess remaining bias.
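The following is a minimal, self-contained sketch of the setup the abstract describes, not the authors' simulation design or code: panel data generated from unit fixed effects, unit-specific trends, and an i.i.d. error term; nearest-neighbour matching of controls to treated units on pre-treatment outcome levels or trends; and a simple difference-in-differences comparison of the unadjusted and matched samples. All sample sizes, distributions, and the true effect are illustrative assumptions rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_treat, n_ctrl = 200, 2000
n_pre, n_post = 4, 4                        # periods before / after the intervention
periods = np.arange(n_pre + n_post)
true_effect = 1.0                           # effect added to treated units post-intervention (assumed)
sigma_eps = 2.0                             # std. dev. of the random error term (assumed)
fe_sd = {"treat": 1.0, "ctrl": 1.5}         # fixed-effect spread differs by group (assumed)
slope_mean = {"treat": 0.3, "ctrl": 0.0}    # group-specific trends create non-parallel trends (assumed)

def simulate(n, group, treated):
    """Outcomes = unit fixed effect + unit-specific linear trend + i.i.d. error."""
    fe = rng.normal(0.0, fe_sd[group], size=n)
    slope = rng.normal(slope_mean[group], 0.1, size=n)
    eps = rng.normal(0.0, sigma_eps, size=(n, len(periods)))
    y = fe[:, None] + slope[:, None] * periods[None, :] + eps
    if treated:
        y[:, n_pre:] += true_effect         # effect applies only after the intervention
    return y

y_t = simulate(n_treat, "treat", treated=True)
y_c = simulate(n_ctrl, "ctrl", treated=False)

def match(features_t, features_c):
    """Nearest-neighbour matching (with replacement) on the given pre-treatment features."""
    dist = np.linalg.norm(features_t[:, None, :] - features_c[None, :, :], axis=2)
    return dist.argmin(axis=1)

def did(y_treat, y_ctrl):
    """Difference-in-differences of period means: post-minus-pre change, treated minus control."""
    d_t = y_treat[:, n_pre:].mean() - y_treat[:, :n_pre].mean()
    d_c = y_ctrl[:, n_pre:].mean() - y_ctrl[:, :n_pre].mean()
    return d_t - d_c

levels_t, levels_c = y_t[:, :n_pre], y_c[:, :n_pre]                        # pre-treatment levels
trends_t, trends_c = np.diff(levels_t, axis=1), np.diff(levels_c, axis=1)  # pre-treatment trends

print("true effect:          ", true_effect)
print("unadjusted DiD:       ", round(did(y_t, y_c), 3))
print("matched on levels DiD:", round(did(y_t, y_c[match(levels_t, levels_c)]), 3))
print("matched on trends DiD:", round(did(y_t, y_c[match(trends_t, trends_c)]), 3))
```

Re-running the sketch with a larger `sigma_eps` is one way to see the mechanism the abstract points to: when transient noise dominates, matching on pre-treatment outcomes tends to pick controls on short-term fluctuations rather than structural trends, so the matched estimate need not be closer to the true effect than the unadjusted one.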

Original language: English (US)
Journal: Health Services and Outcomes Research Methodology
DOI: 10.1007/s10742-018-0189-0
State: Accepted/In press - Jan 1 2018

Fingerprint

Propensity Score
Medicaid
Research Personnel
Control Groups

Keywords

  • Difference-in-differences
  • Matching
  • Simulation

ASJC Scopus subject areas

  • Health Policy
  • Public Health, Environmental and Occupational Health
