Difference-in-Differences

Revision as of 18:10, 23 July 2019 by Mrijanrimal (talk | contribs)

The difference-in-differences method is a quasi-experimental approach that compares the changes in outcomes over time between a population enrolled in a program (the treatment group) and a population that is not (the comparison group). This page gives an overview of the approach, its implementation, and the assumptions behind difference-in-differences.

Read First

  • Difference-in-differences requires data on outcomes in the group that receives the program and the group that does not – both before and after the program.
  • For difference-in-differences implementation in Stata, see ieddtab.
  • Difference-in-differences relies on the equal trends assumption, which can be tested via placebo tests and other methods.

Overview

Difference-in-differences is an analytical approach that facilitates causal inference even when randomization is not possible. As discussed in the Randomized Control Trials page, we cannot draw causal conclusions by observing simple before-and-after changes in outcomes, as factors other than the treatment may influence the outcome over time; further, we cannot simply compare enrolled and unenrolled groups due to selection bias and differences in unobservable characteristics between the groups. Difference-in-differences combines these two methods to compare the before-and-after changes in outcomes for treatment and control groups and estimate the overall impact of the program.

Difference-in-differences takes the before-after difference in the treatment group's outcomes. This is the first difference. By comparing the same group to itself, the first difference controls for factors that are constant over time in that group. Then, to capture time-varying factors, difference-in-differences takes the before-after difference in the comparison group, which was exposed to the same set of environmental conditions as the treatment group. This is the second difference. Finally, difference-in-differences "cleans" all time-varying factors from the first difference by subtracting the second difference from it. This leaves us with the impact estimate – the difference-in-differences.

Implementation

Difference-in-differences requires data on outcomes in the group that receives the program and the group that does not – both before and after the program. Compute the difference-in-differences as follows:

  1. Calculate the before-after difference in the outcome (Y) for the treatment group (B-A), where A is the treatment group's outcome before the program and B is its outcome after.
  2. Calculate the before-after difference in the outcome (Y) for the comparison group (D-C), where C and D are the comparison group's outcomes before and after the program.
  3. Calculate the difference between the difference in outcomes for the treatment group (B-A) and the difference for the comparison group (D-C). This is the difference-in-differences: DD = (B-A) - (D-C).

For details on how to calculate difference-in-differences in Stata, see ieddtab.
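The three steps above can also be sketched outside of Stata. The snippet below is a minimal illustration in Python; the outcome means A, B, C, and D are made-up numbers for illustration only, not taken from any real program.

```python
# Hypothetical mean outcomes (illustrative values only).
A = 52.0  # treatment group, before the program
B = 61.0  # treatment group, after the program
C = 50.0  # comparison group, before the program
D = 55.0  # comparison group, after the program

first_diff = B - A               # step 1: change in the treatment group (9.0)
second_diff = D - C              # step 2: change in the comparison group (5.0)
did = first_diff - second_diff   # step 3: DD = (B-A) - (D-C) = 4.0

print(did)
```

Equivalently, the same estimate can be obtained as the coefficient on the interaction of a treatment dummy and a post-period dummy in a regression of Y on the two dummies and their interaction.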

Equal Trends Assumption

The validity of the difference-in-differences approach relies on the equal trends assumption: in the absence of the program, the outcomes of the treatment and comparison groups would have followed the same trend over time (that is, no time-varying differences exist between the groups). While this assumption cannot be proven, research teams can assess its plausibility in four ways:

  1. Compare changes in the outcomes for the treatment and comparison groups repeatedly before the program is implemented (i.e. in t-3, t-2, t-1). If the outcome trends moved in parallel before the program began, they likely would have continued moving in tandem in the absence of the program.
  2. Perform a placebo test using a fake treatment group. The fake treatment group should be a group that was not affected by the program. A placebo test that reveals zero impact supports the equal trends assumption.
  3. Perform a placebo test using a fake outcome. A placebo test that reveals zero impact supports the equal trends assumption.
  4. Perform the difference-in-differences estimation using different comparison groups. Similar estimates of the program's impact across comparison groups support the equal trends assumption.
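The first check above (comparing pre-program trends) can be sketched as follows; this is a minimal illustration, assuming we have hypothetical yearly mean outcomes for each group in the three periods before the program.

```python
# Minimal sketch of check 1: compare pre-program trends in mean outcomes.
# The yearly means below are hypothetical, not from any real program.
periods = [-3, -2, -1]               # periods t-3, t-2, t-1 (all pre-program)
treat_means = [40.0, 42.0, 44.0]     # treatment group mean outcome per period
comp_means = [35.0, 37.0, 39.0]      # comparison group mean outcome per period

# Period-to-period changes for each group.
treat_trend = [b - a for a, b in zip(treat_means, treat_means[1:])]
comp_trend = [b - a for a, b in zip(comp_means, comp_means[1:])]

# If the pre-program changes match (here both are [2.0, 2.0]), the equal
# trends assumption is plausible; large gaps would cast doubt on it.
gaps = [t - c for t, c in zip(treat_trend, comp_trend)]
print(gaps)
```

In practice this comparison is usually done with an event-study regression rather than raw means, but the logic is the same: pre-program changes in the two groups should be close to identical.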

Back to Parent

This article is part of the topic Quasi-Experimental Methods

Additional Resources