Quasi-Experimental Methods

Quasi-experimental methods are research designs that aim to identify the impact of a particular intervention, program, or event (a "treatment") by comparing treated units (households, groups, villages, schools, firms, etc.) to control units. As in any impact evaluation, the goal is to estimate the impact that can be causally attributed to the treatment itself rather than to other factors, which requires a control group that is comparable to the treatment group and a clear understanding of the [[Theory of Change | theory of change]]. While quasi-experimental methods use a control group, they differ from [[Experimental Methods | experimental methods]] in that they do not use [[Randomization | randomization]] to select the control group. Quasi-experimental methods are useful for estimating the impact of a program or event for which it is not [[Research Ethics | ethically]] or logistically feasible to '''randomize'''. This page outlines common types of quasi-experimental methods.
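As a sketch in potential-outcomes notation, write <math>Y_i(1)</math> and <math>Y_i(0)</math> for unit <math>i</math>'s outcome with and without the treatment and <math>D_i</math> for a treatment indicator; the typical quantity of interest is the average treatment effect on the treated:

:<math>ATT = E[Y_i(1) - Y_i(0) \mid D_i = 1] = E[Y_i(1) \mid D_i = 1] - E[Y_i(0) \mid D_i = 1].</math>

The last term, the outcome treated units would have experienced without the treatment, is never observed. Both experimental and quasi-experimental designs are strategies for approximating this [[counterfactual]] with a control group.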


==Read First==
*Common examples of quasi-experimental methods include:
**[[Difference-in-Differences | Difference-in-differences]]
**[[Regression Discontinuity | Regression discontinuity design]]
**[[Instrumental Variables | Instrumental variables]]
**[[Propensity Score Matching|Propensity score matching]]
**[[Matching | Matching]]
*In general, quasi-experimental methods require larger [[Sampling|sample sizes]] and more assumptions than [[Experimental Methods|experimental methods]] in order to provide valid and unbiased estimates of program impacts.


== Overview ==
Like [[Experimental Methods|experimental methods]], quasi-experimental methods aim to estimate program effects free of confounding and of reverse or simultaneous causality. While quasi-experimental methods use a counterfactual, they differ from '''experimental methods''' in that they do not [[Randomization | randomize]] treatment assignment. Instead, they either exploit existing circumstances in which treatment assignment has a sufficient element of randomness, as in [[Regression Discontinuity | regression discontinuity design]] and [[Event Study|event studies]], or simulate an experimental counterfactual by constructing a control group that is as similar as possible to the treatment group, as in [[Propensity Score Matching|propensity score matching]].
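As a minimal sketch of the matching approach described above (in Python, with simulated data and hypothetical variable names), each treated unit is paired with the comparison unit whose estimated propensity score is closest, and the effect is estimated as the mean outcome difference between treated units and their matches:

<syntaxhighlight lang="python">
# Minimal propensity score matching sketch (simulated data, hypothetical names).
# x[:, 0] affects both treatment take-up and the outcome, so a naive comparison
# of treated and untreated means would be confounded.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
n = 1000
x = rng.normal(size=(n, 2))                              # observed covariates
p_treat = 1 / (1 + np.exp(-(x[:, 0] + 0.5 * x[:, 1])))   # selection on observables
treated = rng.binomial(1, p_treat)
outcome = 1.5 * treated + x[:, 0] + rng.normal(0, 1, n)  # true effect = 1.5

# Step 1: estimate each unit's propensity score from observed covariates.
pscore = LogisticRegression().fit(x, treated).predict_proba(x)[:, 1]

# Step 2: match each treated unit to the control unit with the closest score.
controls = np.where(treated == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(pscore[controls].reshape(-1, 1))
_, idx = nn.kneighbors(pscore[treated == 1].reshape(-1, 1))
matched_controls = controls[idx.ravel()]

# Step 3: the estimated effect on the treated is the mean difference between
# treated units and their matched controls.
print(outcome[treated == 1].mean() - outcome[matched_controls].mean())
</syntaxhighlight>

In practice, researchers also check covariate balance after matching and may match with calipers or several neighbors rather than a single nearest neighbor.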

== Guidelines ==
=== The Power of Quasi-Experimental Methods ===
Like experimental methods, quasi-experimental methods aim to solve the problems of [[confoundedness]] and [[reverse or simultaneous causality]] in identifying the marginal effect of a change in conditions. Unlike [[Experimental Methods | experimental methods]], quasi-experiments tend to be based on policy changes or natural events that were not actually randomly assigned, but that occurred in such a way that they can be considered "[[as good as randomly assigned]]" with respect to some of the outcomes under consideration. Two examples illustrate these differences well.

First, consider a time-based shock that can potentially apply to all people participating in a given market, such as joining a job guarantee program. [[Regression Discontinuity]] and [[Event Study]] designs are natural choices in such a situation, but comparing across time (i.e., before and after the join date) at the individual level is inappropriate, since individuals may have joined the program because of other factors that would also have affected their income, such as job market seasonality, business failure, or family changes. This type of [[reverse causality]] confounds the estimated effect because the true [[counterfactual]] cannot be estimated.
 
Instead, it is necessary to use a population-level cutoff, such as the implementation of the program in a given area, in a [[Difference-in-Differences | difference-in-differences]] or [[staged rollout]] design. If only one area implements the policy, researchers can turn to [[Synthetic Control Method | synthetic control]] methods, or rely on the assumption that units on either side of the policy cutoff could not manipulate which side of the cutoff (the [[running variable]]) they fell on.
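As a minimal sketch of the difference-in-differences option (in Python, with simulated data and hypothetical variable names such as <code>income</code>, <code>treated</code>, and <code>post</code>), the program effect is estimated as the coefficient on the treated-by-post interaction:

<syntaxhighlight lang="python">
# Minimal difference-in-differences sketch (simulated data, hypothetical names).
# The coefficient on treated:post is the DiD estimate of the program effect,
# valid under the equal (parallel) trends assumption.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_areas, n_per_cell = 40, 50
area = np.repeat(np.arange(n_areas), n_per_cell * 2)
post = np.tile(np.repeat([0, 1], n_per_cell), n_areas)   # before/after rollout
treated = (area < n_areas // 2).astype(int)              # program areas
income = (10 + 0.5 * treated + 1.0 * post                # group difference and common time shock
          + 2.0 * treated * post                         # true program effect = 2.0
          + rng.normal(0, 1, area.size))
df = pd.DataFrame({"area": area, "post": post, "treated": treated, "income": income})

# OLS with a treated-by-post interaction; cluster standard errors by area,
# since the program is assigned at the area level.
did = smf.ols("income ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["area"]})
print(did.params["treated:post"], did.bse["treated:post"])
</syntaxhighlight>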
 
Second, consider a natural disaster whose impact varies across space, such as an earthquake along a fault whose epicenter is effectively random. Assuming that households had no way to foresee the timing or location of the earthquake, and that there was no serious [[differential attrition]] on critical dimensions, the earthquake can be used as a natural experiment for later-life outcomes of affected individuals. However, for the exposure measure to be truly [[exogenous]], something like distance to the faultline should be used as the treatment variable, even though estimates of local magnitude may be available.
 
This is because local magnitude may be correlated with other spatially varying characteristics (such as soil quality or hilliness) that also contribute to the outcomes of interest (such as family income), which would re-introduce the confounding problem by adding a non-random component that is correlated with a key characteristic. Distance from a randomly located disaster, though a less precise measure of "exposure", is under the right assumptions uncorrelated with almost any imaginable outcome (unlike, for example, distance to a volcano or to a tsunami-affected shoreline).
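As a minimal sketch of this logic (in Python, with simulated data and hypothetical variable names such as <code>dist_km</code> and <code>adult_earnings</code>), the exogenous exposure measure enters a reduced-form regression directly as the treatment variable:

<syntaxhighlight lang="python">
# Minimal natural-experiment sketch (simulated data, hypothetical names).
# Distance to the faultline is taken as the exogenous exposure measure and
# enters a reduced-form regression of a later-life outcome on exposure.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
dist_km = rng.uniform(0, 100, n)              # distance to the faultline
baseline_income = rng.normal(10, 2, n)        # pre-earthquake household income
# Simulated so that living closer to the fault lowers adult earnings.
adult_earnings = (20 + 0.8 * baseline_income + 0.05 * dist_km
                  + rng.normal(0, 1, n))
df = pd.DataFrame({"dist_km": dist_km, "baseline_income": baseline_income,
                   "adult_earnings": adult_earnings})

# Reduced-form regression: the coefficient on dist_km measures how the outcome
# changes with exposure to the (as good as randomly located) earthquake.
print(smf.ols("adult_earnings ~ dist_km + baseline_income", data=df).fit().summary())
</syntaxhighlight>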


==Assumptions and Limitations==
In general, quasi-experimental methods require larger [[Sampling|samples]] than [[Experimental Methods|experimental methods]]. Further, for quasi-experimental methods to provide valid and unbiased estimates of program impacts, researchers must make stronger assumptions about the control group than in '''experimental methods'''. For example, [[Difference-in-differences|difference-in-differences]] relies on the equal (parallel) trends assumption, while [[Matching|matching]] assumes that treatment and control units with the same observed characteristics do not differ systematically in unobserved characteristics that affect outcomes.
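As a sketch of the equal trends assumption in potential-outcomes notation (with <math>Y_{it}(0)</math> denoting the untreated outcome of unit <math>i</math> in period <math>t</math> and <math>D_i</math> a treatment indicator), difference-in-differences requires that treated and control groups would have followed the same average trend in the absence of treatment:

:<math>E[Y_{i1}(0) - Y_{i0}(0) \mid D_i = 1] = E[Y_{i1}(0) - Y_{i0}(0) \mid D_i = 0].</math>

This assumption cannot be tested directly, since the post-period untreated outcome of the treated group is never observed, but pre-treatment trends are often compared as an indirect check.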


== Additional Resources ==
* Robert Michael's slides on [http://www.indiana.edu/~educy520/sec6342/week_05/quasi_designs_2up.pdf Quasi-Experimental Designs]
*Gertler et al.’s [https://siteresources.worldbank.org/EXTHDOFFICE/Resources/5485726-1295455628620/Impact_Evaluation_in_Practice.pdf Impact Evaluation in Practice]


[[Category: Quasi-Experimental Methods ]]
