Quasi-Experimental Methods
Quasi-experimental methods are research designs that aim to identify the impact of a particular intervention, program, or event (a "treatment") by comparing treated units (households, groups, villages, schools, firms, etc.) to control units. While quasi-experimental methods use a control group, they differ from experimental methods in that they do not use randomization to select the control group. Quasi-experimental methods are useful for estimating the impact of a program or event when it is not ethically or logistically feasible to randomize. This page outlines common types of quasi-experimental methods.
Read First
- Common examples of quasi-experimental methods include difference-in-differences, regression discontinuity design, instrumental variables, and matching. For more details on each type, please visit their respective pages.
- In general, quasi-experimental methods require larger sample sizes and more assumptions than experimental methods in order to provide valid and unbiased estimates of program impacts.
Overview
Like experimental methods, quasi-experimental methods aim to estimate program effects free of confounding, reverse causality, and simultaneity. While quasi-experimental methods use a counterfactual, they differ from experimental methods in that they do not randomize treatment assignment. Instead, quasi-experimental methods either exploit existing circumstances in which treatment assignment has a sufficient element of randomness, as in regression discontinuity design or event studies, or simulate an experimental counterfactual by constructing a control group as similar as possible to the treatment group, as in propensity score matching. Other examples of quasi-experimental methods include instrumental variables and difference-in-differences.
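To illustrate the idea of constructing a comparison group as similar as possible to the treatment group, below is a minimal sketch of one-to-one propensity score matching on simulated data, written in Python with pandas and statsmodels. All variable names (y, treated, x1, x2), the data-generating process, and the matching rule are hypothetical and chosen only for illustration; in practice researchers typically rely on dedicated matching routines and check balance and overlap before estimating effects.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical cross-section in which programme take-up depends only on
    # observed characteristics x1 and x2 (selection on observables).
    rng = np.random.default_rng(1)
    n = 2000
    x1, x2 = rng.normal(size=n), rng.normal(size=n)
    take_up = 1 / (1 + np.exp(-(0.5 * x1 + 0.5 * x2)))
    treated = (rng.uniform(size=n) < take_up).astype(int)
    y = 1 + 2.0 * treated + x1 + x2 + rng.normal(size=n)   # true effect of 2
    df = pd.DataFrame({"y": y, "treated": treated, "x1": x1, "x2": x2})

    # Step 1: estimate the propensity score with a logit of treatment on observables.
    pscore = smf.logit("treated ~ x1 + x2", data=df).fit(disp=0).predict(df)

    # Step 2: one-to-one nearest-neighbour matching (with replacement) on the
    # propensity score, then compare treated units with their matched controls.
    t = df[df["treated"] == 1]
    c = df[df["treated"] == 0]
    gaps = np.abs(pscore.loc[t.index].values[:, None] - pscore.loc[c.index].values[None, :])
    nearest = gaps.argmin(axis=1)
    att = (t["y"].values - c["y"].values[nearest]).mean()
    print(f"Matched estimate of the effect on the treated: {att:.2f}")

Matching with replacement keeps every treated unit but reuses comparison units; whatever the mechanics, the quality of the estimate rests entirely on the assumption that no unobserved characteristic drives both take-up and outcomes.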
Assumptions and Limitations
In general, quasi-experimental methods require larger samples than experimental methods. Further, for quasi-experimental methods to provide valid and unbiased estimates of program impacts, researchers must make more assumptions about the control group than in experimental methods. For example, difference-in-differences relies on the equal trends assumption (see Difference-in-Differences for more details), while matching assumes that, conditional on the observed characteristics used for matching, there are no systematic differences in unobserved characteristics between the treatment and control groups.
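As an illustration of where the equal trends assumption enters, below is a minimal difference-in-differences sketch on simulated two-period panel data, again in Python with pandas and statsmodels. The variable names (id, treated, post, y) and the data-generating process are hypothetical; the coefficient on the interaction term is the difference-in-differences estimate.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical two-period panel: 500 units, some exposed to a programme
    # between the two periods. Treatment is taken as given, not randomised.
    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "id": np.repeat(np.arange(n), 2),
        "post": np.tile([0, 1], n),
        "treated": np.repeat(rng.integers(0, 2, n), 2),
    })
    # Outcome with a true programme effect of 1.5 in the post period
    df["y"] = (2 + 1.0 * df["treated"] + 0.5 * df["post"]
               + 1.5 * df["treated"] * df["post"]
               + rng.normal(size=len(df)))

    # Difference-in-differences: the coefficient on treated:post estimates the
    # programme effect, and is unbiased only under the equal trends assumption.
    did = smf.ols("y ~ treated * post", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["id"]})
    print(did.params["treated:post"], did.bse["treated:post"])

Because the simulation builds in equal trends by construction, the regression recovers the true effect; with real data, plotting pre-treatment outcome trends for the treatment and control groups is the usual first check on this assumption.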
Additional Resources
- Robert Michael's slides on Quasi-Experimental Designs
- Gertler et al.’s Impact Evaluation in Practice