Experimental Methods

Read First

Experimental methods are research designs in which the investigator explicitly and intentionally induces exogenous variation in take-up of the program being evaluated. Experimental methods, such as Randomized Control Trials, are typically considered the gold-standard design for impact evaluation, since by construction take-up of the treatment is uncorrelated with other characteristics of the treated population. Under these conditions, the analyst can construct a regression model in which the estimate of the treatment effect is unbiased.
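In the simplest case this logic can be written as a single regression (a standard textbook sketch; the notation here is introduced for illustration, not taken from a specific study):

 Y_i = \alpha + \beta T_i + \varepsilon_i, \qquad E[\varepsilon_i \mid T_i] = 0 \text{ by randomization}

so the ordinary least squares estimate of \beta is unbiased for the average treatment effect.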


Guidelines

The Power of Experimental Methods

Experimental methods, such as Randomized Control Trials, are the gold standard for impact evaluation because they impose known, researcher-controlled variation on the study population. This guarantees that the estimated intervention effect is not confounded (since assignment is not correlated with any external variable) and that causality is identified (since units cannot select into treatment). Randomization does, however, raise natural concerns about differential take-up and attrition, which must be addressed in every sample where noncompliance is a possibility.

Experimental methods address two primary sources of bias (illustrated in the sketch after this list):

  • First, the estimate may be confounded, in the sense that it masks an effect produced in reality by another, correlated variable. For example, schooling may improve the quality of job offers via network exposure, even if the education itself adds no value. In that case the result remains "correct" in the sense that those who got more schooling got higher earnings, but "incorrect" in the sense that the estimate is not the marginal value of education.
  • Second, the direction of causality may be reversed or simultaneous. For example, individuals who are highly motivated may choose to complete more years of schooling and also be more competent at work in general; or those who are highly motivated by financial returns in the workplace may choose more schooling because of that motivation.
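A minimal simulation makes the first problem concrete. In the sketch below (illustrative only: the variable names and the "true" effect of zero are invented for the example), an unobserved confounder drives both self-selected take-up and the outcome, so the naive comparison is badly biased while the randomized comparison is not:

 import numpy as np
 
 rng = np.random.default_rng(0)
 n = 100_000
 
 # Unobserved confounder: motivation raises both schooling take-up and earnings.
 motivation = rng.normal(size=n)
 
 # Self-selected take-up: more motivated people choose more schooling.
 selected = (motivation + rng.normal(size=n)) > 0
 # Randomized take-up: assignment is independent of motivation.
 randomized = rng.random(n) < 0.5
 
 # True treatment effect is zero; earnings depend only on motivation.
 def earnings():
     return 2.0 * motivation + rng.normal(size=n)
 
 y_sel, y_rand = earnings(), earnings()
 
 # Difference in means (equivalent to OLS on a binary treatment):
 print(y_sel[selected].mean() - y_sel[~selected].mean())        # large and positive: confounded
 print(y_rand[randomized].mean() - y_rand[~randomized].mean())  # close to zero: unbiased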


Common Types of Experimental Methods

Experimental methods typically involve directly randomized variation in the programs or interventions offered to study populations. This variation is usually summarized broadly as "Randomized Control Trials", but it can take several forms: cross-unit variation over one or more periods (cross-sectional or difference-in-differences designs); within-participant variation (panel studies); or treatment randomization at the cluster level with further variation within clusters (multi-level designs).
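As a concrete sketch of the last design (hypothetical: the cluster and household identifiers are invented for the example), treatment can be randomized across clusters and an additional component randomized within treated clusters:

 import random
 
 random.seed(42)
 
 # Hypothetical frame: 10 clusters (villages) of 20 households each.
 clusters = [f"village_{i:02d}" for i in range(10)]
 households = {c: [f"{c}_hh_{j:02d}" for j in range(20)] for c in clusters}
 
 # Level 1: randomize treatment status across clusters.
 treated_clusters = set(random.sample(clusters, k=5))
 
 # Level 2: within each treated cluster, randomize half the households
 # into a high-intensity arm, creating within-cluster variation.
 assignment = {}
 for c in clusters:
     if c not in treated_clusters:
         for hh in households[c]:
             assignment[hh] = "control"
     else:
         high = set(random.sample(households[c], k=10))
         for hh in households[c]:
             assignment[hh] = "treated_high" if hh in high else "treated_low"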

Experimental variation is also possible on the research side, through randomized variation in the survey methodology itself. For example, public health surveys have used "mystery patients" to assess the quality of medical advice given in primary care settings. Causal differences in outcomes can then be estimated by comparing these results against providers who receive medical vignettes instead of mystery patients, by varying the information the patient gives the provider, or by varying the setting in which the interaction takes place.

Additionally, designs like endorsement experiments and list experiments randomly vary the contents of the survey itself to elicit accurate responses from participants when there is concern about social desirability bias or Hawthorne effects.
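In a list experiment, for example, respondents are randomly assigned to receive either a baseline list of innocuous items or the same list plus the sensitive item; the difference in the mean number of items reported estimates the prevalence of the sensitive behavior without any individual respondent revealing it. A minimal sketch of that estimator follows (with simulated responses; the 30% prevalence and the sample sizes are invented for illustration):

 import numpy as np
 
 rng = np.random.default_rng(7)
 n = 5_000
 
 # Each respondent endorses some number of 4 innocuous baseline items.
 baseline = rng.binomial(4, 0.5, size=2 * n)
 
 # 30% of respondents (unknown to the analyst) hold the sensitive trait.
 sensitive = rng.random(2 * n) < 0.30
 
 # Random assignment: the treatment group's list also contains the sensitive item.
 treatment = np.zeros(2 * n, dtype=bool)
 treatment[rng.choice(2 * n, size=n, replace=False)] = True
 
 reported = baseline + (treatment & sensitive)
 
 # Difference in mean item counts estimates prevalence (about 0.30).
 print(reported[treatment].mean() - reported[~treatment].mean())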

Additional Resources

Running Randomized Evaluations (http://runningres.com/) - the website includes all content from the book Running Randomized Evaluations, supplemental materials such as case studies, and a blog.

Impact Evaluation Toolkit from the Results-Based Financing Team at the World Bank - Impact Evaluation Questions (http://web.worldbank.org/WBSITE/EXTERNAL/TOPICS/EXTHEALTHNUTRITIONANDPOPULATION/EXTHSD/EXTIMPEVALTK/0,,contentMDK:23262154~pagePK:64168427~piPK:64168435~theSitePK:8811876,00.html)

Impact Evaluation Design Principles from the OECD (http://www.oecd.org/dac/evaluation/dcdndep/37671602.pdf)