<onlyinclude>
Experimental methods are research designs in which the researcher explicitly and intentionally induces [[Exogeneity Assumption | exogenous variation]] in the intervention assignment to facilitate causal inference. Experimental methods typically include directly [[Randomization in Stata | randomized]] variation of programs or interventions. This page outlines common types of experimental methods and explains how they avoid biased results.  
Impact evaluations aim to identify the impact of a particular intervention or program (a "treatment") by comparing treated units (households, groups, villages, schools, firms, etc.) to control units. Well-designed impact evaluations estimate the impact that can be causally attributed to the treatment, i.e., the impact that results from the treatment itself and not from other factors. The main challenge in designing a rigorous impact evaluation is identifying a control group that is comparable to the treatment group; the gold-standard method for assigning treatment and control is randomization. To design a rigorous impact evaluation, it is also essential to have a clear understanding of the [[Theory of Change]].
</onlyinclude>
== Read First ==
*Experimental methods introduce exogeneity, allowing researchers to draw conclusions about the effects of an event or a program. (See [[Randomized Evaluations: Principles of Study Design|study design]] for key principles of designing an evaluation.)
*Without experimental methods, results may be biased by confounding variables or reverse causality.
*During [[Primary Data Collection | data collection]] and [[Data Analysis | analysis]], make sure to consider and account for differential take-up, compliance, and attrition between randomized groups.


== Common Types of Experimental Methods ==
Experimental methods typically include directly [[Randomization in Stata | randomized]] variation of programs or interventions offered to study populations. This variation is usually broadly summarized as "[[Randomized Control Trials]]", but can include cross-unit variation with one or more periods ([[Cross-sectional Data|cross-sectional]] or [[Difference-in-Differences|difference-in-differences]] designs); within-participant variation ([[Panel Data|panel]] studies); or treatment randomization at a [[Randomized Control Trials#Clustered RCTs | clustered]] level with further variation within clusters (multi-level designs), for example.
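
To make the first of these concrete, the sketch below shows one common way to implement a simple individual-level randomization in Stata (see [[Randomization in Stata]] for detailed guidance). This is a minimal sketch rather than the only correct approach: the variable name <code>participant_id</code> is hypothetical, and the snippet assumes a dataset with one row per participant is already in memory. Fixing both the seed and the sort order matters, since without them re-running the do-file would silently produce a different assignment.

<syntaxhighlight lang="stata">
* Minimal sketch of a simple individual-level randomization.
* Assumes one row per participant; participant_id is a hypothetical name.
isid participant_id                // stop if the ID does not uniquely identify rows
set seed 650231                    // fix the seed so the assignment is reproducible
sort participant_id                // fix the sort order so the draw is replicable
generate rand = runiform()         // one uniform random draw per participant
sort rand                          // order participants randomly
generate treatment = (_n <= _N/2)  // assign the first half to treatment
tabulate treatment                 // check the resulting group sizes
</syntaxhighlight>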
Researchers can also achieve exogenous variation on the research side through randomized variation in the survey methodology. For example, public health surveys may use "[[mystery patients]]" to identify the quality of medical advice given to people in primary care settings. By comparing these outcomes with those of health care providers given [[medical vignettes]] instead of mystery patients, by changing the information the patient gives to the provider, or by changing the setting in which the interaction is conducted, researchers can estimate causal differences in outcomes.

Additionally, designs like [[endorsement experiments]] and [[list experiments]] randomly vary the contents of the survey itself to elicit accurate responses from participants when there is concern about [[social desirability bias]] or [[Hawthorne effects]].
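
As an illustration of how a list experiment is typically analyzed, the sketch below uses the standard difference-in-means estimator. The variable names are hypothetical: <code>item_count</code> is the number of list items a respondent reports, and <code>long_list</code> indicates that the respondent received the list containing the sensitive item.

<syntaxhighlight lang="stata">
* Difference-in-means estimator for a list experiment (hypothetical names).
* Control respondents count J innocuous items; treated respondents count the
* same J items plus one sensitive item. No respondent reveals any single answer.
ttest item_count, by(long_list)
* The difference in mean counts estimates the share of respondents who
* endorse the sensitive item, mitigating social desirability bias.
</syntaxhighlight>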
== Experimental Methods as a Solution for Bias ==
Experimental variation imposes known variation on the study population, guaranteeing an unconfounded intervention effect. Without exogenous variation, however, the treatment effect may be biased by an external variable. Consider the following examples of non-experimental research in which bias confounds results:


*The estimated effect of the intervention on the outcome may mask an effect produced by another, correlated variable. For example, schooling may improve the quality of job offers via network exposure, even if the education itself adds no value. In this case the result would remain "correct" in the sense that those who got more schooling did, in fact, receive higher earnings, but "incorrect" in the sense that the estimate does not represent the marginal value of education. Through randomization and exogeneity, experimental methods ensure that the analysis is not biased by confounding variables of this kind.
*The direction of causality may be reversed or simultaneous. For example, individuals who are highly motivated may choose to complete more years of schooling and also be more competent at work in general; or those who are highly motivated by financial returns in the workplace may choose more schooling because of that motivation. Again, through randomization and exogeneity, experimental methods avoid reverse causality and endogeneity of this kind. The simulation sketched below illustrates both problems.
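
The following simulation is an illustrative sketch of both problems, with hypothetical variable names and magnitudes: unobserved motivation drives both schooling and earnings, schooling itself has no true effect, and only random assignment recovers that truth.

<syntaxhighlight lang="stata">
* Simulate a population in which unobserved motivation drives both
* schooling choices and earnings, while schooling itself adds nothing.
clear
set seed 20210413
set obs 10000
generate motivation = rnormal()

* Observational world: the motivated select into schooling.
generate schooling = (motivation + rnormal() > 0)
generate earnings  = 2*motivation + rnormal()  // true schooling effect is zero

* A naive regression attributes motivation's effect to schooling.
regress earnings schooling    // biased: coefficient is well above zero

* Experimental world: assignment by coin flip is uncorrelated with
* motivation by construction, so the same regression is unbiased.
generate assigned = (runiform() < 0.5)
regress earnings assigned     // estimate is now centered on zero
</syntaxhighlight>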


== Additional Resources ==
*See JPAL’s [https://www.povertyactionlab.org/sites/default/files/research-resources/2016.08.31-Impact-Evaluation-Methods.pdf Impact Evaluation Methods Chart].
 
*[http://runningres.com/ Running Randomized Evaluations] includes all content from the book Running Randomized Evaluations, supplemental materials like case studies, and a blog.
*The World Bank and the IDB's [http://www.worldbank.org/en/programs/sief-trust-fund/publication/impact-evaluation-in-practice Impact Evaluation in Practice] contains detailed guidance on how to implement and analyze impact evaluations using experimental methods.
 
*See [http://www.oecd.org/dac/evaluation/dcdndep/37671602.pdf Impact Evaluation Design Principles] from OECD.
*Impact Evaluation Toolkit from the Results-Based Financing Team at the World Bank: [http://web.worldbank.org/WBSITE/EXTERNAL/TOPICS/EXTHEALTHNUTRITIONANDPOPULATION/EXTHSD/EXTIMPEVALTK/0,,contentMDK:23262154~pagePK:64168427~piPK:64168435~theSitePK:8811876,00.html Impact Evaluation Questions]
*[http://www.bhub.org The Behavioral Evidence Hub (B-Hub)] is a continually updated collection of strategies drawn from insights about human behavior that are proven to solve real-world problems. All results published on the B-Hub are evaluated with randomized controlled trials.
 
*For information on quasi-experimental methods, see [[Quasi-Experimental Methods]].
[[Category: Research Design]]
 
[[Category: Experimental Methods]]
