Experimental methods are research designs in which the researcher explicitly and intentionally induces [[Exogeneity Assumption | exogenous variation]] in the intervention assignment to facilitate causal inference. Experimental methods typically include directly [[Randomization in Stata | randomized]] variation of programs or interventions. This page outlines common types of experimental methods and explains how they avoid biased results.
==Read First==
*Experimental methods introduce exogeneity, allowing researchers to draw conclusions about the effects of an event or a program. (See [[Randomized Evaluations: Principles of Study Design|study design]] for key principles of designing an evaluation.)
*Without experimental methods, results may be biased by confounding variables or reverse causality.
*During [[Primary Data Collection | data collection]] and [[Data Analysis | analysis]], make sure to consider and account for differential take-up, compliance, and attrition between randomized groups.
== Common Types of Experimental Methods ==
Experimental methods typically include directly [[Randomization in Stata | randomized]] variation of programs or interventions offered to study populations. This variation is usually broadly summarized as "[[Randomized Control Trials]]," but can include, for example, cross-unit variation with one or more periods (cross-sectional designs); within-participant variation (panel studies); or treatment randomization at a [[Randomized Control Trials#Clustered RCTs | clustered]] level with further variation within clusters (multi-level designs).
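The distinction between unit-level and clustered randomization can be sketched in a few lines of code. This is an illustrative example, not taken from the wiki; the household and village identifiers are hypothetical.

```python
# Illustrative sketch: simple (unit-level) vs. clustered randomization.
# All names (household, village) are hypothetical placeholders.
import random

random.seed(20210413)  # fix the seed so the assignment is reproducible

units = [f"household_{i}" for i in range(100)]
# Ten households per village, ten villages in total.
clusters = {u: f"village_{int(u.split('_')[1]) // 10}" for u in units}

# Simple randomization: each unit is assigned independently by coin flip.
simple_assignment = {u: random.random() < 0.5 for u in units}

# Clustered randomization: assignment happens at the village level, so
# every household in a village shares the same treatment status.
cluster_ids = sorted(set(clusters.values()))
treated_clusters = set(random.sample(cluster_ids, k=len(cluster_ids) // 2))
clustered_assignment = {u: clusters[u] in treated_clusters for u in units}
```

Under the clustered design, any two households in the same village always end up in the same arm, which is why standard errors must later account for the clustering.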
Researchers can also achieve exogenous variation on the research side through randomized variation in the survey methodology. For example, public health studies may use mystery patients to measure the quality of medical advice given to people in primary care settings. By comparing outcomes across providers who receive medical vignettes instead of mystery patients, by varying the information the patient gives the provider, or by varying the setting in which the interaction takes place, researchers can estimate causal differences in outcomes. Other designs, such as endorsement experiments and [[List Experiments | list experiments]], randomly vary the contents of the survey itself.
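The list-experiment logic mentioned above can be made concrete with a small sketch. This is a hedged illustration with toy data (not from the wiki): respondents are randomized into a control arm that sees J baseline items and a treatment arm that sees J + 1 items (the baseline items plus a sensitive item), and the difference in mean item counts estimates the prevalence of the sensitive behavior.

```python
# Minimal difference-in-means estimator for a list experiment.
# The data below are invented toy values for illustration only.
def list_experiment_estimate(control_counts, treatment_counts):
    """Estimate sensitive-item prevalence as the treatment-control
    difference in mean reported item counts."""
    mean_c = sum(control_counts) / len(control_counts)
    mean_t = sum(treatment_counts) / len(treatment_counts)
    return mean_t - mean_c

# Control arm counts over 3 baseline items; treatment arm counts over
# 4 items (3 baseline + 1 sensitive).
control = [1, 2, 1, 2, 2, 1]
treatment = [2, 3, 1, 3, 2, 2]
estimate = list_experiment_estimate(control, treatment)  # ~0.67 here
```

Because no respondent ever reveals their answer to the sensitive item directly, the design relies entirely on the randomization of arms for identification.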
==Experimental Methods as a Solution for Bias==
Experimental variation imposes known variation on the study population, guaranteeing an unconfounded estimate of the intervention effect. Without exogenous variation, however, the treatment effect may be biased by an external variable. Consider the following examples of non-experimental research in which bias confounds the results:
* The estimate of the intervention's effect on the outcome may mask an effect produced by another, correlated variable. For example, schooling may improve the quality of job offers via network exposure, even if the education itself adds no value. In this case the result would remain "correct" in the sense that those who got more schooling did, in fact, receive higher earnings, but "incorrect" in the sense that the estimate does not represent the marginal value of education. Through randomization and exogeneity, experimental methods ensure that the analysis is not biased by confounding variables like the one highlighted above.
*The direction of causality may be reversed or simultaneous. For example, individuals who are highly motivated may choose to complete more years of schooling and also be more competent at work in general; or those who are highly motivated by financial returns in the workplace may choose more schooling because of that motivation. Again, through randomization and exogeneity, experimental methods avoid reverse causality and endogeneity like that highlighted above.
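The schooling example above can be simulated. In this illustrative sketch (not from the wiki, with invented numbers), a "motivation" variable drives both selection into schooling and earnings, while schooling's true causal effect is set to zero; the naive comparison is badly biased, and the randomized comparison recovers the true (zero) effect.

```python
# Simulation of confounding: motivation drives both schooling choice and
# earnings; schooling itself has zero effect by construction.
import random

random.seed(1)
N = 20000

motivation = [random.random() for _ in range(N)]

def earnings(schooled, motiv):
    # True model: earnings depend on motivation only; the schooling
    # argument is deliberately ignored (zero causal effect).
    return 10 * motiv + random.gauss(0, 1)

# Self-selection: more motivated people choose schooling.
selected = [m > 0.5 for m in motivation]
y_sel = [earnings(s, m) for s, m in zip(selected, motivation)]

# Randomization: schooling assigned by coin flip, independent of motivation.
randomized = [random.random() < 0.5 for _ in range(N)]
y_rand = [earnings(s, m) for s, m in zip(randomized, motivation)]

def diff_in_means(y, treat):
    t = [yi for yi, ti in zip(y, treat) if ti]
    c = [yi for yi, ti in zip(y, treat) if not ti]
    return sum(t) / len(t) - sum(c) / len(c)

naive = diff_in_means(y_sel, selected)            # biased upward (~5)
experimental = diff_in_means(y_rand, randomized)  # close to the true 0
```

The naive comparison attributes the motivation gap to schooling; randomizing assignment breaks the link between schooling and motivation, so the estimate collapses to the true effect.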
== Additional Resources ==
*See J-PAL’s [https://www.povertyactionlab.org/sites/default/files/research-resources/2016.08.31-Impact-Evaluation-Methods.pdf Impact Evaluation Methods Chart].
*[http://runningres.com/ Running Randomized Evaluations] includes all content from the book Running Randomized Evaluations, supplemental materials like case studies, and a blog.
*The World Bank and the IDB’s [http://www.worldbank.org/en/programs/sief-trust-fund/publication/impact-evaluation-in-practice Impact Evaluation in Practice] contains detailed information on how to implement and analyze impact evaluations using experimental methods.
*See [http://www.oecd.org/dac/evaluation/dcdndep/37671602.pdf Impact Evaluation Design Principles] from the OECD.
*[http://www.bhub.org The Behavioral Evidence Hub (B-Hub)] is a continually updated collection of strategies drawn from insights about human behavior that are proven to solve real-world problems. All results published on the B-Hub are evaluated with randomized controlled trials.
*For information on quasi-experimental methods, see [[Quasi-Experimental Methods]].
[[Category: Research Design]]
[[Category: Experimental Methods]]