Randomization Inference
Introduction
Randomization inference is a statistical approach for calculating regression p-values that reflect the variation in experimental data arising from the random assignment of treatment itself. When the researcher controls the treatment assignment of the entire observed group, variation arises from the treatment assignment rather than from the sampling strategy, and therefore p-values based on the randomization are more appropriate than "standard" p-values.
Motivation: Baseline Balance in Experimental Data
Recent discussions have pointed out that "baseline balance" t-tests on datasets where treatment was randomly assigned are conceptually problematic. The p-values from t-tests are properly interpreted as the estimated probability that the observed difference between the sampled groups would have arisen if those samples had been drawn from underlying sampling frames with no true mean difference. In a randomization framework, however, there is no underlying population from which the observations are drawn: the data constitute the full universe of units in the study, so the observed differences are exact and "testing" them reveals no information.
Randomization Inference: Is the Treatment Effect Significant?
The same logic extends to differences in outcome variables that the researcher wants to investigate for a causal response to a randomly assigned treatment. The differences between the treatment and control groups are exact because the full universe of units is typically observed, so "sampling variation" should not be used to judge whether the difference between the treatment and control groups is statistically significant. Rather than estimating the variation across hypothetical draws from an underlying distribution (the mathematical approach behind "standard" p-values), the researcher should compute p-values from the variation across hypothetical treatment assignments, since that is the source of variation in a randomized experiment.
Calculating Randomization Inference p-Values
Thankfully, this is easy to implement with modern statistical software. The steps are conceptually straightforward in a Monte Carlo framework:
1. Preserve the original treatment assignment
2. Generate placebo treatment statuses according to the original assignment method
3. Estimate the original regression equation with an additional term for the placebo treatment
4. Repeat steps 2 and 3 a large number of times
5. The randomization inference p-value is the proportion of repetitions in which the placebo treatment effect was larger (in absolute value) than the estimated treatment effect
Because the treatment assignment is the source of variation in the experimental design, this p-value is correctly interpreted as "the probability that a treatment effect at least as large as the one estimated would have been observed under different hypothetical realizations of the chosen randomization method".
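For concreteness, below is a minimal sketch of these steps in Python using simulated data; the sample size, effect size, and simple 50/50 assignment method are hypothetical. For simplicity, each placebo assignment replaces the actual treatment in the re-estimated regression rather than entering as an additional term.

```python
import numpy as np

rng = np.random.default_rng(0)

def assign_treatment(n, rng):
    # The original assignment method: simple randomization, half treated, half control.
    treat = np.zeros(n, dtype=int)
    treat[rng.choice(n, size=n // 2, replace=False)] = 1
    return treat

# Hypothetical experimental data for illustration (true effect of 0.5).
n = 200
treat = assign_treatment(n, rng)
y = 1.0 + 0.5 * treat + rng.normal(size=n)

def ols_slope(y, x):
    # OLS coefficient on x from a regression of y on a constant and x.
    X = np.column_stack([np.ones(len(x)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# Estimated treatment effect under the actual (preserved) assignment.
actual_effect = ols_slope(y, treat)

# Re-draw placebo assignments with the same method and re-estimate each time.
n_reps = 2000
placebo_effects = np.array([ols_slope(y, assign_treatment(n, rng)) for _ in range(n_reps)])

# RI p-value: share of placebo effects at least as large (in absolute value) as the estimate.
ri_pvalue = np.mean(np.abs(placebo_effects) >= np.abs(actual_effect))
print(f"estimated effect = {actual_effect:.3f}, randomization inference p = {ri_pvalue:.3f}")
```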
Implications for Experimental Design
When planning to use randomization inference for an experimental analysis, it is also important to account for this source of variation during experimental design. In particular, both the power calculations and the actual randomization should be performed in a way that reflects the randomization-inference method of p-value calculation.
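One way to do this is to run power calculations by simulation, using the randomization-inference p-value itself as the rejection criterion. Below is a minimal sketch along those lines, assuming a simple 50/50 randomization and a placeholder data-generating process; the sample size, effect size, and noise level are hypothetical values to be replaced with design-specific ones.

```python
import numpy as np

def ri_pvalue(y, treat, n_perms, rng):
    # Two-sided randomization-inference p-value for a difference in means.
    actual = y[treat == 1].mean() - y[treat == 0].mean()
    n = len(y)
    placebo = np.empty(n_perms)
    for r in range(n_perms):
        t = np.zeros(n, dtype=int)
        t[rng.choice(n, size=n // 2, replace=False)] = 1
        placebo[r] = y[t == 1].mean() - y[t == 0].mean()
    return np.mean(np.abs(placebo) >= np.abs(actual))

def simulated_power(n, effect, n_sims=200, n_perms=200, alpha=0.05, seed=0):
    # Share of simulated experiments in which the RI p-value rejects at level alpha.
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        treat = np.zeros(n, dtype=int)
        treat[rng.choice(n, size=n // 2, replace=False)] = 1
        y = effect * treat + rng.normal(size=n)   # assumed data-generating process
        rejections += ri_pvalue(y, treat, n_perms, rng) < alpha
    return rejections / n_sims

print(f"simulated power (n=100, effect=0.4): {simulated_power(100, 0.4):.2f}")
```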
Athey and Imbens (2016) provide an extensive guide to these considerations. Major takeaways include:
- Power is maximized by forcing treatment-control balance on relevant baseline observables or outcome levels. This is achieved in theory by maximally partitioning into strata (2 treatment units and 2 control units in each, assuming a balanced design with one treatment arm), with fixed effects for the strata in the final regression; a sketch of this stratified design follows the list.
- Pairwise randomization is inappropriate because within-strata variances cannot be computed.
- The "re-randomization" approach to force balance is typically inappropriate.
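To illustrate the first point, below is a minimal sketch of a stratified assignment with strata of four units (two treated, two control) and a final regression that includes stratum fixed effects; the data-generating process and effect size are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def stratified_assignment(n_strata, rng):
    # Within each stratum of four units, assign exactly two to treatment and two to control.
    strata = np.repeat(np.arange(n_strata), 4)
    treat = np.empty(4 * n_strata, dtype=int)
    for s in range(n_strata):
        pattern = np.array([1, 1, 0, 0])
        rng.shuffle(pattern)
        treat[strata == s] = pattern
    return strata, treat

# Hypothetical data: units are grouped into strata on a baseline characteristic,
# and the outcome depends on that characteristic as well as on treatment.
n_strata = 50
strata, treat = stratified_assignment(n_strata, rng)
stratum_level = np.repeat(rng.normal(size=n_strata), 4)
y = stratum_level + 0.3 * treat + rng.normal(scale=0.5, size=4 * n_strata)

# Final regression: treatment indicator plus a fixed effect (dummy) for each stratum.
stratum_dummies = (strata[:, None] == np.arange(n_strata)[None, :]).astype(float)
X = np.column_stack([treat, stratum_dummies])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"treatment effect with stratum fixed effects: {beta[0]:.3f}")
```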