Randomization Inference

Read First

Randomization inference is a statistical procedure for calculating regression p-values that reflect the true source of variation in data from a randomized experiment. When the researcher controls the treatment assignment of the entire observed group, that variation arises from the treatment assignment (rather than from the sampling strategy), so p-values based on the randomization are more appropriate than "standard" p-values.

Background: Baseline Balance

Recent discussions have pointed out that "baseline balance" t-tests on datasets where treatment was randomly assigned are conceptually problematic. The p-values from such t-tests are properly interpreted as the estimated probability that the observed difference between the sampled groups would have arisen if those samples had been drawn from underlying sampling frames with no true mean difference. In a randomization framework, however, there is no underlying population from which the observations are drawn: the data are the full universe of the treated and control units, so the observed differences are exact, and "testing" them reveals no information.

Randomization Inference: Is the Treatment Effect Significant?

The same logic extends to differences in outcome variables that the researcher wants to investigate for a causal response to a randomly assigned treatment. Because the full universe of units is typically observed, the differences between the treatment and control groups are exact, and "sampling variation" should not be used to judge whether the difference between the treatment and control groups is statistically significant. Rather than estimating the variation across hypothetical draws from an underlying distribution (the mathematical approach behind "standard" p-values), the researcher should compute p-values from the variation across hypothetical treatment assignments, since that is the source of variation in a randomized experiment.
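
Formally, a standard way to write this (sketched here for clarity rather than taken from this page): under the sharp null hypothesis that the treatment has no effect on any unit, the randomization inference p-value is the probability, taken over the treatment assignments T that the chosen randomization method could have produced, that the estimated effect is at least as large in magnitude as the one actually observed:

  p_{RI} = \Pr_{T \sim \text{assignment mechanism}} \left( |\hat{\tau}(T)| \ge |\hat{\tau}(T^{obs})| \right)

where \hat{\tau}(T) denotes the treatment effect estimated when assignment T is used in place of the actual assignment T^{obs}.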

Randomization Inference p-Values

Thankfully, this is easy to implement with modern statistical software. The steps are conceptually straightforward in a Monte Carlo framework (a minimal sketch follows the list):

  1. Preserve the original treatment assignment and the estimated treatment effect
  2. Generate placebo treatment statuses according to the original assignment method
  3. Estimate the original regression equation with an additional term for the placebo treatment
  4. Repeat steps 2–3 many times
  5. The randomization inference p-value is the proportion of iterations in which the placebo treatment effect was at least as large in magnitude as the estimated treatment effect
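
The following is a minimal illustrative sketch of these steps in Python, assuming a completely randomized design with a single binary treatment. For brevity it estimates the effect with a simple difference in means rather than the full regression, and the variable names and simulated data are hypothetical rather than taken from this page.

  import numpy as np

  rng = np.random.default_rng(seed=0)

  # Simulated example data: 200 units, half assigned to treatment at random,
  # with a true treatment effect of 0.5 on the outcome.
  n = 200
  n_treated = n // 2
  treat = np.zeros(n, dtype=int)
  treat[rng.choice(n, size=n_treated, replace=False)] = 1
  y = 0.5 * treat + rng.normal(size=n)

  def diff_in_means(outcome, assignment):
      # Estimated treatment effect: mean(treated) minus mean(control).
      return outcome[assignment == 1].mean() - outcome[assignment == 0].mean()

  # Step 1: preserve the original assignment and its estimated effect.
  observed_effect = diff_in_means(y, treat)

  # Steps 2-4: repeatedly draw placebo assignments with the same assignment
  # method and re-estimate the "effect" of each placebo assignment.
  n_permutations = 1000
  placebo_effects = np.empty(n_permutations)
  for k in range(n_permutations):
      placebo = np.zeros(n, dtype=int)
      placebo[rng.choice(n, size=n_treated, replace=False)] = 1
      placebo_effects[k] = diff_in_means(y, placebo)

  # Step 5: the randomization inference p-value is the share of placebo
  # effects at least as large in magnitude as the observed effect.
  p_value = np.mean(np.abs(placebo_effects) >= np.abs(observed_effect))
  print(f"observed effect = {observed_effect:.3f}, RI p-value = {p_value:.3f}")

With a modest number of permutations the p-value is only approximated; increasing the number of placebo draws (or enumerating all possible assignments when feasible) tightens the approximation.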


Because the treatment assignment is the source of variation in the experimental design, the p-value is correctly interpreted as "the probability that a treatment effect at least as large as the estimated one would have been observed under different hypothetical realizations of the chosen randomization method".

Implications for Experimental Design

When planning to use randomization inference for an experimental analysis, it is also important to account for this different source of variation during experimental design. In particular, both the power calculations and the actual randomization should reflect the randomization-inference method of p-value calculation.

Athey and Imbens (2016) (https://www.povertyactionlab.org/sites/default/files/publications/athey_imbens_june19.pdf) provide an extensive guide to these considerations.

  1. Power is maximized by forcing treatment-control balance on relevant baseline observables or outcome levels. In theory this is achieved by partitioning the sample into the smallest possible strata (2 treatment units and 2 control units in each, assuming a balanced design with one treatment arm), with fixed effects for the strata in the final regression (see the sketch after this list).
  2. Pairwise (matched-pair) randomization is inappropriate because, with only one treatment and one control unit per stratum, within-stratum variances cannot be computed.
  3. The "re-randomization" approach to forcing balance is typically inappropriate.
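
As an illustration of the stratified design in point 1, here is a minimal Python sketch, assuming a single binary treatment and strata of four units formed on a baseline measure of the outcome; the data frame and column names (baseline, stratum, treat) are hypothetical rather than taken from this page or from Athey and Imbens (2016).

  import numpy as np
  import pandas as pd

  rng = np.random.default_rng(seed=1)

  # Example frame: 40 units with a baseline measure of the outcome.
  df = pd.DataFrame({"unit_id": range(40), "baseline": rng.normal(size=40)})

  # Sort on the baseline outcome and form strata of 4 adjacent units, so each
  # stratum is internally similar on the baseline measure.
  df = df.sort_values("baseline").reset_index(drop=True)
  df["stratum"] = df.index // 4

  # Within each stratum, assign exactly 2 of the 4 units to treatment.
  df["treat"] = 0
  for stratum, idx in df.groupby("stratum").groups.items():
      chosen = rng.choice(list(idx), size=len(idx) // 2, replace=False)
      df.loc[chosen, "treat"] = 1

  # The estimating regression would then include stratum fixed effects,
  # e.g. y ~ treat + C(stratum) in a formula-based regression package.
  print(df.groupby("stratum")["treat"].sum())  # 2 treated units per stratum

Sorting on a baseline outcome before forming strata is one simple way to force balance; any baseline variable(s) believed to predict the outcome could be used instead.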