# Randomization Inference

## Revision as of 21:05, 6 November 2017

## Introduction

Randomization inference is a statistical practice for calculating regression p-values that reflect variation in experimentally assigned data arising from the randomization itself. When the researcher controls the treatment assignment of the entire observed group, variation arises from the treatment assignment rather than from the sampling strategy, and therefore p-values based on the randomization may be more appropriate than "standard" p-values.

## Motivation: Baseline Balance in Experimental Data

Recent discussions have pointed out that "baseline balance" t-tests on datasets where treatment was randomly assigned are conceptually problematic. The p-value from a t-test is properly interpreted as the estimated probability that the observed difference between the sampled groups would have arisen if those samples had been drawn from underlying sampling frames with no true mean difference. In a randomization framework, however, there is no underlying universe of observations from which the samples are drawn: the observed data comprise the full universe of eligible units, so the differences between groups are exact, and "testing" them reveals no information in this view.

## Randomization Inference: Is the Treatment Effect Significant?

[The same logic extends to differences in outcome variables](https://jasonkerwin.com/nonparibus/2017/09/25/randomization-inference-vs-bootstrapping-p-values/) that the researcher wants to investigate for causal response to a randomly assigned treatment. The differences between the treatment and control groups are in general exact because the full universes are observed in the data, meaning asymptotically-motivated "sampling variation" cannot be used to calculate whether the difference between the treatment and control groups is statistically significant. Rather than estimating the variation in draws from a hypothesized infinite underlying distribution (the mathematical approach of "standard" p-values), the researcher should instead compute p-values based on the knowable variation in hypothetical *treatment assignments*, using the randomization process as the source of variation for the estimate.

## Calculating Randomization Inference p-Values

Thankfully, this is easy to implement with modern statistical software. The steps are conceptually straightforward in a Monte Carlo framework:

1. Preserve the original treatment assignment.
2. Generate placebo treatment statuses according to the original assignment method.
3. Estimate the original regression equation with an additional term for the placebo treatment.
4. Repeat steps 1–3 many times.
5. The randomization inference p-value is *the proportion of times the placebo treatment effect was larger than the estimated treatment effect*.
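The steps above can be sketched in a few lines of code. This is a minimal illustration on simulated data, not from the original article: it uses a simple difference in means in place of the regression formulation, compares effects in absolute value, and assumes a hypothetical design in which exactly half of the units are treated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated experiment (hypothetical data): 100 units, half randomly
# treated, with a true treatment effect of 1.0 on the outcome.
n = 100
treat = np.zeros(n, dtype=int)
treat[rng.choice(n, n // 2, replace=False)] = 1
y = 1.0 * treat + rng.normal(size=n)

def effect(y, t):
    """Difference in mean outcomes between treatment and control."""
    return y[t == 1].mean() - y[t == 0].mean()

observed = effect(y, treat)  # step 1: preserve the original assignment

# Steps 2-4: redraw placebo assignments with the same method,
# re-estimate the effect, and repeat many times.
n_draws = 2000
count = 0
for _ in range(n_draws):
    placebo = np.zeros(n, dtype=int)
    placebo[rng.choice(n, n // 2, replace=False)] = 1
    if abs(effect(y, placebo)) >= abs(observed):
        count += 1

# Step 5: the share of placebo draws with an effect at least as large
# as the observed one is the randomization inference p-value.
p_value = count / n_draws
print(p_value)
```

The key design choice is that the placebo draws must replicate the *original* assignment method exactly (here, exactly half treated); if the experiment was stratified or clustered, the placebo assignments must be stratified or clustered the same way.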

Because the treatment assignment is the source of variation in the experimental design, the p-value is correctly interpretable as "the probability that a similar size treatment effect would have been observed under different hypothetical realizations of the chosen randomization method".

## Implications for Experimental Design

When planning to use randomization inference for an experimental analysis, it is also important to account for the different source of variation at the design stage. In particular, both the power calculations and the actual randomization should reflect the randomization-inference method of p-value calculation.

Athey and Imbens (2016) provide an extensive guide to these considerations. Major takeaways include:

- Power is maximized by forcing treatment-control balance on relevant baseline observables or outcome levels. This is achieved in theory by maximally partitioning into strata (2 treatment units and 2 control units in each, assuming a balanced design with one treatment arm), with fixed effects for the strata in the final regression.
- Pairwise randomization is inappropriate because within-strata variances cannot be computed.
- The "re-randomization" approach to force balance is typically inappropriate.
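The stratified design described in the first takeaway can be sketched as follows. This is a hypothetical illustration, not from Athey and Imbens: it assumes a single treatment arm, a balanced design, and simulated baseline data, sorting units on the baseline outcome and partitioning them into strata of four (two treated, two control per stratum).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sample: 40 units with a baseline outcome measure.
n = 40
baseline = rng.normal(size=n)

# Sort units on the baseline and cut into strata of four, so each
# stratum contains units with similar baseline levels.
order = np.argsort(baseline)
stratum = np.empty(n, dtype=int)
stratum[order] = np.arange(n) // 4

# Within each stratum, randomly treat exactly two of the four units.
treat = np.zeros(n, dtype=int)
for s in range(n // 4):
    members = np.where(stratum == s)[0]
    treat[rng.choice(members, 2, replace=False)] = 1
```

The final regression would then include fixed effects for `stratum`, and any randomization inference placebo draws would re-randomize within the same strata.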