# Randomization Inference


## Revision as of 20:41, 6 November 2017

## Read First

Randomization inference is a statistical method for calculating regression p-values that reflect the true source of variation in experimental data. When the researcher assigns treatment within the entire observed group, the variation in outcomes arises from the treatment assignment (rather than from a sampling strategy), and therefore p-values based on the randomization itself are more appropriate than "standard" sampling-based p-values.

## Background: Baseline Balance

Recent discussions have pointed out that "baseline balance" t-tests are conceptually problematic in datasets where treatment was randomly assigned. The p-value from a t-test is properly interpreted as the estimated probability of observing a difference as large as the one between the sampled groups if those samples had been drawn from underlying sampling frames with no true mean difference. In a randomization framework, however, there is no underlying population from which the sample is drawn: the data are the full universe of units, so the observed differences are exact, and "testing" them reveals no information.

## Randomization Inference: Is the Treatment Effect Significant?

The same logic extends to differences in outcome variables that the researcher wants to test for a causal response to the randomly assigned treatment. Because the full universe of units is typically observed, the differences between the treatment and control groups are exact, so "sampling variation" should not be used to judge whether the difference is statistically significant. Rather than estimating the variation across hypothetical draws from an underlying distribution (the mathematical basis of "standard" p-values), the researcher should compute p-values from the variation across hypothetical *treatment assignments*, since that is the source of variation in a randomized experiment.
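In symbols, this logic yields a permutation-style p-value. A sketch of the usual two-sided version, with notation that is illustrative rather than from the original text (R placebo re-randomizations, estimated treatment effect $\hat{\tau}$, and placebo effect $\tilde{\tau}_r$ from the r-th re-randomization):

```latex
p_{RI} = \frac{1}{R} \sum_{r=1}^{R} \mathbf{1}\left\{\, \left|\tilde{\tau}_r\right| \ge \left|\hat{\tau}\right| \,\right\}
```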

## Randomization Inference p-Values

Thankfully, randomization inference is easy to implement with modern statistical software. The steps are conceptually simple in a Monte Carlo framework:

1. Preserve the original treatment assignment
2. Generate placebo treatment statuses according to the original assignment method
3. Estimate the original regression equation with an additional term for the placebo effect
4. Repeat steps 1–3 many times
5. The randomization inference p-value is *the proportion of times the placebo treatment effect was larger than the estimated treatment effect*
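The steps above can be sketched as follows. This is a minimal illustration, not the article's own code: it assumes a completely randomized design with a binary treatment, uses a simple difference in means in place of a full regression, and all names (`ate`, `ri_pvalue`, the simulated data) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def ate(y, treat):
    """Difference in means between treated and control units."""
    return y[treat == 1].mean() - y[treat == 0].mean()

def ri_pvalue(y, treat, n_reps=1000, rng=rng):
    """Randomization inference p-value: the proportion of placebo
    assignments whose effect is at least as large (in absolute value)
    as the effect under the original assignment."""
    observed = ate(y, treat)            # step 1: keep the real assignment
    count = 0
    for _ in range(n_reps):             # step 4: repeat many times
        # step 2: re-randomize the same way treatment was originally
        # assigned (here, a simple permutation of the labels)
        placebo = rng.permutation(treat)
        # step 3: estimate the placebo effect on this draw
        if abs(ate(y, placebo)) >= abs(observed):
            count += 1
    return count / n_reps               # step 5: the RI p-value

# Simulated experiment: 100 units, half treated, true effect of 1.0
n = 100
treat = rng.permutation(np.repeat([0, 1], n // 2))
y = rng.normal(size=n) + treat * 1.0
p = ri_pvalue(y, treat)
```

With a true effect of one standard deviation on 100 units, the placebo effects almost never exceed the observed one, so `p` is near zero; with no true effect it would be roughly uniform on [0, 1].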