Randomized Evaluations: Principles of Study Design

Randomized evaluations are field experiments in which subjects are assigned ''randomly'' to one of two groups: ''one'', the '''treatment group''', which receives the policy intervention being evaluated, and ''two'', the '''control group''', which remains in the status quo.
[[File:img1.jpg|center|Figure 1]]


The results of the trial are used to answer questions about the effectiveness of an intervention, and can prevent the inefficient allocation of resources to programs that might not be effective.


This section covers the key principles of study design to guide researchers on best practices in conducting field evaluations.
==Read First==
*[[Experimental Methods]]
*Various biases can affect the results of an experiment, such as [[Selection Bias|selection bias]] and [[Recall Bias|recall bias]].
*This page also discusses ways of tackling these biases through careful study design.
==Step 1: Comprehensive protocol for the evaluation==
This involves stating a hypothesis that specifies the anticipated link between the predictor variables and the outcomes, together with the corresponding '''''null hypothesis''''' (typically, that the intervention has no effect) that the evaluation is designed to test.
===Key Concerns===
*The sample to be studied must be clearly specified, including exclusion/inclusion criteria.
*Pilot studies can help identify the ideal target population and ascertain '''''take-up rates''''', both of which inform sample-size and power calculations.
*The sample size must be selected so that the evaluation has a high probability of detecting the true effect of the intervention, that is, sufficient statistical power (see [[Sample Size and Power Calculations]] and the sketch after this list).
*Consulting experienced researchers at the design stage is one of the best ways to strengthen a study.
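The sketch below shows how such a calculation might look using Stata's built-in <code>power</code> command; the means, standard deviation, and take-up rate are purely hypothetical placeholders that would in practice come from pilot data and the literature.
<syntaxhighlight lang="stata">
* Minimal sketch of a sample-size calculation (all numbers are hypothetical).
* Required sample size per arm to detect an increase in the outcome mean from
* 0.50 to 0.55 (sd = 0.25) with 80% power at the 5% significance level.
power twomeans 0.50 0.55, sd(0.25) power(0.8) alpha(0.05)

* If only 70% of the treatment group is expected to take up the intervention,
* the measured (intention-to-treat) effect is roughly diluted to 0.05 * 0.70,
* so one rough adjustment is to size the study for the diluted effect instead.
power twomeans 0.50 0.535, sd(0.25) power(0.8) alpha(0.05)
</syntaxhighlight>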
==Step 2: Randomization==
Broadly speaking, this process involves allocating the sample selected (based on the calculations in Step 1) into one of two groups: the '''''treatment group''''' and the '''''control group'''''. This is the basis for establishing the '''''causal effect''''' of the intervention, which is the cornerstone of a randomized evaluation.
(See [[Randomization in Stata]] for a technical explanation.)
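A minimal sketch of a simple 50/50 assignment is shown below; the dataset and the variable <code>subject_id</code> are hypothetical, and real evaluations frequently use stratified or clustered assignment instead (see [[Randomization in Stata]]).
<syntaxhighlight lang="stata">
* Minimal sketch of a simple individual-level 50/50 random assignment.
isid subject_id, sort                  // confirm unique IDs and impose a stable sort order
set seed 20200222                      // fix the seed so the assignment is reproducible
generate double rand = runiform()      // one uniform random draw per subject
sort rand, stable                      // order subjects by the random draw
generate byte treatment = (_n <= _N/2) // first half -> treatment, rest -> control
label define treatlbl 0 "Control" 1 "Treatment"
label values treatment treatlbl
tabulate treatment                     // check the resulting group sizes
</syntaxhighlight>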
===Key Concerns===
*Effective randomization is important to tackle '''''confounding''''', that is, when a characteristic is associated with both the intervention and the outcome.
*The allocation process must be concealed from the investigator (see also [[Research Ethics]]).
*'''''Baseline characteristics''''' must be measured for both groups, and these should not be '''''significantly different'''''. One way to assess whether observed differences could have arisen by chance is [[Randomization Inference|randomization inference]], a concept that is gaining ground in the field of randomized evaluations.
*Care must be taken to minimize '''''attrition''''', that is, subjects dropping out of the study after assignment.
*Regardless of attrition or non-compliance, outcomes should still be analyzed according to subjects' original assignment - this is called '''''intention-to-treat''''' analysis (see the sketch after this list).
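A minimal sketch of a baseline balance check and an intention-to-treat comparison is shown below; the covariates and the outcome variable are hypothetical placeholders.
<syntaxhighlight lang="stata">
* Baseline balance: means of baseline covariates should not differ
* significantly between the two arms (variable names are hypothetical).
foreach var of varlist age hh_size baseline_income {
    ttest `var', by(treatment)
}

* Intention-to-treat: compare endline outcomes by original assignment,
* regardless of attrition or of whether subjects actually took up the intervention.
regress endline_outcome treatment
</syntaxhighlight>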
==Step 3: Intervention, followed by measuring the outcomes==
The next step is to apply the intervention and then measure outcomes, called '''''endline characteristics''''', after the pre-determined time period has passed since the intervention.
===Key Concerns===
*Sufficient time should be given for the intervention to have its intended effect. Measuring outcomes prematurely can indirectly reduce the '''''power''''' of the evaluation by affecting the [[Minimum Detectable Effect|minimum detectable effect size (MDES)]] (see the sketch after this list).
*Blinding the investigator to the intervention is crucial. It is also important for subjects to be blind to both the assignment and the intervention, to prevent '''''spillovers'''''. Blinding both investigators and subjects is called '''''double blinding'''''.
*Also refer to [[Measuring Abstract Concepts|measuring abstract concepts]] and [[Questionnaire Design|questionnaire design]].
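The relationship between sample size, power, and the MDES can be made concrete with the same <code>power</code> command used in Step 1; the sample size below is a hypothetical placeholder.
<syntaxhighlight lang="stata">
* Minimal sketch: smallest standardized effect detectable with a hypothetical
* total sample of 1,000 subjects split evenly across arms, 80% power, and a
* 5% significance level. If outcomes are measured before the intervention has
* had time to work, the realized effect may fall below this threshold.
power twomeans 0, sd(1) n(1000) power(0.8) alpha(0.05)
</syntaxhighlight>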
==Final Step: Quality Control==
