List Experiments

Revision as of 18:42, 10 May 2017

A technique to get around Social Desirability Bias, typically used when trying to measure Sensitive Topics.

Read First


What is a list experiment?

List experiments aggregate responses to sensitive questions with responses to non-sensitive questions, e.g. "How many of the following statements do you agree with?" This provides the respondent with an additional level of privacy, as the researcher can never perfectly infer an individual’s answer to the sensitive item (unless either 0 or all N+1 items are reported true).

Procedure: Randomly divide the sample into two groups

  • Direct response: respondents report how many of N items are true for them, where all items are neutral and non-sensitive
  • Veiled response: respondents report how many of N+1 items are true, where N items are identical to the direct-response group’s items and the (N+1)st item is the sensitive item

With a large enough sample, the population mean of the (N+1)st (sensitive) item can be estimated by differencing out the mean of the sum of the N neutral items, which is estimated from the direct-response (control) group. In other words, subtracting the average count in the control group from the average count in the treatment group yields the estimated proportion of people for whom the sensitive statement is true.
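The difference-in-means estimator above can be sketched in a few lines. The data here are simulated for illustration (NumPy is assumed available; the prevalence, item count, and sample sizes are hypothetical, not taken from the studies below):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a list experiment: N = 4 neutral items, each true with
# probability 0.5, plus one sensitive item true for 30% of the population.
n = 2000
neutral_counts = rng.binomial(4, 0.5, size=n)  # counts of true neutral items
sensitive = rng.binomial(1, 0.3, size=n)       # latent truth of sensitive item

# Randomly split: first half answers N items, second half answers N+1 items
direct = neutral_counts[: n // 2]
veiled = neutral_counts[n // 2 :] + sensitive[n // 2 :]

# Estimated prevalence of the sensitive item = difference in mean counts
estimate = veiled.mean() - direct.mean()

# Standard error of a difference in means between two independent groups
se = np.sqrt(veiled.var(ddof=1) / len(veiled) + direct.var(ddof=1) / len(direct))
print(f"estimated prevalence: {estimate:.3f} (SE {se:.3f})")
```

The estimate recovers the 30% prevalence (up to sampling noise) without any individual respondent's answer to the sensitive item ever being observed.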

Examples of list experiments

From Coffman, K. B., Coffman, L. C., & Ericson, K. M. M. (2016). The Size of the LGBT Population and the Magnitude of Antigay Sentiment Are Substantially Underestimated. Management Science.

How many of the following statements are true for you?

Group A (direct response) gets:

  1. I remember where I was the day of the Challenger space shuttle disaster
  2. I spent a lot of time playing video games as a kid
  3. I would vote to legalize marijuana if there was a ballot question in my state
  4. I have voted for a political candidate who was pro-life

Group B (veiled response) gets: The same list as Group A, plus

  5. I consider myself to be heterosexual

From Gilens, M., Sniderman, P. M., and Kuklinski, J. H. (1998). Affirmative action and the politics of realignment. British Journal of Political Science 28, 1, 159–183.

"Now I am going to read you three things that sometimes make people angry or upset. After I read all three, just tell me HOW MANY of them upset you. I don’t want to know which ones, just HOW MANY."

Group A gets the list:

  1. The federal government increasing the tax on gasoline;
  2. Professional athletes earning large salaries;
  3. Requiring seat belts be used when driving;
  4. Large corporations polluting the environment

Group B receives Group A's list, plus:

  5. Black leaders asking the government for affirmative action

Group C receives Group A's list, plus:

  5. Awarding college scholarships on the basis of race

Issues with List Experiments

  • They require respondents to count and add, possibly introducing noise into the data (especially if the list is long)
  • Unless the “innocent” questions are completely unrelated and have a known distribution, there is a chance that the treatment in your RCT might have an effect on their distribution.
    • Moreover, designing your common questions that way makes your sensitive item stand out even more.
  • They reduce statistical power
    • However, because assignment is individually randomized, you may not have to allocate half of your sample to direct response (5–10% could suffice instead)
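To make the power loss concrete, a rough back-of-the-envelope sketch (all parameters here are illustrative assumptions) compares the standard error of the list estimator with that of asking the sensitive question directly at the same sample size:

```python
import math

# Illustrative assumptions
p = 0.30   # true prevalence of the sensitive item
k = 4      # number of neutral items, each true with probability 0.5
n = 1000   # respondents per group

# Direct question: SE of a simple binomial proportion
se_direct = math.sqrt(p * (1 - p) / n)

# List experiment: the difference in mean counts carries the variance of
# the k neutral items (0.25 each) in BOTH groups, on top of p(1 - p)
var_control = k * 0.25
var_treat = k * 0.25 + p * (1 - p)
se_list = math.sqrt(var_control / n + var_treat / n)

print(f"direct-question SE: {se_direct:.3f}")
print(f"list-experiment SE: {se_list:.3f}")
```

Under these assumptions the list estimator's standard error is roughly three times the direct question's, which is the sense in which the design "reduces power": you need a much larger sample to detect the same effect.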

Back to Parent

This article is part of the topic Questionnaire Design

Additional Resources

  • Development Impact Blog posts by Berk Ozler:
"Issues of data collection and measurement" https://blogs.worldbank.org/impactevaluations/issues-data-collection-and-measurement
"List Experiments for Sensitive Questions – a Methods Bleg" https://blogs.worldbank.org/impactevaluations/list-experiments-sensitive-questions-methods-bleg

  • Blair, Graeme, Kosuke Imai, and Jason Lyall. 2014. “Comparing And Combining List and Endorsement Experiments: Evidence from Afghanistan.” American Journal of Political Science 58(4): 1043–63.
Abstract: List and endorsement experiments are becoming increasingly popular among social scientists as indirect survey techniques for sensitive questions. When studying issues such as racial prejudice and support for militant groups, these survey methodologies may improve the validity of measurements by reducing non-response and social desirability biases. We develop a statistical test and multivariate regression models for comparing and combining the results from list and endorsement experiments. We demonstrate that when carefully designed and analyzed, the two survey experiments can produce substantively similar empirical findings. Such agreement is shown to be possible even when these experiments are applied to one of the most challenging research environments: contemporary Afghanistan. We find that both experiments uncover similar patterns of support for the International Security Assistance Force among Pashtun respondents. Our findings suggest that multiple measurement strategies can enhance the credibility of empirical conclusions. Open-source software is available for implementing the proposed methods.

  • Blair, Graeme, and Kosuke Imai. 2012. “Statistical Analysis of List Experiments.” Political Analysis 20(1): 47–77.
Abstract: The validity of empirical research often relies upon the accuracy of self-reported behavior and beliefs. Yet eliciting truthful answers in surveys is challenging, especially when studying sensitive issues such as racial prejudice, corruption, and support for militant groups. List experiments have attracted much attention recently as a potential solution to this measurement problem. Many researchers, however, have used a simple difference-in-means estimator, which prevents the efficient examination of multivariate relationships between respondents’ characteristics and their responses to sensitive items. Moreover, no systematic means exists to investigate the role of underlying assumptions. We fill these gaps by developing a set of new statistical methods for list experiments. We identify the commonly invoked assumptions, propose new multivariate regression estimators, and develop methods to detect and adjust for potential violations of key assumptions. For empirical illustration, we analyze list experiments concerning racial prejudice. Open-source software is made available to implement the proposed methodology.