List Experiments

A list experiment is a technique to get around Social Desirability Bias, typically used when trying to measure Sensitive Topics.


Read First

Guidelines

What is a list experiment?

List experiments aggregate responses to sensitive questions with responses to non-sensitive questions, e.g., "How many of the following statements do you agree with?" For the control group, the list of statements does not include any sensitive statements. The treatment group chooses from the same set of statements the control group had, plus one or more additional sensitive statements. Subtracting the average response in the control group from the average response in the treatment group yields an estimate of the proportion of people who say 'yes' to the sensitive statement(s).
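As a rough illustration, the sketch below (in Python, not part of the original article) applies the difference-in-means calculation just described to made-up response counts for a hypothetical four-item control list and a five-item treatment list; the data and group sizes are invented purely for illustration.

  # Minimal sketch of the list-experiment difference-in-means estimator.
  # All response counts below are hypothetical, invented for illustration.
  import statistics

  # Each entry is one respondent's answer to "How many of the statements apply to you?"
  control_counts = [1, 2, 2, 3, 1, 2, 0, 2, 3, 1]    # 4 non-sensitive items only
  treatment_counts = [2, 3, 2, 3, 1, 3, 1, 2, 4, 2]  # same 4 items plus 1 sensitive item

  control_mean = statistics.mean(control_counts)
  treatment_mean = statistics.mean(treatment_counts)

  # The difference in group means estimates the share of respondents
  # for whom the sensitive statement is true.
  estimated_share = treatment_mean - control_mean
  print(f"Control mean:   {control_mean:.2f}")    # 1.70
  print(f"Treatment mean: {treatment_mean:.2f}")  # 2.30
  print(f"Estimated share agreeing with the sensitive item: {estimated_share:.2f}")  # 0.60

In practice, researchers often go beyond this simple difference in means; the Blair and Imai (2012) paper listed under Additional Resources develops multivariate regression estimators for list experiments.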

Examples of list experiments

From Coffman, K. B., Coffman, L. C., & Ericson, K. M. M. (2016). The size of the LGBT population and the magnitude of antigay sentiment are substantially underestimated. Management Science.

How many of the following statements are true for you?

Group A (direct response) gets:

  1. I remember where I was the day of the Challenger space shuttle disaster
  2. I spent a lot of time playing video games as a kid
  3. I would vote to legalize marijuana if there was a ballot question in my state
  4. I have voted for a political candidate who was pro-life

Group B (veiled response) gets the same list as Group A, plus:

  5. I consider myself to be heterosexual


From Gilens, M., Sniderman, P. M., and Kuklinski, J. H. (1998). Affirmative action and the politics of realignment. British Journal of Political Science 28, 1, 159–183.

"Now I am going to read you three things that sometimes make people angry or upset. After I read all three, just tell me HOW MANY of them upset you. I don’t want to know which ones, just HOW MANY."

Group A gets the list:

  1. The federal government increasing the tax on gasoline;
  2. Professional athletes earning large salaries;
  3. Requiring seat belts be used when driving;
  4. Large corporations polluting the environment

Group B receives Group A's list, plus:

  5. Black leaders asking the government for affirmative action

Group C receives Group A's list, plus:

  5. Awarding college scholarships on the basis of race
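As with the first example, the estimate for each sensitive item comes from comparing a treatment group's mean count with the control group's mean count. The sketch below, again with invented numbers, shows how a design with two treatment lists (Groups B and C) sharing one control list (Group A) yields a separate estimate for each sensitive item.

  # Hypothetical sketch: two treatment lists (Groups B and C) compared
  # against the same control list (Group A). All counts are invented.
  import statistics

  group_a = [1, 2, 1, 2, 2, 1, 3, 2]  # control: four non-sensitive items
  group_b = [2, 3, 1, 2, 3, 2, 3, 2]  # control items + affirmative action item
  group_c = [1, 3, 2, 3, 2, 2, 3, 3]  # control items + scholarship item

  baseline = statistics.mean(group_a)
  for label, counts in [("affirmative action item", group_b),
                        ("scholarship item", group_c)]:
      estimate = statistics.mean(counts) - baseline
      print(f"Estimated share upset by the {label}: {estimate:.2f}")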

When should I use a list experiment?

Back to Parent

This article is part of the topic Questionnaire Design

Additional Resources

  • Blair, Graeme, Kosuke Imai, and Jason Lyall. 2014. “Comparing And Combining List and Endorsement Experiments: Evidence from Afghanistan.” American Journal of Political Science 58(4): 1043–63.

Abstract: List and endorsement experiments are becoming increasingly popular among social scientists as indirect survey techniques for sensitive questions. When studying issues such as racial prejudice and support for militant groups, these survey methodologies may improve the validity of measurements by reducing non-response and social desirability biases. We develop a statistical test and multivariate regression models for comparing and combining the results from list and endorsement experiments. We demonstrate that when carefully designed and analyzed, the two survey experiments can produce substantively similar empirical findings. Such agreement is shown to be possible even when these experiments are applied to one of the most challenging research environments: contemporary Afghanistan. We find that both experiments uncover similar patterns of support for the International Security Assistance Force among Pashtun respondents. Our findings suggest that multiple measurement strategies can enhance the credibility of empirical conclusions. Open-source software is available for implementing the proposed methods.

  • Blair, Graeme, and Kosuke Imai. 2012. “Statistical Analysis of List Experiments.” Political Analysis 20(1): 47–77.

Abstract: The validity of empirical research often relies upon the accuracy of self-reported behavior and beliefs. Yet eliciting truthful answers in surveys is challenging, especially when studying sensitive issues such as racial prejudice, corruption, and support for militant groups. List experiments have attracted much attention recently as a potential solution to this measurement problem. Many researchers, however, have used a simple difference-in-means estimator, which prevents the efficient examination of multivariate relationships between respondents’ characteristics and their responses to sensitive items. Moreover, no systematic means exists to investigate the role of underlying assumptions. We fill these gaps by developing a set of new statistical methods for list experiments. We identify the commonly invoked assumptions, propose new multivariate regression estimators, and develop methods to detect and adjust for potential violations of key assumptions. For empirical illustration, we analyze list experiments concerning racial prejudice. Open-source software is made available to implement the proposed methodology.