Sensitive Topics


This article provides guidance on collecting data on sensitive topics.

Read First

For certain topics, respondents have incentives to conceal the truth due to taboos or social pressure (e.g. Social Desirability Bias, fear of retaliation). This can create bias whose size and direction are hard to predict. To mitigate this, it is essential to guarantee anonymity and confidentiality, and to develop Survey Protocols that protect privacy and maximize trust. If this is not sufficient, experimental methods such as the Randomized Response Technique, List Experiments, and Endorsement Experiments can be used.


Survey Protocols for Collecting Sensitive Data

  • Make sure the respondent knows that responses will never be personally identified. This should be part of the Informed Consent module.
  • Conduct interviews in private, without even family members present (especially when discussing issues such as domestic violence).
  • Enumerators who share characteristics with the respondent (gender, age, ethnic group, background) may garner increased trust.
  • Consider the survey mode: self-administered questionnaires may yield more accurate data than face-to-face interviews.

Randomized Response Technique

The randomized response technique protects respondents by introducing random noise into their answers. For example, the following enumerator script is from Blair, Imai and Zhou (2015):[1] "For this question, I want you to answer yes or no. But I want you to consider the number of your dice throw. If 1 shows on the dice, tell me no. If 6 shows, tell me yes. But if another number, like 2 or 3 or 4 or 5 shows, tell me your own opinion about the question that I will ask you after you throw the dice. [TURN AWAY FROM THE RESPONDENT] Now you throw the dice so that I cannot see what comes out. Please do not forget the number that comes out. [ WAIT TO TURN AROUND UNTIL RESPONDENT SAYS YES TO: ] Have you thrown the dice? Have you picked it up?"

Thus, when the respondent rolls a one, they are forced to respond “no” to the question; when respondents roll a six, they are forced to respond “yes.” Finally, when respondents roll two, three, four, or five, they are instructed to truthfully answer the following sensitive question.

"Now, during the height of the conflict in 2007 and 2008, did you know any militants, like a family member, a friend, or someone you talked to on a regular basis. Please, before you answer, take note of the number you rolled on the dice."

See Blair, Imai and Zhou (2015) for additional examples and applications, e.g. mirrored question design, disguised response design, and unrelated question design.
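The estimator implied by the forced-response design above can be sketched in Python. With a fair die, a respondent gives a forced "no" with probability 1/6, a forced "yes" with probability 1/6, and a truthful answer with probability 4/6, so the observed share of "yes" answers is p = 1/6 + (4/6)·π, where π is the true prevalence of the sensitive trait. This is a minimal sketch; the function name and the counts are illustrative, not from the study:

```python
# Back out the sensitive-trait prevalence from forced-response RRT data.
# Design (from the dice example above): roll 1 -> forced "no" (prob 1/6),
# roll 6 -> forced "yes" (prob 1/6), rolls 2-5 -> truthful answer (prob 4/6).

def rrt_prevalence(n_yes, n_total, p_forced_yes=1/6, p_truthful=4/6):
    """Estimate pi, the true proportion of 'yes' answers.

    Observed P(yes) = p_forced_yes + p_truthful * pi,
    so pi = (p_hat - p_forced_yes) / p_truthful.
    """
    p_hat = n_yes / n_total
    pi = (p_hat - p_forced_yes) / p_truthful
    # Sampling noise can push the raw estimate outside [0, 1]; clip for reporting.
    return min(max(pi, 0.0), 1.0)

# Illustration: 300 of 1,000 respondents answered "yes".
print(round(rrt_prevalence(300, 1000), 3))
```

Because roughly one third of answers are pure noise, the estimator's variance is higher than a direct question's; the privacy protection is bought with statistical efficiency.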

List Experiments

List experiments aggregate responses to sensitive questions with responses to non-sensitive questions, e.g. how many of the following statements do you agree with? For a control, the list of statements would not include any sensitive statements. The treatment group would choose from the same set of statements the control group had, plus additional sensitive statements. Subtracting the average response in control group from the average response in the treatment group yields the proportion of people who say 'yes' to the sensitive statements.

For example, from Gilens, M., Sniderman, P. M., and Kuklinski, J. H. (1998). "Affirmative action and the politics of realignment." British Journal of Political Science 28(1): 159–183. The control group receives the list:

  1. The federal government increasing the tax on gasoline;
  2. Professional athletes earning large salaries;
  3. Requiring seat belts be used when driving;
  4. Large corporations polluting the environment.

The treatment groups receive the above list plus one additional sensitive item, either:

  • Black leaders asking the government for affirmative action; or
  • Awarding college scholarships on the basis of race
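The difference-in-means estimator described above can be sketched in Python. The response counts below are illustrative, not from the Gilens et al. study:

```python
# Difference-in-means estimator for a list experiment.
# control: number of statements agreed with from the 4-item list;
# treatment: number agreed with from the same list plus one sensitive item.

def list_experiment_estimate(control_counts, treatment_counts):
    """Estimated share agreeing with the sensitive item:
    mean(treatment) - mean(control)."""
    mean_c = sum(control_counts) / len(control_counts)
    mean_t = sum(treatment_counts) / len(treatment_counts)
    return mean_t - mean_c

control = [1, 2, 2, 3, 1, 2, 2, 1]    # answers to the 4-item control list
treatment = [2, 2, 3, 3, 2, 2, 3, 2]  # answers to the 5-item treatment list
print(round(list_experiment_estimate(control, treatment), 2))
```

Because only counts are reported, no individual respondent reveals their answer to the sensitive item, yet the group-level difference identifies its prevalence.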

Endorsement Experiments
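In an endorsement experiment, respondents rate their support for a policy; the treatment group's version of the question attributes the policy to a sensitive group or actor. The difference in average support between groups measures attitudes toward that actor without asking about it directly. A minimal sketch of the estimator, using hypothetical ratings on a 1-5 scale:

```python
# Difference-in-means for an endorsement experiment (hypothetical data).
# Treatment respondents saw the policy attributed to a sensitive actor;
# control respondents saw the policy with no endorsement.

def endorsement_effect(control_ratings, treatment_ratings):
    """Average shift in policy support attributable to the endorsement."""
    return (sum(treatment_ratings) / len(treatment_ratings)
            - sum(control_ratings) / len(control_ratings))

control = [4, 3, 5, 4, 3, 4]    # support for the policy, no endorsement
treatment = [3, 2, 4, 3, 2, 3]  # same policy, endorsed by the actor
print(round(endorsement_effect(control, treatment), 2))
```

A negative effect indicates that the endorsement reduces support, i.e. respondents hold unfavorable attitudes toward the endorsing actor. See Blair, Imai and Zhou (2015) for formal treatment.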

Back to Parent

This article is part of the topic Questionnaire Design

Additional Resources

  • Survey Methods for Sensitive Topics
  • Bowling, A. (2005). "Mode of questionnaire administration can have serious effects on data quality." Journal of Public Health 27(3): 281–291.
  • Kreuter, F., Presser, S., and Tourangeau, R. (2009). "Social Desirability Bias in CATI, IVR, and Web Surveys: The Effects of Mode and Question Sensitivity." Public Opinion Quarterly 72(5): 847–865. doi:10.1093/poq/nfn063