Questionnaire Design
Revision as of 15:40, 9 June 2020
Questionnaire design is the first step in primary data collection. A well-designed questionnaire requires planning, a review of existing questionnaires on similar topics, structured modules, and careful consideration of the outcomes to measure. Questionnaire design involves multiple steps - drafting, a content-focused pilot, programming, a data-focused pilot, and translation. This process can take 4-5 months from start to finish, so the impact evaluation team (or research team) must allocate sufficient time for each step.
Read First
- The design stage should align the survey instrument (or questionnaire) with the key research questions and indicators.
- Carefully review existing survey instruments that cover similar topics before starting.
- Divide the questionnaire into modules - this makes it easier to structure the instrument.
- Think through measurement challenges during questionnaire design.
- To avoid recall bias, use objective indicators as much as possible.
- Recall bias is a type of error that occurs when participants are not able to accurately remember past events.
Draft Survey Instrument
- Begin the questionnaire with an informed consent form
- Identify each survey respondent and each survey with Unique IDs
- Group questions into modules
- Write an introductory script for each module to guide the flow of the interview. For example: "Now I would like to ask you some questions about your relationships. It’s not that I want to invade your privacy. We are trying to learn how to make young people’s lives safer and happier. Please be open because for our work to be useful to anyone, we need to understand the reality of young people’s lives. Remember that all your answers will be kept strictly confidential."
- All questions should have pre-coded answer options. Answer options must:
- Be clear, simple, and mutually exclusive
- Be exhaustive (tested and refined during the Survey Pilot)
- Include 'other' (but if >5% of respondents choose 'other', answer choices were insufficiently exhaustive)
- Include hints to the enumerator as necessary. These hints are typically coded to appear in italics to clarify that they are not part of the question read to the respondent.
- For example: "For how many months did you work in the last 12 months?" Enumerator: if less than 1 month, round up to 1.
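The guidance above (unique IDs, exhaustive answer options, an 'other' category) lends itself to simple automated checks on pilot data. The sketch below is illustrative only: the field names and sample records are assumptions, not part of any particular survey.

```python
# Hypothetical sketch: two quick checks on piloted survey data --
# (1) every respondent has a unique ID, and (2) whether more than 5% of
# answers fall in the 'other' category, which would suggest the
# pre-coded answer options are not exhaustive.
from collections import Counter

# Illustrative pilot records; real data would come from the survey software.
responses = [
    {"respondent_id": 101, "main_crop": "maize"},
    {"respondent_id": 102, "main_crop": "other"},
    {"respondent_id": 103, "main_crop": "beans"},
    {"respondent_id": 103, "main_crop": "maize"},  # duplicate ID
]

ids = [r["respondent_id"] for r in responses]
duplicates = [i for i, n in Counter(ids).items() if n > 1]
print("Duplicate IDs:", duplicates)

other_share = sum(r["main_crop"] == "other" for r in responses) / len(responses)
if other_share > 0.05:
    print(f"'other' chosen by {other_share:.0%} -- revisit answer options")
```

In practice these checks would run after each day of piloting, so that answer options can be refined before the main data collection.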
Challenges to Measurement
Nuanced Definitions
Sometimes, seemingly simple survey questions are actually quite nuanced. For example, while household size seems relatively straightforward, it in fact depends entirely on the definition of a household member. Is a household member anyone currently living in the household? Anyone who has lived in the household for more than 6 of the last 12 months? Is a domestic worker a household member? Are students away at school who are economically dependent on the household considered household members? What about a household head who has migrated but sends remittances back to support the household? As a second example, while it may seem that all respondents should know their age, reporting age accurately can be difficult if people are innumerate, do not have birth certificates, or do not know their birth year.
Pay careful attention during the Survey Pilot for questions that are hard for the respondent; adjust the questionnaire and training accordingly. Wording questions clearly, piloting questionnaires thoroughly, training enumerators well, and including definitions within the questionnaire all help to ensure that the questionnaire consistently elicits the same information across respondents.
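The household-size example above can be made concrete with a small sketch. All names, thresholds, and the residence rule below are illustrative assumptions; the point is only that two reasonable definitions of "household member" yield different household sizes.

```python
# Illustrative sketch: household size under two alternative definitions
# of "household member". The roster and the 6-of-12-months rule are
# assumptions for demonstration, not a standard from the text.
members = [
    {"name": "head",            "months_present": 12, "economically_dependent": True},
    {"name": "student_away",    "months_present": 2,  "economically_dependent": True},
    {"name": "domestic_worker", "months_present": 12, "economically_dependent": False},
]

# Definition 1: lived in the household at least 6 of the last 12 months.
size_residence = sum(m["months_present"] >= 6 for m in members)

# Definition 2: residence rule OR economic dependence
# (this captures the student away at school).
size_economic = sum(m["months_present"] >= 6 or m["economically_dependent"]
                    for m in members)

print(size_residence, size_economic)  # the two definitions disagree
```

Whichever definition the team chooses, it should be written into the questionnaire and enumerator training so that all enumerators apply it identically.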
Recall Bias and Estimations
When asking survey respondents to recall or estimate information (e.g. income in the last year, consumption last week, plot size, amount deposited in a bank account last month), be aware of recall bias. To avoid recall bias, use objective indicators as much as possible. For example, rather than asking a respondent the size of her agricultural plot, it is better to measure the plot area directly using GPS devices. Rather than asking a respondent how many times she deposited money in her bank account last month, it is better to acquire administrative bank data for accuracy. However, objective measures are often more expensive and may not always be possible. In these cases, make use of internal consistency checks, multiple measurements, and contextual references to ensure high-quality data.
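One form the internal consistency checks mentioned above can take is a sketch like the following, which compares a self-reported plot size against a GPS measurement and flags large discrepancies for re-checking. The 20% tolerance and the field names are illustrative assumptions.

```python
# Hedged sketch: flag observations where a respondent's self-reported
# plot size differs from the GPS-measured area by more than a tolerance.
# The 20% threshold is an assumption chosen for illustration.

def flag_discrepancy(reported, gps, tolerance=0.20):
    """Return True if |reported - gps| exceeds tolerance * gps."""
    return abs(reported - gps) > tolerance * gps

plots = [
    {"id": 1, "reported_ha": 2.0, "gps_ha": 2.1},  # close agreement
    {"id": 2, "reported_ha": 5.0, "gps_ha": 3.0},  # large over-report
]

flagged = [p["id"] for p in plots
           if flag_discrepancy(p["reported_ha"], p["gps_ha"])]
print("Flag for re-check:", flagged)
```

Flagged cases would typically trigger a callback or a field revisit rather than automatic deletion of the data.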
Sensitive Topics
For certain topics perceived as socially undesirable (e.g. drug/alcohol use, sexual practices, violent behaviors, criminal activities), respondents may have incentives to conceal the truth due to taboos or social pressure. This can create bias, the size and direction of which can be hard to predict. To avoid this, enumerators should guarantee anonymity and confidentiality during the informed consent section. Further, survey protocols should guarantee privacy and maximize trust. Consider asking the question in the third person, framing the questions to avoid social desirability bias, or even allowing respondents to self-administer certain modules. Note that experimental methods such as the randomized response technique, list experiments, and endorsement experiments can also help elicit accurate data on sensitive topics.
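To illustrate how the randomized response technique protects individual answers while still identifying the population rate, here is a simulation sketch of the forced-response variant: each respondent privately rolls a die, must answer "yes" on a 1, "no" on a 6, and truthfully otherwise. The true prevalence below is an assumption for the demonstration.

```python
# Illustrative simulation of the forced-response randomized response
# design. No single answer reveals the respondent's truth, yet the
# prevalence is recoverable: P(yes) = 1/6 + (4/6) * prevalence, so
# prevalence = (observed yes share - 1/6) / (4/6).
import random

random.seed(0)
TRUE_PREVALENCE = 0.30   # assumed for the simulation only
n = 100_000

yes_count = 0
for _ in range(n):
    roll = random.randint(1, 6)
    if roll == 1:
        answer = True                                # forced "yes"
    elif roll == 6:
        answer = False                               # forced "no"
    else:
        answer = random.random() < TRUE_PREVALENCE   # truthful answer
    yes_count += answer

lam = yes_count / n                   # observed "yes" share
estimate = (lam - 1/6) / (4/6)        # back out the prevalence
print(f"Estimated prevalence: {estimate:.3f}")
```

The price of the privacy protection is statistical: the estimator has higher variance than a direct question, so larger samples are needed for the same precision.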
Abstract concepts
Abstract concepts such as empowerment, risk aversion, social cohesion or trust may be defined differently across cultures and/or may not translate well. To measure abstract concepts, first define the concept, then choose the outcome you will use to measure that concept, and finally design a good measure for that outcome. Pilot the question and measurement well.
Outcomes Not Directly Observable
For outcomes that are not directly observable (e.g. corruption, quality of care), audit studies can help elicit accurate data. In general, it is always best to measure outcomes directly when possible. As a basic example, consider two ways of measuring literacy:
- "Can you read?" Answer choices: yes, no
- "Can you please read me this sentence?" [Enumerators holds up card with a sentence written in the local language]. Answer choices: read sentence correctly, read sentence with some errors, unable to read sentence.
The second option, a more objective measure, is always preferable.
Process
When designing a survey instrument from scratch, follow these steps:
- Review (or draft) a theory of change and pre-analysis plan.
- Make a list of all intermediary and final outcomes of interest, as well as important covariates and sources of heterogeneity.
- Prepare an outline of questionnaire modules, based on the above list. Get feedback from research team.
- For each module, prepare a list of specific indicators to measure. Get feedback from research team and implementing partners.
- Review existing questionnaires and compile a databank of relevant questions for each module.
- Draft the questionnaire and note the source of each question (e.g. source: Uganda DHS 2011; source: Uganda Social Assistance Grants for Empowerment Programme 2013, Evaluation Follow-Up Survey [1]; source: own design - extra attention required in pilot). Get feedback from research team and implementing partners.
- Conduct a content-based pilot and make revisions accordingly.
- Translate & program the survey (can happen concurrently).
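The question databank and source-tracking steps above could be kept in a simple structured list, so that own-design questions can be pulled out for extra piloting. The data structure and the example questions are hypothetical; only the source labels come from the text.

```python
# Sketch of a minimal question databank recording the source of each
# question, as the process above recommends. Question wording here is
# invented purely for illustration.
databank = [
    {"module": "household roster",
     "question": "How many people are members of this household?",
     "source": "Uganda DHS 2011"},
    {"module": "income",
     "question": "What was your total income in the last 12 months?",
     "source": "own design"},
]

# Own-design questions need extra attention during the pilot.
needs_extra_piloting = [q["question"] for q in databank
                        if q["source"] == "own design"]
print(len(needs_extra_piloting), "question(s) need extra piloting")
```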
Designing a follow-up questionnaire is simpler. Try to keep it as close to the baseline survey instrument as possible in order to facilitate panel analysis. It is better to add and/or subtract questions than to modify existing ones.
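A quick way to enforce the "add or subtract, don't modify" rule is to diff the variable lists of the baseline and follow-up instruments before fieldwork. The variable names below are illustrative assumptions.

```python
# Hypothetical sketch: compare baseline and follow-up variable lists.
# Added or dropped questions are acceptable; silently renamed variables
# would break the panel merge, so both differences are printed for review.
baseline = {"hh_id", "hh_size", "income_12m", "plot_area_gps"}
followup = {"hh_id", "hh_size", "income_12m", "plot_area_gps",
            "program_takeup"}

added = followup - baseline
dropped = baseline - followup
print("Added:",   sorted(added))    # new follow-up questions
print("Dropped:", sorted(dropped))  # confirm nothing essential was lost
```

Any variable appearing in both `added` and a near-identical form in `dropped` is a likely rename and should be reverted to the baseline name.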
Related Pages
Click here for pages that link to this topic.
Additional Resources
- Grosh and Glewwe’s Designing Household Survey Questionnaires for Developing Countries: Lessons from 15 Years of the Living Standards Measurement Study
- Dhar’s Instrument Design 101 via Poverty Action Lab
- McKenzie’s Three New Papers Measuring Stuff that is Difficult to Measure via The World Bank’s Development Impact Blog
- Lombardini’s measuring household income via Oxfam
- Zezza et al.’s Measuring food consumption and expenditures in household consumption and expenditure surveys (HCES)
- DIME Analytics’ Survey Instruments Design & Pilot
- DIME Analytics’ Preparing for Data Collection
- DIME Analytics’ Survey Guidelines
- DIME Analytics’ SurveyCTO slides
- DIME Analytics’ guidelines on survey design and pilot