'''Questionnaire design''' is the first step in [[Primary Data Collection|primary data collection]]. A well-designed questionnaire requires planning, [[Literature Review for Questionnaire|literature reviews of questionnaires]], structured modules, and careful consideration of outcomes to measure. '''Questionnaire design''' involves multiple steps - drafting, [[Survey Pilot|content-focused pilot]], [[Questionnaire Programming|programming]], [[Survey Pilot#Stages of a Survey Pilot|data-focused pilot]], and [[Questionnaire Translation|translation]]. This process can take 4-5 months from start to finish, and so the [[Impact Evaluation Team|impact evaluation team]] (or '''research team''') must allocate sufficient time for each step.


== Read First ==
* The '''drafting''' stage should align the '''survey instrument''' (or questionnaire) with the key research questions and indicators.
* Use an existing '''survey instrument''' wherever possible, instead of '''drafting''' a new one from scratch.
* Carefully [[Literature Review for Questionnaire|review existing survey instruments]] that cover similar topics before starting.  
* Divide the questionnaire into '''modules''' - this makes it easier to structure the instrument.
* Think through '''measurement challenges''' during '''questionnaire design'''.
* To avoid [[Recall Bias|recall bias]], use objective indicators as much as possible.


== Process and Timeline ==
The figure below highlights the steps involved in '''questionnaire design''', along with the time that the [[Impact Evaluation Team|research team]] should allocate to each step. The entire process can take 4-5 months, which includes revising and reviewing the '''survey instrument''' based on feedback from the [[Survey Pilot|survey pilot]].


[[File:Questionnaire.png|550px|thumb|center|'''Figure: Questionnaire design timeline''']]


== Draft Instrument ==
'''Drafting the instrument''' is the first step of the '''questionnaire design''' workflow. This section describes the various guidelines and aspects of '''drafting''' a survey instrument. As a general rule, the questionnaire must always begin with an [[Informed Consent | informed consent]] form. The [[Remote Surveys|remote]] or [[Field Surveys|field survey]] can start only after a [[Protecting Human Research Subjects|survey participant]] agrees to participate. To make the process of '''drafting''' easier and more structured, the [[Impact Evaluation Team|research team]] should divide the instrument into '''modules''', and perform a [[Literature Review for Questionnaire|literature review of existing questionnaires]]. During this stage, the '''research team''' meets several times to discuss and modify the draft versions of the survey instrument.
=== Modules ===
In the '''drafting''' stage, it is helpful for the '''research team''' to start by outlining the various '''modules''' that they want to include in the instrument. Follow these steps to structure the '''modules''':
# '''Theory of Change.''' Start by drafting and reviewing a [[Theory of Change | theory of change]], and prepare a [[Pre-Analysis Plan | pre-analysis plan]]. Draw on input from all members of the research team for this step.
# '''Outline.''' Based on the '''theory of change''' and '''pre-analysis plan''', prepare an outline of questionnaire '''modules'''. The modules should align with the key research questions. Take feedback from the other members of the research team.
# '''Outcomes of interest.''' Then, for each module, make a list of all intermediary and final outcomes of interest, as well as important indicators to measure. Again, take input from members of the research team, as well as implementing partners who have prior experience in preparing survey instruments.
# '''Relevance and looping.''' Based on this list, discuss '''relevance''' and '''looping''' for each module. In a household survey, for example, '''relevance''' means deciding whether a particular module applies to every household, while '''looping''' means deciding whether the questions within a module should be repeated for each member of the household (see the sketch after this list).
# '''Questions.''' Finally, discuss and draft questions for each module. Do not start from scratch, even if the instrument is for a '''baseline survey''' (or first round).
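The exact syntax for '''relevance''' and '''looping''' depends on the survey software, but the underlying logic can be sketched in plain code. The following is a minimal, illustrative Python sketch; the module names, conditions, and example household are hypothetical and not drawn from any real instrument.

<syntaxhighlight lang="python">
# Illustrative sketch of "relevance" and "looping" logic in a household survey.
# The module names, conditions, and example household are hypothetical.

household = {
    "owns_agricultural_land": False,
    "members": [
        {"name": "A", "age": 34},
        {"name": "B", "age": 31},
        {"name": "C", "age": 6},
    ],
}

def administer_agriculture_module(hh):
    # Relevance: skip the whole module if it does not apply to this household.
    if not hh["owns_agricultural_land"]:
        return
    print("Ask the agriculture questions once for the household.")

def administer_education_module(hh):
    # Looping: repeat the same questions for every eligible household member.
    for member in hh["members"]:
        if member["age"] >= 5:  # relevance applied at the member level
            print(f"Ask the education questions about member {member['name']}.")

administer_agriculture_module(household)
administer_education_module(household)
</syntaxhighlight>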
 
=== Literature Review ===
When drafting questions for each of the '''modules''' in an instrument, do not start from scratch. Follow these steps to draft the questions:
# '''Literature review.''' Start by performing a [[Literature Review for Questionnaire|literature review]] of existing, reliable, and well-tested questionnaires. Examples of such questionnaires include [[Field Surveys|field]] or [[Remote Surveys|remote surveys]] in the same country (regardless of the sector), or in the same sector (regardless of the country).
# '''Compile.''' Use this review to compile a '''repository''' or bank of relevant questions for each module.
# '''Cite source.''' In the draft version, add a separate column to note the source of each question (see the sketch after this list). Again, take feedback from the other members of the research team and implementing partners.
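As an illustration, the question bank can be kept as a simple flat table with one row per question and a column citing its source. The following minimal Python sketch writes such a table to a CSV file; the module names, question texts, and file name are illustrative (the source labels echo the kinds of citations used in practice, such as an existing DHS round or "own design").

<syntaxhighlight lang="python">
import csv

# Minimal sketch of a question bank with one row per question and a source column.
# The module names, question texts, and file name are illustrative.
question_bank = [
    {"module": "household_roster",
     "question": "How many members does this household have?",
     "source": "Uganda DHS 2011"},
    {"module": "income",
     "question": "What was the household's total income in the last 12 months?",
     "source": "Uganda Social Assistance Grants for Empowerment Programme 2013"},
    {"module": "aspirations",
     "question": "What level of education do you hope your child will complete?",
     "source": "own design - extra attention required in pilot"},
]

with open("question_bank.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["module", "question", "source"])
    writer.writeheader()
    writer.writerows(question_bank)
</syntaxhighlight>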


You can consult the following resources for the '''literature review''':
* [http://microdata.worldbank.org/index.php/catalog/impact_evaluation World Bank microdata catalogue]
* [https://dataverse.harvard.edu/dataverse/jpal J-PAL microdata catalogue]
* [http://microdata.worldbank.org/index.php/catalog/dhs Demographic and Health Surveys (DHS) microdata catalogue]
* [https://dataverse.harvard.edu/dataverse/IFPRI IFPRI microdata catalogue]
* [http://catalog.ihsn.org/index.php/catalog International Household Survey Network (IHSN) survey catalogue]


'''Note:''' While '''drafting''' the instrument, it is also important to keep in mind that some indicators are hard to measure. This can lead to '''measurement challenges''', which are discussed in the next section.


== Challenges to Measurement ==
'''Measurement challenges''' arise when indicators that are important for answering the key research questions are hard to measure. It is important for the [[Impact Evaluation Team|research team]] to discuss these challenges and find ways to resolve them. The following are some common '''challenges to measurement''', along with suggestions for addressing them.
=== Nuanced definitions ===
In some cases, questions which seem straightforward can actually be quite '''nuanced''' or complex. Consider the following examples:
* '''Example 1.''' While questions about '''household size''' seem straightforward, the answer can differ depending on which definition of '''household member''' is used. Without further clarification, '''household member''' can include one or all of the following: anyone currently living in the household, anyone who has lived in the household for more than 6 of the last 12 months, domestic workers, children who are studying in another location but are economically dependent on the household, and adults who live in another location but send monthly remittances.


* '''Example 2.''' While it may seem that all respondents should know their age, '''age''' can be difficult to measure if respondents do not have birth certificates, do not know their birth year, or are '''innumerate'''.
To deal with such cases, the [[Impact Evaluation Team|research team]] can take the following steps:
* Pay careful attention during the [[Survey Pilot|survey pilot]], and identify questions that may be hard to understand for the respondent.


* Adjust the questionnaire accordingly, and [[Enumerator Training|train enumerators]] to resolve these concerns.


* Change the wording of the questions so the meaning is clear, and include definitions within the questionnaire where necessary, so that all respondents interpret each question in the same way.
 
=== Recall bias and estimation issues ===
[[Recall Bias|Recall bias]] arises when a question asks '''respondents''' to '''estimate''' or '''recollect''' events that happened in the past; in such cases, the answers can be inaccurate or incomplete. The [[Impact Evaluation Team|research team]] should therefore be careful of '''recall bias''' when asking questions related to income, consumption of food, health-related expenditures, profits, and so on.
 
To avoid '''recall bias''', use the following strategies:
* '''Avoid the question.''' Ask a different question, and use objective indicators as much as possible. For example, instead of asking a '''respondent''' the size of their agricultural plot, the '''research team''' can use [[Geo Spatial Data|geospatial data]] to measure the plot area directly. Similarly, the research team can use [[Administrative and Monitoring Data|administrative data]] where possible, instead of asking '''respondents''' about their income over the previous year.
 
* '''Consistency checks.''' Build '''consistency checks''' into the instrument. For example, if a household reports consuming more of its own harvest than it reports harvesting, at least one of the two answers is likely affected by recall error. In such a case, the '''enumerator''' can check with the '''respondent''' in real time and correct the data.
 
* '''Multiple measurements.''' Measure the same indicator in multiple ways. For example, ask both the time taken to reach the nearest grocery store and the distance to the store. The '''enumerator''' can then compare the two answers, and if they are inconsistent, verify with the '''respondent''' in real time (see the sketch after this list).
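Checks like these are typically programmed directly into the survey software as constraints, but the underlying logic can be sketched in plain code. The following minimal Python sketch illustrates the two checks described above; the variable names, example values, and plausibility threshold are hypothetical.

<syntaxhighlight lang="python">
# Minimal sketch of two real-time checks; variable names and thresholds are hypothetical.

def check_harvest_vs_consumption(kg_harvested, kg_consumed_from_own_harvest):
    """Consistency check: a household cannot consume more of its own harvest than it produced."""
    if kg_consumed_from_own_harvest > kg_harvested:
        return "Flag: consumption from own harvest exceeds reported harvest - please re-ask."
    return "OK"

def check_time_vs_distance(minutes_to_store, km_to_store, max_plausible_kmh=60):
    """Multiple measurements: reported travel time and distance should imply a plausible speed."""
    if minutes_to_store <= 0:
        return "Flag: travel time must be positive."
    implied_speed = km_to_store / (minutes_to_store / 60)
    if implied_speed > max_plausible_kmh:
        return f"Flag: implied speed of {implied_speed:.0f} km/h seems implausible - please verify."
    return "OK"

print(check_harvest_vs_consumption(kg_harvested=200, kg_consumed_from_own_harvest=350))
print(check_time_vs_distance(minutes_to_store=5, km_to_store=20))
</syntaxhighlight>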
 
'''Note:''' Keep in mind that objective indicators are often more expensive to measure, and it may not always be possible to measure them. In these cases, the research team should do the following to ensure [[Monitoring Data Quality|data quality]]:
* Perform internal checks like [[Back Checks|back checks]] and [[High Frequency Checks|high frequency checks]] during [[Primary Data Collection|data collection]] (see the sketch after this list).
* Measure the same indicator multiple times, where feasible. This allows the research team to check if data is consistent across multiple rounds of '''data collection'''.
* Use '''contextual references''', like general trends in the study area, to get a reasonable estimate.
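As a rough illustration of what such internal checks can look like once data starts arriving, the minimal Python sketch below flags duplicate IDs and implausible values in a small batch of submissions; the column names, plausible ranges, and example records are hypothetical.

<syntaxhighlight lang="python">
# Minimal sketch of simple high-frequency checks on incoming survey data.
# Column names, plausible ranges, and example records are hypothetical.
from collections import Counter

submissions = [
    {"hh_id": "HH001", "household_size": 5, "monthly_income": 120},
    {"hh_id": "HH002", "household_size": 0, "monthly_income": 95},   # implausible size
    {"hh_id": "HH001", "household_size": 4, "monthly_income": 110},  # duplicate ID
]

# Check 1: duplicate unique IDs.
id_counts = Counter(row["hh_id"] for row in submissions)
duplicates = [hh_id for hh_id, n in id_counts.items() if n > 1]
if duplicates:
    print(f"Duplicate IDs to investigate: {duplicates}")

# Check 2: values outside a plausible range.
for row in submissions:
    if not 1 <= row["household_size"] <= 30:
        print(f"{row['hh_id']}: household_size = {row['household_size']} looks implausible.")
</syntaxhighlight>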
 
=== Sensitive questions ===
Some questions deal with [[Sensitive Topics|sensitive topics]] or issues that are considered socially undesirable. For example, '''respondents''' may have reasons not to answer, or to hide information on, topics like drug usage, alcohol consumption, sexual activity, or violent behavior. This can lead to bias in the [[Primary Data Collection|collected data]], and the size and direction of this bias can be hard to predict.
 
When asking questions on '''sensitive topics''', keep the following in mind:
* '''Guarantee anonymity.''' The '''enumerators''' should guarantee anonymity and confidentiality when sharing the [[Informed Consent | informed consent]] section of the instrument with the '''respondents'''.  
* '''Build comfort.''' The '''enumerators''' should first establish a level of comfort with '''respondents''' before moving on to the '''sensitive topics'''.
* '''Re-frame the questions.''' Consider asking the question in the third person, and frame questions in a manner that does not make an activity sound socially undesirable.
* '''Self-administration.''' Consider the option of '''respondents''' answering certain '''modules''' themselves, instead of answering them through the '''enumerator'''.
* '''Different methods.''' Consider using methods like the [[Randomized Response Technique | randomized response technique]] and [[List Experiments | list experiments]]. These can be useful for eliciting answers to questions that deal with '''sensitive topics''' (see the sketch after this list).
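To make the last point concrete, the following minimal Python sketch simulates one common variant of the [[Randomized Response Technique|randomized response technique]] (a forced-response design) and shows how the prevalence of a sensitive behavior can be recovered from the anonymized answers. The probabilities, sample size, and true prevalence are illustrative assumptions, not recommendations.

<syntaxhighlight lang="python">
# Minimal simulation of a forced-response randomized response design.
# Probabilities, sample size, and true prevalence are illustrative.
import random

random.seed(0)

P_TRUTH = 0.6       # respondent answers the sensitive question truthfully
P_FORCED_YES = 0.2  # respondent says "yes" regardless of the truth
TRUE_PREVALENCE = 0.15
N = 10_000

yes_answers = 0
for _ in range(N):
    has_trait = random.random() < TRUE_PREVALENCE
    draw = random.random()
    if draw < P_TRUTH:
        answer_yes = has_trait          # truthful answer
    elif draw < P_TRUTH + P_FORCED_YES:
        answer_yes = True               # forced "yes"
    else:
        answer_yes = False              # forced "no"
    yes_answers += answer_yes

observed_share = yes_answers / N
# E[observed share] = P_TRUTH * prevalence + P_FORCED_YES, so invert:
estimated_prevalence = (observed_share - P_FORCED_YES) / P_TRUTH
print(f"Estimated prevalence: {estimated_prevalence:.3f} (true value: {TRUE_PREVALENCE})")
</syntaxhighlight>

Because the enumerator never learns whether a given "yes" was truthful or forced, individual respondents retain plausible deniability while the aggregate prevalence can still be estimated.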


=== Abstract concepts ===
It is also hard to measure '''abstract concepts''' such as [[Measuring Empowerment|empowerment]], risk aversion, social cohesion, or bargaining power. Some of these aspects may be defined differently across cultures, and it might be hard for the [[Impact Evaluation Team|research team]] to identify a definition that works for a particular context.


In order to measure '''abstract concepts''', use the following strategies:
* '''Define the concept.''' If a particular concept is hard to define, [[Literature Review for Questionnaire|review existing studies]] to check whether a previous study has already developed a suitable definition. If not, consult local experts and partner organizations to come up with a definition that matches the context. Resources that can help with measuring '''abstract concepts''' like '''empowerment''' include:
**[https://www.povertyactionlab.org/ J-PAL] has created a [https://www.povertyactionlab.org/practical-guide-measuring-womens-and-girls-empowerment-impact-evaluations guide on measuring women and girls' empowerment in impact evaluations].
** [https://www.ifpri.org/ IFPRI] has created the [https://www.ifpri.org/publication/womens-empowerment-agriculture-index Women's Empowerment in Agriculture Index (WEAI)], along with an [https://www.ifpri.org/publication/instructional-guide-abbreviated-womens-empowerment-agriculture-index-weai instructional guide] that explains the methodology and the steps to adapt the WEAI to local contexts.
** [https://policy-practice.oxfam.org.uk/ Oxfam] has also created a [https://policy-practice.oxfam.org.uk/publications/a-how-to-guide-to-measuring-womens-empowerment-sharing-experience-from-oxfams-i-620271 'how-to' guide to measuring women's empowerment].


* '''Choose outcome.''' After finalizing a definition, choose the outcome that can measure that concept.
* "Can you read?" ''Answer choices'': yes, no
* "Can you please read me this sentence?" [Enumerators holds up card with a sentence written in the local language]. ''Answer choices:'' read sentence correctly, read sentence with some errors, unable to read sentence.
The second option, a more objective measure, is always preferable.


* '''Design a measure.''' Finally, design a good measure for that outcome, and test it thoroughly during the [[Survey Pilot|survey pilot]]. For example, consider a question like "Do you and your partner consult each other when making decisions about your child?" - an answer of '''Yes''' can serve as one measure of '''empowerment''' (see the sketch after this list).
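Individual survey items that measure an abstract concept are often combined into a simple index. The following minimal Python sketch averages a few hypothetical yes/no decision-making items into an index between 0 and 1; the items and the equal weighting are illustrative only, and validated scales such as the WEAI use more elaborate aggregation methods.

<syntaxhighlight lang="python">
# Minimal sketch: combine yes/no decision-making items into a simple 0-1 index.
# The items are hypothetical and equal-weighted; validated scales use richer methods.

def empowerment_index(responses):
    """responses: dict mapping item name to 1 (yes) or 0 (no)."""
    return sum(responses.values()) / len(responses)

respondent = {
    "consulted_on_child_decisions": 1,
    "can_spend_own_earnings": 1,
    "participates_in_large_purchases": 0,
}
print(f"Index: {empowerment_index(respondent):.2f}")  # 0.67
</syntaxhighlight>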


== Final Instrument ==
After completing the '''draft''' stage, the [[Impact Evaluation Team|research team]] must plan the next steps in the '''questionnaire design''' workflow - [[Piloting Survey Content|conduct a content-focused pilot]], [[Questionnaire Programming|program the instrument]], [[Survey Pilot#Stages of a Survey Pilot|conduct a data-focused pilot]], and finally, [[Questionnaire Translation|translate the instrument]].


Once these steps are complete, the '''research team''' can save and store the '''final version''' of the instrument. Note that the '''final version''' is not the [[Questionnaire Programming|programmed version]]. The following are some of the '''best practices''' for the '''final version''' of the instrument: 
* '''Informed consent.''' Start the instrument with an [[Informed Consent|informed consent section]]. Make sure that the '''interview''' ([[Computer-Assisted Personal Interviews (CAPI)|CAPI]] or [[Remote Surveys#Phone Surveys (CATI)|CATI]]) cannot continue if the '''respondent''' refuses to participate in the '''interview'''.


* '''Unique ID.''' Identify each '''respondent''' and each completed instrument with a [[ID_Variable_Properties|unique ID]].


* '''Excel workbook.''' Store the '''final version''' as an Excel workbook, with one tab for each '''module'''. Keep one main tab that contains all the '''modules''', which will be used for [[Questionnaire Programming|programming]], and link the question cells in each module tab to the main tab.


* '''Introductory script.''' Write an introductory script for each '''module''', to guide the flow of the '''interview'''. For example - "Now I would like to ask you some questions about your relationships. I do not want to invade your privacy. We are simply trying to learn how to make young people’s lives safer and happier. We request you to be open to our questions, because for our work to be useful to anyone, we need to understand the reality of young people’s lives. Remember that all your answers will be kept strictly confidential."
* '''Correctly code answer choices.''' All questions should have answer choices that are correctly and consistently '''coded'''. For example, throughout the instrument, use '''-9''' for '''"Other"''', '''-8''' for '''"Do not know"''', and '''-7''' for '''"Prefer not to answer"'''. The answer choices should also be '''complete''', that is, they must cover all possible responses that can exist for a question (see the sketch after this list).


* '''Provide helpful hints.''' Include hints wherever necessary to help the '''enumerator'''. These hints should typically appear in italics to clarify that they are not part of the question that is read to the '''respondent'''. For example, consider the question "For how many months did you work in the last 12 months?". In this case, the hint can appear as follows: ''Hint: Enumerator, if less than 1 month, round up to 1.''
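One way to keep the coding consistent is to define the shared special codes once and reuse them when programming and checking every question. The following minimal Python sketch uses the -9/-8/-7 convention described above together with a simple validation pass; the question names, choice lists, and example responses are hypothetical.

<syntaxhighlight lang="python">
# Minimal sketch of consistently coded answer choices and a simple validation pass.
# The question names, choice lists, and example responses are hypothetical.

SPECIAL_CODES = {-9: "Other", -8: "Do not know", -7: "Prefer not to answer"}

CHOICES = {
    "water_source": {1: "Piped water", 2: "Well", 3: "Surface water", **SPECIAL_CODES},
    "roof_material": {1: "Thatch", 2: "Iron sheets", 3: "Tiles", **SPECIAL_CODES},
}

responses = {"water_source": 2, "roof_material": 5}  # 5 is not a valid roof code

for question, value in responses.items():
    if value not in CHOICES[question]:
        print(f"{question}: code {value} is not in the defined choice list.")
</syntaxhighlight>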


== Related Pages ==
[[Special:WhatLinksHere/Questionnaire Design|Click here]] for pages that link to this topic.


== Additional Resources ==
 
* David L. Vannette (Stanford University), [https://iriss.stanford.edu/sites/g/files/sbiybj6196/f/questionnaire_design_1.pdf Questionnaire Design: Theory and Best Practices]
* David McKenzie (World Bank), [http://blogs.worldbank.org/impactevaluations/three-new-papers-measuring-stuff-difficult-measure Three New Papers Measuring Stuff that is Difficult to Measure]
* DIME Analytics (World Bank), [https://osf.io/357uv Design and Pilot a Survey]
* DIME Analytics (World Bank), [https://osf.io/u5evr Engaging with Data Collectors]
* DIME Analytics (World Bank), [https://osf.io/aqv2g Overview: Working with Survey Firms]
* DIME Analytics (World Bank), [https://osf.io/63uv9/ Survey Guidelines]
* DIME Analytics (World Bank), [https://osf.io/ezm68 Overview of SurveyCTO at the World Bank]
* Diva Dhar (JPAL-IFMR), [https://www.povertyactionlab.org/sites/default/files/documents/Instrument%20Design_Diva_final.pdf Instrument Design 101]
* Grosh and Glewwe (World Bank), [http://documents.worldbank.org/curated/en/452741468778781879/pdf/multi-page.pdf Designing Household Survey Questionnaires for Developing Countries: Lessons from 15 Years of the Living Standards Measurement Study]
* IPL, [https://immigrationlab.org/project/whatsappsurveys/ Low-Cost, Automated WhatsApp Surveys]
* LSMS Guidebook 2021 (World Bank), [https://documents1.worldbank.org/curated/en/381751639456530686/pdf/Capturing-What-Matters-Essential-Guidelines-for-Designing-Household-Surveys.pdf Capturing What Matters: Essential Guidelines for Designing Household Surveys]
* Simone Lombardini (Oxfam UK), [http://policy-practice.oxfam.org.uk/blog/2017/02/real-geek-faq-how-can-i-measure-household-income FAQ - How can I measure household income (Part 1)]
* SurveyCTO, [https://drive.google.com/file/d/1supgBxfpDaGMbJEFCAOFQ8T84aHEWBip/view Successful Web Form Deployment]
* T Hlaka (MERL Tech), [https://merltech.org/how-to-responsibly-collect-or-acquire-data-for-me/ How to responsibly collect or acquire data for M&E]
* Zezza et al. (World Bank, FAO, IFPRI), [https://www.sciencedirect.com/science/article/pii/S0306919217306802?via%3Dihub Measuring food consumption and expenditures in household consumption and expenditure surveys (HCES)]
[[Category: Primary Data Collection]]
