<onlyinclude>
'''Field surveys''' are one of the most commonly used methods of [[Primary Data Collection|primary data collection]]. In cases where [[Secondary Data Sources|secondary sources of data]] do not provide sufficient information, '''field surveys''' allow researchers to monitor and evaluate the impact of field experiments. For example, consider a study that aims to evaluate the impact of micro-loans on farm output in a small village. If data on farm output for the last 10 years is unavailable or insufficient, researchers can conduct a '''field survey''' among local farmers to [[Primary Data Collection|collect data]] on farmer incomes and farm outputs.
 
</onlyinclude>
== Read First ==
* '''Primary data''' is vital for conducting empirical inquiry in the field of development economics.
* The research team can either conduct the survey directly, or indirectly through a [[Survey Firm|survey firm]].
* Researchers conduct surveys by asking respondents to answer a '''survey instrument''' (questionnaire).
* With increasing availability of specialized [[Survey Firm|survey firms]], ODK-based tools like [[Computer-Assisted Personal Interviews (CAPI)|CAPI]], and standardized [[Field Management|field management practices]], it is important for researchers to follow certain best practices to gather data.
 
== Feasibility ==
'''Field surveys''' allow research teams to collect data through in-person interviews. These can either be done in the form of [[Computer-Assisted Personal Interviews (CAPI)|computer-assisted personal interviews (CAPI)]] or [[Computer-Assisted Field Entry (CAFE)|computer-assisted field entry (CAFE)]]. One of the biggest advantages of field surveys over other methods is that they allow for human interaction. This can result in higher response rates compared to other forms of data collection (like [[Remote Surveys#Phone Surveys|phone surveys]]), and improve the quality of collected data.
 
However, in some situations in-person data collection is not feasible, and the research team must consider other options like [[Remote Surveys|remote surveys]]. Examples include studies in areas that are physically hard to reach, or conflict areas that are unsafe for fieldwork.


== Preparation ==
If the research team decides that conducting a field survey is feasible, it can move on to [[Preparing for Field Data Collection|preparing for field surveys]]. This process involves multiple stages, such as [[Questionnaire Design|drafting]], [[Survey Pilot|piloting]], [[Questionnaire Programming|programming]], and [[Questionnaire Translation|translating]], with clearly defined [[Survey_Pilot#Timeline|timelines]] for each step.


=== Pre-pilot and draft ===
The '''pre-pilot and draft stage''' of implementing a survey starts by defining rules and guidelines in the form of [[Survey Protocols|survey protocols]]. Clear protocols ensure that fieldwork is carried out consistently across teams and regions, and are important for [[Reproducible Research|reproducible research]].


The '''pre-pilot''', which is the first component of [[Survey Pilot|piloting a survey]], involves answering '''qualitative''' questions about the following:
*Selection of respondents.
*Tracking mechanism.
*Number of revisits.
*Dropping and replacement criteria.
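
Decisions like the number of revisits and the replacement criteria are easiest to enforce in the field if they are written down unambiguously. As a purely illustrative sketch (the function, field names, and threshold below are hypothetical, not a standard), such rules can even be encoded as a simple check:

```python
# Hypothetical encoding of pre-pilot fieldwork rules: revisit a respondent
# up to MAX_REVISITS times, then replace them from a pre-drawn backup list.
MAX_REVISITS = 3

def next_action(attempted_visits: int, completed: bool) -> str:
    """Decide what the field team should do with a sampled respondent."""
    if completed:
        return "done"
    if attempted_visits < MAX_REVISITS:
        return "revisit"
    return "replace"  # draw the next respondent from the replacement list

# Example: two failed visits -> try again; three failed visits -> replace.
print(next_action(2, completed=False))  # revisit
print(next_action(3, completed=False))  # replace
```

Writing the rule as one shared function (rather than leaving it to each field coordinator's judgment) is what makes fieldwork consistent across teams.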
Based on the pre-pilot, the research team [[Questionnaire Design|designs]] a questionnaire to generate a first '''draft'''. Rather than starting from scratch, it helps to use existing studies and questionnaires as a point of reference.


=== Content-focused pilot ===
The next step is to conduct a [[Checklist:_Content-focused_Pilot|content-focused pilot]]. This stage involves answering questions about the '''structure''' and '''content''' of the questionnaire. Global best practices recommend conducting the content-focused pilot on paper, which makes it easier to revise and refine the survey instrument. It is equally important to simultaneously [[Checklist: Piloting Survey Protocols|pilot survey protocols]] like interview scheduling, infrastructure, and sampling.


=== Programming ===
Once the questionnaire content and design have been finalized, the next step is to [[Questionnaire Programming|program the survey instrument]]. Researchers should not program the instrument before finalizing the design of the questionnaire; otherwise they will waste crucial time and resources going back and forth. While there are various tools for this, [[SurveyCTO Coding Practices|SurveyCTO]] is the most widely used. Researchers must set aside 2-3 weeks for '''coding the instrument''', and another 2-3 weeks for '''testing and debugging'''.
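
SurveyCTO and other ODK-based tools define instruments in an XLSForm-style spreadsheet, whose "survey" sheet has columns like `type`, `name`, and `label`. The sketch below writes a minimal, hypothetical survey sheet as CSV; the questions and constraint expressions are illustrative only, not taken from any real instrument:

```python
import csv

# Minimal XLSForm-style "survey" sheet (hypothetical questions). Constraints
# use the XLSForm convention where "." refers to the current answer.
rows = [
    {"type": "integer", "name": "farm_output_kg",
     "label": "What was your farm output last season (kg)?",
     "constraint": ". >= 0 and . < 100000", "required": "yes"},
    {"type": "select_one yes_no", "name": "received_loan",
     "label": "Did you receive a micro-loan this year?",
     "constraint": "", "required": "yes"},
]

with open("survey_sheet.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
```

Built-in constraints like the range check above are what catch impossible answers at the moment of the interview, rather than weeks later during data cleaning.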


=== Data-focused pilot ===
Before conducting a '''data-focused pilot''', the research team must [[Procuring a Survey Firm|procure a survey firm]] and sign a contract with the selected firm. The data-focused pilot tests the following:
* '''Survey design.''' Re-check the design of the survey, and check whether comments from earlier reviews have been resolved.
* '''Interview flow.''' Re-check the time taken for each module of the interview. Ensure that the interview flows well, and that both the interviewer and the respondent are clear about each question.
* '''Survey programming.''' Check that questions display correctly and whether modules need re-ordering. Ensure that built-in data checks are working.
* '''Data.''' Check that all variables appear correctly, and check for missing data and for variance in the data.
* '''High frequency checks.''' [[Monitoring Data Quality|Monitor data quality]] through high frequency checks.
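
High frequency checks are typically run daily on incoming submissions. The sketch below, using only the standard library and hypothetical variable names, flags missing values and zero-variance variables, two of the checks listed above:

```python
from statistics import pvariance

# Incoming submissions (hypothetical variables); None marks a missing answer.
submissions = [
    {"hh_size": 4, "farm_output_kg": 120},
    {"hh_size": 4, "farm_output_kg": None},
    {"hh_size": 4, "farm_output_kg": 300},
]

def high_frequency_checks(rows, variables):
    """Return {variable: list of flags} for missing data and zero variance."""
    flags = {}
    for var in variables:
        values = [r.get(var) for r in rows]
        issues = []
        n_missing = sum(v is None for v in values)
        if n_missing:
            issues.append(f"{n_missing} missing")
        observed = [v for v in values if v is not None]
        if len(observed) > 1 and pvariance(observed) == 0:
            issues.append("no variance")  # every answer identical: suspicious
        flags[var] = issues
    return flags

print(high_frequency_checks(submissions, ["hh_size", "farm_output_kg"]))
# {'hh_size': ['no variance'], 'farm_output_kg': ['1 missing']}
```

In practice teams run many more checks (duplicates, outliers, skip-logic violations), but the pattern is the same: a script that turns each day's raw submissions into a short list of issues for the field team.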


[https://www.worldbank.org/en/research/dime/data-and-analytics DIME Analytics] has also created the following checklists to assist with piloting:
* [[Preparing_for_the_survey_checklist|Checklist: Preparing for a survey pilot]]
* [[Checklist:_Content-focused_Pilot|Checklist: Refining questionnaire content]]
* [[Checklist:_Data-focused_Pilot|Checklist: Refining the questionnaire data]]
=== Translate ===
The next step is to [[Questionnaire Translation|translate the questionnaire]]. Translation is considered '''good''' or '''complete''' only when enumerators and respondents have the same understanding of each question. Since questionnaires often go through repeated rounds of translation, the research team must also follow clear '''version control''' norms.
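
What counts as adequate version control will vary by team, but even a simple, consistently enforced file-naming convention helps keep translations in sync. The pattern below is a hypothetical example of such a norm, not a DIME standard:

```python
import re

# Hypothetical norm: questionnaire files are named like
# "questionnaire_v12_fr.xlsx" (version number, then two-letter language code).
PATTERN = re.compile(r"^questionnaire_v(\d+)_([a-z]{2})\.xlsx$")

def check_filename(filename: str):
    """Return (version, language) if the name follows the norm, else None."""
    match = PATTERN.match(filename)
    if match is None:
        return None
    return int(match.group(1)), match.group(2)

print(check_filename("questionnaire_v12_fr.xlsx"))  # (12, 'fr')
print(check_filename("final_FINAL_new.xlsx"))       # None
```

A check like this makes it immediately obvious which translation corresponds to which version of the source instrument.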


=== Train enumerators ===
The final step before '''launching''' the survey is [[Enumerator Training|enumerator training]]. The research team must complete all of the previous steps before enumerator training starts. This minimizes '''enumerator effects''', which arise when different enumerators ask the same question in different ways, for example because they work from different translations of the same question.
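
Enumerator effects can also be monitored once data starts coming in, by comparing response distributions across enumerators. Below is a minimal sketch (enumerator IDs, data, and the tolerance threshold are all hypothetical) that flags enumerators whose average response deviates strongly from the overall mean:

```python
from statistics import mean

# Hypothetical answers to one numeric question, grouped by enumerator.
responses = {
    "enum_01": [3, 4, 3, 4, 3],
    "enum_02": [4, 3, 4, 3, 4],
    "enum_03": [1, 1, 1, 2, 1],  # suspiciously low: possible enumerator effect
}

def flag_enumerators(data, tolerance=1.0):
    """Flag enumerators whose mean answer differs from the overall mean
    by more than `tolerance` (a deliberately simple rule of thumb)."""
    overall = mean(v for values in data.values() for v in values)
    return [e for e, values in data.items()
            if abs(mean(values) - overall) > tolerance]

print(flag_enumerators(responses))  # ['enum_03']
```

A flag like this does not prove an enumerator is doing anything wrong, since their assigned respondents may genuinely differ, but it tells the team whom to re-train or accompany on the next round of interviews.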
 
== Survey Launch ==
After preparing the survey, the research team '''launches''' the final instrument for fieldwork. Completing fieldwork concludes the process of '''data collection''', and sets the stage for the next step in a field evaluation: [[Main Page|analysis]].


== Challenges ==
The process of implementing field surveys comes with its own set of challenges:
* '''Measurement challenges.''' Some questions are sensitive, or aim to measure things that are hard to quantify. For instance, while employee satisfaction is very important for studies that assess labor-market conditions, it is also very hard to measure objectively.
* '''Data-quality assurance.''' It is very important to have a [[Data Quality Assurance Plan|data quality assurance plan]]. Researchers must ensure that field coordinators and enumerators follow this plan to avoid problems during the subsequent steps of [[Data Cleaning|data cleaning]] and [[Data Analysis|data analysis]].
* '''Translation errors.''' If translation is poor or incomplete, the meaning of the questionnaire can change, resulting in incorrect data.
* '''Gaps in enumerator training.''' These can affect the quality of responses during a survey, and therefore negatively affect the results of a field evaluation.


== Related Pages ==
[[Special:WhatLinksHere/Field_Surveys|Click here for pages that link to this topic]].


== Additional Resources ==
* DIME (World Bank), [https://osf.io/357uv Design and Pilot a Survey]
* DIME (World Bank), [https://osf.io/z45uw Program an Electronic Survey]
* DIME (World Bank), [https://osf.io/qeawx Preparing a Successful Survey]
* DIME (World Bank), [https://osf.io/bptgj Design a Survey]
* DIME (World Bank), [https://osf.io/uzfws Pilot a Survey]
* DIME Analytics (World Bank), [https://osf.io/j8t5f Assuring Data Quality]
* DIME Analytics (World Bank), [https://osf.io/u5evr Engaging With Data Collectors]

[[Category: Primary Data Collection]] [[Category: Questionnaire Design]]

Latest revision as of 18:08, 6 July 2023
