Field Surveys

Revision as of 21:06, 25 March 2020

Field surveys (also called survey instruments or questionnaires) are one of the most common methods researchers use for primary data collection. Where secondary data sources do not provide sufficient information, field surveys allow researchers to better monitor and evaluate the impact of field experiments. For example, consider a study that aims to evaluate the impact of micro-loans on farm output in a small village. Data on farm output for the last 10 years may be unavailable or insufficient. In this case, researchers can conduct a survey among local farmers to collect data on farmer incomes and farm outputs.

Read First

  • Primary data is vital for conducting empirical inquiry in the field of development economics.
  • Surveys can be conducted either directly by the research team or indirectly through a survey firm.
  • With the increasing availability of specialized survey firms, ODK-based CAPI tools, and standardized field management practices, it is important for researchers to follow certain best practices when gathering data.

Preparing a Survey

The process of preparing surveys involves multiple stages, such as drafting, piloting, programming, and translating, with clearly defined timelines for each step.

Pre-pilot and draft

The pre-pilot and draft stage of implementing a survey starts by defining rules and guidelines in the form of survey protocols. Clear protocols ensure that fieldwork is carried out consistently across teams and regions, and are important for reproducible research.

The pre-pilot, which is the first component of piloting a survey, involves answering qualitative questions about the following:

  • Selection of respondents.
  • Tracking mechanism.
  • Number of revisits.
  • Dropping and replacement criteria.
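These pre-pilot decisions can also be written down in machine-readable form, so that field teams and any data-processing code work from one set of rules. The sketch below is a minimal illustration in Python; the field names and values are hypothetical, not a standard schema.

```python
# Hypothetical pre-pilot protocol: field names and values are
# illustrative only, not a standard schema.
protocol = {
    "respondent_selection": "household head, or spouse if head is absent",
    "tracking_mechanism": "unique household ID assigned at listing",
    "max_revisits": 3,  # revisit a household up to 3 times
    "replacement_rule": "replace only after max_revisits failed attempts",
}

def can_revisit(attempts_so_far: int) -> bool:
    """Return True if the enumerator should revisit the household."""
    return attempts_so_far < protocol["max_revisits"]

def should_replace(attempts_so_far: int) -> bool:
    """Return True once the dropping/replacement criterion is met."""
    return attempts_so_far >= protocol["max_revisits"]
```

Writing the rules down once, in one place, avoids the situation where different field teams apply different revisit or replacement criteria.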

Based on the pre-pilot, the research team designs a questionnaire to generate a first draft. For this purpose, avoid starting from scratch; instead, use existing studies and questionnaires as a point of reference.

Content-focused pilot

The next step is to conduct a content-focused pilot. This stage involves answering questions about the structure and content of the questionnaire. Global best practices recommend conducting the content-focused pilot on paper, which makes it easier to revise and refine the survey instrument.

It is equally important to simultaneously pilot survey protocols like scheduling, software support, infrastructure, and sampling methods. DIME Analytics has created checklists to assist with this step.

Program instrument

Once the questionnaire content and design have been finalized, the next step is to program the survey instrument. Programming before the questionnaire design is final wastes crucial time and resources on repeated back-and-forth revisions.

While there are various tools to do this, [[Computer-Assisted Personal Interview (CAPI)|computer-assisted personal interviews (CAPI)]] are the most widely used. Researchers must set aside 2-3 weeks for programming, and another 2-3 weeks for testing and debugging.
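As a loose illustration of what programming an instrument involves, the sketch below encodes a few questions with skip logic in plain Python. The question names and structure are invented for illustration and do not follow any specific CAPI tool's format.

```python
# Illustrative programmed instrument with skip logic. Question names
# and the dict structure are hypothetical, not a real tool's format.
questions = [
    {"name": "owns_farm", "text": "Does the household operate a farm?",
     "type": "yes_no"},
    {"name": "farm_area", "text": "Farm area in hectares?",
     "type": "number",
     # only relevant if the household operates a farm
     "relevant": lambda answers: answers.get("owns_farm") == "yes"},
    {"name": "income", "text": "Household income last month?",
     "type": "number"},
]

def next_questions(answers: dict) -> list:
    """Return the names of questions to display, applying skip logic."""
    shown = []
    for q in questions:
        relevant = q.get("relevant", lambda a: True)
        if relevant(answers):
            shown.append(q["name"])
    return shown
```

Skip logic like this is exactly the kind of behavior that needs the 2-3 weeks of testing and debugging mentioned above: every branch must be checked against realistic answer patterns.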

Data-focused pilot

Before conducting a data-focused pilot, the research team must procure a survey firm and sign a contract with the selected firm. The data-focused pilot tests the following:

  • Survey design. Re-check the design of the survey. Check if comments from earlier reviews have been resolved.
  • Interview flow. Re-check the time taken for each module of the interview. Ensure that the interview flows well, and that both the interviewer and respondent are clear about each question.
  • Survey programming. Check if questions display correctly. Check if modules need re-ordering. Ensure that built-in data checks are working.
  • Data. Check whether all variables appear correctly. Check for missing data. Check for variance in data.
  • High frequency checks. Monitor data quality through high frequency checks.
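Several of the data checks above can be partially automated. The sketch below, using only plain Python, flags missing values and zero-variance variables in pilot records; the record structure and variable names are hypothetical.

```python
# Illustrative data-quality checks for a data-focused pilot: flag
# missing values and variables with no variance. Variable names and
# the record format (list of dicts) are hypothetical.
def pilot_checks(records: list, variables: list) -> dict:
    """For each variable, report the count of missing values and
    whether the variable varies at all across pilot records."""
    report = {}
    for var in variables:
        values = [r.get(var) for r in records]
        non_missing = [v for v in values if v is not None]
        report[var] = {
            "n_missing": len(values) - len(non_missing),
            "has_variance": len(set(non_missing)) > 1,
        }
    return report
```

Checks like these, run after every day of piloting, are a simple form of the high frequency checks mentioned above.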

DIME Analytics has also developed a checklist for refining data during the data-focused pilot.


Translate the questionnaire

The next step is to translate the questionnaire. A translation is considered good or complete only when enumerators and respondents share the same understanding of each question. Since questionnaires typically go through repeated rounds of translation, the research team should establish clear version control norms.
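One simple way to implement version control norms for repeated translation rounds is a consistent file-naming convention. The helper below is a hypothetical example of such a convention, not an established standard.

```python
# Hypothetical naming convention for questionnaire translations:
# instrument name, language code, and zero-padded version number.
def versioned_filename(instrument: str, language: str, version: int) -> str:
    """Build a consistent filename so every translation round is traceable."""
    return f"{instrument}_{language}_v{version:02d}.xlsx"
```

With a convention like this, the team can always tell which translation round a file belongs to, instead of juggling copies named "final", "final2", and so on.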

Train enumerators

The final step before launching the survey is enumerator training. The research team must complete the previous steps before enumerator training starts. This minimizes enumerator effects, which arise when different enumerators ask or translate the same question in different ways across respondents.

Survey Launch

After preparing the survey, the research team launches the final instrument for field work. This step completes the process of data collection, and sets the stage for the next step in conducting a field evaluation, analysis.


Challenges

The process of designing and implementing field surveys comes with its own set of challenges. These include:

  • Measurement challenges. Some questions are sensitive, or aim to measure concepts that are hard to quantify. For instance, while employee satisfaction is important for studies assessing labor-market conditions, it is also very hard to measure objectively.
  • Data-quality assurance. Without a clear plan for monitoring incoming data, such as high frequency checks, errors can go undetected until fieldwork is over.
  • Translation errors. If translation is poor or incomplete, the data that is collected might be incorrect, or insufficient.
  • Gaps in enumerator training. This can also lead to improper data collection, and can therefore hamper the results of an evaluation.
