Field surveys (or survey instruments/questionnaires) are one of the most commonly used methods of primary data collection. When secondary data sources are unavailable or insufficient, field surveys allow researchers to monitor and evaluate the impact of field experiments. For example, consider a study that aims to evaluate the impact of micro-loans on farm output in a small village. Data on farm output for the last 10 years may be unavailable or insufficient. In this case, researchers can conduct a survey among local farmers to collect data on farmer incomes and farm outputs.
- Primary data has become the dominant source for empirical inquiry in development economics.
- With increasing access to specialized ODK-based CAPI software, survey firms, and standardized field management practices, there are plenty of useful guidelines on streamlining the process of conducting surveys.
- Surveys involve multiple stages from start to finish, such as drafting, piloting, and programming, each with a clear timeline.
- Surveys can be conducted either directly by the research team or by contracting a survey firm.
Stages of a Survey
Draft and pre-pilot
The draft and pre-pilot stage of implementing a survey involves defining rules and guidelines in the form of survey protocols. Clear protocols ensure that fieldwork is carried out consistently across teams and regions, and are important for reproducible research.
Then a pre-pilot is conducted, which is the first component of piloting a survey. This involves answering qualitative questions about the following:
- Selection of respondents.
- Tracking mechanism.
- Number of revisits.
- Dropping and replacement criteria.
Based on the pre-pilot, the research team designs and drafts a questionnaire. In this process, it helps to use existing studies and questionnaires as a resource, to avoid starting from scratch.
The next step is to conduct a content-focused pilot. This stage involves answering questions about the structure and content of the questionnaire. Global best practices recommend conducting the content-focused pilot on paper, to make it easier to revise and refine the survey instrument.
In this stage, it is equally important to test survey protocols like scheduling, survey infrastructure, and sampling methods. DIME Analytics has created checklists for this process.
Once the questionnaire content and design have been finalized, the next step is to program the survey instrument. Researchers should not program the instrument before the questionnaire design is final; otherwise they risk wasting crucial time and resources going back and forth.
While there are various tools to do this, SurveyCTO is the most widely used. Researchers must set aside 2-3 weeks for programming, and another 2-3 weeks for debugging and testing.
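SurveyCTO instruments are defined in the XLSForm spreadsheet format. As a rough illustration (variable names here are hypothetical, drawn from the micro-loan example above), a minimal survey sheet and choices sheet might look like:

```
survey sheet:
type              name           label                           constraint
integer           farm_output    Farm output last season (kg)    . >= 0
select_one yesno  received_loan  Did you receive a micro-loan?

choices sheet:
list_name   name   label
yesno       1      Yes
yesno       0      No
```

Built-in constraints like `. >= 0` are one of the data checks tested later during the data-focused pilot.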
Before moving forward, the research team must finalize a survey firm. Then the next step is to conduct a data-focused pilot. This step tests the following:
- Survey design and interview flow (re-check design and revisions made earlier)
- Survey programming (check if questions display correctly, and built-in data checks are working)
- Data (whether all variables appear, extent of missing data, variance in the data)
- High-frequency checks
Also refer to the [[Checklist: Refine the Questionnaire (Data)|DIME checklist for refining data]].
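High-frequency checks are typically run on each batch of incoming submissions to catch problems while teams are still in the field. A minimal sketch in Python (this is an illustration, not DIME's actual tooling; the column names `id` and `farm_output_kg` are hypothetical):

```python
# Minimal sketch of daily high-frequency checks on survey submissions.
# Flags duplicate respondent IDs, missing values, and extreme outliers.
from statistics import mean, pstdev

def high_frequency_checks(rows):
    """Return a list of (check_name, respondent_id) flags."""
    flags = []
    seen = set()
    for r in rows:
        if r["id"] in seen:
            flags.append(("duplicate_id", r["id"]))
        seen.add(r["id"])
        if r.get("farm_output_kg") is None:
            flags.append(("missing_value", r["id"]))
    # Outlier check: values more than 3 standard deviations from the mean.
    values = [r["farm_output_kg"] for r in rows
              if r.get("farm_output_kg") is not None]
    mu, sd = mean(values), pstdev(values)
    for r in rows:
        v = r.get("farm_output_kg")
        if v is not None and sd > 0 and abs(v - mu) > 3 * sd:
            flags.append(("outlier", r["id"]))
    return flags

submissions = [
    {"id": "hh01", "farm_output_kg": 120},
    {"id": "hh02", "farm_output_kg": None},  # missing value
    {"id": "hh01", "farm_output_kg": 130},   # duplicate ID
]
print(high_frequency_checks(submissions))
```

In practice, checks like these run automatically on each day's exported data, and flagged cases are sent back to field teams for verification.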
The next step in the process is to translate the questionnaire. A translation is considered good or complete only when enumerators and respondents have the same understanding of each question. Since questionnaires are often translated repeatedly as they are revised, the research team must ensure adequate version control norms.
These steps must be completed before training enumerators. This helps to minimize enumerator effects, which arise when different enumerators ask the same question in different ways, for example by using their own translations of the question.
The process of implementing field surveys comes with its own set of challenges. These include:
- Measurement challenges - Some questions are sensitive, or aim to measure things that are hard to quantify. For instance, while employee satisfaction is very important for studies assessing labor-market conditions, it is also very hard to measure objectively.
- Data-quality assurance - It is very important to have a plan for ensuring data quality during fieldwork, for example through high-frequency checks on incoming data.
- Translation errors - If translation is poor or incomplete, the data that is collected might be incorrect, or insufficient.
- Gaps in enumerator training - Poorly trained enumerators can also collect data improperly, which can hamper the results of an evaluation.