Survey Pilot


GUIDELINES FOR SURVEY PILOTING

What is a survey pilot?

  • A field test of the survey instrument and all survey protocols
  • Piloting is not just about the questionnaire: test all protocols and learn about relevant logistics
  • Piloting is done before Enumerator Training. It is not the same as the field practice all enumerators do at the end of training! It typically involves significant changes to the survey instrument and/or protocols, which should never be made after enumerator training.
  • A complete survey pilot includes 3 stages (see Figure 1)
Figure 1: A Complete Survey Pilot Includes 3 Stages
Stage 1 - Pre-Pilot
  • Purpose: Answer broad questions about survey design and context through qualitative interviews and focus group discussions
  • Instrument: Early, printable draft, and/or qualitative instruments
  • Mode: Pen-and-paper

Stage 2 - Content-focused Pilot
  • Purpose: Refine overall order and structure, wording of specific questions, and translations. Check completeness of answer choice options, response variance, and survey length
  • Instrument: A translated, printable, complete draft
  • Mode: Pen-and-paper

Stage 3 - Data-focused Pilot
  • Purpose: Validate programming, export a sample dataset, check dataset structure and completeness, and test all data quality checks
  • Instrument: A translated, programmed, final draft
  • Mode: Tablet/phone-based

Are all 3 stages of piloting necessary for every survey?

  • Not necessarily. If your survey is …
    • … a brand new survey instrument? 
      • --> always start with Stage 1 – Pre-Pilot
    • … an adaptation of a well-designed questionnaire from a reliable source in the same country?
      • --> may start with Stage 2 – Content-focused pilot
    • … an adaptation of a survey instrument from a previous data collection for the same project?
      • If significant revisions or additions
        • --> start with Stage 2
      • If no major changes
        • --> skip to Stage 3 – Data-focused pilot

Why do a pen-and-paper pilot for a CAPI survey?

  • Paper questionnaires are a more flexible way to record answers and qualitative observations
    • Record open-ended responses (critical for a pilot) more quickly / easily
    • Draw lines and arrows between questions to suggest restructuring
    • Record interviewer observations and feedback in the margins
    • Make notes of questionnaire wording or translation problems directly in the text
  • A good pilot will provide significant inputs to questionnaire design and result in significant changes to content and structure. Changing the programming is time-consuming and can create bugs (e.g. if the order of questions shifts, all skip codes need to be re-programmed)

What is the timeline for a survey pilot?

  • Piloting should start 4-6 months before survey launch. Indicative times are shown in Figure 2; these will vary depending on the complexity of the survey instruments. The earlier you start, the better!
    • Remember: the pilot needs to be done in time for the final questionnaire to get ethical approval
  • Pre-pilot happens before the questionnaire is translated to the local language(s). Good translation is time-consuming (for details see Guidelines for Questionnaire Translation). Making major changes after translation is costly and often leads to version control problems between different languages.
Figure 2: Fitting Piloting into the Overall Survey Timeline
  • Pre-Pilot: 1-2 weeks
  • Questionnaire Design: 4 weeks
  • Questionnaire Translation: 2-3 weeks
  • Content-focused Pilot (& revisions): 2-3 weeks
  • IRB Approval: context dependent; concurrent with programming and the data-focused pilot
  • Questionnaire Programming: 4-6 weeks
  • Data-focused Pilot: 2-3 weeks
  • Enumerator Training: 2 weeks

Beyond the questionnaire, what should be tested during the pilot?

  • All Field Protocols should be tested in Stage 2 – Content-focused pilot. Revise as needed, and then pilot again during Stage 3 – Data-focused pilot. All protocols should be finalized before enumerator training.
  • Interview scheduling
    • Is there a particular time of day when respondents are more / less likely to be available?
    • What is the most effective way to communicate with respondents and schedule appointments?
  • What infrastructure will field teams have access to?
    • Electricity? If blackouts are frequent, are there generators (and fuel)?
    • Will there be any internet access?
    • How is the coverage of mobile phone networks?
  • Does the sampling protocol work well in the field? (for more details on developing the protocol, see Guidelines for Impact Evaluation Sampling)
    • What challenges are enumerators likely to face? What clarifications need to be made?
    • If working from an existing sampling frame, is it up to date?
    • Is the replacement strategy effective?
    • If doing a listing, how much time should be built into the field plan? How will you ensure that no sampling units (e.g. households, firms, traders) are skipped?
  • Geo-data: Will you use the tablet or a separate GPS unit?
    • If using a GPS unit, test the protocol for saving points with filenames that include a unique ID linked to the respondent.
    • How long does it take to lock in a signal? How accurate is the data point?

Who should be involved in a survey pilot?

  • Typically, questionnaire pilots are done before the survey firm is on board.
  • DIME Field Coordinator plays a central role. Ideally other research team members will participate.
  • Include staff from the relevant Government Ministry or implementing partner as much as feasible.
    • If they have experience with surveys, they can conduct interviews. Usually it is best that they accompany skilled interviewers and act as observers.
    • If not directly involved, always update them on revisions made as a result of the pilot.
Pilot Participants and Characteristics
Respondents
  • Never use any respondents in the final sample; neighboring villages / schools / firms are good options
  • Respondents should be as similar as possible to actual respondents: include variation in age, gender, education, and income / socioeconomic status
  • Test the sampling protocol by using it to select pilot respondents; this helps troubleshoot (e.g. finding out the sampling frame is outdated) and reduces bias in selection
  • In the data-focused pilot, test the sampling protocol again, including any revisions based on the previous piloting stage

Interviewers
  • Language
    • Pre-Pilot: fully fluent (reading, writing, and speaking) in the local language and the language of the research team
    • Content-focused Pilot: fully fluent (speaking and reading) in the local language and the language of the research team
    • Data-focused Pilot: fully fluent (speaking) in the local language and the language of the research team
  • Experience
    • Pre-Pilot: experience as a qualitative enumerator; sectoral knowledge
    • Content-focused Pilot: experience as a quantitative enumerator; sectoral knowledge
    • Data-focused Pilot: experience as a quantitative enumerator; experience with tablets
  • Background
    • Seek diversity in age / race / religion / gender / etc. Be mindful of (and learn about) cultural requirements (should women interview women? Can women interview men?)
    • Data-focused Pilot: the interviewer pool should reflect decisions on the composition of the final survey team, based on the previous stage
  • Size of Team
    • Pre-Pilot: very small (~2); always accompanied by a research team member
    • Content-focused Pilot: small (~2-4); ideally each enumerator is accompanied by a research team member or other observer
    • Data-focused Pilot: larger (~4-8); the more interviews, the better for debugging and data checks
  • Who?
    • Pre-Pilot and Content-focused Pilot: typically done before the survey firm is on board, using local research assistants (STCs hired through the WB or a local PI), local government staff, or university students
    • Data-focused Pilot: best case, the survey firm leads, with the staff who will be field supervisors for the upcoming survey acting as interviewers. They gain experience with the instrument, which also helps for enumerator training

Research Team
  • The Field Coordinator participates in all stages
  • The PI should participate directly; if not possible, plan for daily debriefs and discussions
  • The Research Manager or Impact Evaluation Coordinator should participate directly
  • The programmer participates directly in the data-focused pilot (to adapt / debug in real time)

How to structure a survey pilot?

  • Develop a clear protocol for each stage of the pilot and get research team approval
  • Plan sufficient training time for interviewers!
    • Length depends entirely on the complexity of the instrument and survey protocols; plan for at least 1 day
    • Interviewers must be familiar with the instrument and the objectives of the pilot by the end
    • Interviewers will have useful insights and feedback on the survey instrument at the training itself. Plan time to incorporate their feedback / make revisions before starting the actual pilot
    • Build in a minimum of 1 day between training and the start of piloting
    • For data-focused pilots, it is essential to have interviewers do mock interviews with each other to familiarize themselves with the programming and catch any bugs missed in office tests.
  • Check what approvals are needed. Sometimes local IRB approval is required even for a pilot.
    • A letter of support from the relevant Ministry or implementing partner always helps.
  • Plan time for group feedback and discussion sessions at the end of each day.
  • Plan sufficient time to make revisions each evening, and pilot again the next day
    • If logistically feasible, best to pilot every other day (otherwise all-nighters are common)
    • For pen-and-paper pilots, make sure you will have access to a printer to make and share revisions in real time.
      • Depending on context, using research budget to purchase a printer the FC can travel with may be necessary/ cost-effective.
    • Be aware of the need for careful version control: if the survey is in a language the FC doesn't understand, it can be tricky to keep track of daily changes in both the local-language and English versions.
    • Best to work with an assistant who speaks the local language to make the edits in that version of the survey while the FC makes them in the English version.
      • Ideally this is someone other than an enumerator (enumerators should rest so they are fresh for the next day's pilot, and end-of-day edits can run well into the night if piloting resumes the next morning)
  • Pilot until there are no more substantial changes to be made. The field plan will depend on:
    • Extent of changes to be made
    • Availability of printing facilities (if paper pilot) in the area where you are piloting.
    • Where the pilot location is (if far from home-base, and teams are staying in the area, breaking for a full day may not be practical / cost-effective)
  • If you will be ‘pre-loading’ data during the survey (e.g. from a baseline survey), you will need to simulate this during the pilot (see the sketch after this list).
    • If you have reduced the sample size from previous rounds, you can use non-sampled households for whom baseline data exists.
    • Otherwise you may need to do ‘pre-interviews’ one day ahead to collect basic indicators for pre-loading. This is logistically challenging, but very worthwhile!
  • Ideally the survey firm will participate in the data-focused pilot. This has to be clearly specified in the Terms of Reference (for more details see Guidelines on Survey Firm Procurement). Important points:
    • Timeline includes a few days’ pause between the end of the piloting and enumerator training, to incorporate any final revisions
    • Distinguish between the data-focused pilot and the field practice during enumerator trainings.
    • Make sure the timeline for the piloting is clear from first discussions.
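
For the pre-loading point above, here is a minimal Stata sketch of how a preload file might be prepared from existing baseline data. All file and variable names are hypothetical placeholders; check the SurveyCTO documentation for the exact file format and field-name conventions your programmed form expects.

    * Hypothetical sketch: build a preload .csv for the pilot from baseline data.
    * File and variable names are placeholders, not DIME or SurveyCTO conventions.
    use "baseline_data.dta", clear

    * Keep the respondent ID plus only the variables actually pulled into the form
    keep hhid head_name village_name baseline_landsize

    * Rename to match the field names the programmed instrument expects
    rename head_name preload_head_name
    rename village_name preload_village
    rename baseline_landsize preload_landsize

    * SurveyCTO typically reads preload data from a .csv attached to the form
    export delimited using "pilot_preload.csv", replace

Keeping the preload file limited to the ID and the fields the form actually pulls makes it easier to spot ID mismatches during the pilot.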

Tips and Reminders

  • Throughout the questionnaire design process and discussions with the research team, take notes on what needs to be piloted
    • Are there multiple ways of asking questions that should be tested?
    • Should you learn more about how people think to see what flow makes sense?
      • For example, do people think about their input use at plot-level? by crop? overall?
    • Have any questions been flagged as likely sensitive?
  • Encourage interviewers to probe and follow up much more than they would in a typical interview
  • Encourage respondents to think out loud, to understand how they are coming up with their answers.
    • In some cultures, it is appropriate to ask respondents for their feedback at the end of the interview
  • Aim to finish fieldwork early, to have time to debrief
    • You will get better feedback if the team is not exhausted / hungry!
    • Make sure all voices are heard at feedback sessions (especially if there are age / gender / ethnicity differences)
  • Take careful notes during the pilot; the discussions and clarifications will be important to include in the enumerator manual and to discuss at training
  • You will get valuable feedback by observing pilot interviews even if you don’t speak the local language.
    • Bring a copy of the questionnaire in English, with the same question numbers / variable labels, so that you can follow along with the interview.
    • Focus on observing the respondent and the enumerator. If a significant discussion starts over a particular question, it may be a good idea to ask the enumerator to explain (this depends on timing, since an already long interview may not want to be made longer, and on how confident you are that the interviewer will be able to report back the substance of the discussion later)
  • Consider obtaining IRB approval before the pilot (when an IRB is needed)
    • When you are developing new instruments, there are some opportunities for publication even at the piloting level. The PIs will know when this is a possibility. This requires further planning but it can be well worth it.
  • Do a data-focused pilot for your backcheck questionnaire
  • Hiring a local "mobilizer" to coordinate with respondents can facilitate piloting, particularly in urban areas or settings in which people are especially busy. The mobilizer explains the purpose of the survey and gets consent, reducing down time between interviews (though note this may not be consistent with piloting sampling protocols)

CHECKLISTS FOR FIELD COORDINATORS

Preparing for a Survey Pilot

All stages

[] Have you identified a sufficient number of qualified interviewers?

[] Have you trained the interviewers on the survey instrument?

[] Have you identified a comparable area and population for the pilot?

[] Have you secured all approvals / letters of support needed in your context?

[] Has someone contacted the local leaders in the pilot area to inform them of planned survey activity?

[] Will the team be staying overnight in the pilot area? If so, do you have necessary permissions to travel (from government, World Bank, etc)?

[] Has someone taken care of the logistics (e.g. car rentals, meals or per diem for interviewers)?

[] Do you have a venue reserved for training the interviewers?

Pre-Pilot

[] Do you have a set protocol for identifying participants in focus group discussions?

[] Do you have a form prepared for interviewers to record qualitative observations and notes from discussion?

[] Have you tested interviewers’ note-taking abilities during the training and provided feedback on content and handwriting?

[] If you do not speak the local language(s), do you have a tried-and-true research assistant to accompany you to translate?

Content-focused Pilot

[] Is the questionnaire you are piloting fully translated into the local language?

[] Is the instrument formatted for printing? Make sure there are sufficient margins on all pages for notes

[] Did you print enough copies of the instrument for all interviewers and all people accompanying interviews to have a copy for each planned interview?

[] Does the survey instrument include both the research team language and local language (where different)? If not, remember to print out copies in each language as applicable.

[] Do you have access to a printer to print revisions in real time if significant changes are needed?

[] Have you instructed interviewers (or observers) to note start and end time for each module?

Data-focused Pilot

[] Have you office-tested the final version of the programmed instrument for any bugs, and to ensure that all questions appear as expected and reflect the final translation?

[] Do you have enough tablets for all interviewers and all people accompanying interviews?

[] Have you set up a SurveyCTO server for the pilot (remember, no data can be uploaded to the DIME Test server)?

[] Is the name for the pilot form on the server clearly distinguishable from the final survey?
[] Have you assigned a form id unique to the pilot? (to avoid confusing pilot data with final data)
[] Do all team members who need it have log-on information?

[] Is the SurveyCTO Collect app updated to the version of SurveyCTO your server is running? (see the ‘Collect’ tab on your server for instructions).

[] Best to update your server and app to the latest version of SurveyCTO before starting the pilot.
[] Once you have piloted, avoid updating the app even if a new version is released, to avoid compatibility issues.

[] Are all tablets / phones running the most updated version of their operating system? (or at a minimum, are they all running exactly the same OS?)

[] Are all tablets / phones set to the correct date and time?

[] Is the pilot-form downloaded and ready-to-go on all tablets?

[] Are all tablets fully charged? Do you have battery packs with you in case any batteries run out?

[] Do you have a couple of paper copies of the survey, for observers to use, and/or as a last resort in case of unsolvable tablet problems?

[] Have you built in time to the field plan to make any required revisions to the programming and re-download the revised forms on all tablets?

[] Have you set up Stata do-files for importing and labeling data? (see the sketch below)

[] Have you set up a Stata do-file for running high frequency checks?
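
For the two do-file items above, a minimal sketch of what an import-and-labeling do-file can look like is shown below. File and variable names are hypothetical; in practice odkmeta or the import template provided by SurveyCTO will generate most of this automatically, as noted in the Data checklist further down.

    * Hypothetical sketch of a pilot import-and-labeling do-file.
    * Replace the file and variable names with those from your own export.
    import delimited using "pilot_export.csv", clear varnames(1)

    * Variable labels in English (keep them under Stata's 80-character limit)
    label variable submissiondate "Date and time the form was submitted"
    label variable hh_size "Number of household members"

    * Value labels in English for categorical responses
    label define yesno 1 "Yes" 0 "No"
    label values consent yesno

    * Save a working copy of the pilot data for later checks
    save "pilot_data.dta", replace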

Refining the Questionnaire

Pre-pilot

[] Are the conceptual / structural issues identified in the early questionnaire design process sufficiently explored?

[] How might the composition of the focus group (gender, age, religion, caste / socioeconomic status) affect responses?

[] Try to get at how potential respondents think about the key indicators you are trying to measure

[] Do you have a translator providing simultaneous translation? If not, does the amount of note-taking correspond to the amount of discussion?

Content-focused pilot

Survey Design

[] Do the questions make sense to the respondent?

[] Watch how the respondent reacts to each question – any confusion? How is the reaction time?
[] Are there questions that require explanation by enumerator, or clarification from respondent?
[] Follow-up with the enumerator (and possibly the respondent) on questions that seemed problematic: is the issue translation? Phrasing? Conceptual? Cultural?

[] Are answer options comprehensive?

[] Ensure that all ‘other’ responses are specified and recorded

[] Is the enumerator following the scripted translations?

[] If not, ask the enumerator to note any translation issues to discuss with the team.
[] If you do not speak the language, you can still note if the interviewer’s questions were noticeably longer or shorter than the written questions.

Interview flow and timing

[] How is the flow of the interview?

[] Any pauses? (likely areas where interviewers need more instructions)
[] Are there times when the respondent looks bored? Uncomfortable? Losing interest?

[] Could the order of modules be improved? The order of questions within modules?

[] How long does the interview take?

[] Check length of each module by noting start and stop time.
[] Expect that pilot interviews will take much longer than actual interviews (likely twice as long) – interviewers are expected to do extra probing, take qualitative notes, and record open-ended responses, and the survey instrument may not yet flow well

Data-focused pilot

Survey Design & Interview flow and timing

[] Continue to monitor all of the above points, paying special attention to modules / questions revised after the previous stage of piloting

Programming

[] Are all skip patterns working as expected?

[] Are questions displaying properly on the screen?

[] Are there any questions that should be grouped / ungrouped?

[] Did all modules appear?

[] Are built-in data checks (for outliers or inconsistent responses) working correctly?

Data

[] Export pilot data from the server to .csv files (if CAPI) and import it into Stata, using either odkmeta or the Stata template provided by SurveyCTO.

[] It is extremely important to make sure the export works as you expect and you are able to open and check the dataset in Stata!

[] Do all modules appear? Do all variables in all modules appear?

[] Pay close attention to tables and nested loops.

[] Check labels

[] Are all variables correctly labeled in English (and not too long for Stata)?
[] Check that values for categorical responses are labeled in English (and not too long for Stata)

[] Check that all skip patterns worked as expected

[] Check for (unexpected) missing data by variable

[] Check variance: both high and low.

[] If all pilot respondents give the same answer, the data point may not be informative.
[] High variance may indicate that a question needs to be more precise, or that checks should be built into the survey instrument to alert the enumerator to extreme values in real time.
[] These checks depend on a reasonably large sample (rough rule of thumb: not informative if n < 30).

[] Does all ‘pre-loaded’ data appear as expected?
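
A minimal sketch of what these one-off checks might look like in Stata, assuming the working dataset saved by the import do-file sketched earlier (variable names are hypothetical placeholders):

    * Hypothetical sketch of one-off checks on the pilot dataset.
    use "pilot_data.dta", clear

    * Skip pattern check: land questions should be missing when the household
    * reports owning no land (assert stops with an error if this fails)
    assert missing(land_size) if owns_land == 0

    * Unexpected missing data, variable by variable
    misstable summarize

    * Variance check: flag numeric variables where every pilot respondent gave
    * the same answer
    foreach var of varlist hh_size land_size consent {
        quietly summarize `var'
        if r(sd) == 0 display "No variation in `var' across pilot interviews"
    }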

High frequency checks

[] Use the dataset to program a do-file for high-frequency checks (see the DIME template for an example)

[] Run the high-frequency do-file and debug as needed

[] Export results of checks, and discuss and agree with the survey firm on a final format for communicating and resolving issues discovered in the checks
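
A minimal sketch of a high-frequency check do-file that exports flagged interviews for discussion with the survey firm (variable names and flag thresholds are hypothetical; the DIME template referenced above is more complete):

    * Hypothetical sketch of a recurring high-frequency check.
    use "pilot_data.dta", clear

    * Flag duplicate respondent IDs and implausible values
    duplicates tag id, generate(dup_id)
    generate flag_hh_size = (hh_size > 20 & !missing(hh_size))

    * Keep only flagged interviews and export them to share with the firm
    keep if dup_id > 0 | flag_hh_size == 1
    keep id enumerator submissiondate dup_id flag_hh_size
    export excel using "hfc_issues.xlsx", firstrow(variables) replace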