Back Checks
Back-checks are a quality control method used to verify the quality and legitimacy of data collected during a survey. Throughout fieldwork, a back-check team returns to a randomly selected subset of households for which data has already been collected and re-interviews these respondents with a short subset of the survey questions, known as a back-check survey. Comparing the answers from the two interviews verifies the quality and legitimacy of key data collected in the actual survey. This page explains how to coordinate, sample for, and design questionnaires for back-checks.
Read First
- Back-checks are an important tool to detect fraud (e.g. enumerators sitting under a tree and filling out questionnaires themselves).
- Back-checks help researchers to assess the accuracy and quality of the data collected.
- Back-checks can be conducted through in-person visits or phone calls. A complementary approach to in-person back checks is conducting Random Audio Audits.
- Problems identified through back checks can be remedied by providing further training or by replacing low-performing or problematic enumerators.
Coordinating Back-Checks
- The total duration of each back-check survey should be around 10-15 minutes.
- The back-checks should be conducted by a specialized team of experienced, skilled back-check enumerators.
- The back-check team should be independent from the rest of the survey staff. They should be trained separately and have minimal contact with the survey team.
- Administer 20% of back-checks within the first two weeks of fieldwork. This helps the research team to identify early whether the questionnaire is effective, whether enumerators are doing their jobs well, and which changes to make to ensure high quality data collection.
Sampling for Back Checks
The following points are important when selecting the sample for the back check survey (a sketch of how such a sample can be drawn follows the list):
- Sample size. Aim to back check 10-20% of the total observations.
- Stratified sampling. The back check sample should be stratified, that is, it must cover all survey teams and enumerators. Back check every team and every enumerator regularly, and as frequently as possible.
- Include missing respondents. This verifies that the sample is not biased simply because enumerators did not track down hard-to-find respondents.
- Include other flagged observations. Include observations that were flagged in other quality tests like high frequency checks. Also include respondents who were interviewed by enumerators suspected of cheating.
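As a rough illustration, the following Stata sketch draws a back check sample of roughly 15% of completed interviews, stratified by enumerator. The dataset and the variable names (hhid, enumerator) and file names are hypothetical; adapt them to your own survey.
* Sketch: draw ~15% of completed interviews per enumerator (names are hypothetical)
use "survey_data.dta", clear
set seed 20201119                        // fix the seed so the draw is reproducible
generate double u = runiform()           // random ordering within each enumerator
sort enumerator u
by enumerator: generate byte backcheck = _n <= ceil(0.15 * _N)
keep if backcheck == 1
keep hhid enumerator
save "backcheck_sample.dta", replace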
Designing the Back Check Survey
Questions for the back check survey (or simply back check) are drawn from the actual questionnaire used for data collection. The following types of questions should be included in a back check to get a clear idea of the data quality, as well as the enumerator's skills:
- Questions to verify respondent and interview information: Verify the identity of the respondent and check if, when, and where the original survey took place. Useful for verifying reported completion rates.
- Questions to detect fraud: Questions that ask for straightforward information which has no expected variation or room for error. They do not require particularly skilled enumerators, and do not vary over time - especially over the period between the actual interview and the back check. For example, questions about type of dwelling, education level, marital status, occupation, etc. The actual questions in this category will depend on the survey instrument and context. If the answers to these questions differ between the actual survey and the back check survey, it is a sign of poor data quality, a serious enumerator problem, or potential wrongdoing by the enumerator.
- Questions to detect errors in survey execution: Questions that have complex loops or skip patterns, or that check the consistency of recorded answers. For example, if household size is recorded as 4, then the number of repeat groups for household members should not be more than 4 (a sketch of such a consistency check follows this list). Capable enumerators should get the true answer for these questions. If values for these questions differ between the questionnaire and the back check survey, then the enumerator may need more training.
- Questions to detect problems with the questionnaire or key outcomes: Provide additional checks for accuracy, and flag difficulties and/or inconsistencies in the interpretation of the questions by enumerators. If these values differ between the actual survey and the back check, then the enumerator may need more training. In some cases, the survey instrument may need to be simplified.
- Questions that repeat multiple times: Check whether enumerators are falsifying data to reduce the length of interviews. For example, if there is a long series of questions about each household member, verify that the number of times these questions repeat is equal to the number of household members.
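To make the consistency example above concrete, the following Stata sketch flags interviews in which the household roster is longer than the reported household size. The variables hhsize (reported household size), nmembers (number of completed household-member repeat groups), hhid, and enumerator are hypothetical names used for illustration.
* Sketch: flag interviews where the roster exceeds the reported household size
* (hhsize, nmembers, hhid, and enumerator are hypothetical variable names)
use "survey_data.dta", clear
generate byte flag_roster = nmembers > hhsize if !missing(hhsize, nmembers)
list hhid enumerator hhsize nmembers if flag_roster == 1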
Note that it is important that enumerators do not know which questions will be included in the back check survey. To ensure this, you may consider randomizing the questions or changing the back check survey regularly during data collection.
bcstats
After completing a back check, you can compare the back check data with the original survey data using the Stata command bcstats, developed by Innovations for Poverty Action. This command produces a dataset that lists the comparisons between the back check and original survey data. It also allows research teams to perform enumerator checks and stability checks for variables.
The following syntax is used for performing back checks using bcstats:
ssc install bcstats
bcstats, ///
    surveydata(filename) bcdata(filename) id(varlist) ///
    [options]
To learn more about the options for bcstats and about back checks, type help bcstats in Stata after installing the command. The two most commonly used features of bcstats are described below.
Comparing different variable types
As part of the functionality under [options], bcstats allows users to compare three different types of variables:
t1vars(): Specifies the list of type 1 variables. These are variables that are expected to stay constant between the survey data and the back check data. If there are differences for these variables, the research team may take action against the enumerator. This option displays the variables with high error rates and completes the enumerator checks for these variables.
t2vars(): Specifies the list of type 2 variables. These are variables that may be difficult for enumerators to administer; for instance, they may involve complicated skip patterns or complex logic. If there are differences between the survey data and the back check data for these variables, it may indicate the need for further training, but it will not result in action against the enumerator. This option displays the error rates for these variables and completes the enumerator checks and stability checks.
t3vars(): Specifies the list of type 3 variables. These are variables whose stability between the survey and the back check is of interest to the research team. Differences for these variables between the survey data and the back check data will not result in action against the enumerator. This option displays the error rates of these variables and completes the stability checks.
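As an illustration, a call combining the three variable types might look like the one below. All file names and variable names are hypothetical, and enumerator(), which we assume identifies the enumerator variable for the enumerator checks, is an assumption; confirm the exact option names with help bcstats.
* Hedged sketch: file names, variable names, and enumerator() are assumptions
bcstats, surveydata("survey_data.dta") bcdata("backcheck_data.dta") ///
    id(hhid) enumerator(enumerator)                                 ///
    t1vars(dwelling_type marital_status education)                  ///
    t2vars(hh_size)                                                 ///
    t3vars(income)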
Stability
bcstats also allows users to test for stability by running a paired t-test that compares the sample means of the survey data and the back check data. Users can specify the confidence level for the t-test with the level() option; by default, bcstats uses a 95% confidence level.
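For example, stability checks at a 90% confidence level could be requested with a call along the following lines. The ttest() option, which we assume takes the list of variables to compare with a paired t-test, and the file and variable names are assumptions; see help bcstats for the exact syntax.
* Hedged sketch: ttest() and all names below are assumptions to be verified
bcstats, surveydata("survey_data.dta") bcdata("backcheck_data.dta") ///
    id(hhid) t3vars(income land_area)                               ///
    ttest(income land_area) level(90)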
Back to Parent
This article is part of the topic Field Management.
Additional Resources
- DIME Analytics’ Real Time Data Quality Checks
- DIME Analytics’ Data Quality Assurance
- World Health Organization's Quality Assurance in Surveys: standards, guidelines, and procedures. This chapter provides, in detail, the approach and methodology on quality control during surveys.
- bcstats, a Stata program written by an IPA staff member for conducting back checks on survey data.