Primary Data Collection


Data Quality Assurance Plan

The research team must draft a data quality assurance plan and share it with everyone on the research team, as well as with the survey firm, before data collection starts. A data quality assurance plan anticipates everything that could go wrong and lays out how to resolve each issue. Issues that can affect data quality include errors in programming or translation, attrition (respondents dropping out during a survey), and faulty tablets used during computer-assisted personal interviews (CAPI), among others. A comprehensive data quality assurance plan covers three stages: before, during, and after data collection.

Before data collection

Before data collection, the research team can include the following in the data quality assurance plan:

  • Enumerator training. Train enumerators and conduct regular feedback sessions with them to refine the survey content and protocols. Wherever possible, conduct pen-and-paper pilots, since these make it easier for enumerators to write down the issues they encounter. Make sure enumerators conduct several practice interviews before actual fieldwork starts.

During data collection

During data collection, the research team can include the following in the data quality assurance plan:

  • Communication and reporting. Clear communication is important to ensure that both enumerators and respondents are able to understand the questions in the survey instrument. It also allows field coordinators to regularly discuss issues faced by enumerators. For instance, enumerators may face problems like faulty equipment or connectivity issues, which can affect the quality of data.
  • Spot checks and field monitoring. Field coordinators (FCs) and supervisors should monitor the performance of enumerators through unannounced spot checks. Spot checkers accompany enumerators and observe the interview, taking detailed notes to provide feedback and troubleshooting tips. The objective is to assess whether the enumerator understands the survey instrument and is following all survey protocols. There should be a clear list of parameters that spot checkers use to judge performance, and they should share feedback after (not during) the interviews. Spot checkers should fill in a tracking sheet or form that records observations about each enumerator who works under a supervisor. Ideally, some spot checks are done by people who are independent of the data collection firm, especially in the first weeks of data collection, but supervisors should also conduct spot checks for the duration of the survey.
  • Minimize attrition. Respondents drop out of a study for several reasons: for instance, a respondent may have moved away from the study location or may refuse to participate. It is important to first identify the reason for attrition. As a rule of thumb, attrition rates above 5% are considered poor, and the research team must try to resolve the underlying issues, since high attrition rates can reduce data quality and introduce bias in the results of a study (see the attrition-tracking sketch after this list).
  • Back checks and real-time checks. It is also important to constantly monitor the quality of every new round of data shared by the field teams. In a back check, an independent survey team re-asks a few selected questions to a randomly selected subset of respondents in order to validate the original answers (see the back-check comparison sketch after this list).
  • High-frequency checks. While the survey is ongoing, the research team should scrutinize incoming data in real time through automated high-frequency checks, which flag issues such as duplicate IDs, missing values, and implausible outliers (a sketch follows this list).
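
To make the 5% rule of thumb concrete, the sketch below computes an attrition rate from a baseline sample and a follow-up tracking sheet. It is a minimal illustration in Python using pandas; the column names (respondent_id, status) and the data format are assumptions, not a prescribed tracking format.

```python
import pandas as pd

# Hypothetical baseline sample and follow-up tracking sheet.
# Column names (respondent_id, status) are illustrative assumptions.
baseline = pd.DataFrame({"respondent_id": range(1, 11)})
followup = pd.DataFrame({
    "respondent_id": [1, 2, 3, 5, 6, 7, 8, 10],
    "status": ["complete", "complete", "refused", "complete",
               "complete", "moved", "complete", "complete"],
})

completed = followup[followup["status"] == "complete"]
attrition_rate = 1 - len(completed) / len(baseline)
print(f"Attrition rate: {attrition_rate:.1%}")

# If attrition exceeds the commonly used 5% threshold, tabulate the
# recorded reasons so the field team knows where to follow up.
if attrition_rate > 0.05:
    print(followup.loc[followup["status"] != "complete", "status"].value_counts())
```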
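
One simple way to act on back-check data is to merge it with the original survey data and flag mismatches on the re-asked questions, broken down by enumerator. The sketch below uses made-up column names and a 10% mismatch threshold purely for illustration; in practice the threshold and the choice of questions depend on how stable the underlying answers should be.

```python
import pandas as pd

# Hypothetical original survey data and independent back-check data.
survey = pd.DataFrame({
    "respondent_id": [101, 102, 103, 104],
    "enumerator": ["A", "A", "B", "B"],
    "household_size": [4, 3, 5, 2],
})
backcheck = pd.DataFrame({
    "respondent_id": [101, 103, 104],
    "household_size_bc": [4, 6, 2],
})

merged = survey.merge(backcheck, on="respondent_id", how="inner")
merged["mismatch"] = merged["household_size"] != merged["household_size_bc"]

# Mismatch rate per enumerator: persistently high rates can signal that an
# enumerator misunderstands the question or is not following protocol.
rates = merged.groupby("enumerator")["mismatch"].mean()
print(rates)
print("Flagged enumerators:", rates[rates > 0.10].index.tolist())
```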
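
A minimal set of high-frequency checks might look like the sketch below, run automatically each time a new batch of data arrives. The specific variables, plausible ranges, and data format are assumptions made for illustration; real check lists are tailored to the survey instrument.

```python
import pandas as pd

def high_frequency_checks(df: pd.DataFrame) -> list:
    """Return human-readable flags for a new batch of survey data."""
    flags = []

    # 1. Duplicate respondent IDs usually mean re-submitted or copied forms.
    dupes = df[df.duplicated("respondent_id", keep=False)]
    if not dupes.empty:
        flags.append(f"Duplicate IDs: {sorted(dupes['respondent_id'].unique())}")

    # 2. Missing values in key variables (variable list is an assumption).
    for col in ["consent", "age", "household_size"]:
        n_missing = df[col].isna().sum()
        if n_missing > 0:
            flags.append(f"{n_missing} missing value(s) in '{col}'")

    # 3. Values outside a plausible range (range is an assumption).
    out_of_range = df[(df["age"] < 15) | (df["age"] > 100)]
    if not out_of_range.empty:
        flags.append(f"Implausible ages for IDs {out_of_range['respondent_id'].tolist()}")

    return flags

# Example incoming batch (hypothetical data).
batch = pd.DataFrame({
    "respondent_id": [201, 202, 202, 204],
    "consent": [1, 1, None, 1],
    "age": [34, 12, 45, 67],
    "household_size": [5, 4, 3, 6],
})
for flag in high_frequency_checks(batch):
    print(flag)
```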

After data collection

After data collection ends, the survey firm usually provides a final field report. This report can be used to improve data quality in the last stage of the data collection process. It provides qualitative information to the research team about everything that could not be captured by the survey instrument, such as:

  • Issues in understanding. Sometimes respondents do not understand a question and answer randomly. This information is especially important for the research team if a study or experiment shows only a marginal impact.
  • Limited option choices. Sometimes respondents may convey that the option choices for particular questions were not comprehensive, which can also affect the quality of data.
  • Other feedback. The report also helps the research team understand issues like the size and structure of the communities that were part of the sample. Such information is often useful for weighting each group within a sample differently, which can improve the accuracy of results (a short weighting sketch follows this list).
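
As a small illustration of the last point, the sketch below compares an unweighted sample mean with a mean weighted by each community's share of the population. The community names, population shares, and outcome values are made-up numbers, and the exact weighting scheme would depend on the sampling design.

```python
import pandas as pd

# Hypothetical sample: community A is over-represented relative to B.
data = pd.DataFrame({
    "community": ["A", "A", "A", "B"],
    "outcome":   [10, 12, 11, 30],
})
# Assumed population shares, e.g. taken from the final field report.
population_share = {"A": 0.5, "B": 0.5}

# Weight = population share / sample share for each observation's community.
sample_share = data["community"].value_counts(normalize=True)
data["weight"] = data["community"].map(lambda c: population_share[c] / sample_share[c])

unweighted = data["outcome"].mean()
weighted = (data["outcome"] * data["weight"]).sum() / data["weight"].sum()
print(f"Unweighted mean: {unweighted:.2f}, weighted mean: {weighted:.2f}")
```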