Phone Surveys (CATI)
In phone surveys, or computer-assisted telephone interviews (CATI), the enumerator calls the respondent and asks the questions over the telephone. The enumerator enters the responses into a programmed digital survey, which is then shared electronically with the research team. Phone surveys have two major advantages. First, they allow the research team to collect data remotely, which is cost-effective since face-to-face interviews are often expensive. Second, they are a good option for collecting data during emergencies or in conflict areas while still retaining the personal touch of a live interviewer.
- DIME Analytics has prepared a guide on remote data collection.
- Before deciding to conduct a phone survey (CATI), the research team must keep in mind concerns about feasibility, preparation, protocols, and data quality.
The research team must keep in mind that in many cases, data collection through phone surveys will not be feasible, and they will have to rely on in-person data collection instead. The research team can consider the following aspects when deciding whether a phone survey is feasible.
- Type of questions.
The research team must assess whether certain (or all) questions can be asked over the phone. Asking questions over the phone can affect how respondents interpret the questions and can lower response rates, both of which affect the quality of the collected data. Examples of questions that cannot be asked over the phone include questions on sensitive topics (such as gender-based violence) and questions with test components (such as a math test in a school-based survey).
- Length of the survey.
General best practices suggest that phone surveys should be short and that response rates should be at least 50%. Prior studies in development research suggest that phone surveys longer than 20 minutes result in lower response rates. If the original questionnaire was designed for a field survey and takes longer than 20 minutes to answer (on average), the research team may need to reduce the number of questions. Where it is not possible to shorten the survey without affecting the outcomes of interest, the research team may need to split the survey into two or three calls.
- Access to telephones.
This is important because if the target population does not have access to telephones, the survey will introduce selection bias into the sample. It will also lower response rates, increase attrition, and reduce the cost-effectiveness of the survey.
- Availability of contact information.
In the case of a follow-up survey, the research team likely already has the respondents' contact information. But for a baseline survey (or first round) where the research team does not have access to contact numbers, they can consider random digit sampling (RDS), a method for selecting respondents for telephone surveys by generating or selecting telephone numbers at random.
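As an illustration, RDS can be implemented by appending random digits to known mobile operator prefixes. The sketch below is a minimal example only; the prefixes, number length, and sample size are hypothetical placeholders that would need to match the numbering plan of the study area, and a real RDS frame would also screen out invalid or inactive numbers.

```python
import random

def generate_rds_sample(prefixes, n, digits=7, seed=None):
    """Draw n distinct candidate phone numbers by random digit sampling.

    prefixes -- operator prefixes valid in the study area (placeholders here)
    digits   -- number of random digits appended to each prefix
    seed     -- optional seed for a reproducible sampling frame
    """
    rng = random.Random(seed)
    sample = set()
    while len(sample) < n:
        prefix = rng.choice(prefixes)
        suffix = "".join(str(rng.randrange(10)) for _ in range(digits))
        sample.add(prefix + suffix)  # set membership avoids duplicates
    return sorted(sample)

# Example: draw 5 candidate numbers from two hypothetical prefixes.
numbers = generate_rds_sample(["071", "072"], n=5, seed=1)
```

A fixed seed makes the generated frame reproducible, which is useful for documenting how the sample was drawn.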
If the context of the study matches these requirements, then the research team can start preparing for remote data collection.
The process of preparing for phone surveys involves some additional steps compared to the process of preparing for field surveys. At each stage of this process, all members of the research team (and the survey firm) must communicate clearly to ensure that the phone survey is conducted smoothly.
The research team must keep in mind that each step leading up to, and including, data collection will take additional time in the case of a phone survey. Therefore, they must account for this and allocate more time for each step, for instance, allocating three weeks for a pre-pilot of a phone survey instead of the one to two weeks typical for a field survey.
- IRB approvals.
In the case of a phone survey, the research team must obtain IRB approvals for additional aspects such as oral consent, audio audits, and additional incentives for respondents.
- Procuring a survey firm.
Procuring a survey firm involves drafting terms of reference (TOR) and preparing a budget for each component of the survey process. The procurement process should also reflect the fact that data collection will be conducted over the phone, not in person. An example of an additional cost for phone surveys is setting up a call center for interviewers.
- Questionnaire design.
Designing a phone survey also differs slightly from designing a field survey. In the case of a phone survey, the research team must manage the length of the survey, include additional requirements for verbal consent, and allow for audio audits.
- Remote data collection protocols.
Similarly, the protocols for phone surveys differ from protocols for field surveys. Some of these differences concern the team setup, communication protocols, and data-quality checks.
- Pilot questionnaire and protocols.
Similar to in-person surveys, a pilot survey should be conducted with a keen eye on whether the phrasing of the questions works over a phone call, along with the time taken per survey. Piloting the protocols to be followed for remote data collection is also recommended.
- Enumerator training.
Training enumerators is more challenging in the case of a work-from-home phone survey. For instance, when a pandemic like COVID-19 prevents the research team from training enumerators in person, the research team must make certain logistical and content-related changes to the training method. These changes include sharing recorded training sessions, preparing enumerators for follow-up questions from respondents, and discussing steps to keep respondents engaged.
Note: These steps will also be useful in case the research team has to transition from an in-person survey to a phone survey, for instance, during a pandemic or natural disaster.
It is important to make sure that all members of the research team communicate frequently and effectively. Communication becomes even more important in the case of remote surveys. Some guidelines to keep in mind are:
- Clear protocols. Set up strong monitoring and communication protocols before starting with the survey. Test them during training and the pilot to identify any gaps that might arise.
- Chain of command. Set up a clear chain of command, and make sure each enumerator knows who their first point of contact (POC) is on the research team.
- Check-in. Schedule regular calls to check-in with each enumerator, and to resolve any concerns they may have.
- Meetings. Conduct team meetings at least once a week.
- Collaboration. Consider using a team-collaboration and communication tool to maintain a 24/7 channel for communication with supervisors on the survey team. The research team can also add enumerators to these channels if the survey team is small.
- Feedback. Monitor data carefully, and on a daily basis. Provide regular feedback to enumerators on the quality of data being collected.
When dealing with phone surveys, the research team must track outcomes of interest more carefully, because phone surveys carry a built-in selection bias (or self-selection). Self-selection can result in samples that are not entirely representative of the population. Therefore, in addition to the guidelines for monitoring field data quality, there are some steps the research team must follow to monitor the quality of data collected during phone surveys:
- Completion rate. Closely track the progress of data collection and the time taken to complete the entire process.
- Constant tracking. Assess data quality throughout the process of data collection. This is a departure from in-person data collection where the emphasis on data quality checks is higher during the initial days of data collection.
- Consistency. Since a phone survey is shorter than a field survey, perform response consistency checks for all questions if possible.
- Back checks. Since a phone survey has fewer questions than a field survey, the supervisor can perform back checks on specific questions as well.
- Enumerator checks. Include more stringent checks on enumerator performance and share this feedback with enumerators regularly.
- Audio audits. Insert audio audits at predefined or random points of the survey. Then ask the supervisors to review the audio audits on a daily basis and cross-check the responses with the submitted forms. One way to do this is to create an audio audits form for the supervisor to fill while reviewing audio audits.
- Conference calls. If conference calls are feasible, the supervisor can shadow the enumerator a few times to ensure that the enumerator is following all guidelines.
- Random tests. Ask the supervisor to call a few respondents at random to test if the enumerator actually called them.
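Several of the checks above, such as completion-rate tracking and enumerator checks, can be partially automated on the daily submission data. The sketch below is a minimal illustration, not a complete quality-check pipeline; the field names (`enumerator`, `id`, `duration_minutes`) and the 10-minute threshold are hypothetical and would need to match the actual survey export.

```python
import statistics

def flag_short_interviews(submissions, min_minutes=10):
    """Return submissions with durations below the threshold, a common
    red flag for rushed or incomplete interviews (threshold is illustrative)."""
    return [s for s in submissions if s["duration_minutes"] < min_minutes]

def enumerator_summary(submissions):
    """Summarize each enumerator's daily output: number of submissions
    and median interview duration, for regular feedback to enumerators."""
    by_enum = {}
    for s in submissions:
        by_enum.setdefault(s["enumerator"], []).append(s["duration_minutes"])
    return {e: {"n": len(d), "median_minutes": statistics.median(d)}
            for e, d in by_enum.items()}

# Hypothetical submissions from one day of calling.
subs = [
    {"enumerator": "E1", "id": 1, "duration_minutes": 18},
    {"enumerator": "E1", "id": 2, "duration_minutes": 6},
    {"enumerator": "E2", "id": 3, "duration_minutes": 15},
]
short = flag_short_interviews(subs)   # flags the 6-minute interview
summary = enumerator_summary(subs)    # per-enumerator counts and medians
```

Running such checks daily, rather than only at the start of data collection, matches the constant-tracking guideline above.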
- Alemayehu et al. (World Bank Group), Reducing Bias in Phone Survey Samples
- Brubaker et al. (World Bank Group), Representativeness of Individual-Level Data in COVID-19 Phone Surveys
- Dabalen et al. (World Bank Group), General guide on conducting mobile phone panel surveys (MPPS)
- DIME Analytics (World Bank), Guide to work-from-home data collection
- Özler and Cuevas (World Bank), Blog post on reducing attrition in phone surveys
- Goldstein and Kondylis (World Bank), Blog post on impact evaluations in the time of COVID-19
- Markus Goldstein (World Bank), Blog post on using phones for repeat surveys
- Herath et al. (J-PAL Africa), Blog post on increasing response rates in surveys
- IDinsight, Reflections on reaching women over the phone in rural India
- IPA, Building Rapport and Trust in Phone Surveys
- IPA, Restarting face-to-face data collection
- IPA, IPA Handbook: Remote Surveys in a Pandemic
- J-PAL, Best practices for conducting phone surveys
- J-PAL, Protocols for phone surveys
- J-PAL South Asia, Checklists and resources for transitioning to CATI
- Sandra V. Rozo (World Bank), Tips for surveying hard-to-reach populations
- SurveyCTO, Adapting to COVID-19
- SurveyCTO, Learn from IDinsight: Reducing Gender Bias in Phone Surveys
- Tavneet Suri, Webinar on adaptations for phone surveys