Administrative and Monitoring Data


While impact evaluations most commonly rely on primary data, secondary data can often provide important context for impact evaluation design and data analysis. In some cases, for example administrative data from a program conducted in a single district, secondary data is the only source that covers the relevant population for an impact evaluation. Similarly, monitoring data can help assess who actually received the treatment, and whether this was in line with the initial impact evaluation design.

Read First

  • Impact evaluations rely on many different sources of secondary data: administrative, geospatial, sensor, telecom, and crowd-sourced data.
  • An important step in designing an impact evaluation is to evaluate which of the available data sources are best suited to a particular context.
  • Administrative data is any data collected by national or local governments, ministries, or agencies outside the context of an impact evaluation.
  • Monitoring data is data that is collected to track the implementation of treatment in a given impact evaluation.

Administrative Data

Administrative data is any data collected by national or local governments, ministries, or government agencies outside the context of an impact evaluation. It can include data from land registries, road networks, infrastructure investments, taxes, energy billing, or social transfers.


Generally, administrative data is collected to document or track beneficiaries of a government policy and the general population, and not for research purposes. Research teams should aim to use administrative data in addition to other sources of data - survey data, geospatial, sensors, telecom, and crowd-sourcing. This allows research teams to create sector-specific and country-specific outputs (such as data sets, maps, and figures) that are relevant to a particular policy context.

Case Study

In this section, we look at an example of a project by DIME in Kenya where the research team digitized administrative data to fill gaps in available data on road safety in Kenya.

In this impact evaluation, the research team obtained administrative data through a data sharing agreement with the National Police Service (NPS) in Kenya, and manually digitized a total of 12,546 crash records for the city of Nairobi over a nine-year period. This data allowed the team to identify the major crash hot spots, that is, locations with the highest number of road crashes. The research team supplemented these records with crowdsourced data. Further, the team also accessed private sector data on road speed events, weather conditions, and land use through the World Bank Development Data Partnership (DDP) initiative. The administrative data was then combined with primary data collected at 200 hot spots, which allowed the research team to generate more than 100 new variables that determine high-risk locations.

In this case study, integrating multiple sources into one data set provided unique insights into the factors that lead to more crashes in specific locations. It also allowed the research team to break a bigger problem down into a more manageable research question. For instance, it is now clear that just 200 of the 1,400 crash sites across the city are responsible for over half of road traffic deaths. This in turn means that the government should target 150 kilometers of the total 6,200-kilometer road network for road-safety interventions.
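The hot-spot step described above can be sketched as a simple aggregation: count crashes per location and rank locations by count. The example below is a minimal illustration, not the team's actual data or code; all location names and figures are hypothetical.

```python
# Hypothetical sketch: flag crash "hot spots" by aggregating
# crash records by location and ranking by crash count.
from collections import Counter

# Each record is (location_id, year); in practice these would come
# from the digitized administrative crash files.
crash_records = [
    ("junction_a", 2015), ("junction_a", 2016), ("junction_b", 2015),
    ("junction_a", 2017), ("junction_c", 2016), ("junction_b", 2017),
]

# Count crashes per location, ignoring the year.
crash_counts = Counter(loc for loc, _ in crash_records)

# The locations with the most crashes are the hot spots.
hot_spots = [loc for loc, _ in crash_counts.most_common(2)]
print(hot_spots)  # ['junction_a', 'junction_b']
```

In the real study this ranking would be applied to all 12,546 records, with the top-ranked sites selected for follow-up primary data collection.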


Using administrative data has various advantages for research teams. Some of these are as follows:

  • Quality: It is often more accurate than self-reported survey data, and therefore of better quality. For example, a firm is more likely to report profits accurately to its country's official financial auditors than to a research team.
  • Cost: It is often less expensive to collect or acquire, since it does not involve the various steps involved in conducting field surveys. Note that there might still be some costs involved in obtaining access to the data through a data licensing agreement (DLA).
  • Time: Using administrative data also saves time, since the data has already been collected for a purpose outside the context of an impact evaluation. For example, in the Kenya case study above, road crash data from Kenya's National Police Service (NPS) had already been collected over a nine-year period. The research team only had to wait until the data licensing agreement (DLA) was executed, which took much less time than conducting a field survey from scratch.
  • Frequency: It is also collected on a regular basis. This allows research teams to evaluate past interventions even if no primary data was collected.
  • Policy impact: Most importantly, as the Kenya case study showed, administrative data can greatly improve the ability of research teams to make interventions more targeted and efficient.


However, administrative data also comes with its own challenges. Some of these include:

  • Access: Accessing administrative data requires strong relationships with national and/or local authorities. In some cases, authorities may not agree to share the information.
  • Merging: After obtaining access, the research team must combine the administrative data with data from other sources. This often involves merging different datasets together, which can be tricky if there are no common unique IDs.
  • Quality: Finally, research teams should keep in mind that in some cases, administrative data may be badly reported, incomplete, or not available at all. This is because not all governments have the same capacity to accurately collect this information on a regular basis.
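The merging challenge above can be illustrated with a minimal sketch. When a shared unique ID exists, the merge is straightforward; when it does not, keys often have to be constructed (for example from names and locations), and unmatched records must be flagged for review. The household IDs and variables below are purely hypothetical.

```python
# Illustrative sketch: merge administrative records with survey data
# on a shared household ID, flagging records that fail to match.
admin = {"HH001": {"tax_paid": 120}, "HH002": {"tax_paid": 80}}
survey = {"HH001": {"income": 900}, "HH003": {"income": 450}}

merged = {}
unmatched = []
for hh_id, svy_row in survey.items():
    if hh_id in admin:
        # Combine the survey and administrative variables for this ID.
        merged[hh_id] = {**svy_row, **admin[hh_id]}
    else:
        # No matching administrative record: flag for manual review.
        unmatched.append(hh_id)

print(merged)     # {'HH001': {'income': 900, 'tax_paid': 120}}
print(unmatched)  # ['HH003']
```

Tracking the unmatched records explicitly, rather than silently dropping them, lets the research team assess whether non-matches are random or systematic.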

Monitoring Data

Monitoring data is collected to understand the implementation of the assigned treatment in the field. Typically, survey round data helps us understand changes in the outcome variables over the course of the project, while monitoring data helps us understand how these changes are related to the intervention. For example, monitoring data could record who actually received the treatment and whether the treatment was implemented according to the research design. Without this information, an analysis based only on what the research team intended to happen may be invalid. Monitoring data therefore helps us assess what is usually referred to as internal validity.
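As a minimal sketch of the compliance check described above, the example below compares assigned treatment status against the status actually recorded in monitoring data and computes a compliance rate. All IDs and statuses are hypothetical, not from any real evaluation.

```python
# Hypothetical sketch: check whether treatment was delivered as assigned.
# 'assigned' comes from the research design; 'received' from monitoring data.
assigned = {"v1": "treatment", "v2": "treatment", "v3": "control", "v4": "control"}
received = {"v1": "treatment", "v2": "control", "v3": "control", "v4": "treatment"}

# A unit is compliant when its recorded status matches its assignment.
compliant = [v for v in assigned if assigned[v] == received[v]]
compliance_rate = len(compliant) / len(assigned)
print(compliance_rate)  # 0.5
```

A low compliance rate like this would signal that analyzing outcomes by original assignment alone could misstate the treatment effect, which is exactly the internal-validity concern monitoring data is meant to surface.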

Back to Parent

This article is part of the topic Secondary Data Sources

Additional Resources
