Data Cleaning

Revision as of 17:38, 17 April 2017

Data cleaning is an essential step between data collection and data analysis. The aim is to (i) identify data errors, (ii) correct those errors, and (iii) improve the data collection process.


Read First

  • See this checklist, which can be used to make sure that common cleaning actions have been carried out when applicable.

It is very difficult to put a data collection procedure in place that is so efficient it generates error-free raw data. Any raw data output needs some level of cleaning, whether minor or major. Through the cleaning process, the research team can learn lessons and feed them back into the next round of data collection, making the whole process more efficient.

Data cleaning is essential because without it any analytical work loses validity: models used in research assume, at a minimum, that the data are clean.

Data cleaning is an important aspect of any impact evaluation project. Almost every research team dedicates one or more research assistants solely to data cleaning, which adds to the project's costs.

The Goal of Cleaning


There are two main goals when cleaning the data set:

  1. Cleaning individual data points that invalidate or incorrectly bias the analysis
  2. Preparing a clean data set that is easy for other researchers to use, both inside and outside your team

Cleaning individual data points

In impact evaluations, our analysis often comes down to testing for statistical differences in means between the control group and each of the treatment arms. We do so through advanced regression analysis that includes control variables, fixed effects, and different error estimators, among many other tools, but in essence one can think of it as an advanced comparison of means. While this is far from a complete description of impact evaluation analysis, it gives the person cleaning a data set for the first time a framework for what cleaning should achieve.

It is difficult to have an intuition for the math behind a regression, but it is easy to have an intuition for the math behind a mean. Anything that biases a mean will bias a regression, and while there are many more things that can bias a regression, this is a good place to start for anyone cleaning a data set for the first time. The researcher in charge of the analysis is trained in what else needs to be done for the specific regression models used. The articles linked below go through specific examples, but it is probably obvious to most readers that outliers, typos in the data, survey codes (often values like -999 or -888), etc. bias means, so it is never wrong to start with those.
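As a minimal sketch of this point (in Python for illustration, with made-up values; the actual codes depend on the survey instrument), unhandled survey codes can drag a mean far from its true value, while converting them to missing restores it:

```python
# Raw responses to an "age" question, with survey codes mixed in.
# -999 and -888 are assumed codes for "don't know" / "refused".
ages = [34, 27, -999, 41, -888, 29]
SURVEY_CODES = {-999, -888}

# Naive mean treats the codes as real ages and is badly biased.
naive_mean = sum(ages) / len(ages)

# Converting codes to missing (here: dropping them) before averaging
# recovers a sensible estimate.
cleaned = [a for a in ages if a not in SURVEY_CODES]
clean_mean = sum(cleaned) / len(cleaned)

print(naive_mean)  # strongly negative, clearly nonsensical for an age
print(clean_mean)  # 32.75
```

The same reasoning carries over to regressions: any variable still containing survey codes will bias coefficient estimates just as it biases this mean.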

Prepare a clean data set

The second goal of data cleaning is to document the data set so that variables, values, and anything else are as self-explanatory as possible. This helps other researchers to whom you grant access to the data set, but it also helps you and your research team when accessing the data set in the future. At the time of data collection or data cleaning, you know the data set much better than you will at any time in the future. Carefully documenting this knowledge so that it can be used at the time of analysis is often the difference between a good analysis and a great analysis.

Role Division during Data Cleaning

Spend time identifying and documenting irregularities in the data. It is never bad to suggest corrections to irregularities, but a common mistake RAs make is spending too much time trying to fix irregularities at the expense of having enough time to identify and document as many as possible.

Eventually the PI and the RA will have a common understanding of which corrections the RA can make on their own, but until then, it is recommended that the RA focus her/his time on identifying and documenting as many issues as possible rather than on how to fix them. One major reason is that different regression models might require different ways of correcting issues, and this is often a perspective only the PI has.

Import Data

The first step in cleaning the data is to import the data. If you work with secondary data (data prepared by someone else), this step is often straightforward, but it is often underestimated when working with primary data. It is very important that any change, no matter how small, is always made in Stata (or in R or any other scripting language). Even if you know that there are incorrect submissions in your raw data (duplicates, pilot data mixed with the main data, etc.), those deletions should always be made in code so that they can be replicated by re-running it. Without this information the analysis might no longer be valid. See the article on raw data folders for more details.
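A sketch of such a replicable correction step (in Python for illustration; the ID field, date field, and pilot cutoff date are all assumptions invented for this example):

```python
# Hypothetical, replicable cleaning step: drop pilot submissions and
# duplicate IDs in code, never by hand-editing the raw data, so the
# clean data set can always be rebuilt by re-running the script.
def clean_submissions(rows, pilot_end_date="2017-01-15"):
    """Drop pilot-period submissions and duplicate IDs (keep first)."""
    seen_ids = set()
    kept = []
    for row in rows:
        if row["submission_date"] <= pilot_end_date:
            continue  # pilot data mixed in with the main data
        if row["id"] in seen_ids:
            continue  # duplicate submission of the same respondent
        seen_ids.add(row["id"])
        kept.append(row)
    return kept

raw = [
    {"id": "101", "submission_date": "2017-01-10"},  # pilot
    {"id": "102", "submission_date": "2017-02-01"},
    {"id": "102", "submission_date": "2017-02-01"},  # duplicate
    {"id": "103", "submission_date": "2017-02-03"},
]
print([r["id"] for r in clean_submissions(raw)])  # ['102', '103']
```

Because the raw data is never touched, anyone can re-run this step and arrive at exactly the same clean data set.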

Importing Primary Survey Data

All modern CAPI survey data collection tools provide methods for importing the raw data in a way that drastically reduces the amount of work that needs to be done when cleaning the data. These methods typically include a Stata do-file that generates labels and much more from the questionnaire code and then applies them to the raw data as it is being imported. If you are working in SurveyCTO, see this article on SurveyCTO's Stata Template.

Examples of Data Cleaning Actions

The material in this section was written with primary survey data in mind, but many of these practices also apply when cleaning other types of data sets.

Data Cleaning Checklist. This is a checklist that can be used to make sure that all common aspects of data cleaning have been covered. Note that it is not an exhaustive list: such a list is impossible to create, as each data set and the analysis methods applied to it require different cleaning whose details depend on the context of that data set.

Incorrect Data and Other Irregularities

Survey Codes and Missing Values

No Strings

All data should be stored in numeric format. There are multiple reasons for this, but the two most important are that numeric data is stored much more efficiently and that many Stata commands expect values to be stored numerically.
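As a sketch of the "no strings" rule (in Python for illustration; in Stata the corresponding commands are destring and encode, and the column names here are made up), numbers stored as text are converted to numeric storage, and categorical answers become numeric codes plus a code-to-text mapping:

```python
# Numbers that arrive as text are converted to numeric storage,
# analogous to Stata's destring.
raw_income = ["1200", "980", "1500"]
income = [int(v) for v in raw_income]

# Categorical answers become numeric codes plus a mapping from code
# to text, analogous to Stata's encode.
answers = ["yes", "no", "yes", "maybe"]
codes = {label: i for i, label in enumerate(sorted(set(answers)), start=1)}
encoded = [codes[a] for a in answers]

print(codes)    # {'maybe': 1, 'no': 2, 'yes': 3}
print(encoded)  # [3, 2, 3, 1]
```

The mapping from codes back to text is exactly what value labels (next section) preserve, so no information is lost by storing the data numerically.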

Labels

There are several ways to add helpful descriptive text to a data set in Stata, but the two most common and important ways are variable labels and value labels.
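A minimal sketch of the two kinds of labels (in Python for illustration; in Stata these are attached with the label variable and label define/label values commands, and the variable name and categories here are invented):

```python
# The numeric data itself stays compact; two small dictionaries
# document what the variable and each coded value mean.
data = {"q12": [1, 2, 1, 3]}

variable_labels = {"q12": "Main source of drinking water"}  # variable label
value_labels = {"q12": {1: "Piped water", 2: "Well", 3: "Surface water"}}  # value labels

def describe(var):
    """Render one variable's values with its labels applied."""
    values = [value_labels[var].get(v, v) for v in data[var]]
    return f"{variable_labels[var]}: {values}"

print(describe("q12"))
# Main source of drinking water: ['Piped water', 'Well', 'Piped water', 'Surface water']
```

Anyone opening this data set later can read what q12 measures and what each code means without returning to the questionnaire.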

Variable Labels

Value Labels

Additional Resources

  • list here other articles related to this topic, with a brief description and link