Reproducible Research
Reproducible research is the system of documenting and publishing results from a given research study. At the very least, reproducibility allows other researchers to analyze the same data to get the same results as the original study, which strengthens the conclusions of the original study. Reproducible research is based on the idea that the path to research findings is just as important as the findings themselves.
Read First
- DIME Analytics has created the DIME Research Reproducibility Standards.
- DIME Analytics has also conducted a bootcamp on reproducible research, which covers the various aspects of reproducibility.
- Well-written master do-files are critical to transparent, reproducible research.
- GitHub repositories play a major role in making research reproducible.
- Specialized text editing and collaboration tools ensure that output is reproducible.
Replication and Reproducibility
Replication is a process where different researchers conduct the same study independently in different samples and find similar conclusions. It adds more validity to the conclusions of an empirical study. However, in most field experiments, the research team cannot create the same conditions for replication. Different populations can respond differently to the same treatment, and replication is often too expensive. In such cases, the researchers should still try to achieve reproducibility. There are four key elements of reproducible research - data documentation, data publication, code publication, and output reproducibility.
Data Documentation
Data documentation deals with all aspects of an impact evaluation - sampling, data collection, cleaning, and analysis. Proper data documentation not only produces reproducible research for the future, but also ensures high-quality data in the present. For example, a field coordinator (FC) may notice that some respondents do not understand a questionnaire because of reading difficulties. If the FC does not document this issue, the research assistant will not flag these observations during data cleaning. And if the research assistant does not document why the observations were flagged, and what the flag means, it will affect the results of the analysis.
Guidelines
Accordingly, in the lead-up to, and during, data collection, the research team should follow these guidelines for data documentation.
- Comments. Use comments in your code to document the reasons for a particular line or group of commands. In [[Stata Coding Practices|Stata]], use * to insert comments (a short example follows this list).
- Folders. Create separate folders to store all documentation related to the project in separate files. For example, in GitHub, the research team can store notes about each folder and its contents under README.md.
- Consult data collection teams. Throughout the process of data cleaning, seek extensive input from the people responsible for collecting the data. This could be a field team, a government ministry responsible for administrative data, or a technology firm that handles remote sensing data.
- Exploratory analysis. While cleaning the data set, look for issues such as outliers, and data entry errors like missing or duplicate values. Record these observations for use during the process of variable construction and analysis.
- Feedback. When researchers submit code for review, or release data on a public platform (such as the Microdata Catalog), others may provide feedback, either positive or negative. It is important to document these comments as well, as they can improve the quality of the results of the impact evaluation.
- Corrections. Include records of any corrections made to the data, as well as to the code. For example, based on feedback, the research team may realize that they forgot to drop duplicate entries. Publish these corrections in the documentation folder, along with the communications where these issues were reported.
- Confidential information. The research team must be careful not to include confidential information, or any information that is not securely stored.
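To illustrate the points on comments and exploratory analysis above, here is a minimal Stata sketch of documented cleaning checks. The data set and variable names (hhid, read_difficulty) are hypothetical placeholders.

    * Confirm that hhid uniquely identifies observations
    isid hhid
    * Report duplicate and missing values so they can be documented
    duplicates report hhid
    misstable summarize
    * Flag respondents the field team reported as having reading difficulties,
    * so the reason for the flag is preserved for construction and analysis
    generate byte flag_reading = (read_difficulty == 1)
    label variable flag_reading "Had difficulty reading the questionnaire (field team report)"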
Documentation tools
There are various tools available for data documentation. The Open Science Framework provides one such solution, with integrated file storage, version histories, and collaborative Wiki pages. GitHub provides a transparent task management platform in addition to version histories and Wiki pages, but is less effective for file storage. The exact shape of this process should be agreed on by team members prior to project launch.
Data Publication
Data publication is the public release of all data once the process of data collection and analysis is complete. Ideally, the research team should publish all data that is needed for others to reproduce every step of the original code, from cleaning to analysis. However, this may not always be feasible, since data often contains personally identifiable information (PII) and other confidential information.
Guidelines
The research team must keep the following things in mind to ensure that the data is well-organized before publishing:
- Clean and label. Ensure that the data has been cleaned and is well-labelled.
- No missing variables. Make sure the data contains all variables used during data analysis, and includes uniquely identifying variables.
- De-identify. Careful de-identification is important to maintain the privacy of respondents and to meet research ethics standards. The research team must carefully de-identify any sensitive or personally identifiable information (PII), such as names, locations, or financial records, before release (a minimal sketch follows this list).
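As a minimal sketch of these steps in Stata (the file and variable names, such as hhid, name, phone, gps_lat, and gps_lon, are hypothetical), the release version of a data set might be prepared as follows:

    * Load the cleaned data set and confirm the unique identifier
    use "cleaned_data.dta", clear
    isid hhid
    * Label key variables used in the analysis
    label variable hhid "Household identifier"
    * De-identify: drop direct identifiers and other PII before release
    drop name phone gps_lat gps_lon
    save "deidentified_data.dta", replace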
DIME Analytics has developed the following resources to help researchers store and organize data for public release.
- iefieldkit. iefieldkit is a Stata package which allows the research team to follow best practices for data cleaning.
- ietoolkit. ietoolkit is a Stata package which simplifies the process of data management and analysis in impact evaluations. It allows the research team to organize the raw data (a short installation and setup sketch follows this list).
- Data management guidelines. The data management guidelines provide steps on how to organize data for cleaning and analysis.
- DataWork folder. The DataWork folder is a standardized folder template for organizing data in a project folder. The raw de-identified data can be stored in the DataSets folder of the DataWork survey round folder.
- Microdata catalog checklist. The microdata catalog checklist provides instructions on how to prepare data for release using the Microdata catalog of the World Bank. The Microdata Library offers free access to microdata produced not only by the World Bank, but also other international organizations, statistical agencies, and government organizations.
- Data publication standards. The DIME Data Publication Standards provide detailed guidelines for preparing data for release.
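Both packages are available from SSC. As a rough sketch (the project path below is a placeholder, and exact options may differ across package versions), they can be installed and the standardized DataWork folder structure created with iefolder as follows:

    * Install the DIME Analytics packages from SSC
    ssc install ietoolkit, replace
    ssc install iefieldkit, replace
    * Create the standardized DataWork folder structure for a new project
    iefolder new project, projectfolder("C:/Users/name/Documents/MyProject")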
Data publication tools
There are several free software tools that allow the research team to publicly release the data and the associated documentation, including GitHub, the Open Science Framework, and ResearchGate. Each of these platforms can handle organized directories and can provide a static uniform resource locator (URL), which makes it easy to collaborate with other users.
- ResearchGate. ResearchGate allows users to assign a digital object identifier (DOI) to published work, which they can then share with external researchers for review or replication.
- Open Science Framework. The Open Science Framework is an online platform which allows members of a research team to store all project data, and even publish reports using OSF preprints.
- DIME survey data. DIME also publishes and releases survey data through the Microdata Catalog. However, access to the data may be restricted, and some variables may not be published.
Code Publication
Code publication is another key element of reproducible research. Sometimes academic journals ask for reproducible code (and data) along with the actual academic paper. Even if they don't, it is good practice to share code and data with others. The research team should ensure that external researchers have access to, and can execute, the same code and data that were used during the original impact evaluation.
Guidelines
With careful coding, use of master do-files, and adherence to coding best practices, the same data and code will yield the same results for anyone who runs them. Follow these guidelines when publishing the code:
- Master do-files. The master do-file should set the Stata seed and version to allow replicable sampling and randomization. By nature, the master do-file will run project do-files in a pre-specified order, which strengthens reproducibility. The master do-file can also be used to list the assumptions of a study and all data sets that are used in the study (a minimal sketch follows this list).
- Packages and settings. Install all necessary commands and packages in the master do-file itself. Specify all settings and sort observations frequently to minimize errors. DIME Analytics has created two packages to help researchers produce reproducible research: iefieldkit and ietoolkit.
- Globals. Create globals (or global macros) for the root folder and all project folders. Globals should only be specified in the master do-file, and can be used, for example, for standardizing coefficients across the data set used for analysis.
- Shell script. If you use different languages or software in the same project, consider using a shell script, which ensures that other users run the different languages or software in the correct order.
- Comments. Include comments (using *) in your code frequently to explain what a line of code (or a group of commands) is doing, and why. For example, if the code drops observations or changes values, explain why this was necessary using comments. This ensures that the code is easy to understand, and that the research is transparent.
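The sketch below pulls these guidelines together into a minimal master do-file. The folder paths, file names, seed, and version number are placeholders, and the exact structure will vary by project.

    * Master do-file: settings for reproducibility
    version 15
    set more off
    set seed 20200428        // fixed seed for replicable sampling and randomization
    * Install user-written packages used in the project
    ssc install ietoolkit, replace
    ssc install iefieldkit, replace
    * Globals for the root folder and all project folders
    global projectfolder "C:/Users/name/Documents/MyProject"
    global datawork      "$projectfolder/DataWork"
    global dofiles       "$projectfolder/dofiles"
    * Run the project do-files in a pre-specified order
    do "$dofiles/1_cleaning.do"
    do "$dofiles/2_construction.do"
    do "$dofiles/3_analysis.do"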
Code publication tools
There are several free software tools that allow the research team to publicly release the code, including GitHub and Jupyter Notebook. Users can pick any of these depending on their familiarity with each tool.
- GitHub. GitHub is a free version-control platform. It is popular because users can store every version of every component of a project (such as data and code) in repositories which can be accessed by everyone working on the project. With GitHub repositories, users can track changes to code in different programming languages, and create documentation explaining what changes were made and why. The research team can then simply share Git repositories with an external audience, which allows others to read and replicate the code as well as the results of an impact evaluation.
- Jupyter Notebook. Jupyter Notebook is another platform where researchers can create and share code in different programming languages, including Python, R, Julia, and Scala.
To learn more about how to use these tools, refer to the Additional Resources section at the end of this article.
Output Reproducibility
GitHub repositories allow researchers to track changes to code in different programming languages, create messages explaining those changes, make the code publicly available, and allow other researchers to read and replicate it.
Dynamic documents allow researchers to write papers and reports that automatically import or display results. This reduces the amount of manual work involved between analyzing data and publishing the output of this analysis, so there's less room for error and manipulation.
Different software allows for different degrees of automation. R Markdown, for example, allows users to write text and code simultaneously, running analyses in different programming languages and printing results in the final document along with the text. Stata 15 allows users to use dyndoc to create similar documents; the output is a file, usually a PDF or HTML file, that contains text, tables, and graphs. With this kind of document, whenever researchers update data or change the analysis, they only need to run one file to generate a new final paper or report. No copy-pasting or manual changes are necessary, which improves reproducibility.
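As a small illustration, a Stata dynamic document is an ordinary text or Markdown file in which code is wrapped in dynamic tags; running dyndoc on the file executes the embedded Stata code and writes an HTML report. The file name report.txt and its contents below are hypothetical.

    Example contents of report.txt:

    Average vehicle price in the auto data set.
    <<dd_do>>
    sysuse auto, clear
    summarize price
    <</dd_do>>
    The mean price is <<dd_display: %9.2f r(mean)>>.

    To build the report, run in Stata:
    dyndoc "report.txt", replace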
LaTeX is another tool widely used in the scientific community. It is a typesetting system that allows users to reference code outputs such as tables and graphs so that they can easily be updated in a text document. After you analyze the data in your preferred software, you can export the results into TeX format – R's stargazer is commonly used for this, and Stata has different options such as esttab and outreg2. The LaTeX document then loads these outputs; whenever results are updated, simply recompile the LaTeX document with the press of a button in order to integrate the new graphs and tables. Should you wish to use TeX collaboratively, Overleaf is a web-based platform that facilitates TeX collaboration, and Jupyter Notebook can create dynamic documents in HTML, LaTeX, and other formats.
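For instance, with Stata's esttab (from the user-written estout package), a regression table can be written to a .tex file and then pulled into the paper with \input. The example below is a sketch using Stata's built-in auto data set; the output file name is a placeholder.

    * Export a regression table to LaTeX with esttab
    ssc install estout, replace
    sysuse auto, clear
    eststo clear
    eststo: regress price mpg weight
    esttab using "results.tex", replace label se booktabs
    * In the LaTeX document, include the table with: \input{results.tex}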
Additional Resources
- DIME Analytics’ Data Management and Cleaning
- DIME Analytics’ Coding for Reproducible Research
- DIME Analytics’ Intro to GitHub
- DIME Analytics’ guides 1 and 2 to Using Git and GitHub
- DIME Analytics’ Maintaining a GitHub Repository
- DIME Analytics’ Initializing and Synchronizing a Git Repo with GitHub Desktop
- DIME Analytics’ Using Git Flow to Manage Code Projects with GitKraken
- DIME Analytics’ Fundamentals of Scientific Computing with Stata and guides 1, 2, and 3 to Stata Coding for Reproducible Research
- Open Science Framework, a web-based project management platform that combines registration, data storage (through Dropbox, Box, Google Drive and other platforms), code version control (through GitHub) and document composition (through Overleaf).
- Data Colada’s 8 tips to make open research more findable and understandable
- The Abdul Latif Jameel Poverty Action Lab (J-PAL)’s resources on transparency and reproducibility
- Innovations for Poverty Action (IPA)’s Reproducible Research: Best Practices for Data and Code Management and Guidelines for data publication
- Randomized Control Trials in the Social Science Dataverse
- Center for Open Science’s Transparency and Openness Guidelines, summarized in a 1-Page Handout
- Berkeley Initiative for Transparency in the Social Sciences (BITSS)’ Manual of Best Practices in Transparent Social Science Research
- Online courses: for R, Johns Hopkins’ Online Course on Reproducible Research on Coursera; and for Stata, Incorporating Stata into Reproducible Documents
- Matthew Salganik's Open and Reproducible Research: Goals, Obstacles and Solutions