Exporting Analysis

This article discusses and gives examples of three important concepts related to exporting the results of analysis:

  1. formatting outputs
  2. replicability
  3. version control


Read First

  • Outputted research should always be reproducible. See the sections below for the different levels of replicability.

Formatting

Formatting requirements depend on the audience. For example, best practices for communicating results to project beneficiaries or government counterparts are different than those for communicating results to the academic research community.

Policy Output

Fact sheets can be an efficient method to disseminate impact evaluation results to government counterparts and local communities. Here is a great example of a fact sheet used for a DIME project. Regression tables formatted according to journal standards would obviously not work well in this context.

Academic Output

Follow established guidelines, such as the submission guidelines of the journal you are targeting.

LaTeX

The best tool for producing excellent-looking and easily reproducible tables is LaTeX. DIME has created a LaTeX Training with multiple stages, targeting the absolute beginner as well as the experienced user. LaTeX has many features that allow you to produce tables that look exactly like the tables published in top journals. See the LaTeX training for more details on how to use it.

Levels of Replicability of Exporting Analysis

We all know that our work should be replicable, especially outputs, but exactly how replicable does something need to be? For example, if a report has a table that is generated by code but formatted manually, is the report replicable? This section walks you through the different levels of replicability and the tools appropriate to each level.

  • No replicability. Parts or all of the results are not generated by code that can be run by anyone else, and/or parts or all of the outputted results need to be manually copied and pasted from the result window or graph window in, for example, Stata.
  • Basic replicability. All results are produced by code and saved to files on disk. However, some copying and pasting between files is required to create the final tables.
  • Good replicability. All results are produced by code and saved to files on disk, and no copying and pasting of results is needed between files. However, formatting and other minor changes are needed, and/or the final tables need to be copied and pasted into the document where they will be used.
  • Full replicability. All results are generated by code and exported in a format where no changes need to be made to finalize them (not even formatting). The results are also automatically imported into, and if they change automatically updated in, the documents where they will be used.

No replicability

Anything that needs manual copying and pasting from a Stata or R window to a file saved on disk can never be considered replicable. This applies to graphs as well: graphs should be saved to file, not copied and pasted from the window they appear in.

This level of replicability is never acceptable for published outputs, no matter how small or unimportant the report is. It may be acceptable during the initial exploration of the data, but as soon as output is produced for someone else - even within the team - it should be done with a higher level of replicability. Since analysis should eventually be shown to someone, we strongly recommend that you aim for a higher level of replicability from the start, as it will save you time later.

Basic replicability

The code generates all graphs and tables in the project folder; however, some copying and pasting between files is needed to create the tables, or some very basic math needs to be done in the outputted files. This is not best practice, but it is the minimum acceptable level of replicability.

Graphs. It is easy to satisfy basic replicability for graphs. In Stata, you simply use the saving() option included in Stata's graph commands, or the graph export command. This even satisfies good replicability for many graphs.
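
As a minimal sketch, assuming Stata's built-in auto dataset and illustrative file names, saving a graph to disk instead of copying it from the graph window could look like this:

  sysuse auto, clear

  * Save the graph to disk in Stata's own .gph format
  scatter price mpg, saving("scatter_price_mpg.gph", replace)

  * Or export it to a portable format such as .png
  graph export "scatter_price_mpg.png", replace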

Tables. For tables there is no single built-in option for saving output similar to the saving() option for graphs. Common commands for exporting results include outreg and estout. estout is a package of commands that also includes esttab, eststo, estadd, and estpost. These commands are explained in more detail in the good replicability section below.

One way to test whether all tables and graphs are exported with basic replicability is to move all table and graph files to a separate folder and run the code again. Make sure that all table and graph files are re-created in the project folder and that it is possible to perform the minor manual steps required to generate the final tables and graphs from these files.

Good replicability

This is the level that we recommend all DIME projects aim for. While full replicability is objectively better, we understand that it might not be a model that works for all teams, for example because external collaborators are not able or willing to work in the tools that full replicability requires.

Graphs. For many graphs, the saving() option described under basic replicability is enough for good replicability. One exception where it is not sufficient is when several graphs are supposed to be combined into one figure. It is not good replicability to combine them manually or to simply place them next to each other in the report. For good replicability this should be done in the code. In Stata there is a command called graph combine that can be used for this purpose, as in the sketch below.
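
A minimal sketch, again assuming the auto dataset and illustrative file names:

  sysuse auto, clear

  * Create and save the individual graphs
  scatter price mpg, saving("panel_a.gph", replace)
  scatter price weight, saving("panel_b.gph", replace)

  * Combine the saved graphs into one figure in code, not manually
  graph combine "panel_a.gph" "panel_b.gph", rows(1)
  graph export "combined_panels.png", replace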

Tables. For good replicability, all tables should be completed by code in Stata or R. For example, extra statistics such as test statistics and means should be added before exporting the table. If multiple tables are to be combined into one table, that should also be done by code. The estout family of commands provides the functionality needed to accomplish this.

Using the estout commands, you can store regression results from multiple regressions in one table using eststo:

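A minimal sketch, assuming the estout package is installed (ssc install estout) and using Stata's built-in auto dataset with an illustrative file name:

  sysuse auto, clear

  * Store each regression with eststo
  eststo clear
  eststo: regress price mpg
  eststo: regress price mpg weight

  * Export both stored models side by side to a file on disk
  esttab using "regression_results.tex", replace se label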

You can add your own statistics using estadd. If you have a statistic that you want to add to the results of the regression - for example, the predicted Y, the mean of a sub-sample (such as the control observations), the N of a sub-sample, or any other number that you have calculated in your code - you can add a row with that statistic using estadd.
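
Continuing the sketch above, here the auto dataset's foreign variable stands in for a hypothetical treatment indicator, and the control-group mean of the outcome is added as an extra row:

  sysuse auto, clear
  eststo clear

  regress price mpg weight

  * Calculate the sub-sample mean and attach it to the active estimates
  summarize price if foreign == 0
  estadd scalar control_mean = r(mean)

  * Store the estimates, including the added scalar
  eststo model1

  * Display the added scalar as its own row in the exported table
  esttab model1 using "regression_results.tex", replace se ///
      stats(control_mean N, labels("Control mean" "Observations"))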

These commands also allow you to append tables to each other, among many other features. The estout commands might be a bit difficult to get started with, and the easiest way to learn is often to copy the code used to generate a table that you like. Recycling code is the most common way to use the estout commands, even for users familiar with the package.

Full replicability

For full replicability, all tables and graphs should also be imported automatically into the final report. No formatting or any other type of editing should be needed between running the code and the tables appearing in the text of the report.

While new tools are starting to be introduced, the traditional way to achieve this level of replicability has been to use LaTeX. DIME has prepared resources for getting started with LaTeX and for writing fully replicable documents using LaTeX. See the LaTeX training mentioned above.

For tables and graphs to be imported into a report written in LaTeX, they must be exported in LaTeX format or in a format that LaTeX can read. For graphs, this only requires exporting the graph to a format such as .png, for example with Stata's graph export command. For tables it is just as easy if you are already using the estout family, as you can simply export the tables in .tex format. See the LaTeX resources linked above for how to import these files into your report.
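
A minimal sketch of exporting both a graph and a table in formats that a LaTeX document can include directly (for example with \includegraphics and \input); file names are illustrative and the estout package is assumed to be installed:

  sysuse auto, clear

  * Graph: export to a format LaTeX can read, such as .png
  scatter price mpg
  graph export "figure_price_mpg.png", replace

  * Table: export directly to a .tex file
  * (the booktabs option assumes \usepackage{booktabs} in the LaTeX document)
  eststo clear
  eststo: regress price mpg
  esttab using "table_price_mpg.tex", replace booktabs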

While new alternatives based on, for example, Microsoft Word that skip LaTeX altogether are emerging, we still recommend LaTeX, as it is more comprehensive and the online resources are much better developed than for any of the newer tools.

Version control

It is very common in research that multiple approaches are tried before the Principal Investigator decides which will be the final analysis. This should be minimized, as it could otherwise be regarded as p-hacking, but to some degree it will always be necessary, as what we learn about the data set affects what analysis is required.

During this process we need a way to go back to previous results so that we can compare the different versions. This is called version control. We can version control the code generating the results, the results themselves, or both.

Version control of code. This is the only method that provides version control in the full sense of the term, as we use GitHub for this purpose.

Version control of results. The method suggested here is not version control in the full sense of the term, but it satisfies the basic need discussed here: date the output files, as sketched below.
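
A minimal sketch of dating output files in Stata, assuming the estout package and the auto dataset; c(current_date) returns a string like "18 Jan 2018":

  sysuse auto, clear
  eststo clear
  eststo: regress price mpg

  * Build a datestamp like "18Jan2018" by removing the spaces
  local today = subinstr(c(current_date), " ", "", .)

  * Each run writes to a new, dated file instead of overwriting the old one
  esttab using "results_`today'.tex", replace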

Version control of both. This simply means using both of the above techniques.

Back to Parent

This article is part of the topic Data Analysis
