Administrative Data and Evaluation Guides

J-PAL North America has developed a number of resources to make it easier for researchers, policymakers, and practitioners to design randomized evaluations and produce rigorous evidence in the fight against poverty.

Administrative Data

Catalog: Administrative Data Sets

A searchable catalog (html) of key US data sets, designed to help researchers screen potential data sources; it also documents the procedures for accessing each data set.

Brief: The Lessons of Administrative Data

A two-page document (pdf) that highlights examples of landmark studies made possible by administrative data.

Guide: Using Administrative Data for Randomized Evaluations

A guide (pdf) that provides practical guidance on how to obtain and use nonpublic administrative data for a randomized evaluation. A poster (pdf) summarizes some of the key takeaways and visuals.

Guide: Data Security Procedures for Researchers

A guide (pdf) that offers a primer on basic data security themes, context on the elements of data security that are particularly relevant for randomized evaluations using individual-level administrative and/or survey data, and guidance for describing data security procedures to an Institutional Review Board (IRB) or in an application for data use. It does not provide step-by-step instructions for implementing data security; rather, it compiles resources and links to external guides and/or software.

Evaluation Guides

Formalizing Research Partnerships

This resource (pdf) outlines steps to establish and build a strong working relationship with an implementing partner at the beginning of a randomized evaluation. Topics include questions to consider when developing a project scope, timeline, communications strategy, and formal agreements between researchers and implementing partners. This information may be most useful for researchers who have identified an implementing partner, research questions, and experimental design.

Communicating With a Partner About Results 

Communicating the results of a randomized evaluation to implementing partners and other key stakeholders enables these partners to make direct changes to operations, policy, or processes, and to shape the direction of future programs. These partners typically influence whether and how evidence generated by the evaluation is interpreted, disseminated, or acted on. Thoughtful communication – considering what, when, and how to share results – is one element of fostering strong relationships and thereby paving a pathway to policy impact. This document (pdf) provides guidance for researchers on when and how to communicate with partners about the results and progress measures of randomized evaluations.

Guide: Six Rules of Thumb for Determining Sample Size and Statistical Power

A guide (pdf) for policymakers and practitioners that outlines the main factors that affect statistical power and sample size, and demonstrates how to design a high-powered randomized evaluation.
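The core relationship the guide describes – required sample size grows with outcome variance and shrinks as the detectable effect gets larger – can be sketched numerically. The snippet below is an illustration only, not code from the guide; the function name and defaults are invented here, and it uses the standard normal-approximation formula for a two-arm trial with equal allocation:

```python
import math
from statistics import NormalDist

def sample_size_per_arm(mde, sd, alpha=0.05, power=0.80):
    """Approximate sample size per arm for a two-arm trial with equal
    allocation, using the normal-approximation formula
    n = 2 * (z_{1-alpha/2} + z_{power})^2 * (sd / mde)^2.

    mde: minimum detectable effect (difference in means)
    sd:  standard deviation of the outcome
    """
    z = NormalDist().inv_cdf
    return 2 * (z(1 - alpha / 2) + z(power)) ** 2 * (sd / mde) ** 2

# Example: detecting a 0.2-standard-deviation effect at 5% significance
# with 80% power takes roughly 393 units per arm.
print(math.ceil(sample_size_per_arm(mde=0.2, sd=1.0)))
```

Halving the minimum detectable effect quadruples the required sample, which is why underpowered designs are so easy to fall into.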

Guide: Implementing Randomized Evaluations in Government

A guide (pdf) that provides practical guidance for state and local governments on how to identify good opportunities for randomized evaluations, how randomized evaluations can be feasibly embedded into the implementation of a program or policy, and how to overcome some of the common challenges in designing and carrying out randomized evaluations.

Checklist: Administrative Steps for Launching a Randomized Evaluation in the United States

A checklist (pdf) that provides guidance on the logistical and administrative steps that are necessary to launch a randomized evaluation that adheres to legal regulations, follows transparency guidelines required by many academic journals, and complies with security procedures required by regulatory or ethical standards.

Brief: Common Questions and Concerns about Randomized Evaluations

A two-page document (pdf) that addresses concerns potential evaluators may have about the logistical, ethical, and financial implications of running a randomized evaluation.

Brief: The Danger of Underpowered Evaluations

A one-page document (pdf) that highlights the risks associated with running an evaluation that is not designed to detect a meaningful impact of a program.

Brief: Impact Evaluation Methods

A table (pdf) that describes and compares different evaluation methodologies and indicates when each one is valid.

Code: Power Calculations in Stata

A comprehensive tutorial that walks users through how to run parametric and non-parametric power calculations using the statistical software Stata (.zip).
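As a rough companion to the idea behind the tutorial – though written in Python rather than Stata, and not the tutorial's own code – power at a given sample size can also be estimated by simulation: repeatedly draw fake treatment and control samples, test for a difference, and count how often the test rejects. The function and parameters below are assumptions for illustration:

```python
import random
import statistics

def simulated_power(n_per_arm, effect, sd=1.0, alpha=0.05, sims=2000, seed=7):
    """Estimate power by simulation: draw treatment and control samples,
    run a two-sample z-test, and return the share of rejections."""
    rng = random.Random(seed)
    z_crit = statistics.NormalDist().inv_cdf(1 - alpha / 2)
    rejections = 0
    for _ in range(sims):
        control = [rng.gauss(0.0, sd) for _ in range(n_per_arm)]
        treated = [rng.gauss(effect, sd) for _ in range(n_per_arm)]
        se = (statistics.variance(control) / n_per_arm
              + statistics.variance(treated) / n_per_arm) ** 0.5
        z = (statistics.mean(treated) - statistics.mean(control)) / se
        rejections += abs(z) > z_crit
    return rejections / sims

# With about 393 units per arm, estimated power for a 0.2-SD effect
# should come out near 0.80.
print(simulated_power(n_per_arm=393, effect=0.2))
```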

Guide: Real-World Challenges to Randomization and Their Solutions

A guide (pdf) to help researchers and implementing partners develop evaluation designs that fit their program’s context. Using real examples from ongoing and completed randomized evaluations, the document describes multiple research designs that accommodate existing programs, mitigate foreseeable implementation challenges, and demonstrate the flexibility of randomized evaluations across contexts. A poster (pdf) summarizes the key takeaways and visuals.

Brief: Why Randomize

A one-page document (pdf) and video that summarize the rationale behind why randomized evaluations are a powerful way to credibly evaluate the impact of a policy or program.

J-PAL's Research Resources section has a broader set of resources developed both within the J-PAL network and externally. For information and questions, or if you have an idea for a resource you’d like to see, please contact Rohit Naimpally ([email protected]).