Administrative Data and Evaluation Guides
J-PAL North America has developed a number of resources to make it easier for researchers, policymakers, and practitioners to design randomized evaluations and produce rigorous evidence in the fight against poverty.
A searchable catalog (html) of key US data sets, designed to help researchers screen potential data sources; it also documents the procedures for accessing each data set.
A two-page document (pdf) that highlights examples of landmark studies made possible by administrative data.
A guide (pdf) that provides a primer on basic data security themes, context on the elements of data security that are particularly relevant for randomized evaluations using individual-level administrative and/or survey data, and guidance for describing data security procedures to an Institutional Review Board (IRB) or in an application for data use. It does not provide step-by-step instructions for implementing data security; rather, it compiles resources and links to external guides and software.
This resource (pdf) outlines steps to establish and build a strong working relationship with an implementing partner at the beginning of a randomized evaluation. Topics include questions to consider when developing a project scope, timeline, communications strategy, and formal agreements between researchers and implementing partners. This information may be most useful for researchers who have identified an implementing partner, research questions, and experimental design.
Communicating the results of a randomized evaluation to implementing partners and other key stakeholders enables these partners to make direct changes to operations, policy, or processes, and to shape the direction of future programs. These partners typically have influence over whether and how to interpret, disseminate, or act on evidence generated by the evaluation. Thoughtful communication – considering what, when, and how to share results – is one element of fostering strong relationships and thereby paving a pathway to policy impact. This document (pdf) provides guidance for researchers on when and how to communicate with partners about results and progress measures of randomized evaluations.
A guide (pdf) for policymakers and practitioners that outlines the main factors that affect statistical power and sample size, and demonstrates how to design a high-powered randomized evaluation.
A guide (pdf) that provides practical guidance for state and local governments on how to identify good opportunities for randomized evaluations, how randomized evaluations can be feasibly embedded into the implementation of a program or policy, and how to overcome some of the common challenges in designing and carrying out randomized evaluations.
A checklist (pdf) that provides guidance on the logistical and administrative steps that are necessary to launch a randomized evaluation that adheres to legal regulations, follows transparency guidelines required by many academic journals, and complies with security procedures required by regulatory or ethical standards.
A two-page document (pdf) that addresses concerns potential evaluators may have about the logistical, ethical, and financial implications of running a randomized evaluation.
A one-page document (pdf) that highlights the risks associated with running an evaluation that is not designed to detect a meaningful impact of a program.
A table (pdf) that describes and compares different evaluation methodologies and indicates when each one is valid.
A comprehensive tutorial that walks users through how to run parametric and non-parametric power calculations using the statistical software Stata (.zip).
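To give a flavor of what a parametric power calculation involves, here is a minimal sketch in Python of the standard normal-approximation formula for a two-arm trial with equal allocation. This is only an illustrative analogue of the kind of calculation the Stata tutorial covers; the function name and default values are this sketch's own, not taken from the tutorial.

```python
# Minimal sketch: parametric power calculation for a two-arm randomized
# evaluation with equal allocation, using the normal approximation.
# Illustrative only; not J-PAL's implementation.
from math import ceil
from statistics import NormalDist

def sample_size_per_arm(mde_sd, alpha=0.05, power=0.80):
    """Sample size per arm needed to detect an effect of `mde_sd`
    standard deviations with a two-sided test at level `alpha`."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value
    z_beta = NormalDist().inv_cdf(power)           # power quantile
    return ceil(2 * (z_alpha + z_beta) ** 2 / mde_sd ** 2)

# A 0.5 sd minimum detectable effect at conventional thresholds
# requires roughly 63 participants per arm.
print(sample_size_per_arm(0.5))
```

The formula makes the trade-offs in the guides above concrete: halving the minimum detectable effect quadruples the required sample size, while raising the desired power or tightening the significance level increases it more gradually.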
A guide (pdf) to help researchers and implementing partners develop evaluation designs that fit their program’s context. Using real examples from ongoing and completed randomized evaluations, the document describes multiple research designs that accommodate existing programs, mitigate foreseeable implementation challenges, and demonstrate the flexibility of randomized evaluations across contexts. A poster (pdf) summarizes the key takeaways and visuals.
J-PAL's Research Resources section has a broader set of resources developed both within the J-PAL network and externally. For more information, questions, or ideas for resources you would like to see, please contact Rohit Naimpally ([email protected]) or Elisabeth O'Toole ([email protected]).