
Teaching resources on randomized evaluations

Summary

Over the last 15 years, J-PAL has offered its Evaluating Social Programs course in locations around the world. This five-day course provides a thorough understanding of randomized evaluations and pragmatic, step-by-step training for conducting one’s own evaluation. In addition to offering the course in person, J-PAL also offers a free online version. The teaching materials from these and other J-PAL courses are available in this section.

What is Evaluation?

This lecture provides an introduction to impact evaluation, from the types of questions we can answer to how we can ensure impact evaluations build on our theories of change.

Lecture materials:

Measurement: Outcomes, Impacts, and Indicators

This lecture explores how we define and measure key outcomes, the types of data available and where to find them, and potential sources of bias in our data.

Lecture materials:

Case studies on developing a theory of change and identifying appropriate outcomes and indicators: 

  • Cognitive Behavioral Therapy in the US (Case Study)
  • Village Financial Services in India (Case Study)

Why Randomize

In this lecture we present different impact evaluation methodologies, discuss the advantages of randomized evaluations, and examine which factors influence the choice of one impact evaluation method over another. A brief illustrative simulation of these advantages appears at the end of this section.

Lecture materials:

Case studies illustrating the benefits of random assignment, compared to other identification strategies:
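
As a rough companion to this lecture (not part of the course materials), the short Python sketch below simulates a hypothetical program in which more motivated people are more likely to enroll. A naive comparison of enrollees and non-enrollees then overstates the program's effect, while a randomized comparison recovers it; the effect size, sample size, and simple linear outcome model are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # A latent "motivation" trait drives both program take-up and outcomes.
    motivation = rng.normal(0, 1, n)
    true_effect = 2.0

    # Self-selection: more motivated people are more likely to enroll.
    enrolled = rng.random(n) < 1 / (1 + np.exp(-motivation))
    y_observational = 10 + 3 * motivation + true_effect * enrolled + rng.normal(0, 1, n)
    naive = y_observational[enrolled].mean() - y_observational[~enrolled].mean()

    # Random assignment breaks the link between motivation and treatment.
    treated = rng.random(n) < 0.5
    y_experimental = 10 + 3 * motivation + true_effect * treated + rng.normal(0, 1, n)
    experimental = y_experimental[treated].mean() - y_experimental[~treated].mean()

    print(f"true effect:                      {true_effect:.2f}")
    print(f"naive (self-selected) difference: {naive:.2f}")         # biased upward
    print(f"randomized difference:            {experimental:.2f}")  # close to the truth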

How to Randomize

This lecture illustrates how the research question determines the randomization strategy and shows how multiple research questions can be answered within one study. In addition, this lecture explores how a number of potential issues and pitfalls can be anticipated at the design stage. An illustrative sketch of one such design follows the case studies below.

Lecture materials:

Case studies illustrating how the randomization design of an evaluation can be tailored based on the research question(s):

  • Extra Teacher Program in Kenya (Case Study)
  • Labor Displacement from Job Counseling Programs in France (Case Study)
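
To make the design choices concrete, here is a minimal, hypothetical sketch of stratified random assignment with two cross-cutting treatments, so that a single study can speak to two research questions. The school sample, strata, and treatment names are invented for illustration; a real design would typically also balance the four treatment cells within each stratum.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(42)

    # Hypothetical sample of 200 schools, stratified by region.
    schools = pd.DataFrame({
        "school_id": range(200),
        "region": rng.choice(["north", "south", "east", "west"], size=200),
    })

    def assign_within_strata(df, stratum_col, arms, rng):
        """Shuffle units within each stratum and cycle through the arms,
        giving each stratum a (near-)equal split across arms."""
        assignment = pd.Series(index=df.index, dtype=object)
        for _, stratum in df.groupby(stratum_col):
            shuffled = rng.permutation(stratum.index.to_numpy())
            for i, unit in enumerate(shuffled):
                assignment.loc[unit] = arms[i % len(arms)]
        return assignment

    # A cross-cutting (2x2) design: two independently randomized treatments
    # let one study answer two research questions, plus their interaction.
    schools["teacher_training"] = assign_within_strata(schools, "region", ["control", "treatment"], rng)
    schools["extra_textbooks"] = assign_within_strata(schools, "region", ["control", "treatment"], rng)

    print(pd.crosstab(schools["teacher_training"], schools["extra_textbooks"]))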

Power and Sample Size

Power and Sample Size introduces the concept of statistical power and walks through the factors that influence it. The lecture illustrates how the design choices introduced in How to Randomize, along with the outcomes determined in Measurement: Outcomes, Impacts, and Indicators, influence the power of a study. A minimal power-calculation sketch follows the exercise below.

Lecture materials:

Exercise on power calculations:

  • An exercise (and associated dataset) that explains the trade-offs in designing a well-powered randomized trial, using the EGAP web application.
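
To complement the exercise, the sketch below runs a standard power calculation for a two-arm, individually randomized trial with equal-sized arms. The inputs (a minimum detectable effect of 0.2 standard deviations, 5% significance, 80% power) are illustrative assumptions, not values from the course.

    from math import ceil
    from scipy.stats import norm
    from statsmodels.stats.power import TTestIndPower

    mde = 0.2      # minimum detectable effect, in standard deviations (illustrative)
    alpha = 0.05   # significance level, two-sided
    power = 0.80   # desired power

    # Closed-form approximation for equal-sized arms:
    # n per arm = 2 * (z_{1-alpha/2} + z_{1-beta})^2 / MDE^2
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    n_approx = ceil(2 * (z_alpha + z_beta) ** 2 / mde ** 2)

    # The same calculation via statsmodels (solves for the size of the first arm).
    n_exact = TTestIndPower().solve_power(effect_size=mde, alpha=alpha, power=power, ratio=1.0)

    print(f"normal-approximation n per arm: {n_approx}")
    print(f"statsmodels n per arm:          {ceil(n_exact)}")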

Threats and Analysis

This lecture illustrates some of the challenges that can arise while conducting a randomized evaluation and how researchers handle them. An illustrative sketch of one common approach appears at the end of this section.

Lecture materials:

Case studies illustrating how to analyze and interpret the results from a randomized evaluation in the presence of various threats to analysis, such as attrition and non-compliance:
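
As an illustrative sketch (not drawn from the case studies), the code below simulates partial take-up of a program and shows the usual way of handling non-compliance: report the intention-to-treat (ITT) effect based on original assignment, and scale it by the difference in take-up rates (the Wald estimator) to estimate the effect on compliers. All parameter values are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(7)
    n = 50_000

    assigned = rng.random(n) < 0.5    # random assignment to the program
    complier = rng.random(n) < 0.6    # 60% would take up the program if assigned
    takeup = assigned & complier      # non-compliance: only assigned compliers take it up
    effect_on_compliers = 1.5

    outcome = 5 + effect_on_compliers * takeup + rng.normal(0, 2, n)

    # Intention-to-treat: compare by original assignment, ignoring take-up.
    itt = outcome[assigned].mean() - outcome[~assigned].mean()
    # First stage: how much assignment shifts take-up.
    takeup_diff = takeup[assigned].mean() - takeup[~assigned].mean()
    # Wald estimator: effect on compliers (LATE).
    late = itt / takeup_diff

    print(f"ITT estimate:       {itt:.2f}")          # roughly 0.6 * 1.5 = 0.9
    print(f"take-up difference: {takeup_diff:.2f}")  # roughly 0.6
    print(f"LATE estimate:      {late:.2f}")         # roughly 1.5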

Generalizability

How can results from one context inform policies in another? This lecture provides a framework for applying evidence across contexts.

Lecture materials:

Please note that the teaching resources referenced here were curated for specific research and training needs and are made available for informational purposes only. Please email us for more information.

Last updated July 2020.

These resources are a collaborative effort. If you notice a bug or have a suggestion for additional content, please fill out this form.

Acknowledgments

We thank the Training teams past and present of all J-PAL offices for their assistance in creating these materials. Any errors are our own. 
