Evaluating Social Programs Webinar Series

Webinar
Timeline: June 8 to June 12
Location: Zoom webinar (all times EDT)

Event Description

While our in-person course in Cambridge has been postponed indefinitely due to the coronavirus pandemic, we are excited to offer this free webinar series during the originally scheduled course dates. Join us daily from 11am-12:30pm EDT for this week-long webinar training series on Evaluating Social Programs. Throughout the week, these webinars will provide an introduction to why and how randomized evaluations can be used to rigorously measure social impact.

Register here >>

The interactive sessions will combine lectures by J-PAL-affiliated professors and senior J-PAL staff with case study breakout sessions led by our training team. While we strongly encourage participants to commit to attending all five sessions to gain a comprehensive overview of randomized evaluations, participants can register for any combination of the sessions throughout the week:

Day 1: Theory of Change and Measurement: An Interactive Case Study
June 8, 11am-12:30pm EDT
This lecture will provide an introduction to impact evaluation, from the types of questions we can answer to how we can ensure impact evaluations build on our theories of change. In the accompanying interactive case study, we will further explore how we define and measure our key outcomes.

Speakers: Toby Chaiken, Policy and Training Manager, J-PAL North America and Ben Morse, Senior Research, Education, and Training Manager, J-PAL Global
Format: Lecture + interactive case study breakout 

Day 2: Why Randomize? An Interactive Case Study
June 9, 11am-12:30pm EDT
In this lecture and case study, we will present different impact evaluation methodologies and discuss the advantages of randomized evaluations. Building on day 1, participants will gain a deeper understanding of what influences the choice of one impact evaluation method over another.

Speaker: Damon Jones, Associate Professor, University of Chicago Harris School of Public Policy
Format: Lecture + interactive case study breakout

Day 3: Ethics of Randomized Evaluations
June 10, 11am-12:30pm EDT
This session will discuss the framework researchers use when thinking about ethics in study design and implementation, and how to apply that framework in various real-world examples.

Speaker: Laura Feeney, Associate Director of Research and Training, J-PAL North America
Format: Lecture + interactive breakout discussion

Day 4: Building Effective Research-Policy Partnerships
June 11, 11am-12:30pm EDT
Bringing together a researcher and implementing partner from a real-world evaluation, this panel will offer insights into how researchers and practitioners can work together to carry out a successful project and forge a partnership for evidence-based decision making. The session will be moderated by a J-PAL staff member and include an interactive Q&A at the end.

Speaker(s): TBA
Format: Panel discussion + moderated Q&A

Day 5: The Generalizability Puzzle
June 12, 11am-12:30pm EDT
How can results from one context inform policies in another? This lecture will provide a framework for how to apply evidence across contexts.

Speaker: Mary Ann Bates, Executive Director, J-PAL North America
Format: Lecture + moderated Q&A

Stay tuned for further details on each session in the coming weeks.

To receive email announcements on other upcoming J-PAL courses, subscribe to our Course Announcements newsletter.

Overview and Objectives

Participants can expect to:

• Gain a clear understanding of why and when researchers and policymakers might choose to conduct randomized evaluations, and how randomized evaluations are designed in real-world settings;
• Engage with experienced J-PAL staff on how to generate rigorous evidence to inform decision making;
• Engage with coursework designed to help participants apply learnings at their home organizations through real-world examples and practice exercises;
• Participate in small group breakouts to work through material with J-PAL staff members who have expertise in the design and implementation of randomized evaluations; and
• Learn strategies to maximize policy impact and assess the generalizability of research findings.