Evaluating Social Programs Webinar Series
While our in-person course in Cambridge has been postponed indefinitely due to the Coronavirus pandemic, we are excited to offer this free webinar series during the originally scheduled course dates. Join us daily from 11am-12:30pm EDT for this week-long webinar training series on Evaluating Social Programs. Throughout the week, these webinars will provide an introduction to why and how randomized evaluations can be used to rigorously measure social impact.
The interactive sessions will combine lectures by J-PAL-affiliated professors and senior J-PAL staff with case studies led by our training team. While we strongly encourage attending all five sessions for a comprehensive overview of randomized evaluations, participants may register for any combination of sessions throughout the week:
Day 1: Theory of Change and Measurement: An Interactive Case Study
June 8, 11am-12:30pm EDT
This lecture will provide an introduction to impact evaluation, from the types of questions we can answer to how we can ensure impact evaluations build on our theories of change. In the accompanying interactive case study, we will further explore how we define and measure our key outcomes.
Day 2: Why Randomize? An Interactive Case Study
June 9, 11am-12:30pm EDT
In this lecture and case study, we will present different impact evaluation methodologies and discuss the advantages of randomized evaluations. Building on day 1, participants will gain a deeper understanding of what influences the choice of one impact evaluation method over another.
Speakers: Ben Morse, Senior Research, Education, and Training Manager, J-PAL Global, with special guest Damon Jones, Associate Professor, University of Chicago Harris School of Public Policy
Format: Lecture + interactive case study + moderated Q&A
Day 3: Ethics of Randomized Evaluations
June 10, 11am-12:30pm EDT
This session will discuss the framework researchers use when thinking about ethics in study design and implementation, and how to apply that framework in various real-world examples.
Speaker: Laura Feeney, Associate Director of Research and Training, J-PAL North America
Format: Lecture + discussion
Day 4: Building Effective Research-Practitioner Partnerships
June 11, 11am-12:30pm EDT
Bringing together a researcher and implementing partner from a real-world evaluation, this panel will offer insights into how researchers and practitioners can work together to forge a partnership for evidence-based decision making that has real-world impact. This panel session will be moderated by a J-PAL staff member and have an interactive Q&A at the end.
The session will highlight READI Chicago, an innovative response to gun violence in Chicago. The program connects people most highly impacted by gun violence to paid transitional jobs, cognitive behavioral therapy, and wrap-around support services to help them create a viable path for a different future and reduce their serious violence involvement. Researchers from the University of Chicago Crime and Poverty Labs and University of Michigan are rigorously evaluating READI Chicago to assess its effectiveness and impact on participants' violence involvement.
Speakers: Sara Heller, Assistant Professor of Economics, University of Michigan, Monica Bhatt, Senior Research Director, UChicago Crime and Education Labs, and Chasda Martin, Director of Programs, READI Chicago; moderated by Toby Chaiken, Policy and Training Manager, J-PAL North America
Format: Panel discussion + moderated Q&A
Day 5: The Generalizability Puzzle
June 12, 11am-12:30pm EDT
How can results from one context inform policies in another? This lecture will provide a framework for how to apply evidence across contexts.
Speaker: Mary Ann Bates, Executive Director, J-PAL North America
Format: Lecture + moderated Q&A
To receive email announcements on other upcoming J-PAL courses, subscribe to our Course Announcements newsletter.
Overview and Objectives
Participants can expect to:
- Gain a clear understanding of why and when researchers and policymakers might choose to conduct randomized evaluations and how randomized evaluations are designed in real-world settings;
- Engage with experienced J-PAL staff on how to generate rigorous evidence to inform decision making;
- Engage with coursework designed to help participants apply learnings at their home organizations through real-world examples and practice exercises; and
- Learn strategies to maximize policy impact and assess the generalizability of research findings.