Reflecting on a decade of impactful research at J-PAL North America: Credibly identifying program impact

J-PAL staff speak with AJ Gutierrez, co-founder of Saga Education, a leader in high-impact tutoring.

In part four of J-PAL North America’s ten-year anniversary blog series, we discuss how credible evidence from randomized evaluations helps identify effective strategies to reduce poverty, regardless of whether the impact estimate is positive or null. We share what we have learned from evaluations with positive results and from evaluations with null results, drawing key lessons for making all research results meaningful.

The power of randomized evaluations to generate credible, clear evidence has transformed the policymaking space, where policymakers, government leaders, and practitioners increasingly prioritize evidence-based policymaking. Randomized evaluations credibly measure the impact of programs and policies because random assignment ensures that systematic differences, such as income or gender, do not drive differences in outcomes between people who do and do not receive a program. Because of this, randomization allows for causal conclusions: researchers can credibly say that a program causes observed changes in outcomes. Randomization also minimizes the need for complex statistical methods to estimate impact, so results from randomized evaluations provide clear, easy-to-understand evidence that can be applied to policies and programs.
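The causal logic described above can be illustrated with a short simulation (a hypothetical sketch with made-up numbers, not code or data from any study discussed here): because a coin flip decides who receives the program, background traits are balanced across groups on average, and a simple difference in means recovers the program's true effect.

```python
import random
import statistics

random.seed(42)

# Hypothetical population: a background trait (e.g., baseline income)
# strongly influences each person's outcome.
population = [{"baseline": random.gauss(50, 10)} for _ in range(10_000)]

for person in population:
    # Random assignment: a coin flip decides treatment, so baseline
    # traits are balanced across the two groups on average.
    person["treated"] = random.random() < 0.5
    # Assume the program's true effect is +5 units on the outcome.
    effect = 5 if person["treated"] else 0
    person["outcome"] = person["baseline"] + effect + random.gauss(0, 5)

treated = [p["outcome"] for p in population if p["treated"]]
control = [p["outcome"] for p in population if not p["treated"]]

# With randomization, a simple difference in means estimates the
# causal effect -- no complex statistical adjustment is needed.
impact_estimate = statistics.mean(treated) - statistics.mean(control)
print(round(impact_estimate, 1))
```

Running the sketch yields an estimate close to the assumed true effect of 5; without random assignment, the same difference in means would be contaminated by whatever baseline differences separated the two groups.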

Credible results from randomized evaluations can equip decision-makers to go beyond trends in social policy and meaningfully improve lives. J-PAL North America has supported nearly 200 randomized evaluations on a variety of policy-relevant topics to build a credible body of evidence on which strategies actually improve lives and which do not. Though positive results from randomized evaluations are exciting, sometimes we learn that interventions do not work as intended. While this can be challenging, null results, as well as negative results, are critical in building our understanding of what works in reducing poverty.

New evidence can transform the narrative around a programmatic area

Whether randomized evaluations show that a program or policy causes positive impact or has no impact, new evidence can transform theories of change about what works in a particular area, such as health, education, or criminal justice.

In the Baby’s First Years study, researchers found that providing monthly cash payments to families experiencing poverty positively impacted infant brain activity. Prior to this study, research in both social science and neuroscience theorized about the impact of poverty on child development, but evidence was scarce on developmental outcomes for children under the age of three. Qualitative work also helped define the scope of child-related purchases and quantify household size, further contextualizing theories of change about child development, household income, and poverty. Evidence from this evaluation to date demonstrates that providing unrestricted cash, rather than targeting other factors associated with poverty or restricting personal choice, can change patterns of infant brain activity that are associated with the development of cognitive skills.

Additionally, null results can promote innovation by giving insight into which mechanisms of a program work and why. In the case of Health Care Hotspotting, researchers investigated the impact of intensive wrap-around support on hospital readmission rates among individuals with high health care usage and complex medical needs. While this model had received national attention as a promising intervention, researchers ultimately found no impact of the program on six-month hospital readmission rates. This could be because the intervention served a younger population with more complex and diverse needs than the populations of previously evaluated programs. While these null results are challenging, they ultimately prompted an institutional shift in thinking at the implementing organization, the Camden Coalition, where program leaders are now working through which mechanisms of their program work for their particular population and why.

Credible positive and null results guide strategic scaling

Sometimes new evidence prompts scaling back programs and policies. In the past decade, J-PAL North America supported evaluations of workplace wellness programs, in which researchers sought to understand the impact of these programs on employee health and health care costs. Previous observational analyses had suggested varying degrees of positive effects, but rigorous evidence was needed to validate these estimates. Ultimately, randomized evaluations found that a fairly typical workplace wellness program, delivered across many worksites by a large employer, had limited to no impact within the first year, and follow-up analyses found similar results after three years. While disappointing, this insight can guide policymakers deciding whether to scale incentives, such as tax subsidies for these types of programs, and can propel exploration of more effective approaches to bolster employee health.

On the other hand, new evidence can be a catalyst for scaling up effective programs and policies. In the education sector, there are numerous debates on what to prioritize to improve student learning outcomes. Randomized evaluations can be particularly effective in politicized contexts like this because clear and credible results communicate across political divides to share what interventions actually work. By 2020, randomized evaluations had consistently provided evidence of the effectiveness of tutoring on improving student learning, so J-PAL North America conducted an evidence review of 96 tutoring randomized evaluations, synthesizing existing evidence to inform the education sector about which programmatic elements of tutoring programs create high impact for students.

Leveraging actionable insights from the evidence review, J-PAL North America supports scaling tutoring programs through researcher-practitioner partnerships and policy. The recently launched Tutoring Evaluation Accelerator (TEA) will support tutoring programs across the United States to implement evidence-based programs and conduct evaluations in new contexts. Disseminating results about the impact of tutoring resulted in policy actions that increased access to tutoring. Citing J-PAL North America research, the White House encouraged states to allocate American Rescue Plan funds for tutoring programs. Following conversations between J-PAL North America staff and the California Governor’s office about the impressive potential of high-dosage tutoring, the state passed a bill in 2021 that included $460 million for hiring paraprofessional tutors. This evidence also influenced the 2021 creation of a $5 million high-impact tutoring program in Colorado, where advocates drew from J-PAL North America’s evidence review to inform the characteristics of tutoring outlined in the legislation.

Results from randomized evaluations can also provide insights for effective program implementation by improving understanding of which elements of a program drive its impact. Positive results can inform iteration on program delivery to maximize effectiveness. In the criminal legal space, J-PAL North America supported researchers in New York City on an evaluation that found text message reminders decreased court nonappearance by 26 percent. As a result of this evidence, New York City courts now send text message reminders to all summons recipients who provide a cell phone number.

Trusting relationships ensure that credible evidence creates positive change 

All credible research results can be used to champion innovation and learning. Because null and negative results may be disappointing or even challenge our beliefs, they must be shared thoughtfully with practitioners and policymakers. This underscores the importance of building trusting relationships between researchers and partners, as discussed in part three of this blog series. When researchers and partners are invested in the partnership, all members are more likely to be open to learning from results once they become available. J-PAL supports teams in sharing null results with partners and disseminating them publicly so that lessons learned can inform effective decision-making.

Sharing null results also contributes to creating a culture in the policymaking space that celebrates and learns from failure. In reflecting on results, AJ Gutierrez, co-founder of Saga Education, shares, “After evaluation, it can be disheartening to find no evidence of impact, but when we don’t know the effects of an intervention these null results provide new information for learning that builds on what researchers know. J-PAL’s courage in sharing results helps build momentum in policy spaces and tells stories that need to be told.”

In J-PAL North America’s ten-year anniversary blog series, we reflect on some of the most impactful randomized evaluations and bodies of research that our organization has supported over the past decade. We also celebrate the tremendous contributions of our researcher network and the policymakers and practitioners who have made this research possible. Part one kicks off the series with reflections from our scientific leadership. Part two explores the role of study design and implementation. Part three dives into effective collaboration between researchers and practitioners. Part five considers how policy can be informed at scale.
