Promoting evidence and evaluation in the American Rescue Plan Act

Earlier this year, state, local, territorial, and tribal governments received funding through the $350 billion Coronavirus State and Local Fiscal Recovery Funds (SLFRF) program created under the American Rescue Plan Act. Recipients have broad flexibility to help disproportionately impacted communities and tailor economic recovery to their context-specific needs.

Even amid the grim realities of the pandemic, this moment presents an opportunity for cities and states to implement services that could make a big difference in the lives of the communities hit hardest. It is also a moment that demands a series of tough decisions about priorities and trade-offs. As we have written previously, evidence is a valuable tool governments can use to anchor those decisions by spending on existing evidence-informed programs and building evaluations into new initiatives.

Recently, J-PAL North America’s Senior Research and Policy Manager Rohit Naimpally spoke on a US Department of the Treasury technical assistance panel for governments receiving SLFRF awards. Here, we share key lessons from the panel on evaluation and evidence-based interventions.

The American Rescue Plan Act encourages and enables evaluation

Choosing to build evaluation into a new program or policy can be a significant decision. An evaluation takes additional time and resources to plan and execute beyond what the program itself requires. And by their nature, the results of evaluations are unpredictable: they may show that a program has its intended impact, they may find unexpected outcomes, or they may find little impact at all. Similarly, parsing existing evidence on programs can be time consuming, and it requires careful interpretation and inference about what will translate to a new context. So why should governments pursue evaluation and evidence in their plans? Is it even worth it?

Through the structure of the SLFRF, the Treasury has explicitly signaled that evidence and evaluation are powerful tools for governments to use when designing their recovery plans. Treasury reporting and compliance guidelines state that plans should specify how jurisdictions are using funds for evidence-based interventions or programs that will include an impact evaluation. Further, funding can be used to pay for the costs of evaluating programs—so the administrative and financial barriers to building new evidence through recovery plans are lower than they often are.

The value of evidence: confirming course and changing paths

More broadly, evaluations can provide valuable, actionable information both when they affirm a program’s expected outcomes and when they don’t. In Chicago, researchers partnered with a local non-profit to test a cognitive behavioral therapy intervention intended to help male high school students examine and rethink automatic thought processes and responses. A series of randomized evaluations found that participants were less likely to be arrested and more likely to graduate on time than peers who weren’t in the program.

As a result of these rigorous evaluations, the program received significant attention from policymakers, including then-Chicago Mayor Rahm Emanuel and then-President Barack Obama, as well as additional funding from philanthropists. Here, evaluation gave leaders confidence that the intervention was an effective option for achieving important outcomes, and it served as the basis for an expansion to serve more people.

Evaluations can also give decision-makers grounds to reconsider, redesign, or pivot if a program isn’t working as expected. In one example, researchers examined workplace wellness programs in the United States. Such programs are popular—more than half of employers have one—but evidence of their effectiveness was unclear. Two separate rigorous evaluations found that the workplace wellness programs they studied had no impact on employee health outcomes or healthcare spending.

While these results may not be what proponents of such programs expected, they are still quite valuable. Knowing that an intervention doesn’t work as intended means decision-makers can shift gears to a different approach that might do a better job of achieving the impact they’re pursuing.

Evidence and evaluation are team sports

Governments interested in spurring new evaluations or learning from previous ones don’t have to go it alone. Researchers, evidence clearinghouses, and like-minded governments can all be valuable resources or collaborators. In particular, leaders may be asking themselves who can conduct an evaluation, where to find support, and which programs already have a strong underlying evidence base.

Who can do evaluations?

Potential partners can include government evaluation groups, like Minnesota Management and Budget or Washington, DC’s The Lab @ DC, whose mandates include championing evidence and supporting other agencies in their jurisdictions that are interested in evaluation.

Local universities and academic policy labs can also be a good source of expertise and experience in running evaluations. Additionally, J-PAL North America’s research network includes researchers across the country who have backgrounds in conducting rigorous, policy-relevant evaluations. If a government agency is interested in exploring randomized evaluations and is looking for a research partner, J-PAL’s State and Local Innovation Initiative can help match agencies with research experts and provide input into the evaluation design process.

What resources exist to support evaluations?

The federal government maintains several databases cataloging interventions that have been studied, the results of those studies, and the strength of the evidence. These databases span multiple topics; examples include the What Works Clearinghouse for education and the Pathways to Work Evidence Clearinghouse for employment and training programs. Other groups in the evidence-informed policy space, like Results for America, also maintain evidence resources and case studies.

At J-PAL, our catalog of evaluation summaries offers a starting point for understanding evidence produced by randomized evaluations in the United States on a variety of topics. We also offer several other resources, including training and workshops, toolkits and guides, and technical assistance. Additionally, we have partnered with policymakers and researchers to create a learning agenda for future evaluations on economic mobility, which summarizes ongoing and completed evaluations, research priorities, shared questions, and opportunities for new research.

Key points from webinar participants

Finally, participating in the Treasury’s webinar reinforced for us how many people are engaging in the valuable work of integrating evidence into policy across the country. Below, we highlight a key takeaway from each of the other webinar speakers.

Pete Bernardy from Minnesota Management and Budget outlined several steps his team is taking to empower decision-makers across Minnesota to pursue evidence and evaluation, such as helping agencies build evidence-use requirements into request for proposals (RFP) processes. His examples demonstrated the important role that internal government groups can play in enabling this work.

Grace Simrall from the Louisville Office of Civic Innovation & Technology shared her team’s work evaluating a program aimed at reducing recidivism among people returning home from prison, and stressed the practical importance of linking multiple sources of administrative data to get a full picture of the program and its participants.

Diana Epstein from the Office of Management and Budget’s Evidence Team emphasized that the goal of evaluation is ongoing learning in service of important policy goals, not just a static, one-time assessment of a program’s value.

Sara Dube from the Pew Charitable Trusts’ Results First Initiative highlighted the wealth of existing resources available to support governments in evidence-based policymaking, including the Results First Clearinghouse Database, which aggregates information about program effectiveness from different evidence clearinghouses.

Join us

If you are interested in joining us in this work, exploring how J-PAL North America could support an evaluation in your jurisdiction, or learning more about existing evidence, please reach out to SLII Initiative Manager Rohit Naimpally at [email protected].