Partner Spotlight: Henry Fitts from the City of Rochester, NY on designing and implementing the Bridges to Success program evaluation

Henry Fitts is the Director of Innovation at the Mayor's Office in Rochester, New York. The City of Rochester has partnered with the Wilson Sheehan Lab for Economic Opportunities (LEO) and J-PAL North America through the State and Local Innovation Initiative to develop and implement a randomized evaluation of the local Bridges to Success program.

We spoke with Henry to learn more about his experience designing and implementing an evaluation of Bridges to Success, including why they decided to pursue a randomized evaluation, how this research partnership developed, and how forthcoming research results may inform policy decisions in Rochester and beyond.

  1. What is the history of the Bridges to Success program and what is it trying to achieve?

In 2013, Rochester, NY had the fifth-highest poverty rate in the nation. Rochester's standing has continued to worsen in recent years: the city now ranks third for overall poverty rate, first for child poverty rate, and first for extreme poverty, meaning people living below 50 percent of the poverty line. This has been a galvanizing issue for the local community, and stakeholders have spent the last several years developing new and innovative approaches to Rochester's poverty crisis.

In 2015, an unprecedented coalition of local stakeholders launched the Rochester Monroe Anti-Poverty Initiative (RMAPI) to challenge the community to think differently and shift efforts to improve long-term economic outcomes rather than serve short-term basic needs. RMAPI has also identified the need to restructure services around a holistic view of a family’s barriers to economic mobility rather than provide siloed services for specific issues. RMAPI operates using the Collective Impact model and includes representation from state, county, and city government, all major non-profits, local universities, neighborhood leaders, and individuals impacted by poverty.

Following an extensive community planning process, RMAPI recommended piloting two adult mentor/navigator programs to address these issues, one of which was Bridges to Success. The program supports participants in setting and achieving goals across each major area of their lives and connects them to services that assist with those goals. The program also provides participants with $1,200 in incentives to recognize achievements or to remove barriers to community resources, such as application fees for services or school. Bridges to Success ultimately aims to increase earned income and reduce reliance on public financial assistance among program participants.

  2. What first made you interested in pursuing a randomized evaluation of your program?

RMAPI helped focus the local community on implementing more rigorous evaluations of program and initiative outcomes, especially metrics associated with long-term economic mobility. During its planning phase, RMAPI stakeholders researched evidence-based interventions in other communities and found promising results from the Economic Mobility Pathways (EMPath) model in Boston. We connected with EMPath and visited an implementation of their model in Fort Worth, Texas, where we met LEO, who gave an inspiring presentation on their randomized evaluation of the Texas EMPath program. As we began to structure our own detailed program proposal, LEO sent us information about the J-PAL State and Local Innovation Initiative and the funding available to help structure the evaluation. The rigor of a randomized evaluation aligned well with RMAPI's goals for enhancing evaluation and metrics, and we felt it was a perfect fit.

  3. What has been exciting, as well as challenging, about the evaluation design process?

The design of the evaluation itself was relatively simple. What was more challenging was crafting program operations around it while preserving flexibility to deal with unforeseen circumstances. We were targeting the program to a defined set of neighborhoods in Rochester, and there was a lot of anxiety about whether we could recruit the extra 150 people needed for the comparison group. We agreed that easing the eligibility criteria and expanding the geographic limitation were two 'levers' we could pull if we encountered recruitment difficulties, without compromising the evaluation. We ultimately had to use both, and we were fortunate to have RMAPI's support.

The other big challenge we experienced was establishing data sharing of administrative records, which can reduce the cost of an evaluation by removing the need for costly follow-up surveys. Working with our partners in the New York State government to establish data sharing was quite difficult. It took a substantial amount of time to connect with the right people at the Office of Temporary and Disability Assistance, but we ultimately reached a final data-sharing agreement. The main issue was the lack of a clear process and point of contact for these kinds of requests. I'd advocate for state governments to mandate that all of their agencies have an open and transparent process for handling data-sharing requests for academic evaluations.

  4. What comes next in the evaluation process, and how do you plan to use the evaluation results?

LEO is currently collecting outcome data on the participant and comparison groups one year into the program. They are working with the University of Wisconsin, which has a team that specializes in follow-up surveys and has a great success rate in locating participants, something that can be especially difficult for the comparison group. We recently received the first set of interim results and will have the full one-year results for participants this summer. Participants are in the program for two years, so we won't have final data at graduation until late 2020. We will also survey participants at least one year after graduation, which will be a critical measure of whether the program's impacts last. Finally, we will use New York State administrative data to monitor earnings and public benefits utilization several years out, which won't require intensive follow-up surveys.

We plan to use the results to judge whether the program actually produced lasting economic mobility outcomes. The primary indicator will be increased household earnings relative to any increases seen by the comparison group. We also plan to use the data to produce a rigorous cost-benefit analysis. If we see positive results from both of these aspects, we will advocate for expansion of the program and use the evidence as part of our case for funding.

  5. What advice do you have for other state and local agencies who may be considering pursuing a randomized evaluation?

Be sure to build the time it takes to enroll participants into the overall program timeline. For example, our intervention was two years long, but the last enrollments didn't occur until after the end of the first year. So while clients experienced a two-year intervention, operationally it was a three-year program, and it needs to be budgeted for accordingly.

And finally, build a partnership with an academic team as early in the process as possible. They will be extremely valuable to have on board even as you are conceptualizing the project. It is much easier to build in the evaluation from the start than to fit it in later. They can help you brainstorm which questions you want to answer and which areas are a good fit for an evaluation, and they can also help with grant writing and other critical steps to getting the program and evaluation funded. Through our partnership with LEO and support from J-PAL's State and Local Innovation Initiative, we are excited to continue conducting rigorous impact evaluations of local programs and ultimately to use the insights to improve the lives of our community members.