Partner Spotlight: Pauline Abernathy from Benefits Data Trust on the value of research partnerships and understanding your impact
In 2014, Benefits Data Trust (BDT) partnered with J-PAL North America staff and affiliated researchers Amy Finkelstein (MIT) and Matt Notowidigdo (Northwestern) to design and implement a randomized evaluation of its outreach and assistance program for the Supplemental Nutrition Assistance Program (SNAP).
We spoke with Pauline Abernathy, BDT’s Chief Strategy Officer, to learn more about the nonprofit’s experience conducting a randomized evaluation in partnership with J-PAL North America, including why they decided to pursue a randomized evaluation and how the results have informed their current and future work.
What is the history of BDT and what is its mission?
BDT is a national nonprofit that helps people live healthier, more independent lives by creating smarter ways to access essential benefits and services. Each year, billions of dollars in benefits go untapped, either because people aren’t aware that they’re eligible for them or because they aren’t sure how to access them. Our mission is to create smarter ways to connect people to benefits that will help them live better lives.
What motivated BDT to pursue a randomized evaluation of the SNAP take-up program?
Our team—and our funders—wanted to understand the real impact of BDT’s work. We knew that the only way to truly know our causal impact was through a randomized controlled trial. If the findings showed that what we were doing wasn’t having an impact, we wanted to know that. Ultimately, we’re in this work to improve people’s lives, and if the research had found that we weren’t improving them, it was important that we know.
What excites you about the study findings and were there any surprises?
We learned that BDT’s services are having an impact. For those who received our targeted outreach and assistance with enrolling, we were able to triple SNAP enrollment. The average household we helped received $1,200 in food assistance, and the study calculated a 20:1 return on investment, which was incredibly exciting. What was surprising was that the group that received targeted outreach without assistance saw an 81 percent increase in enrollment. This told us that for some people, lack of information is a barrier, while for others, both the lack of information and the application process itself are barriers to enrolling.
The study showed that we were vastly underestimating our impact because we were only counting the people who enrolled in a benefit as a result of both our outreach and assistance efforts. We weren’t looking at the people who were enrolling based on our outreach alone.
How have the study results informed or affected BDT’s work since the evaluation concluded?
The results led us to think about how we can improve our work by tailoring outreach depending on individual needs. The results also reinforced our belief that people want to be served in different ways. It’s helped inform the development of a new BDT tool called Benefits Launch, which enables people to screen themselves. The tool then points users to where they can apply either online, in-person, or over the phone. We also send follow-up texts to further help participants if they run into roadblocks.
Likewise, we’ve developed our first machine learning model to help identify which clients we should encourage to get assistance with application documents, and which people likely could submit their documents on their own and therefore get enrolled even faster.
What advice do you have for other organizations who are interested in conducting a randomized evaluation?
You must be prepared for good or bad news. You need to be comfortable with the concept of a control group: there will be people who, at least during the period being studied, aren’t receiving the treatment. For us, that was easy because we couldn’t reach out to everyone at once. I would encourage people to think about randomization not as some people never receiving a treatment, but as a delay in when they receive it. The study results might help inform what treatment they receive, so they might ultimately receive something that’s more helpful for them.
The other piece is to ensure that you really know your program, because you’re going to have to answer a lot of questions. You need staff with the time to answer those questions, help design the study, and analyze the results. It’s important to realize that it takes time and knowledge to conduct randomized evaluations correctly.
Any last thoughts about the overall experience of partnering with J-PAL North America?
I can’t emphasize enough how helpful the research findings have been. We just received two large grants, and I don’t think we would have received the funding without partnering with J-PAL North America on this research. The funders are investing because they’re confident that we have a measurable and significant impact, that our model works, and that it can be scaled.
The experience also underscored how much we can glean by further analyzing our data. We have since hired additional outreach analysts to dig into our data to help us understand who is responding to what so that we can better tailor our outreach and assistance. The whole evaluation experience encouraged us to build out a dedicated research team to do more work with external researchers like those at J-PAL. We look forward to conducting further research so we can continue to improve on how we serve our clients and the impact we make.