How policymakers can ensure they’re supporting housing policies that work: Demystifying randomized evaluations

Authors:
Katie Fallon
Two individuals holding pens, looking at charts and clipboards
Photo Credit: create jobs 51/Shutterstock

This post was originally published on Housing Matters, an initiative of the Urban Institute.

Understanding the impact and effectiveness of housing programs and policies is critical to ensuring they achieve their goals and help the people they serve. These considerations include, for example, which types of housing and housing modifications should be prioritized to support older adults as they age.

To improve people's outcomes, policymakers should invest in evidence-based programs. Randomized evaluations are one rigorous method researchers often use to evaluate the impact of policies and programs.

To demystify randomized evaluations and underscore their value for policymakers and practitioners, we spoke with Bridget Mercier, policy and training manager at J-PAL North America, a regional office of the Abdul Latif Jameel Poverty Action Lab (J-PAL), a research lab based at the Massachusetts Institute of Technology that focuses on this type of research. J-PAL helps match researchers with practitioners interested in impact evaluation and builds capacity for organizations hoping to run randomized evaluations.

What is unique about randomized evaluations as a method?
Randomized evaluations are often considered the most rigorous type of impact evaluation. Studies generally try to establish cause and effect, but this is difficult because you can't observe what would have happened if a program or policy had not taken place. That unobserved scenario is known as the counterfactual. Comparing a group that received an intervention with an equivalent group that did not lets us estimate the counterfactual and measure the program's impact.

In randomized evaluations, people are randomly assigned to receive the program or to a comparison group. Because assignment is random, the two groups don't differ systematically at the start of the study: they have similar characteristics, including those we can easily measure and those we can't. As a result, we can attribute differences in outcomes to the intervention itself, rather than to any underlying factors.

As part of our work in the California Bay Area investigating the impact of cash transfers on housing stability, randomization ensures the group that receives the cash transfer doesn't differ from the group that doesn't receive it, either in observable characteristics, like age or household size, or in harder-to-observe characteristics, such as motivation or self-selection. When we have relevant policy questions, randomization can give us trustworthy answers. Researchers also design their randomized evaluations so they are well positioned to detect impacts.
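
To make the mechanics concrete, here is a minimal Python sketch, not code from the Bay Area study, of how random assignment and a simple difference-in-means comparison might look; the applicant list, outcome measure, and numbers are hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)

# Hypothetical applicant list (in practice this would come from program intake data).
applicants = pd.DataFrame({"applicant_id": range(1, 201)})

# Randomly assign each applicant to the cash-transfer group or the comparison group.
applicants["treatment"] = rng.permutation(
    [1] * 100 + [0] * 100  # half receive the transfer, half do not
)

# After the program runs, a follow-up survey records a housing-stability outcome,
# e.g. 1 = stably housed at 12 months, 0 = not (simulated here for illustration).
applicants["stably_housed"] = rng.binomial(
    1, np.where(applicants["treatment"] == 1, 0.65, 0.50)
)

# Because assignment was random, the difference in mean outcomes between the two
# groups is an unbiased estimate of the program's impact.
impact = (
    applicants.loc[applicants["treatment"] == 1, "stably_housed"].mean()
    - applicants.loc[applicants["treatment"] == 0, "stably_housed"].mean()
)
print(f"Estimated impact on housing stability: {impact:.2f}")
```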

How can randomized evaluation contribute to the evidence base on housing?
The scale of homelessness is immense: nearly 600,000 people experience homelessness on a given night, and more than a million access shelter services over the course of a year. At the same time, many people are working tirelessly to end homelessness, and those working on homelessness and housing stability need to know whether what they're doing is working. Randomized evaluations can generate rigorous evidence about which strategies are most effective at preventing or reducing homelessness. While randomized evaluations aren't the best fit for every housing question, they have helped shift narratives in many ways, especially on issues of housing instability and homelessness.

In J-PAL North America's evidence review on reducing and preventing homelessness, we summarize results from 40 rigorous evaluations related to housing and homelessness. The review highlights strategies proven to be effective, such as Housing First, which prioritizes immediate housing without preconditions. Evidence has played a fundamental role in the shift toward this approach, with several randomized evaluations showing that Housing First and permanent supportive housing can help people experiencing chronic homelessness more effectively than the traditional shelter system or transitional housing.

Our evidence review also describes where more research is needed, such as identifying which bundles of services and housing supports are most effective, and for whom. There are many open and testable questions in the field and opportunities for randomized evaluations.

Why should policymakers care about randomized evaluations?
Policymakers should care about randomized evaluations because they are one of the most powerful tools to determine if a policy or program is effective. Understanding program impact is vital to ensuring money is spent on programs that make a difference, especially given the size and scope of homelessness. We also know systematic inequality puts certain people at a disadvantage—such as people of color, members of the LGBTQ community, and survivors of domestic violence—and the findings from randomized evaluations may help us advance more equitable policies by assessing how specific policies or programs may be more or less effective for different populations.

Randomized evaluations can also help us understand why we saw certain results by examining mechanisms for impact. This can help policymakers refine their models and help them apply learnings to new contexts.

How have the results from randomized evaluations affected the people they study?
Insights gained from randomized evaluations have shifted narratives and policy agendas around approaches such as Housing First, permanent supportive housing, and housing choice vouchers. In the case of Housing First, cities and states began implementing Housing First and permanent supportive housing on a wider scale after randomized evaluations demonstrated reductions in homelessness. Randomized evaluations have also demonstrated that housing vouchers can be effective in both reducing homelessness and increasing opportunities for economic mobility. For example, long-term findings from the Moving to Opportunity study, launched in the 1990s, led to policy changes in recent years at both the local and federal levels, channeling over $50 million to housing mobility services that help families move to neighborhoods with greater economic opportunity and expanding housing choices for thousands of low-income families in the United States.

As we learn more about what works to reduce and prevent homelessness, J-PAL aims to translate research into action by supporting the expansion and scale-up of effective programs to reach more people and have a greater impact.

What are some of the challenges of randomized evaluations?
Ethics are a top priority for any research, and there are ethical considerations when conducting a randomized evaluation. For example, the comparison group shouldn’t be blocked from a program or support they would otherwise have access to (especially one that we already know is effective based on prior research). The comparison group receives “treatment as usual,” or can receive a different version of the intervention rather than nothing at all. When resources are scarce and it isn’t possible to serve everyone, using a lottery may be a fair and ethical way to allocate services, thus creating the opportunity for a comparison group.
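
As one concrete illustration of how an oversubscription lottery can double as random assignment, here is a minimal Python sketch; the program capacity, applicant count, and naming are hypothetical assumptions, not details from any particular program.

```python
import random

# Hypothetical: 300 eligible applicants for a program that can only serve 120 this year.
applicants = [f"household_{i:03d}" for i in range(300)]
capacity = 120

# A fair lottery: shuffle the applicant list and offer the program to the first
# `capacity` households. A fixed seed documents the draw and makes it reproducible.
random.seed(2024)
shuffled = applicants.copy()
random.shuffle(shuffled)

program_group = set(shuffled[:capacity])      # offered the program
comparison_group = set(shuffled[capacity:])   # receive treatment as usual

# Because winners were chosen at random, later outcomes for the two groups can be
# compared to estimate the program's impact.
print(len(program_group), "households offered the program;",
      len(comparison_group), "households in the comparison group")
```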

Not every research question can be effectively or ethically answered using a randomized evaluation, and we believe the research question itself should determine the best method of evaluation. We don’t encourage randomized evaluations when they aren’t appropriate, such as when there are enough existing resources to serve everyone and we already know the program works.

However, in many cases we don't know if an intervention is effective, and there are often not enough resources to serve everyone. In these instances, a randomized evaluation can provide this evidence or, depending on the results, shift resources away from ineffective programs. Above all, the rights and welfare of study participants must be protected.

What recommendations do you have for nonresearchers reading studies employing randomized evaluations? What would you tell people to look out for to understand the findings?
Focus on the applicability of findings to your setting. Consider the context of the evaluation at hand, especially when reviewing studies to inform local policy. It's important to think about how findings generalize from one place to another. A framework we use at J-PAL asks a number of important questions: Is there a similar problem in the new context? Why did the solution work where it was tested? What are the local conditions and underlying behavioral factors? What would local implementation look like? Answering these questions can help us understand what the policy and program implications of each study may be in a different context.

Statistical power is also critical in randomized evaluations; it tells you how likely a study is to detect a meaningful change if one occurs. Without adequate statistical power, we won't learn very much, so it's worth considering the power each study brings. A key element of statistical power is sample size, the total number of people or units in a study. Larger samples are more likely to be representative of the population and to detect true impacts. When the sample size is too small, we can't tell whether an observed outcome is the result of the program or pure chance.
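
As a rough illustration of the relationship between sample size and statistical power, here is a minimal Python sketch using the standard two-sample approximation; the effect sizes, significance level, and target power below are placeholder assumptions, not values from any particular study.

```python
from scipy.stats import norm

def sample_size_per_arm(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per group needed to detect a standardized effect
    (difference in means divided by the standard deviation) in a two-sided
    two-sample comparison."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the significance level
    z_beta = norm.ppf(power)            # value corresponding to the desired power
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return int(round(n))

# Hypothetical standardized effect sizes a study might want to detect:
# a small effect requires a much larger sample than a moderate one.
for effect in (0.2, 0.5):
    print(f"Effect size {effect}: about {sample_size_per_arm(effect)} participants per group")
```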
