Strengthening randomized evaluations with qualitative research: Baby’s First Years household measurement
Building on J-PAL North America’s qualitative research blog series on incorporating qualitative research into randomized evaluations, this post features the perspectives of researchers conducting the Baby’s First Years (BFY) study. BFY is a J-PAL-funded research project assessing the impact of poverty reduction on family life and on infants’ and toddlers’ cognitive, emotional, and brain development. In this post, the researchers describe what we can gain from triangulating qualitative and quantitative data on household rosters and how it should encourage us to be cautious in interpreting our results.
In the Baby’s First Years study, we bring together qualitative and quantitative data to examine how seemingly simple and objective facts—like who lives in a household—are captured by different types of data collection and what this means for how we should interpret results.
A mother of four in Louisiana told us that her family was recently evicted, leaving them to stay with family and friends. During her interview, she and her children were living with her cousin’s family, though during the week, her seven-year-old stayed with kin closer to school. Mom said that after an argument with her children’s father the day before, he went to stay with his brother. Depending on the day, therefore, there could be up to three adults and six children in this household. Two weeks later, in the survey, she described a household of herself and her children, as well as another adult (neither her cousin nor her children’s father). With her variable housing circumstances, there may not be one right answer about this mother’s household membership.
In Minnesota, we met a mom who lived with her baby at her parents’ house, where her adult cousin stayed part-time, but she and her child also spent about half the nights of the week at her child’s father’s house. While we can choose one as the primary household to report on, doing so obscures the reality of the adults and resources to which this child has access—there are more than either household’s roster would capture, but they are also not available full time.
In 2018 and 2019, the BFY study recruited 1,000 mothers in four US cities whose household incomes were below the federal poverty line when they gave birth. Mothers agreed to be randomly assigned to receive a large ($333) or small ($20) cash gift each month until their child was around four years old. Our qualitative companion study, BFY: Mothers’ Voices, invited a stratified random sample of 80 mothers to participate in repeated, in-depth qualitative interviews each year for the duration of the study. Recently released results from the BFY study showed that babies in the high-cash-gift group were more likely to have brain activity patterns that previous research has associated with future learning and cognitive skills.
When the focal children were around one year old, we collected survey data and conducted interviews with moms; both instruments asked who else lived in the household (for other research on household measurement, see Clark 2017; Waller and Jones 2014).
In addition to providing descriptive information about families, such data is key to calculating measures core to the study. This includes the number of adults in the household who could contribute financially to or invest their time and care in children, and the number of children in the household in need of adult investments and attention. Additionally, who lives in the household and their relationships feed into key measures of economic well-being that matter for safety net programs, like where the household falls relative to the federal poverty threshold—a calculation based on income and family size. As we explore the mechanisms driving the recent brain activity findings, these issues—who is in the house with the child, who is contributing to and drawing on household resources—all come into play.
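To make concrete why the roster matters here, the following is a minimal sketch of the income-to-poverty calculation. The threshold figures are illustrative placeholders, not official Census poverty thresholds for any particular year:

```python
# Sketch of why the household roster matters for poverty calculations.
# The threshold figures are hypothetical placeholders, NOT official
# Census poverty thresholds for any particular year.
BASE_THRESHOLD = 13_000   # hypothetical threshold for a one-person household
PER_ADDITIONAL = 4_500    # hypothetical increment per additional member

def income_to_poverty_ratio(household_income: float, household_size: int) -> float:
    """Household income divided by the poverty threshold for its size."""
    threshold = BASE_THRESHOLD + PER_ADDITIONAL * (household_size - 1)
    return household_income / threshold

# The same income can land on either side of the poverty line
# depending on who is counted in the roster.
print(income_to_poverty_ratio(30_000, 4))  # ratio above 1.0: above the line
print(income_to_poverty_ratio(30_000, 6))  # ratio below 1.0: below the line
```

With these illustrative numbers, counting two additional household members moves the same income from above the poverty line to below it, which is exactly why discordant rosters complicate downstream measures.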
During the survey, we asked mothers about each of the people living in their household, defined as “anyone who has been living with her and is related to her baby through blood, marriage, domestic partnership, or adoption.” In the qualitative interviews, we asked broadly “Who all lives here?” and then followed up about adults and children to make sure everyone was captured. In the interview, we also asked mothers about people who lived with them only part-time.
Our biggest takeaway was that the household roster data collected through surveys and semi-structured interviews were largely consistent:
| | Qualitative Interviews | Quantitative Survey |
|---|---|---|
| Total Number of Unique Individuals Captured | 403 | 389 |
| Average Household Size | 5.04 | 4.86 |
| Median Household Size | 5 | 5 |
| Minimum Household Size | 2 | 2 |
| Maximum Household Size | 12 | 10 |

| Household Member Type | Qualitative Interviews (N) | Qualitative Interviews (%) | Quantitative Survey (N) | Quantitative Survey (%) |
|---|---|---|---|---|
| Biological Parent of Focal Child | 112 | 27.8 | 111 | 28.5 |
| Biological Child of Mother | 194 | 48.1 | 190 | 48.8 |
| Other Adult | 54 | 13.4 | 54 | 13.9 |
| Other Child | 43 | 10.7 | 34 | 8.7 |
However, differences revealed interesting new information. Across the survey and interview data, 56 people were mentioned in only one of the two studies. Twenty-one individuals listed in the quantitative study were not identified in the qualitative study, only one of whom was a child. Thirty-five individuals described in the interviews were not identified in the survey, fourteen of whom (40 percent) were children. We did not see a strong relationship between the length of time separating the survey and interview and discordance in household rosters.
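The roster comparison behind these counts can be sketched with simple set operations. The family below is hypothetical, but the logic mirrors the triangulation exercise:

```python
# Sketch of the triangulation exercise: compare the household members
# each instrument captured for one (hypothetical) family.
survey_roster = {"mother", "focal child", "sibling 1", "other adult"}
interview_roster = {"mother", "focal child", "sibling 1", "sibling 2",
                    "children's father (part-time)"}

only_in_survey = survey_roster - interview_roster       # missed by interview
only_in_interview = interview_roster - survey_roster    # missed by survey
in_both = survey_roster & interview_roster              # concordant members

print(sorted(only_in_interview))
print(sorted(only_in_survey))
```

Aggregating these per-family differences across all families, and flagging members who appear in only one roster, yields the discordance counts reported above.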
For about one in five of the individuals missing in either the survey or interview, we identified a life event such as a baby being born, a release from incarceration, a move, or a change in relationship status that explained the difference in household rosters. For another one in five, their part-time residence in the household may explain the discrepancy between interview and survey (although the survey did capture some individuals reported in the interviews as part-time residents). This means that for most individuals who were only reported in one of the two data collection instruments, there was not a readily identifiable reason for them to be missing. While the survey was more likely to miss household residents, both types of data collection missed some people. Yet both largely produced similar accounts. Why might these differences occur, and what do they mean?
No data collection is perfect, and we are not arguing that surveys or interviews are the “right” way to collect household rosters or other such data. Rather, pairing data collection methods helped reveal what we were seeing and missing with each instrument. While collecting a household roster in this detailed way claimed a substantial amount of survey “real estate,” it also produced results largely concordant with those from open-ended responses.
Both the interviews and the discrepancies we see in household rosters offer insight into the fluid nature of relationships within and across households; there is more instability than point-in-time estimates allow us to see. Triangulating with multiple data collection methods can bolster our confidence and reveal when seeming instances of measurement error are actually illuminating household and family complexities that our research must take into account. Greater understanding of supposedly simple information about households can help us explore the mechanisms that drive RCT impacts.
In part five of our blog series on incorporating qualitative research into randomized evaluations, we feature the perspectives of researchers conducting the Baby’s First Years (BFY) study. BFY is a J-PAL-funded research project assessing the impact of poverty reduction on family life and on infants’ and toddlers’ cognitive, emotional, and brain development. In this post, the researchers reflect on the value of qualitative research in both providing a deeper understanding of each participant’s background and context and painting a fuller picture of mothers’ experiences in the study.
Randomized evaluations are great for telling us the answers to the questions we ask; they are less useful for helping us answer questions we didn’t know to ask.
As researchers from the BFY study, we recognized that survey and debit card transaction data would show how mothers spent their monthly cash gifts, but not why they did so—that is, what meaning and motivation lay behind spending decisions and what power dynamics or negotiations within families may surround these decisions. Therefore, we pursued a multi-method approach to understanding the impact of monthly unconditional cash transfers distributed to families with low incomes for the first several years of their focal child’s life.
In 2018 and 2019, the BFY study recruited 1,000 mothers in four US cities whose household incomes were below the federal poverty line when they gave birth. Mothers agreed to be randomly assigned to receive a large (US$333) or small (US$20) cash gift on a debit card each month until their child was around four years old.
Our qualitative companion study, BFY: Mothers’ Voices, invited a stratified random sample of eighty mothers to participate in repeated, in-depth qualitative interviews each year for the duration of the study. Both BFY and BFY: Mothers’ Voices are currently in the field, surveying and interviewing mothers by phone to find out how they and their now-three-year-old children are doing.
Our qualitative study has two distinct contributions to offer beyond typical survey data and related methods in understanding the impacts of the cash transfers. First, the qualitative approach starts from a position that we, as researchers, do not necessarily know all the important questions to ask or the response categories that make sense. So, while in BFY: Mothers’ Voices we use an interview guide to think through topics and ways to word questions, we also ask open-ended and follow-up questions to allow mothers to lead us where they think it is important to go. If we ask about the joys of motherhood, and a mom shares a story about her children playing together, or she starts talking about a recent stressful event, each answer is equally welcome and insightful.
Qualitative research opens the door to a fuller appraisal and exploration of mothers’ experiences that considers their dreams and fears in ways that are not presupposed to fit into validated scales. There are some things we simply cannot make sense of if we only ask our own questions, provide our own answer options, and, therefore, stay rooted in our own perspectives and experiences.
Nina, a 26-year-old, Black, high-gift-group mother of four from New Orleans, described using the first installment of the BFY money shortly after she gave birth (as is standard, we use pseudonyms to protect mothers’ identities). “I could have just cried because it was a total relief. Because first of all we went in the hospital flat broke. We was flat broke in the hospital.” You can hear not just Nina’s relief but also her joy: “We got food, a lot of food. We put food in the house. I even went and got the kids a gift. That's how happy I was. I was like, ‘Let me get the children something.’ So, I even got them a toy at the store. I got some cleaning supplies to make sure it was really sanitary for when I brought [my daughter] home.” Multiple mothers described wanting to bring their babies home from the hospital to clean houses—these are moments that are rich with the symbolism of having a fresh start.
Our open-ended interviews give us a chance to capture cash uses we didn’t anticipate. Because money is a distinct intervention in that it can be used in a nearly infinite number of ways, it’s just not possible to ask about them all on a survey. As a simple example, the BFY survey queried mothers about purchasing common goods for infants, such as highchairs and car seats. But, as with Nina, in the interviews we learned about purchases mothers made intentionally to benefit their children—such as cleaning supplies—that we did not know to ask about as child-related items. This changes our conceptualization of what counts as a “child-related purchase.”
The qualitative approach also provides a rich understanding of the background and context for each person. This contextualized approach may be particularly important in studying a topic like unconditional cash transfers, since mothers’ financial allocation decisions are likely embedded in the complex context of their childhood experiences, parenting values, family material circumstances, dreams for their children’s futures, and conceptualization of the cash transfer and what they see as its “proper” use.
When we center this complex context in our analyses, we may understand socioeconomic behavior differently. Following the typical frameworks of economics, buying a fast-food kid’s meal when you’re on a tight budget might make little sense, since you don’t have the money to spare and the burger and treat will not be an investment in developing your child’s human capital. To a mom who is scraping by, the skip in her child’s step and the smile on her child’s face as they leave the restaurant may be great reasons for such a financial allocation decision. And her motivation for doing so could be found in her experiences as a child, either relishing similar memories of her own or wishing to give her children more happy moments than what dominated her upbringing. We could learn how the deprivation parents want their children to avoid may extend beyond the food, clothing, and shelter parameters that often constitute material hardship in research to include social and emotional experiences as well.
Scholars’ existing measures of financial disadvantage don’t typically capture the small, daily expenses that can also be central to helping parents feel like they are doing right by their children. The qualitative work in BFY allows our team to think differently about how the cash gift affects allocations that support children—expanding the important but also narrow expenditure types captured in the survey to include popsicles that offer a toddler joy or cleaning supplies that help a mom feel she is providing a safe place for her baby. These are things that we may discover are important ingredients in child investment and can have ripple effects for children’s development.
Part one of this blog series highlights the value of incorporating qualitative methods into randomized evaluations and outlines specific tips for researchers. Part two talks about how qualitative research helped motivate and shape the central question and hypothesis for a study on racial concordance between physicians and patients. Part three looks at how the Creating Moves to Opportunity randomized evaluation embedded qualitative research methods into its study design. Part four discusses how qualitative research helped the Oregon Health Insurance Experiment research team make sense of some of the study’s results.
In part three of our qualitative research blog series on incorporating qualitative research into randomized evaluations, we learn more about how researchers conducting the Creating Moves to Opportunity (CMTO) project embedded qualitative research methods into their study and what factors made conducting high-quality, interdisciplinary research feasible.
Since 2018, MDRC, J-PAL affiliates Peter Bergman, Raj Chetty, Nathaniel Hendren, Lawrence Katz, and Christopher Palmer, along with sociologist and qualitative research expert Stefanie DeLuca, have been conducting a randomized evaluation of Creating Moves to Opportunity, a housing mobility program in Seattle and King County, Washington. The study, conducted with approximately 1,300 families, aims to understand CMTO’s impact on helping families move to neighborhoods with lower rates of poverty and more opportunities for upward income mobility.
In Phase One of the study, qualified low-income families with at least one child under fifteen were drawn from the Housing Choice Vouchers waitlist and offered an opportunity to enroll in the study. Participants were then randomly selected to receive CMTO services—including customized search and landlord engagement assistance from family and housing navigators and short-term financial assistance—or receive the housing authorities’ standard services.
Results from Phase One of the study found that CMTO-participating families were more likely to move to higher-opportunity neighborhoods than families who only received standard services. To help interpret these results and understand participating families’ experiences, the research team conducted qualitative analyses to complement the quantitative results. Led by DeLuca, a team of research staff carried out in-depth interviews with 161 participating families (from both the treatment and control groups) using an approach that borrowed some elements typically used in quantitative research to ensure the collection of high-quality, representative data.
“Using the program administrative data, we pulled a stratified random sample of families participating in CMTO to interview and achieved an 80 percent response rate. Those are two relatively unusual things to do in interview studies, but it helped ensure that the particular patterns that emerged from the data were representative of the families in the program.”
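The stratified draw DeLuca describes can be sketched as follows. The helper, field names, and proportions here are hypothetical illustrations, not the study’s actual sampling code:

```python
import random

# Sketch of drawing a stratified random sample of families to interview,
# assuming the sampling frame carries a stratum label (e.g., treatment arm).
# The helper, field names, and numbers are hypothetical.
def stratified_sample(frame, stratum_of, fraction, seed=0):
    """Sample (approximately) the same fraction of units within each stratum."""
    rng = random.Random(seed)
    strata = {}
    for unit in frame:
        strata.setdefault(stratum_of(unit), []).append(unit)
    sample = []
    for units in strata.values():
        k = max(1, round(fraction * len(units)))
        sample.extend(rng.sample(units, k))
    return sample

# A hypothetical frame of ~1,300 families split across two arms,
# targeting roughly 161 interviews overall.
families = [{"id": i, "arm": "treatment" if i % 2 else "control"}
            for i in range(1300)]
interviewees = stratified_sample(families, lambda f: f["arm"], 161 / 1300)
print(len(interviewees))
```

Sampling within strata ensures each arm (or site) is represented in proportion to its size, which is what makes the interview patterns representative of the program population rather than of whoever was easiest to reach.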
According to DeLuca, having a research team with the capacity and flexibility to follow up and be on-site when necessary was critical to the success of the qualitative research component of CMTO.
“Achieving an 80 percent response rate took more than just well-spaced regular phone calls. It required a research team with the capacity to be on-site and commit to really connecting with the study participants, especially with the door-knocking component of recruitment. It makes a huge difference for people to see you as interested in their stories, not as telemarketers. With interviews taking up to two to four hours, being on-site also meant we could conduct follow-up interviews in case the interviewer wasn’t able to get through everything the first time. It also allowed us to collect ethnographic observations of the neighborhood and see the rhythm of the household, meet family members and friends—even run errands with families when needed.
“We also used pairs when possible for interviews and had team members memorize the interview guide. With one team member conducting the interview without looking at a script and another paying attention to anything that got missed and noting that at the end, we had the benefit of an organic, more natural conversation alongside systematic data collection. While team members trained coders and designed codebooks, the bulk of the coding was done by people who weren’t interviewers, so the data wasn’t coded through the lens of assumptions and preconceived notions that someone present at the interview might have. Finally, we had several reliability procedures in place, including having the interviews coded by at least two different people, with a third person noting inconsistencies to be resolved.”
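One common way to summarize the double-coding check described above is Cohen’s kappa, which corrects raw coder agreement for chance. This sketch uses hypothetical codes; the study’s actual reliability procedures may differ:

```python
from collections import Counter

# Sketch of a reliability check: agreement between two independent coders
# on the same interview excerpts, summarized with Cohen's kappa.
# The code labels below are hypothetical.
def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders over the same items."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    # Expected agreement if both coders assigned labels independently
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

a = ["housing", "housing", "income", "childcare", "income", "housing"]
b = ["housing", "income", "income", "childcare", "income", "housing"]
print(round(cohens_kappa(a, b), 2))  # → 0.74
```

A kappa near 1 indicates the codebook is being applied consistently; low values flag codes whose definitions need to be clarified and the discrepant excerpts re-coded, which is the role of the third reviewer in the procedure above.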
Participant interview findings helped to highlight the aspects of the program that led families to move to higher-opportunity neighborhoods.
“Through the interviews, we were able to better understand what was happening from the participants’ point of view and what mechanisms were at play. Previously, many believed that interventions focused on providing more information and monetary resources would encourage families to move to neighborhoods with more opportunities for upward income mobility. But what we see from CMTO is that those two types of resources are not sufficient to explain the success of this program. What came through so clearly in the narratives of the participants was how important it was to have that support from the housing navigators who could boost confidence and provide customized assistance based on the specific needs of the families.”
Ultimately, for DeLuca, incorporating qualitative research into randomized evaluations is about providing opportunities to check assumptions and to see a bigger picture of what might be driving the impact of a program or policy.
“The joining of disciplines like economics and sociology for randomized evaluations can provide an opportunity to see something you otherwise wouldn’t see. Sociologists are often trained to think about barriers to social mobility and wellbeing and tend to focus a bit less on the decision-making process of individuals. In contrast, economists tend to emphasize decision-making quite a bit and might give less attention to the context in which the decisions are being made. By bringing the two together, researchers can leverage the theories and tools of each discipline and, hopefully, conduct a more policy-relevant and consequential study.”
Part one of this five-part blog series highlights the value of incorporating qualitative methods into randomized evaluations and outlines specific tips for researchers. Part two talks about how qualitative research helped motivate and shape the central question and hypothesis for a study on racial concordance between physicians and patients. Part four discusses how qualitative research helped the Oregon Health Insurance Experiment research team make sense of some of the study’s results. Part five highlights the value of qualitative research in providing a deeper understanding of mothers’ experiences in the Baby’s First Years study.
J-PAL North America reflects on using qualitative research methods in randomized evaluations and summarizes a few practical tips for those interested in integrating a qualitative approach into their studies.
Randomized evaluations allow researchers to measure the impact of programs and policies on a range of outcomes. Using this approach in North America, J-PAL researchers have recently examined a wide range of topics, including the effects of Medicaid on rates of health care utilization and the impact of a housing mobility program on the likelihood of families moving to lower-poverty neighborhoods.
But what mechanisms are driving the effects of these programs and policies? How did the context, design, and implementation of the program or policy influence the result? If replicated in a different context, will the program have the same effects? Is the study asking the right question?
Researchers can often collect quantitative data and design evaluations to shed light on these types of questions, but there’s always more to learn. Qualitative methods, such as direct observation, in-depth interviews, and focus groups, allow researchers to dive into these questions by examining participants’ beliefs, attitudes, experiences, and perspectives. Data gleaned from these methods can help researchers gain insight into potential mechanisms or barriers, generate new hypotheses and questions, and understand the stories behind the quantitative results.
For decades, social science scholars within anthropology, sociology, and psychology have employed qualitative methods. In recent years, many researchers within the traditionally quantitative field of economics have also incorporated qualitative methods into their studies and built teams with qualitative expertise to strengthen their research.
From our conversations with several researchers who've conducted and relied on qualitative research methods as part of J-PAL-supported randomized evaluations, we've summarized a few practical tips for those interested in integrating a qualitative approach into their studies:
- While developing your randomized evaluation, don't discount questions that can be best addressed through qualitative methods. These questions may challenge certain assumptions or shed light on mechanisms, contexts, or outcomes that quantitative methods may not fully capture. For example, researchers may want to gain insight into the experience of staff implementing a particular program to identify the challenges and barriers they faced, understand their perception of the program’s successes or shortcomings, and identify potential obstacles to longer-term implementation or scale-up. While this may be difficult to assess in a survey, focus groups and qualitative interviews could provide valuable insights.
- Account for qualitative research in study proposals and budgets. Qualitative research can require a high time commitment and can benefit from the support of specialized team members.
- Cultivate relationships with implementing partners. Forming a strong relationship with implementing partners is one key component to a successful and policy-relevant study and can help build a foundation for conducting qualitative research. Implementing organizations interact closely with study participants and often play instrumental roles in shaping the design and implementation of randomized evaluations. They are also well-placed to help researchers determine the best approaches to carrying out the qualitative parts of a study.
- Diversify your research team. Consider building a research team of individuals from different disciplines. Scholars of psychology, anthropology, sociology, and social work often have extensive experience with qualitative methods and bring valued perspectives that economists may be missing.
This blog series highlights four examples of J-PAL research teams using qualitative research methods to inform and strengthen the design, implementation, and analysis of their randomized evaluations. For part two of the series, we interviewed Professor of Public Policy and US Health Care Delivery Initiative Co-Chair Dr. Marcella Alsan about how qualitative research helped motivate and shape the central question and hypothesis for a study on racial concordance between physicians and patients. In part three, we spoke with Professor of Sociology & Social Policy Stefanie DeLuca about how the Creating Moves to Opportunity randomized evaluation, a study she co-led, embedded qualitative research methods into its study design. Part four features a conversation with Associate Professor of Social Work and Oregon Health Insurance Experiment co-author Heidi Allen on how qualitative research helped the research team make sense of some of the study’s results. The series concludes with part five, where we spoke with researchers from the Baby’s First Years study about the value of qualitative research in providing a deeper understanding of mothers’ experiences.