Researching racial equity: Administrative data bias
In J-PAL North America’s researching racial equity blog series, we discuss how research plays a critical role in identifying structural inequities in systems and policies that disproportionately affect communities of color. In part six of this blog series, we identify sources of bias in administrative data and describe these within the educational context.
Linking randomized evaluations to administrative data (such as medical, education, and tax records already collected by organizations for operational purposes) has greatly expanded researchers’ ability to examine program impacts on a wide range of outcomes, over longer periods of time, at lower cost, and with less burden on study participants. As a result, the share of academic studies published in top economics journals that use administrative data has grown over the past several decades. Administrative data is often considered less biased than survey data because it minimizes the risk of social desirability, enumerator, and recall biases. However, administrative data is not free from bias, and randomization — which ensures that systematic differences between intervention and comparison groups do not drive differences in outcomes — does not remove bias inherent in the underlying datasets.
Administrative datasets are incredibly useful to researchers conducting randomized evaluations, but using administrative data for impact evaluation partially outsources choices over what and how to measure. These choices are subjective (reflecting an individual’s personal biases and perspectives) and reflect a researcher’s positionality (where one is situated in society based on their identities). In this post, we discuss how subjectivity and structural inequity can affect administrative data, provide examples from the educational context, and identify potential solutions applicable in education and other sectors.
Choosing what counts: How subjectivity shapes administrative records
What researchers choose to measure and how they choose to measure it shapes what we perceive as important and influences research priorities. These choices can reflect subjectivity that is often unexamined and unquestioned. For administrative data, these choices have important implications for representativeness by centering or decentering measurement methods that affect groups of people differently.
One example is the creation of gross national product (GNP) by Simon Kuznets in the 1930s. Tasked with developing a method to measure a nation’s income, Kuznets chose to exclude unpaid household labor, overlooking how domestic labor underpins population growth, employment, and business growth. This exclusion may have resulted from Kuznets’s personal bias, constraints in collecting household data, or a combination of these factors. Because of this choice, household labor is not explicitly reflected in macroeconomic datasets and, until childcare shortages became acute during Covid-19, was largely absent from policy conversations about national growth. By omitting the economic contributions of groups predominantly engaged in unpaid labor, notably women, this measurement choice may inadvertently undervalue their role. As a result, economic policies and research may not fully recognize or support the needs of these groups.
Examples from education
In the field of education, the influence of subjective choices is evident in standardized assessments. The development of standardized educational assessments was shaped by the views of individuals such as Carl Brigham, whose work was driven by an underlying racist belief: the notion that non-white students were intellectually inferior. A body of education literature has since critiqued the biased assumptions on which standardized tests are based, and how results from these tests inform critical policies, such as efforts to close the racial and ethnic achievement gap. Despite these concerns, standardized testing remains a mainstay of the American education system. Understanding what assessments actually measure, what data is excluded, and how this affects groups of students differently can help ensure that research mitigates bias instead of entrenching it.
Researchers can explore alternative, non-test-score measures to complement or substitute for testing data. A recent study offers an example: teachers’ impacts on student behaviors, measured by absenteeism, grade repetition, and suspension rates, are more influential on students’ long-term outcomes than teachers’ impacts on test scores. In this case, non-test data tell a different story than test data alone. Additional measures, such as family engagement rates, attendance, student discourse, and classroom engagement, can provide further insight into schools’ role in driving learning.
When relying on data from standardized tests, researchers can investigate the testing instrument itself for bias. The Center for Measurement Justice offers a validity framework for analyzing bias intrinsic to an assessment. Considering the degree of bias can guide decisions about how to interpret research results and when to select additional data sources.
Structural inequities: The hidden variable in administrative data
Organizations collecting administrative data exist within the context of systemic, institutionalized racism: the programs they run, and the data collected from them, reflect existing racial inequities in society. For example, administrative datasets on crime record only the individuals whom police choose to investigate, not all the individuals police observe (or never observe), a process subject to systemic bias through over-policing of certain neighborhoods and interpersonal bias from individual officers. This can lead to both overrepresentation and underrepresentation of groups of people by race, ethnicity, and socioeconomic status in existing datasets. It is important to understand the context in which data is collected and research is conducted, so that outcomes that differ by racial and ethnic subgroup are analyzed and interpreted without relying on racist assumptions or stereotypes.
Examples from education
Education evaluations often use learning disability status as a criterion for defining study eligibility. Educators, parents, doctors, and others play instrumental roles in identifying and assessing students with learning disabilities, who are then eligible to receive additional classroom support. Identification and diagnosis processes are prone to conscious and unconscious bias and are embedded within unequal systems. Significant research has been devoted to understanding bias in special education placements, with some evidence suggesting that Black, Latino/a/e, and non-white students are over-referred for special education placements, while other evidence suggests these groups of students are under-referred, particularly for ADHD diagnoses, once other factors are taken into account.
Over- and underrepresentation in administrative data matter when considering program effects, particularly for subgroups defined by those data. Although randomized evaluations generate internally valid treatment effects, understanding how the sample and any subgroups are defined is important for interpretation, generalizability, and subgroup comparisons. For example, the impact of a tutoring program may be overstated or understated if the students eligible to participate have a different distribution of learning disabilities than the data implies. When administrative categories are used for subgroup analyses, differential coverage in the data may explain observed differences in treatment effects. Care should be taken not to essentialize these differences as related to race, for example, rather than to racism or to factors that co-vary with race (see the section on Pursuing Rigor in our previous post on stratification economics).
Understanding implementation contexts can help assess whether eligibility criteria based on administrative data constructs are biased. The Institute of Education Sciences provides an implementation research toolkit so that researchers can understand implementation contexts and connect these to evidence generated from research.
Tools for navigating administrative data bias
To navigate bias, researchers should critically scrutinize data choices and embrace tools and frameworks that prioritize equity. Below are tips for navigating administrative data bias, along with tools and resources.
Be careful mapping indicators to constructs
Be explicit when theorizing connections between the indicators available in the data and the broader constructs you hope to capture. Problems arise when researchers are too quick to map indicators onto constructs in ways that reinforce rather than challenge existing biases. Does a test score measure aptitude, learning, or resources? Disciplinary records may be a biased measure of behavior if discipline is not equitably applied, but infractions may still be worth measuring if they are an outcome an intervention hopes to reduce.
Don't outsource outcome decisions to what is available in the data
While some researchers are genuinely interested in improving test scores, others may default to measuring learning in terms of test scores simply because they are available. This data equity framework gives examples and key considerations when choosing outcomes.
Understand where randomization mitigates (or doesn’t mitigate) bias
Bias that correlates with treatment status is most damaging to the ability to estimate causal effects. Systemic biases that are present in the data for both treatment and comparison groups will not affect the ability to estimate unbiased average treatment effects, usually computed as the differences in means between groups. However, more care needs to be taken to interpret the actual levels of an outcome for a particular group.
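To see why, consider a minimal sketch (our notation, not drawn from any particular study): suppose every recorded outcome equals the true outcome plus a measurement bias b that is identical in the treatment and comparison groups. The bias then cancels in the difference in means, even though every measured level is shifted:

```latex
% A minimal sketch (our notation): recorded outcomes Y* equal true outcomes Y
% plus a measurement bias b that is identical in both experimental groups.
\[
\hat{\tau}
  \;=\; \bar{Y}^{*}_{T} - \bar{Y}^{*}_{C}
  \;=\; \bigl(\bar{Y}_{T} + b\bigr) - \bigl(\bar{Y}_{C} + b\bigr)
  \;=\; \bar{Y}_{T} - \bar{Y}_{C},
\qquad Y^{*}_{i} = Y_{i} + b.
\]
```

The cancellation fails as soon as the bias differs between the treatment and comparison groups: the estimated effect is then off by exactly the difference between the two biases, which is why bias that correlates with treatment status is the most damaging kind.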
As noted above, the ability to measure outcomes for a particular group is influenced by how that group appears in administrative data, which may have implications for program targeting. A program to reduce disruptive behavior, for example, may not generate measurable effects in administrative data if the students it serves are rarely the target of enforcement. Comparing characteristics of the underlying population, the potentially eligible sample frame, and the actual study sample can help ensure that results are contextualized and generalizable. This resource also provides a framework for assessing racial and ethnic bias in administrative data.
Group identities are themselves socially constructed and may be further constrained by the categories available in administrative data. Keep this in mind when interpreting results for subgroups in particular.
Consider supplementing administrative data with primary data collection
Asking participants to self-report identities like race and gender with thoughtfully created categories can address limitations often inherent in equivalent administrative data and may be relatively simple to include in an intake and consent process.
For guidance on these types of questions, see J-PAL’s inclusive language guide and We All Count’s data biography template.
Acknowledge limitations
Not every source of bias is correctable, but researchers should think critically about the potential for bias, discuss the limitations inherent in their research design, and take these into account when interpreting results and communicating with stakeholders and policymakers.
The researching racial equity blog series features the contributions of researchers and partners in examining and addressing racial inequities and offers resources and tools for further learning. Part one shares an example of evaluating racial discrimination in employment. Part two features work quantifying housing discrimination. Part three gives an overview of stratification economics in the context of evaluations. Part four discusses how to center lived experiences throughout the research process and in impact evaluations. Part five shares guidance for incorporating inclusive and asset-based framing throughout the research cycle. Lastly, in part seven, Damon Jones and J-PAL staff share progress on researching racial equity and future areas of work.
In J-PAL North America’s researching racial equity blog series, we discuss how research plays a critical role in identifying structural inequities in systems and policies that disproportionately affect communities of color. In part four, we sit down with Anthony Barrows, Managing Partner and Founder of the Center for Behavioral Design and Social Justice, to understand how to center lived experiences throughout the research process and in impact evaluations.
Defining lived experience and what it means to center this experience
Lived experience refers to individuals’ first-hand experiences with a program, policy, or problem. This could include people who are delivering a program (e.g. social workers) or people who are receiving a program (e.g. foster parents). Centering lived experience means creating space for people to share their expertise and for that expertise to be valued and incorporated into decision-making. This is especially important for people receiving an intervention since they often have the least opportunity to share their knowledge, concerns, and experiences with researchers.
People with relevant lived experience are often not intentionally included in the research and policymaking process. Researchers may feel that including lived experience goes against the “objective,” data-driven approach they strive to take, or that having direct experience with a program or policy somehow discounts the objectivity of that experience. However, centering people with lived experience throughout the research process can improve the relevance of research and its ability to effect meaningful change.
Centering lived experience helps researchers ask better questions and design better interventions
People with lived experience bring knowledge that is often invisible to those outside the communities where interventions take place, yet this knowledge is essential for designing effective programs and evaluations. When designing interventions with the New York City Housing Authority (NYCHA), ideas42 listened to NYCHA residents and key stakeholders to understand their concerns about improper disposal of waste on NYCHA grounds. But the engagement didn’t stop with these initial conversations. A member of the project team, a former NYCHA resident, shared first-hand knowledge of how residents refer to their housing developments, terms that people unfamiliar with public housing would not know. By using this language rather than the formal names used by NYCHA administrators, the team built trust among NYCHA residents and increased their engagement with the new intervention.
Centering lived experience can make research more ethical
Respecting the autonomy and dignity of human research participants requires including them in the research process. Power imbalances and researchers’ lack of familiarity with study contexts are barriers to fully realizing these ethical principles. By centering lived experiences, researchers can mitigate power imbalances and ensure that participants are respected, benefit from participation, and are treated fairly. Salma Mousa, a researcher in the J-PAL network, demonstrates how centering lived experience can make research more ethical in her study testing the impact of contact across religious lines on social cohesion in post-ISIS Iraq. In Mousa’s study, the research team and soccer league staff were displaced Christians with ties to the local community. Having a study team whose lived experience matched that of participants minimized power imbalances and created open lines of communication between the community and researchers. Staff contributed to decisions on recruitment, inclusion and exclusion criteria, and treatment intensity (the number of Muslim players added to Christian teams) to ensure that participants would feel safe and that their perspectives were respected.
Practical guidance for researchers interested in centering lived experience in their own research
The following strategies should be adopted before a research question is developed and are intended to create an environment to involve communities in the research process, from establishing the research question to communicating and implementing results:
- Define who people with lived experience are in the context of your work.
- Recruit research partners with lived experience to support the research process and make sure they are in an environment where they can succeed. This includes creating space where partners with lived experience can share their direct experiences without having their objectivity or the value of their contributions questioned.
- Actively engage people with lived experience throughout the research process, and address reasons why communities of color, particularly Black, Latino/a/e, and Indigenous communities, distrust the research process. Pre-work is needed to build and rebuild trust in communities. There is no shortcut to this process. It takes time and it is worth the investment. Your research plan should account for this extra time and: (1) consider the representativeness of who shows up, (2) involve outreach to include people who may not show up as readily, and (3) account for heterogeneity within racial and ethnic groups.
- Be mindful that the people most willing to share their experiences may not be fully representative of the population of interest, and that those who are not showing up have valuable experiences to share. Being purposeful about soliciting a wide range of experiences can help ensure representation across demographics (e.g. gender, race) as well as qualitative experiences (e.g. people who hate the program, people who love the program).
- Invest money by seeking out funding and paying people with lived experience for their time and expertise. The funding environment isn’t designed to cover these expenses over the time period that is needed, so ongoing conversations between the research community and the funding community are needed. Through explaining the importance of including those with lived experience in the research process, we can work towards creating new funding norms. As an example, the Office of Equity in Washington State developed interim guidelines and best practices for compensating individuals with lived expertise.
- Share ownership. This means not helicoptering into a community, asking for help, and then helicoptering out with the results. True collaboration could include everything from shared development of research questions to opportunities for data ownership and co-authorship.
Selected resources for further reading:
Arnstein, Sherry R. "A Ladder of Citizen Participation." Journal of the American Institute of Planners 35.4 (1969): 216-224.
Chicago Beyond. “Why am I always being researched? [Guidebook].” (2019).
This resource examines the unequal power distribution in research studies and provides guidance for how researchers, community partners, and funders can engage in more balanced research practices that promote shared decision-making.
Hawn Nelson, A., Jenkins, D., Zanti, S., Katz, M., Berkowitz, E., et al. (2020). A Toolkit for Centering Racial Equity Throughout Data Integration. Actionable Intelligence for Social Policy. University of Pennsylvania.
This resource outlines how data can be collected, used, analyzed, and shared to benefit communities and avoid harmful practices that promote bias.
NCAI Policy Research Center and MSU Center for Native Health Partnerships. (2012). ‘Walk softly and listen carefully’: Building research relationships with tribal communities. Washington, DC, and Bozeman, MT: Authors.
This resource was produced in collaboration with tribal leaders and those involved in tribal research and focuses on how to build effective partnerships with Native communities.
J-PAL also has a series of research resources that provide researchers and research staff with further information and guidance.
In J-PAL North America’s researching racial equity blog series, we discuss how research plays a critical role in identifying structural inequities in systems and policies that disproportionately affect communities of color. In part one of this series, J-PAL staff interview Amanda Agan about her 2018 evaluation of the effects of “Ban the Box” policies on employment outcomes, which found disparate impacts by race, and explore the role of randomized evaluations in advancing racial equity.
Can you tell us a bit about the “Ban the Box” study and the goals of this research?
Millions of people across the United States acquire criminal records each year, and these records can be a barrier to employment. Black men are overrepresented at every step of the criminal-legal process, from stops to arrests, charges, convictions, and sentencing.
In an effort to increase opportunities for people with records, jurisdictions across the country have implemented “Ban the Box” (BTB), a set of policies restricting employers from asking about applicants’ criminal histories on job applications. These policies are often presented as a means of reducing unemployment among Black men. In our BTB study, we wanted to understand how employers reacted to job applicants from different demographic groups once the criminal record question was removed from the application. Would they start to rely on other observable characteristics as proxies for criminal justice contact? In particular, would they use perceived race to stereotype applicants?
To answer these questions, we sent out a total of 15,000 fictitious job applications both before and after BTB went into effect in New Jersey and New York City in 2015. On each application, we randomized 1) the perceived race of the applicant by using names associated with certain races, 2) whether the applicant had a criminal history, 3) whether the applicant had a GED or a high school diploma, and 4) whether the applicant had a one-year employment gap or not. Before BTB, there was little racial disparity in callback rates among applicants with similar records, though employers were 63 percent more likely to call an applicant with no record than one with a record. After BTB, however, when the employers could no longer ascertain criminal history from the job application, employers were 43 percent more likely to call back an applicant perceived to be white than one perceived to be Black. Employers were clearly stereotyping Black applicants as more likely to have a criminal record, harming Black applicants who did not have a criminal record. Interestingly, they did not seem to react differentially to applicants with one-year employment gaps or GEDs, even though those characteristics are also correlated with criminal records.
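To make the design concrete, here is a minimal sketch of the kind of factorial randomization the study describes (our own illustration: the attribute labels and values are placeholders, not the authors’ actual materials or code):

```python
import random

# Hypothetical sketch of a 2x2x2x2 factorial randomization like the one
# described above; attribute values are placeholders, not study materials.
ATTRIBUTES = {
    "perceived_race": ["name associated with white applicants",
                       "name associated with Black applicants"],
    "criminal_record": [True, False],
    "credential": ["GED", "high school diploma"],
    "one_year_employment_gap": [True, False],
}

def draw_application(rng: random.Random) -> dict:
    """Independently randomize each attribute for one fictitious application."""
    return {attr: rng.choice(values) for attr, values in ATTRIBUTES.items()}

rng = random.Random(42)  # fixed seed so the assignment is reproducible
applications = [draw_application(rng) for _ in range(15_000)]
```

Because each attribute is assigned independently, differences in callback rates can be attributed to a single characteristic, such as perceived race, rather than to bundles of correlated traits.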
What steps did your team take to disseminate findings after results were finalized? Did the study result in any policy change that you know of?
We shared early versions of our findings with President Obama's Council of Economic Advisers when they approached us about potentially implementing a federal BTB policy. We also spoke with several media outlets, including NPR and US News, to publicize the findings. In addition, we participated in an online policy debate on BTB hosted by the Urban Institute that featured several academics and policymakers.
I am not aware of any jurisdiction that repealed BTB laws due to this research. But, I do hope it has given policymakers pause as they look for policies that can help improve opportunities for individuals with records so that we do so in a way that does not inadvertently harm Black job seekers who do not have criminal records.
From your perspective, how did your BTB work address racial inequities?
BTB was meant to increase interviews and opportunities for people with records. We could have simply randomized criminal record status without disaggregating by race, and likely would have found, as we did, that BTB "worked" in reducing the impact of having a criminal record on employment opportunities. But given what we knew about the racial statistical discrimination and stereotyping that happens in employment and other domains, we decided to explore this aspect in our research as well, which uncovered a harm that we believe was important to document.
BTB policies appear to have exacerbated racial inequities in employer callback rates because employers held the stereotypical belief that a young, Black, male applicant was more likely to have a criminal record than a white applicant. The research implies that simply omitting information will not eliminate the negative labor impacts of criminal legal contact and may harm Black applicants without criminal histories.
What role can randomized evaluations play in promoting racial equity?
Randomized evaluations are really good at pinpointing the causal impact of a (manipulable) characteristic, policy, or treatment on measurable outcomes. In certain instances, one can even try to pinpoint the direct impacts of race by manipulating perceived race, as we did, or as in a previous study by J-PAL affiliates Marianne Bertrand and Sendhil Mullainathan. By demonstrating that racism is, at least in part, driving disparate outcomes in certain fields (e.g., hiring), audit studies and other randomized evaluations have the potential to effect change by informing policy.
However, randomized evaluations that can manipulate perceived race are usually estimating the impacts of race while holding other variables constant. Systemic racial inequality means that there are many differences between Black applicants and white applicants besides their (perceived) race, some of which reflect the direct and indirect impacts of racial discrimination in other domains or at previous time periods. These complexities are harder to measure and address in a randomized evaluation. Integrating the results of randomized evaluations with other methods, both empirical and qualitative, as well as bringing in the voices of directly impacted community members and scholars, will likely give us the strongest path forward to improve policy and outcomes.
In J-PAL North America’s researching racial equity blog series, we discuss how research plays a critical role in identifying structural inequities in systems and policies that disproportionately affect communities of color. A team of researchers, including J-PAL affiliated professors Peter Christensen (University of Illinois, Urbana-Champaign) and Christopher Timmins (Duke), are investigating the connections between racial discrimination in the housing market and environmental exposure risks. In part two, J-PAL staff interview Peter to discuss his ongoing series of evaluations, including a 2021 paper on housing discrimination, and the role randomized evaluations can play in addressing racial inequities.
Can you describe the motivation behind your research on housing discrimination and how your research seeks to increase equity?
Our research questions are driven by the experiences of people who are most impacted by this work. Engaging in public forums that bring together local housing and fair housing enforcement agencies, researchers, and representatives has helped us understand what is happening on the ground so that we can create our research designs to better identify and study these issues.
We know that there are racial and economic disparities in pollution exposure that are often tied to the neighborhoods where people live—these have long been documented by the environmental justice field. Bringing disparities to light is an important first step, but in and of itself might not lead to actionable policy change. Our research is really focused on disentangling the underlying mechanisms—what’s causing people to live in residential areas with higher pollution? And we know that neighborhoods impact more than pollution exposure, so we’re also interested in understanding the array of amenities and disamenities (e.g., schools, jobs, transportation) available in different areas.
There is a lot of other important research that aims to capture why people choose to live in different neighborhoods. We’re looking at a slightly different question, which is what factors constrain housing choices. One key piece of this work is to look at persistent income inequality, which one could easily assume is driving disparities in pollution exposures, because, of course, budgets affect who can live in which neighborhoods.
However, we also wanted to see if racial discrimination further constrains the choices of households of color, even with the same budget constraints as white households. That discrimination piece—where some groups have more choice constraints than others—is a very different policy question with different policy implications.
What do you see as the main policy implications of this research?
From a policy perspective, it’s important to understand the cause of a problem in order to best address it. For instance, if systematic differences in income are the primary cause of these disparities, then policy solutions should focus on addressing income inequality—a critical and challenging policy agenda in and of itself. Addressing discrimination that imposes constraints on choices in less polluted neighborhoods, on the other hand, requires enforcement of fair housing legislation and coordinated efforts between the Department of Housing and Urban Development (HUD) and the Environmental Protection Agency. These policy efforts are different from those that focus specifically on income mobility and have received less attention in recent years. So that’s why we’re motivated to understand the underlying mechanisms behind these disparities in neighborhood and pollution exposure, and this initial randomized evaluation on housing discrimination allowed us to do that.
That’s a great transition to looking at the role of randomized evaluations. What is the value of using a randomized evaluation—in this study and more generally—to address racial inequity, particularly systemically?
First, as I mentioned, is the ability to identify the underlying mechanisms. By manipulating one factor of a rental inquiry, the inquirer’s perceived race, we can meaningfully disentangle racial discrimination from other potential causes of disparate response rates. Understanding mechanisms can lead to changes in policy, and quantifying the scope of that mechanism can help justify spending public dollars to address the issue.
Second, the results of randomized evaluations are transparent. They do not require the same assumptions as quasi-experimental methods. It’s helpful to be able to say to supporters and skeptics alike that “this is what we observed in a large-scale experiment using a familiar search platform.”
Finally, on the systemic piece, if studies like ours can help us understand patterns of behavior, they can help us begin to understand what guardrails to put in place. This study demonstrated that discrimination is occurring—whether people are cognizant of it or not—on digital housing platforms. So now we can ask: how can we reduce discrimination in the same digital markets?
We also have a new paper coming soon that evaluates the causal effects of historic and contemporary segregation on choices and choice constraints today. In this paper, we’re calculating the dollar estimates of the damages caused by discrimination, which could have not only policy but also legal implications. These estimates are only possible to obtain with an experiment.
A lot of your work seems driven by the potential policy implications of the research. What steps has your team taken to share your findings?
As one example, I was asked to speak about this work with MSNBC. We’ve also participated in HUD’s public forums to share our methodologies, discuss the results, and better understand what’s happening on the ground. That’s another way that we can make sure our research is consistent with what’s happening on the ground and is informing various efforts.
In these and other dissemination efforts, we try to help people understand the mechanisms of discrimination and also to explain the scale and heterogeneity of the problem—that discrimination facing renters from certain groups is stronger in some locations than others. So even on the national stage, we think it’s important to identify where households of color are facing the greatest constraints and begin to understand why.
In addition to coverage in national publications, and given that rates of discrimination varied by location, has your research been picked up at the local level as well?
Our hope is that through our dissemination at major national news outlets, we can provide evidence to support local agencies and community leaders in these areas with higher levels of discrimination.
That said, I have been interviewed by local news stations where reports of housing discrimination have increased recently and am contacted by individuals asking how to interpret the results in their contexts. And while there are some limitations of what I can say about specific neighborhoods, I’ll explain how they can accurately interpret the results to, say, a local representative. So in that sense there’s dissemination happening at kind of an individual level.
I also get emails from people challenging the results, saying that they follow HUD guidelines and fair housing laws. Since our method yields transparent results, I just say “this is what we found.” I try to explain the results in ways that help people understand the methodology and share related research that helps illustrate the nuances of discrimination and that it can happen subconsciously. I hope there’s some learning that can happen through that. And it also takes us back to some of the benefits of randomized evaluations: the results are the results.