Researching racial equity: Building capacity for research and practice
In J-PAL North America’s researching racial equity blog series, we discuss how research plays a critical role in identifying structural inequities in systems and policies that disproportionately affect communities of color. Noreen Giga, Racial Equity Project Lead, and Damon Jones, J-PAL affiliated professor (University of Chicago) and Scientific Advisor for J-PAL North America’s Racial Equity Project, reflect on J-PAL North America’s work to advance rigorous research on racial equity to date and discuss priorities for growing this capacity in the future.
While there is an extensive history of research focused on racial equity, the topic has arguably drawn even more attention across a number of fields in recent years. Accordingly, there is room to expand the body of rigorous exploration of the structural and historical causes of racial inequity in mainstream economics. As an organization, J-PAL North America has an opportunity to fill a critical gap and support the generation of credible evidence on racial equity that rigorously investigates the root causes of poverty and racial disparities in order to inform potential solutions.
Over the past year, the Alfred P. Sloan Foundation has supported our planning for how to effectively and intentionally advance research that addresses racial equity. In this post, we reflect on key milestones and takeaways from this work and share our plans for the future.
Forming a committee of racial equity advisors
Our first priority involved convening a group of scholars with expertise on advancing research related to racial equity. We brought together researchers from within the J-PAL network, outside the J-PAL network, and outside the field of economics to challenge and inform our understanding of research on race and racial equity and the potential role RCTs can have (or should not have) in this space. We ensured the inclusion of Black, Latino/a, and Indigenous researchers who may be underrepresented in economics but are driving thought leadership on racial equity research. Our committee members included Randall Akee, Courtney Bonam, Gerald Daniels, Dania Francis, Corinne Low, and Silvia Robles. Their expertise in understanding how race and racism are studied in the social sciences, as well as their deep commitment to diversifying the field of economics, informed our focus. Over the past six months, we’ve grappled with a variety of questions: How do we define racial equity within research? How do we conduct ethical randomized evaluations related to racial equity? What support and information do researchers need to do this work well? What research collaborations and connections are needed to diversify J-PAL’s researcher network and encourage a wider set of questions and research agendas?
Defining racial equity
To begin to answer these questions, we drafted a working definition of racial equity, based on a definition from Actionable Intelligence for Social Policy’s Centering Racial Equity Toolkit. We define racial and ethnic equity as the process of ensuring that race is no longer used to reinforce social hierarchies. Racial equity does not imply the absence of racial group identities, communities, or cultural traditions, but rather that such aspects are not used against individuals or groups in social, political, and legal domains. This process involves acknowledging and addressing historic harms and racial injustices, making amends, working to create racially just systems, policies, practices, attitudes, and cultural messages, and eliminating structures that reinforce differential outcomes by race. We used this definition in our updated Requests for Proposals (RFPs) to encourage proposals that go beyond looking at differences in outcomes by race to acknowledge, address, and advance racial equity. We note, of course, that no single definition of racial equity exists, but found the exercise nonetheless useful in guiding our work.
A blog series on conducting racial equity research
We also developed a blog series on researching racial equity to expand our own understanding of what it means to conduct research on this topic and share those lessons with researchers and research staff. Throughout the last year, we published six blog posts as a part of this series:
- Evaluating “Ban the Box” policies: In part one, Amanda Agan (Rutgers) discusses her randomized evaluation with Sonja Starr investigating the impact of "Ban the Box" policies that disproportionately affect Black candidates. Results indicate that “Ban the Box” policies increased racial disparities in callback rates, harming Black applicants without criminal records.
- Racial discrimination, choice constraints, and policy implications: Part two of this series investigates the connections between racial discrimination in the housing market and environmental exposure risks. Peter Christensen (University of Illinois) discusses his ongoing series of evaluations, including a 2021 paper on housing discrimination.
- Stratification economics: In part three, Dania Francis (UMass Boston) provides an overview of stratification economics and how the tenets of this framework can be applied to impact evaluations to examine systems, group membership, and relative power of groups across race, class, and gender.
- The value of centering lived experiences in the research process: In part four, Anthony Barrows, Managing Partner and Founder of the Center for Behavioral Design and Social Justice, discusses how to center lived experiences throughout the research process and in impact evaluations and shares practical tips and resources for researchers.
- Integrating inclusive and asset-based communication throughout the research cycle: Part five summarizes our workshop on inclusive and asset-based communication in research that was delivered at the Association for Public Policy Analysis and Management’s (APPAM) 2023 annual conference. Key principles for inclusive and asset-based communication and frameworks for embedding this in all stages of research are highlighted.
- Administrative data bias in education: Part six discusses how subjectivity and structural inequity create bias in administrative data, using examples from the educational context, and offers potential solutions to navigate data and measurement bias in research.
Looking to the future
With the support of the racial equity advisory committee, we identified five main areas to guide J-PAL North America’s strategic vision for advancing racial equity in economics and economic research over the next three years.
- Bringing more rigor to assessing and defining high-quality research related to racial equity.
- Supporting researchers in developing high-quality RCT proposals related to racial equity.
- Collaborating with researchers with expertise in racial equity research.
- Increasing the representativeness of J-PAL’s researcher network.
- Providing support and resources in developing high-quality randomized evaluation proposals to researchers, particularly those who are under-resourced.
To inform this work, we will conduct an academic literature review on racial equity research and the scientific value of randomized evaluations in investigating and addressing the causes and consequences of racism. This literature review will guide the development of a racial equity research agenda across J-PAL North America’s topic-specific initiatives and updates to our RFPs to encourage research related to racial equity. We will also develop and grow our collaborations with researchers, including researchers of color and researchers with racial equity expertise, through targeted outreach and research support.
With continued support from the Alfred P. Sloan Foundation, we are excited to build upon the work we started this year and continue to explore how J-PAL North America can advance research related to racial equity, as well as support a more inclusive climate within economics. We look forward to continuing to learn from and connect with others in this space.
The researching racial equity blog series features the contributions of researchers and partners in examining and addressing racial inequities and offers resources and tools for further learning. Part one shares an example of evaluating racial discrimination in employment. Part two features work quantifying housing discrimination. Part three gives an overview of stratification economics in the context of evaluations. Part four discusses how to center lived experiences throughout the research process and in impact evaluations. Part five shares guidance for incorporating inclusive and asset-based framing throughout the research cycle. Part six examines sources of bias in administrative data.
In J-PAL North America’s researching racial equity blog series, we discuss how research plays a critical role in identifying structural inequities in systems and policies that disproportionately affect communities of color. In part six of this blog series, we identify sources of bias in administrative data and describe these within the educational context.
Linking randomized evaluations to administrative data (such as medical records, education records, and tax records already collected by organizations for operational purposes) has greatly expanded the ability of researchers to examine program impacts on a wide range of outcomes, over longer periods of time, at lower cost, and with less burden on study participants. Accordingly, a growing share of academic studies published in top economics journals over the past several decades use administrative data. Administrative data is often considered less biased than survey data because it minimizes the risk of social desirability, enumerator, and recall biases. However, administrative data is not free from bias, and randomization — which ensures that systematic differences between intervention and comparison groups do not drive differences in outcomes — does not remove all bias inherent in existing datasets.
Administrative datasets are incredibly useful to researchers conducting randomized evaluations, but using administrative data for impact evaluation partially outsources choices over what and how to measure. These choices are subjective (reflecting an individual’s personal biases and perspectives) and reflect a researcher’s positionality (where one is situated in society based on their identities). In this post, we discuss how subjectivity and structural inequity can affect administrative data, provide examples from the educational context, and identify potential solutions applicable in education and other sectors.
Choosing what counts: How subjectivity shapes administrative records
What researchers choose to measure and how they choose to measure it shapes what we perceive as important and influences research priorities. These choices can reflect subjectivity that is often unexamined and unquestioned. For administrative data, these choices have important implications for representativeness by centering or decentering measurement methods that affect groups of people differently.
One example is the creation of gross national product (GNP) by Simon Kuznets in the 1930s. Tasked with developing a method to measure a nation’s income, Kuznets chose to exclude unpaid household labor, overlooking how domestic labor underpins population growth, employment, and business growth. This exclusion may have resulted from Kuznets’s personal bias, constraints in collecting household data, or a combination of these factors. Because of this choice, household labor is not explicitly reflected in macroeconomic datasets and was not a focus of policy conversations about national growth until childcare shortages became extreme due to Covid-19. By omitting the economic contributions of groups predominantly involved in unpaid labor, notably women, this measurement choice might inadvertently undervalue their role. As a result, economic policies and research may not fully recognize or support the needs of these groups.
Examples from education
In the field of education, the influence of subjective choices is evident in standardized assessments. The development of educational standardized assessments was shaped by the views of individuals such as Carl Brigham, whose work was driven by an underlying racist belief: the notion that non-white students were intellectually inferior. A body of education literature has since lamented the biased assumptions that standardized tests are based on, and how results from these tests inform critical policies, such as those aimed at closing the racial and ethnic achievement gap. Despite these concerns, standardized testing remains a mainstay of the American education system. Understanding what assessments actually measure, what data is excluded, and how this affects groups of students differently can help ensure that research works to mitigate bias instead of entrenching it.
Researchers can explore alternative non-test score measurements to complement or substitute for testing data. A recent study highlights an example of non-test score measurements, finding that teacher impacts on student behaviors, measured by absenteeism, grade repetition rates, and suspension rates, are more influential on students’ long-term outcomes than teacher impacts on test scores. In this case, non-test data tell a different story than test data alone. Additional measures, such as family engagement rates, attendance, student discourse, and classroom engagement, can provide further insight into schools’ role in driving learning.
When relying on data from standardized tests, researchers can investigate the testing instrument itself for bias. The Center for Measurement Justice offers a validity framework to analyze bias intrinsic in an assessment. Considering the degree of bias can guide decisions on how to interpret research results and when to select additional data sources.
Structural Inequities: The hidden variable in administrative data
Organizations collecting administrative data exist within the context of systemic, institutionalized racism, where programming and the data collected from it reflect existing racial inequities in society. For example, administrative datasets on crime record only the individuals that police choose to investigate, not the individuals that police observe (or do not observe), a process subject to systemic bias through over-policing of neighborhoods and interpersonal bias from individual officers. This can lead to overrepresentation and underrepresentation of groups of people by race, ethnicity, and socioeconomic status in existing datasets. It is important to understand the context in which data is collected and research is conducted, so that outcomes that differ by racial and ethnic subgroups are analyzed and interpreted without relying on racist assumptions or stereotypes.
Examples from education
Education evaluations often use learning disability status as a criterion for defining study eligibility. Educators, parents, doctors, and others play instrumental roles in identifying and assessing students with learning disabilities, who are then eligible to receive additional classroom support. Identification and diagnosis processes are prone to conscious and unconscious bias and are embedded within unequal systems. Significant research has been devoted to understanding bias in special education placements, with some evidence suggesting that Black, Latino/a/e, and other non-white students are over-referred for special education placements, while other evidence suggests these groups of students are under-referred, particularly for ADHD diagnoses, once other factors are taken into account.
Over- and underrepresentation in administrative data are important considerations when assessing program effects, particularly for subgroups defined by those data. Although randomized evaluations generate internally valid treatment effects, understanding how the sample and any subgroups are defined is important for interpretation, generalizability, and comparing subgroups. For example, the impact of a tutoring program may be overstated or understated because students eligible for participation may have a different distribution of learning disabilities than the data implies. When administrative categories are used for subgroup analyses, differential coverage in the data may explain observed differences in treatment effects. Care should be taken not to essentialize these differences as related to race, for example, rather than racism or factors that co-vary with race (see the section on Pursuing Rigor in our previous post on stratification economics).
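A stylized simulation can make this concrete. The numbers below (prevalence, flagging rates, and effect sizes) are hypothetical and not drawn from any cited study; the sketch simply shows that when an administrative flag captures only half of the students who truly have a learning disability, a subgroup analysis based on that flag overstates the effect for "unflagged" students, because true cases are hidden among them:

```python
import random

random.seed(1)
N = 200_000
PREVALENCE = 0.2   # share of students with a true learning disability
FLAG_RATE = 0.5    # the admin flag captures only half of true cases
EFFECT_LD, EFFECT_NO_LD = 3.0, 1.0  # true treatment effects by status

students = []
for _ in range(N):
    true_ld = random.random() < PREVALENCE
    flagged = true_ld and random.random() < FLAG_RATE  # no false positives
    treated = random.random() < 0.5                    # randomized assignment
    effect = EFFECT_LD if true_ld else EFFECT_NO_LD
    outcome = random.gauss(0, 1) + (effect if treated else 0.0)
    students.append((flagged, treated, outcome))

def diff_in_means(rows):
    treated_y = [y for _, d, y in rows if d]
    control_y = [y for _, d, y in rows if not d]
    return sum(treated_y) / len(treated_y) - sum(control_y) / len(control_y)

# Subgroup treatment effects estimated from the administrative flag
flagged_effect = diff_in_means([s for s in students if s[0]])      # ~3.0
unflagged_effect = diff_in_means([s for s in students if not s[0]])
# unflagged_effect lands near 1.22, overstating the true 1.0 effect for
# students without a disability: hidden true cases inflate that subgroup
```

Analogous reasoning runs in reverse: if over-referral inflates the flag for some groups, the "flagged" subgroup effect is diluted instead of inflated.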
Understanding implementation contexts can help assess whether eligibility criteria based on administrative data constructs are biased. The Institute of Education Sciences provides an implementation research toolkit so that researchers can understand implementation contexts and connect these to evidence generated from research.
Tools for navigating administrative data bias
To navigate bias, researchers should critically scrutinize data choices and embrace tools and frameworks that prioritize equity. Below are tips for navigating administrative data bias, along with tools and resources.
Be careful mapping indicators to constructs
Be explicit when theorizing connections between the indicators available in the data and the broader constructs you hope to capture. Problems can arise when researchers are too quick to connect indicators to constructs in ways that reinforce rather than challenge existing biases. Does a test score measure aptitude, learning, or resources? Disciplinary records may be a biased measure of behavior if discipline is not equitably distributed, but infractions may still be worth measuring if they are an outcome an intervention hopes to reduce.
Don't outsource outcome decisions to what is available in the data
While some researchers are genuinely interested in improving test scores, others may default to measuring learning in terms of test scores simply because they are available. This data equity framework gives examples and key considerations when choosing outcomes.
Understand where randomization mitigates (or doesn’t mitigate) bias
Bias that correlates with treatment status is most damaging to the ability to estimate causal effects. Systemic biases that are present in the data for both treatment and comparison groups will not affect the ability to estimate unbiased average treatment effects, usually computed as the differences in means between groups. However, more care needs to be taken to interpret the actual levels of an outcome for a particular group.
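As an illustrative sketch (with hypothetical numbers, not drawn from any study), the simulation below shows this distinction: a systematic measurement bias applied equally to both arms cancels out of the difference in means, while the measured level for each group remains distorted:

```python
import random

random.seed(0)
N = 100_000
TRUE_EFFECT = 2.0        # true average treatment effect
MEASUREMENT_BIAS = -1.5  # systematic undercount affecting everyone equally

# True outcomes: comparison group centered at 10, treatment at 10 + TRUE_EFFECT
comparison_true = [10 + random.gauss(0, 1) for _ in range(N)]
treatment_true = [10 + TRUE_EFFECT + random.gauss(0, 1) for _ in range(N)]

# The administrative records apply the same systematic bias to both groups
comparison_obs = [y + MEASUREMENT_BIAS for y in comparison_true]
treatment_obs = [y + MEASUREMENT_BIAS for y in treatment_true]

def mean(xs):
    return sum(xs) / len(xs)

# Difference in means recovers the true effect: the shared bias cancels out
effect_estimate = mean(treatment_obs) - mean(comparison_obs)  # ~2.0

# ...but the measured level for either group is shifted by the full bias
comparison_level = mean(comparison_obs)  # ~8.5 rather than the true ~10.0
```

The difference-in-means estimate stays close to the true effect of 2.0, while both group means are shifted by the full 1.5-unit bias, which is what matters when interpreting outcome levels or using them for program targeting.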
As noted above, the ability to measure outcomes for a particular group will be influenced by how that group appears in administrative data and may have implications for program targeting. A program to reduce disruptive behavior, for example, may not generate measurable effects in administrative data if that group of students is rarely the target of enforcement. Comparing a variety of characteristics of the underlying population, potentially eligible sample frame, and actual study sample can ensure that results are contextualized and generalizable. This resource also provides a framework for assessing racial and ethnic bias in administrative data.
Group identities are themselves socially constructed and may be further constrained by the categories available in administrative data. Keep these facts in mind when interpreting results, particularly for subgroups.
Consider supplementing administrative data with primary data collection
Asking participants to self-report identities like race and gender with thoughtfully created categories can address limitations often inherent in equivalent administrative data and may be relatively simple to include in an intake and consent process.
For guidance on these types of questions, see J-PAL’s inclusive language guide and We All Count’s data biography template.
Acknowledge limitations
Not every source of bias is correctable, but researchers should think critically about the potential for bias, discuss the limitations inherent in their research design, and take these into account when interpreting results and communicating with stakeholders and policymakers.
In J-PAL North America’s researching racial equity blog series, we discuss how research plays a critical role in identifying structural inequities in systems and policies that disproportionately affect communities of color. In part four, we sit down with Anthony Barrows, Managing Partner and Founder of the Center for Behavioral Design and Social Justice, to understand how to center lived experiences throughout the research process and in impact evaluations.
Defining lived experience and what it means to center this experience
Lived experience refers to individuals’ first-hand experiences with a program, policy, or problem. This could include people who are delivering a program (e.g. social workers) or people who are receiving a program (e.g. foster parents). Centering lived experience means creating space for people to share their expertise and for that expertise to be valued and incorporated into decision-making. This is especially important for people receiving an intervention since they often have the least opportunity to share their knowledge, concerns, and experiences with researchers.
People with relevant lived experience are often not intentionally included in the research and policymaking process. Researchers may feel that including lived experience goes against the “objective” and data-driven approach that they strive to take, or that having direct experience with a program or policy somehow discounts the objectivity of that experience. However, centering people with lived experience throughout the research process can improve the relevance of research and the ability of research to affect meaningful change.
Centering lived experience helps researchers ask better questions and design better interventions
People with lived experience bring knowledge that is often invisible to those outside communities where interventions take place, yet this knowledge is essential for designing effective programs and evaluations. When designing interventions with the New York City Housing Authority (NYCHA), ideas42 listened to NYCHA residents and key stakeholders to understand their concerns about improper disposal of waste on NYCHA grounds. But the engagement didn’t stop with these initial conversations. A member of the project team who was a former NYCHA resident shared first-hand knowledge of how residents refer to their housing developments, which people unfamiliar with public housing were unaware of. By using this language rather than the formal names used by NYCHA administrators, the team was able to build trust among NYCHA residents and increase their engagement with the new intervention.
Centering lived experience can make research more ethical
Respecting the autonomy and dignity of human participants in research requires including them in the research process. Power imbalances and researchers' lack of familiarity with study contexts are barriers to fully realizing these ethical principles. By centering lived experiences, researchers can mitigate power imbalances and ensure that participants are respected, benefit from participation, and are treated fairly. Salma Mousa, a researcher in the J-PAL network, demonstrates how centering lived experience can make research more ethical in her study that tests the impact of contact across religious lines on social cohesion in post-ISIS Iraq. In Mousa’s study, the research team and soccer league staff were displaced Christians with ties to the local community. Having a study team whose lived experience matched that of participants minimized power imbalances and created open lines of communication between the community and researchers. Staff contributed to decisions on recruitment, inclusion and exclusion criteria, and treatment intensity (the number of Muslim players added to Christian teams) to ensure that participants would feel safe and their perspectives were respected.
Practical guidance for researchers interested in centering lived experience in their own research
The following strategies should be adopted before a research question is developed and are intended to create an environment to involve communities in the research process, from establishing the research question to communicating and implementing results:
- Define who people with lived experience are in the context of your work.
- Recruit research partners with lived experience to support the research process and make sure they are in an environment where they can succeed. This includes creating space where partners with lived experience can share their direct experiences without having their objectivity or the value of their contributions questioned.
- Actively engage people with lived experience throughout the research process, and address reasons why communities of color, particularly Black, Latino/a/e, and Indigenous communities, distrust the research process. Pre-work is needed to build and rebuild trust in communities. There is no shortcut to this process. It takes time and it is worth the investment. Your research plan should account for this extra time and: (1) consider the representativeness of who shows up, (2) involve outreach to include people who may not show up as readily, and (3) account for heterogeneity within racial and ethnic groups.
- Be mindful that the people most willing to share their experiences may not be fully representative of the population of interest, and that those who are not showing up have valuable experiences to share. Being purposeful about soliciting a wide range of experiences can help ensure representation across demographics (e.g. gender, race) as well as qualitative experiences (e.g. people who hate the program, people who love the program).
- Invest money by seeking out funding and paying people with lived experience for their time and expertise. The funding environment isn’t designed to cover these expenses over the time period that is needed, so ongoing conversations between the research community and the funding community are needed. Through explaining the importance of including those with lived experience in the research process, we can work towards creating new funding norms. As an example, the Office of Equity in Washington State developed interim guidelines and best practices for compensating individuals with lived expertise.
- Share ownership. This means not helicoptering into a community, asking for help, and then helicoptering out with the results. True collaboration could include everything from shared development of research questions to opportunities for data ownership and co-authorship.
Selected resources for further reading:
Arnstein, Sherry R. “A Ladder of Citizen Participation.” Journal of the American Institute of Planners 35, no. 4 (1969): 216-224.
Chicago Beyond. “Why am I always being researched? [Guidebook].” (2019).
This resource examines the unequal power distribution in research studies and provides guidance for how researchers, community partners, and funders can engage in more balanced research practices that promote shared decision making to strengthen research practices.
Hawn Nelson, A., Jenkins, D., Zanti, S., Katz, M., Berkowitz, E., et al. (2020). A Toolkit for Centering Racial Equity Throughout Data Integration. Actionable Intelligence for Social Policy. University of Pennsylvania.
This resource outlines how data can be collected, used, analyzed, and shared to benefit communities and avoid harmful practices that promote bias.
NCAI Policy Research Center and MSU Center for Native Health Partnerships. (2012). ‘Walk softly and listen carefully’: Building research relationships with tribal communities. Washington, DC, and Bozeman, MT: Authors.
This resource was produced in collaboration with tribal leaders and those involved in tribal research and focuses on how to build effective partnerships with Native communities.
J-PAL also has a series of research resources that provide researchers and research staff with information and guidance throughout the research process.
In J-PAL North America’s researching racial equity blog series, we discuss how research plays a critical role in identifying structural inequities in systems and policies that disproportionately affect communities of color. In part three, Dania Francis (UMass Boston), a researcher in the J-PAL network, provides an overview of stratification economics and how the tenets of this framework can be applied to impact evaluations.
Introduction to stratification economics
Advancing equity through research requires not only quantifying disparities but also rigorously investigating 1) why these disparities exist and 2) how to address them. Stratification economics is a framework that addresses these questions through examinations of systems, group membership (i.e., stratum), and the relative power of groups across various domains (e.g., race, class, gender). This framework is built upon four interrelated tenets:
Understanding that research is not value-neutral
Like any field, economics is not exempt from bias and normativity. Stratification economics recognizes that the market does not correct for prejudice on its own—it is up to individuals and institutions to actively pursue equity. An important first step is to understand that no one can be 100 percent objective. Instead, stratification economics calls upon researchers to acknowledge their biases. Putting forward multiple explanations and conclusions about an observed phenomenon is one way to challenge a researcher’s own assumptions.
Pursuing rigor
It is easy to “undertheorize” (i.e., put forward the simplest explanation) when explaining observed economic and social disparities, particularly when those disparities are tied to race. Concluding that racial disparities are due to race may feel more straightforward than attributing them to racism. However, this tends to lead to circular logic in which people are blamed for their circumstances simply because they are in those circumstances. In stratification economics, researchers dig deeper into the details of what is occurring and the mechanisms behind what is occurring, often using both quantitative and qualitative methods. By posing additional questions and explanations—and testing them through multiple means—stratification economists deepen the complexity and rigor of this work.
Expanding beyond individual human capital
Human capital theory—a popular theory in economic research—posits that people can increase their social and economic standing by harnessing skills and knowledge valued by the market. This theory focuses on individuals in the present without considering historic and contemporary policies and systems that a) create opportunities to build human capital for some but not others, and b) provide differential returns on one’s investment in human capital depending on one’s stratum. Stratification economics aims to more fully account for historic endowments (e.g., access to property for white people) and disendowments (e.g., redlining against Black people) and the power conferred to those with more assets. In doing so, this framework takes people out of a vacuum of individual choices and situates them in the reality of a larger ecosystem of policies and practices.
Centering freedom and agency
Stratification economics positions people in larger systems of institutions and power structures that create or constrain choices and opportunities. Understanding that some groups of people face constraints upon their agency is a critical step in identifying ways to advance equity. Stratification economists are therefore focused on developing and evaluating strategies that enable people to engage freely with economic and social systems and foster mobility across social and economic strata.
Benefits and applications of stratification economics
Stratification economists center our theories and research questions on the systemic mechanisms that underlie observed disparities. This focus helps us avoid two potential pitfalls: 1) concluding that disparities are due simply to cultural differences (which tend to be incomplete explanations at best and inaccurate ones at worst) and 2) drawing on deficit-based circular reasoning to explain disparities. Stratification economists seek a holistic and accurate understanding of disparities and how to address them.
For example, some of my work addresses the fact that fewer Black students take Advanced Placement (AP) coursework than white students. Some researchers have theorized that this disparity may be due to under-investment in education on the part of Black students themselves—that their culture does not value education and choosing AP courses would be akin to “acting white.” In contrast, my co-investigators and I theorized that systemic choice constraints, such as fear of racial isolation (i.e., concerns about being the only Black student in an AP class), may better explain this phenomenon.
We began testing this hypothesis using quasi-experimental methods and found that the likelihood that a Black student would take AP math in the future was greater in schools that already had more Black students taking AP math. This finding—that AP enrollment depended on a student’s context—is more consistent with theories about racial isolation (systemic choice constraints) than “acting white” (cultural norms). Given these results, solutions that aim to reduce racial isolation are likely to be more effective at increasing Black student enrollment in AP courses than solutions that focus on modifying the behaviors of individual Black students. We are now in the process of developing a randomized evaluation to pinpoint additional factors that may constrain Black students’ ability to choose AP courses.
Tools for getting started
Tools that are key to stratification economics can also be useful to researchers in other economic and social science disciplines. For example, stratification economists pose research questions using asset-based framing, centering people’s strengths and aspirations as opposed to their needs or deficits. This framing enables us to look for ways to make systemic changes that broaden opportunities to leverage strengths and achieve aspirations, rather than for ways to shape individual behaviors without tackling the broader forces that constrain choices.
My work is guided by two questions that I encourage others to ask as well:
- What happens to a person (e.g., a program participant) if they make all the “right” choices? Often, even when someone from a marginalized group or stratum does everything society would want them to, they still do not achieve the same outcomes as someone from a group with more power. This reality forces us to question why.
- Why? Asking why some people who make socially desirable choices don’t always end up with the same resources as others who make the same choices forces us to move beyond questions of behavior. I tell my students to harness their inner five-year-old and ask why over and over—not to be satisfied with one explanation, but to keep thinking bigger and more holistically.
Individual choices and cultural norms tend to be easier to conceptualize than larger systems, but are only one piece of a much larger story. Stratification economics seeks to broaden our understanding of social and economic inequities so that we may address them more holistically and effectively.
Suggested resources for future reading
Books and articles:
- Chelwa, Grieve, Darrick Hamilton, and James Stewart. “Stratification Economics: Core Constructs and Policy Implications.” Journal of Economic Literature 60, no. 2 (2022): 377–99.
- Darity, William A., Darrick Hamilton, and James B. Stewart. “A Tour de Force in Understanding Intergroup Inequality: An Introduction to Stratification Economics.” Review of Black Political Economy 42, no. 1–2 (2015): 1–6.
- Mason, Patrick. The Economics of Structural Racism: Stratification Economics and US Labor Markets. Cambridge: Cambridge University Press, 2023.
Journals: