Taking an evidence-based approach to governance policy in the face of COVID-19
This blog was cross-posted from the UN Public Administration Network blog.
Achieving the 2030 Agenda for Sustainable Development while addressing the enormous challenges caused by the COVID-19 pandemic requires identifying, implementing, and scaling the most effective social policies and programs. Too often, however, this key principle—effectiveness—can get lost amid other considerations.
Determining which social policies and programs are truly effective, and parsing out approaches that do not work, requires rigorous, policy-oriented research. Focusing on research alone, of course, is not enough: Research needs to make it into the hands of decision-makers, and decision-makers have to understand how to apply lessons from research to their unique contexts.
This is a tall order—but not impossible. At J-PAL, we work to bridge the gap between researchers and policymakers. This is a two-way street: Researchers must prioritize designing studies that answer policy-relevant questions, and policymakers must be open to learning about program effectiveness and acting on research findings.
An experimental approach
We focus on a specific type of research: randomized evaluations, also known as randomized controlled trials, or RCTs. The 2019 Nobel Prize in Economics, awarded to J-PAL co-founders Abhijit Banerjee and Esther Duflo and longtime J-PAL affiliate Michael Kremer, recognized this methodology as transformative to the field of international development.
This “experimental approach” was relatively scarce in the field of development even two decades ago, but is now at the core of a growing global ecosystem of hundreds of researchers, research institutions, and NGOs, many of which have worked with J-PAL affiliates to produce thousands of randomized evaluations. This work informs policymaking in almost all sectors of development, from agriculture to crime and conflict to governance.
Randomized evaluations break down large development problems into specific questions. For example, “How do we reduce the spread of COVID-19?” can instead translate to, “How do we design information campaigns to increase uptake of preventative health behaviors?” and “How can we improve the delivery of health products and services?” among dozens of other related questions. These questions individually and collectively generate lessons and insights that can help reduce the spread of the pandemic and lessen its economic effects.
Experimental evidence in governance
In governance, experimental evidence has accumulated significantly across three core areas: increasing political participation, reducing corruption and leakages, and improving state capacity for service delivery.
These interconnected themes cut across the SDGs, many of which rely on effective, accountable, and transparent government—in particular SDG 16, which calls for peaceful and inclusive societies for sustainable development.
Randomized evaluations in these areas have taught us several lessons about how to best strengthen the role of government in poverty reduction. We have learned, for instance, that giving households information about the social benefits they are entitled to can help shift the balance of power between citizens and local officials, reducing opportunities for corruption and increasing access to social services.
We have learned from multiple studies that gender quotas for women in local government bodies can improve women’s representation in politics, increase provision of public services, and improve perceptions of women as leaders. Many RCTs have also measured the effects of voter information campaigns during elections, showing that providing voters with credible, widely disseminated information about candidates can lead to more qualified and accountable candidates being elected.
These are just some areas within governance where randomized evaluations have provided useful insights, with hundreds of other completed and ongoing projects answering questions related to taxation, civil service recruitment and reform, community-driven development, and many more topics.
RCTs in governance and related topics can also provide important insights for policymakers responding to the COVID-19 pandemic.
Research in Indonesia, for example, has found that leveraging community knowledge and on-demand applications to identify the poor when distributing social programs—a key question for governments rolling out new benefits in response to COVID-19—can provide more flexibility, without large costs in accuracy, compared to traditional targeting methods. Experimental research during the 2014 Ebola outbreak in Sierra Leone showed that holding meetings between community members and health staff to discuss and tackle service delivery issues, or giving status awards to clinic staff, can improve the community’s trust in the health system—an issue of paramount importance during the outbreak of an infectious disease.
Individual studies certainly don’t always provide lessons that are generalizable across different contexts. However, evidence from an RCT can often generalize when we are thoughtful about contextual factors and understand what drives the impacts of the program or policy being evaluated. When studies are designed with these mechanisms in mind, they can provide important considerations for policymakers looking to incorporate research into their decisions.
The potential (and limitations) of randomized evaluations to inform policy decisions
While randomized evaluations are excellent tools for learning about the impact of a policy or program, they are not always appropriate for all situations. For example, randomization may not be politically or logistically feasible, and it may be unethical to provide a certain intervention to a treatment group or withhold it from a comparison group if past research has demonstrated its effectiveness.
In such instances, it’s worth emphasizing that randomized evaluations are just one tool in a policymaker’s toolbox. Quasi-experimental or other quantitative analysis, descriptive research, analysis of administrative data, and direct feedback from participants are all important for well-rounded decision-making.
But when feasible and ethical to carry out, this rigorous research has shed light on important findings that have led to more effective policies and improved lives: more than 400 million people around the world have been reached by programs that were scaled up after they were shown to be effective through evaluations by researchers in the J-PAL network.
Ultimately, and perhaps now more than ever, putting the world back on track to achieve the target of less than 3 percent of its population living in extreme poverty by 2030 will require effective, evidence-based social programs that address the needs of those left behind. There is no silver bullet to achieve this goal—but through rigorous research, strong research-policy partnerships, and a commitment to use evidence in policy design, it is within our grasp.