The potential for evidence-based policymaking in the Middle East and North Africa: Answering frequently asked questions
Most people we meet don’t know that J-PAL has regional offices around the world, based at universities in Chile, France, India, Indonesia, South Africa, and a US office also based at MIT. These offices lead policy outreach and support our affiliated professors’ research projects in those regions.
Over the last few years we’ve been working to expand J-PAL’s work in the Middle East and North Africa (MENA), in part as a response to demand from MENA governments and regional actors for evidence to help inform policies and programs.
In support of these efforts, J-PAL co-founder and director Abhijit Banerjee, Professor of Economics at MIT, delivered a public lecture at the American University in Cairo (AUC) on Tuesday, November 14.
The talk, entitled “Reducing Poverty and Promoting Social Development: How Evidence Can Inform Better Policy in MENA,” was introduced by Dr. Mona Said, Chair of the Department of Economics at AUC, and followed by a discussion moderated by Dr. Ragui Assaad, Professor of Economics at the University of Minnesota and Distinguished Visiting Professor at AUC.
At the event, audience members were invited to submit questions on notecards. We received many thoughtful questions from the audience but were not able to address all of them during the lecture—so we respond to a few of the most frequently asked questions below.
Is J-PAL only interested in RCTs related to education and employment [highlighted in Abhijit’s talk], or do you investigate programs that tackle other social issues?
Our affiliated researchers investigate many social issues related to poverty reduction. In over 80 countries, J-PAL affiliates have conducted over 850 randomized evaluations of programs or policies broadly focusing on poverty reduction. Most evaluations fall into one of J-PAL’s eight thematic sectors, including education and labor markets, but also agriculture; crime, violence, & conflict; environment & energy; finance; governance; and health. Results of this research are summarized in lessons that can help inform policy decisions.
Has J-PAL done any research in Egypt on promoting employability in labor markets?
Yes—for example, J-PAL affiliates are currently evaluating the impact of a youth entrepreneurship reality TV show and entrepreneurial support activities on viewers’ attitudes, business practices, and employment status. J-PAL affiliates are also testing the impact of job fairs on matching job seekers with firms across Egypt, and the impact of job training and access to credit on entrepreneurship in Upper Egypt. Worldwide, our affiliates have 119 completed and ongoing evaluations of programs related to labor markets and/or employability, many of which provide relevant lessons for the Egyptian context.
What are some of the challenges and shortcomings associated with carrying out randomized evaluations in development?
The main purpose of randomized evaluations is to determine whether a program has an impact, and to quantify how large that impact is. Randomized evaluations do this by allocating resources, running programs, or applying policies to a randomly assigned “treatment group” (of individuals, communities, schools, etc.), and comparing the outcomes against a similar group that did not access those resources or programs. Despite the relative simplicity of the design of randomized evaluations, implementing them can be challenging. However, many of these challenges exist for other types of impact evaluations as well.
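Purely as an illustration, the core logic—random assignment followed by a simple comparison of average outcomes—can be sketched in a few lines of Python. All the data here are simulated, and the “+5” program effect is made up for the example:

```python
import random
import statistics

random.seed(0)

# Hypothetical evaluation: 1,000 eligible households, half randomly
# assigned to receive a program ("treatment"), half not ("comparison").
households = list(range(1000))
random.shuffle(households)
treatment = set(households[:500])

# Simulated outcome (e.g., monthly income): a common baseline plus
# noise, with an invented +5 unit effect for treated households.
def outcome(h):
    effect = 5 if h in treatment else 0
    return 100 + effect + random.gauss(0, 15)

outcomes = {h: outcome(h) for h in households}

# Because assignment was random, the two groups are similar on average
# before the program, so the simple difference in mean outcomes is an
# unbiased estimate of the program's average impact.
treat_mean = statistics.mean(outcomes[h] for h in treatment)
comp_mean = statistics.mean(outcomes[h] for h in households if h not in treatment)
estimated_impact = treat_mean - comp_mean
print(round(estimated_impact, 1))  # close to the simulated effect of 5
```

The point of the sketch is only that randomization, not any statistical modeling, is what makes the comparison group a valid counterfactual; in real evaluations, sample sizes, outcome measurement, and implementation logistics are where the difficulty lies.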
One challenge specific to running randomized evaluations in developing countries is the need to work closely with implementing organizations to make sure randomization is done properly and at the right time. This requires extensive organization and coordination with people who are familiar with the associated costs and logistical hurdles, especially in places without easily accessible administrative data.
Some critics claim that because randomized evaluations identify a comparison group of those who don’t receive the program, they are basically “withholding” services from that group. It’s important to remember, however, that implementing a program that has not been rigorously evaluated is in itself risky. As well-intentioned as it might be, the program might have no impacts at all—or worse, have unintended negative impacts. Without measuring this, it’s impossible to know. A more detailed discussion of RCT methodology and J-PAL’s commitment to ethical research is on our research resources page.
To what extent do results and lessons from an impact evaluation in one country apply to another? What about one village to another?
All contexts are different—but this doesn’t mean that lessons can’t apply across contexts. Human behavior tends to be similar no matter where you live (this is the foundation of behavioral economics). To help program managers and policymakers understand whether a particular program makes sense in their context, J-PAL uses a practical generalizability framework to assess the primary factors that can influence a program’s success.
The key to our framework is that it breaks down the question “Will this program, which was evaluated somewhere else, work here?” into a series of questions based on the theory behind a program. Different types of evidence—from institutional knowledge to descriptive data, evidence on general behavioral principles, and results from impact evaluations—are used in each step of the assessment (see infographic below, demonstrating the framework applied to a child immunization program).
The framework involves four steps:
Step 1: What is the disaggregated theory behind the program? In other words, at its core, why do we think the program worked in its original context? In the example above, steps 1-9 are the theory of change of the incentives-for-immunization program.
Step 2: Do the local conditions hold for that theory to apply? Is there local demand for the program? Are there physical barriers, like poor roads, or political or cultural barriers that might prevent people from accessing the program? Are relevant local institutions, like schools, health clinics, or court systems, logistically prepared for and interested in participating?
Step 3: How strong is the evidence for the required general behavioral change? If the program involves changing behavior, is there evidence that the same behavioral condition exists in the absence of the program? For instance, in the example above, we would look for data to help us understand whether parents who start their child’s series of immunizations rarely complete the series. And is there any other research that supports our theory about why we think the program should effectively change behaviors?
Step 4: What is the evidence that the implementation process can be carried out well? Is the implementing partner fully staffed? Can individuals delivering the program be fully trained? How will we monitor program implementation?
The key to answering the original question asked by our audience member is recognizing that we have to break practical policy questions into parts. Some parts of the problem will be answered with local institutional knowledge and descriptive data, and some will be answered with evidence from impact evaluations in other contexts. By using this framework, lessons from J-PAL affiliates’ research in other contexts can apply in MENA countries.
This framework is not foolproof, but is one tool to help decision-makers understand how to apply lessons from randomized evaluations. (J-PAL Executive Director Rachel Glennerster and J-PAL North America Executive Director Mary Ann Bates describe the framework in more detail and give examples of how it can be applied in a recent Stanford Social Innovation Review paper.)