M&E in practice: Local government challenges in implementing outcome-based planning and budgeting in Indonesia
Monitoring and evaluation (M&E) is, put simply, a process to assess what works and what does not. It is a key feedback mechanism for understanding the impact, implementation, and cost-effectiveness of a program, which can inform the program's improvement, scaling up (or down), or follow-up.
J-PAL Southeast Asia is committed to strengthening the M&E capacity of development sector stakeholders in Indonesia in order to support a culture of evidence-based policymaking. To better understand current M&E practices within the Government of Indonesia, J-PAL SEA invited Dr. Evi Noor Afifah, a lecturer in the Department of Economics and head of the Economics Lab at Universitas Gadjah Mada (UGM) who specializes in development planning and outcome-based program design, to share her perspectives.
While Indonesia has established a regulatory framework for outcome-based policymaking, the quality of implementation still varies across local governments. This blog post summarizes learnings from a knowledge sharing session held in November 2021, in which Dr. Evi shared her experience providing technical M&E assistance to local governments in Indonesia.
Absence of theories of change: Challenges aligning programs and development goals
Because Indonesia has a decentralized government, development planning and budgeting are conducted at both the central and local government levels. At the local level, planning is guided by national priority goals, and programs planned for a given year should be aligned with these long-term goals.
However, there are cases where planning at the local level prioritizes “what programs should we do next year” over “what impacts do we want to achieve in the next few years.” The former mindset can hinder effective policymaking: instead of selecting programs designed to achieve specific outcomes, some local governments implement programs first and then, during reporting, attempt to match what has been done with that period’s priority outcomes. As a result, implemented programs may have only a loose connection to long-term development goals.
Increasing the use of theories of change in program planning can help local policymakers better frame their priorities around key outcomes. Developing a theory of change for a specific program leads decision-makers to reason: “If [inputs] and [activities] produce [outputs], this should lead to [outcomes], which will ultimately contribute to [national goal].” This ensures that programs are implemented with an end goal in mind, and it creates a structure, from the beginning of the planning process, for mapping out intermediate milestones and identifying threats that may hinder a program’s success.
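To make this chain concrete, the sketch below encodes a theory of change as a simple data structure. It is only an illustration: the program details (a hypothetical child-nutrition program and its indicators) are invented for this example and are not drawn from any actual local government plan.

```python
from dataclasses import dataclass, field

@dataclass
class TheoryOfChange:
    """One program's causal chain, from inputs to a national goal."""
    national_goal: str
    inputs: list
    activities: list
    outputs: list
    outcomes: list
    assumptions: list = field(default_factory=list)  # threats to monitor

    def describe(self) -> str:
        # Render the "if ... then ..." statement from the planning framework
        return (
            f"If {', '.join(self.inputs)} and {', '.join(self.activities)} "
            f"produce {', '.join(self.outputs)}, this should lead to "
            f"{', '.join(self.outcomes)}, which will ultimately contribute "
            f"to {self.national_goal}."
        )

# Hypothetical program, for illustration only
toc = TheoryOfChange(
    national_goal="reduced stunting prevalence",
    inputs=["program budget", "trained health workers"],
    activities=["community nutrition counseling sessions"],
    outputs=["number of counseling sessions delivered"],
    outcomes=["improved child feeding practices"],
    assumptions=["caregivers attend sessions", "supplies arrive on time"],
)
print(toc.describe())
```

Writing the chain down this explicitly, in whatever form, is what forces planners to state intermediate milestones and the assumptions that could break the chain.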
J-PAL courses for policymakers include theory of change as the foundation of program evaluation. However, local governments may face time constraints that make it difficult to follow an ideal set of steps for program planning (such as having to submit program budgets within a certain deadline with little time to plan). We are further exploring how principles of theory of change can be applied within these constraints.
Challenges in measurement: Differentiating between output and outcome indicators
Local governments are provided with a list of indicators to aid their program design and evaluation, based on the Ministry of Home Affairs’ regulation on regional development planning. In practice, however, local government staff may not be trained in, or have access to resources explaining, the differences between input, output, and outcome indicators. As a result, output indicators are often used to measure achievement of a final outcome: for instance, the frequency of health campaigns may be used to measure improvement in child health.
What are the right indicators to measure success? There is no single answer to this question. As an example, to measure improvement in child health, one may use infant mortality rate as an indicator. But one can also use other child health indicators such as nutrition outcomes, immunization completion rates, anemia prevalence, diarrhea incidence, and health service usage. The most appropriate indicator(s) will depend on local context, data accessibility, and outcome goals, among other factors.
Beyond strengthening understanding of the theory of change to clarify the distinction between output and outcome indicators, evaluators’ capacity to assess an indicator’s validity and reliability can also be improved. Validity refers to how well the indicator maps to the outcome (whether we are measuring the right thing), and reliability refers to how precise the measure is (whether the measurement is “noisy”). Capacity building on this aspect can draw from J-PAL’s extensive resources on measurement. However, we recognize that challenges may also arise from the limited resources available for data collection.
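The distinction can be made concrete with a small simulation. In the sketch below, all numbers are invented: indicator A is valid but unreliable (it tracks the true outcome on average, but noisily), while indicator B is reliable but not valid (it is very precise, but systematically off-target).

```python
import random

random.seed(0)

# Unobserved "true" outcome for 1,000 children (hypothetical units)
true_outcome = [random.gauss(50, 10) for _ in range(1000)]

# Indicator A: valid but unreliable -- unbiased, with a lot of noise
indicator_a = [x + random.gauss(0, 15) for x in true_outcome]

# Indicator B: reliable but not valid -- precise, yet systematically biased
indicator_b = [0.5 * x + 40 + random.gauss(0, 1) for x in true_outcome]

def mean(values):
    return sum(values) / len(values)

print(f"true outcome mean: {mean(true_outcome):.1f}")
print(f"indicator A mean:  {mean(indicator_a):.1f}  (right on average, noisy)")
print(f"indicator B mean:  {mean(indicator_b):.1f}  (precise, but off-target)")
```

A noisy-but-valid indicator can be improved by collecting more data; a reliable-but-invalid indicator will confidently point to the wrong conclusion no matter how much data is collected.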
Improving collection and management of administrative data
Whether or not M&E provides useful information for decision-making depends on the quality and relevance of the data being used. Data collection is one of the most costly and time-consuming aspects of conducting impact evaluations. As a result, local governments often have limited capacity to collect primary data specifically for the purpose of program evaluation, which leads to poor estimates of a program’s impact and weak budget projections.
Given these time and budget constraints, turning to administrative data can be a practical option. When properly collected and managed, administrative data can speed up and reduce the cost of a rigorous evaluation by reducing the need for surveys. To this end, local governments have access to an “e-monev” system, a digital platform for storing administrative data that is meant to ease data management and promote its use for M&E. However, this system is not yet used to its full potential, because local government staff often lack the capacity and awareness to input data periodically.
This presents an opportunity for J-PAL SEA to support government agencies with technical assistance on the collection and management of administrative data. We have extensive experience supporting evaluations that use administrative data in various ways, such as measuring outcomes, monitoring program implementation, and determining program beneficiaries. These case studies are described in more detail in the IDEA handbook, published by J-PAL’s Innovations in Data and Experiments for Action Initiative (IDEA).
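As a small illustration of the first use (measuring outcomes from administrative records), the sketch below aggregates hypothetical immunization records into a district-level outcome indicator. The table layout, column names, and three-dose completion rule are all assumptions made for this example; they do not reflect the actual structure of the e-monev system or any specific dataset.

```python
import pandas as pd

# Stand-in for child immunization records exported from an administrative
# system; every column name here is hypothetical.
records = pd.DataFrame({
    "district": ["A", "A", "A", "B", "B", "B"],
    "child_id": [1, 2, 3, 4, 5, 6],
    "doses_received": [3, 2, 3, 3, 1, 3],
})

# Outcome indicator: share of children who completed the (assumed)
# three-dose immunization schedule, by district
records["completed"] = records["doses_received"] >= 3
completion_rate = records.groupby("district")["completed"].mean()
print(completion_rate)
```

Even a simple pipeline like this only works if the underlying records are entered periodically and consistently, which is exactly the capacity gap described above.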
Since 2013, J-PAL Southeast Asia has conducted more than 35 training courses, reaching more than 750 academics, government officials, program managers, and NGO staff from across Indonesia. In our next phase of capacity-building efforts, we aim to provide more technical support to government agencies and to support the institutions that are key to strengthening Indonesia’s M&E ecosystem. Learn more about impact evaluations in Southeast Asia.