When to Conduct an Evaluation?


The value added by rigorously evaluating a program or policy changes depending on when in the program or policy life cycle the evaluation is conducted. The evaluation should not come too soon, when the program is still taking shape and kinks are being ironed out. Nor should it come too late, after money has been allocated and the program rolled out, leaving no room for a control group.

An ideal time is during the pilot phase of a program or before scaling up. During these phases there are often important questions that an evaluator would like to answer: How effective is the program? Is it effective among different populations? Are certain aspects working better than others, and can the weaker ones be improved? Is it effective when it reaches a larger population?

During the pilot phase, the effects of a program on a particular population are unknown. The program itself may be new, or it may be an established program targeting a new population. In both cases, program heads and policymakers may wish to better understand the program's effectiveness and how it might be improved. Almost by definition, the pilot program will reach only a portion of the target population, making it possible to conduct a randomized evaluation. After the pilot phase, if the program is shown to be effective, it can attract greater support and, in turn, more resources, allowing it to be replicated or scaled up to reach the remaining target population.

One example of a well-timed evaluation is that of PROGRESA, a conditional cash transfer program in Mexico launched in 1997. The policy gave mothers cash grants for their families as long as they ensured their children attended school regularly and received scheduled vaccinations. The Institutional Revolutionary Party (PRI), which had been in power for the prior 68 years, was facing near-certain defeat in the upcoming elections. A probable outcome of electoral defeat was the dismantling of incumbent programs such as PROGRESA. To build support for the program's survival, the PRI planned to clearly demonstrate the policy's effectiveness in improving child health and education outcomes.

PROGRESA was first introduced as a pilot program in rural areas of seven states. Out of 506 communities sampled by the Mexican government for the pilot, 320 were randomly assigned to the treatment group and 186 to the comparison group. Comparing the two groups after one year, the program was found to successfully improve children's health and education outcomes. As hoped, the program's popularity expanded from its initial supporters and direct beneficiaries to the entire nation.
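The community-level randomization described above can be sketched in a few lines of Python. This is a hypothetical illustration of the general technique, not the Mexican government's actual procedure; the function name, seed, and community labels are invented for the example:

```python
import random

def assign_communities(communities, n_treatment, seed=0):
    """Randomly split a list of communities into treatment and comparison groups."""
    rng = random.Random(seed)  # fixed seed makes the assignment reproducible
    shuffled = communities[:]  # copy so the original list is left untouched
    rng.shuffle(shuffled)
    return shuffled[:n_treatment], shuffled[n_treatment:]

# Illustrative split mirroring the PROGRESA pilot numbers:
# 506 communities, 320 assigned to treatment and 186 to comparison.
communities = [f"community_{i}" for i in range(506)]
treatment, comparison = assign_communities(communities, n_treatment=320)
print(len(treatment), len(comparison))  # 320 186
```

Because assignment is random at the community level, the two groups should be similar on average in both observed and unobserved characteristics, so differences in outcomes after one year can be attributed to the program.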

Following the widely predicted defeat of the PRI in the 2000 elections, the new ruling party, PAN, took power and inherited an immensely popular program. Instead of dismantling PROGRESA, PAN changed the program's name to OPORTUNIDADES and expanded it nationwide.

The program was soon replicated in other countries, such as Nicaragua, Ecuador, and Honduras. And following Mexico's lead, these countries conducted pilot studies to test the impact of PROGRESA-like programs on their own populations before scaling up.