Scaling back an evaluated program

Innovate, test, reassess: Partners have scaled down, redesigned, or decided not to move forward with programs that were evaluated and found to be ineffective.

Incorporating evidence into decision-making is not only about scaling up effective programs. Results showing that a program doesn’t work can be just as critical. Null results teach us a great deal: they can change our beliefs, reveal implementation issues, and clarify why a social program fell short. Scaling down, changing, or deciding not to scale up an intervention shown to have null or negative effects can free up valuable time and resources and create the opportunity to try something new.

For example, a program in Karnataka, India, aimed to reduce health worker absenteeism by introducing a biometric monitoring system that provided attendance data to supervisors in real time, combined with incentives and penalties for unauthorized absences. While the government expected the system to deter absent doctors, researchers found that, due to imperfect enforcement, the monitoring system and the data it generated had limited impacts on attendance. These results contributed to the government’s decision to cancel the planned scale-up of the program, saving millions of dollars in costs and countless hours of staff time that would have been needed to run it.

Limits of technological solutions to provider monitoring

Based on evidence that biometric monitoring technology did not increase doctors' attendance at primary health centers, the government of Karnataka decided to end the program, saving taxpayers millions of dollars.

Unintended effects of anonymous resumes

The French government abandoned a policy that would have required firms to make recruitment decisions based on anonymized resumes after research showed that a voluntary pilot scheme actually harmed minority applicants’ employment chances.