Increasing Test Score Performance

What interventions are most effective at increasing student learning?

In 2010, over 90 percent of primary school age children around the world were enrolled in primary and secondary education (World Bank, The State of Education). But being in school does not guarantee that students are learning. There are many barriers to student learning, with no obvious solutions: Students may not have access to schools or resources, but does hiring additional teachers or providing school supplies help improve student learning? Rigid and overly ambitious curricula may not match the learning levels or needs of students, but are there more effective curricula or teaching methods?

In our cost-effectiveness analysis (CEA) of randomized evaluations of 29 programs, we find that the impacts of strategies to improve student learning vary considerably. These strategies also incur drastically different costs, and some programs therefore achieve learning gains much more cost-effectively than others.

When access to education is extremely limited, getting children into school can lead to large learning gains.

Opening new schools in rural areas of Afghanistan (4, see our Cost-Effectiveness Analysis below) had the greatest impact on test scores of any program in this analysis and was also quite cost-effective. In areas with high enrollment rates, however, it is less clear that increasing attendance on its own is a cost-effective strategy.

Motivating students to go to school and learn can be very cost-effective.

Incentivizing students by giving scholarships to the best-performing children (3, see CEA below) is a cost-effective strategy to increase both children's time in school and their subsequent test scores. In Malawi, providing cash transfers conditional on school attendance (2) did increase test scores, but was less cost-effective than some other approaches. Unconditional transfers (1) increased attendance but not test scores.

There is little evidence that simply increasing the number of teachers or teaching resources improves learning.

Providing additional teachers to reduce class sizes had no effect on student test scores in India (9) or Kenya (6). Non-teacher inputs, such as flip charts (8) or textbooks (7), similarly had no impact on average test scores. Even programs in which schools are given discretionary grants to purchase the inputs they feel their students need most have little impact, if any, on student learning. One grant program in India (28) had a positive impact on test scores after the first year, but this impact was offset by a reduction in household education spending. After the second year, any impact of the program had disappeared.

Teaching children according to their actual learning levels is the approach most consistently effective at improving learning, and it is also very cost-effective.

If a school has more than one class per grade, then reassigning students to classes by initial learning level (often known as streaming) costs very little, improves test scores, and is therefore extremely cost-effective (20). Even if it is necessary to hire a new contract teacher to allow the class to be divided, streaming is still cost-effective (18). Providing targeted help for students in the lower half of their class (19), as well as computer programs that allow for self-paced learning (17), also appears to be quite cost-effective.

Incentives for teachers can lead to significant learning gains if they are objectively administered and structured in such a way as to discourage "teaching to the test."

Linking teachers' salaries to their attendance, objectively monitored through daily photos of the teachers with their students, was an effective and cost-effective strategy for improving student test scores (23). But when incentives are tied to student learning outcomes, there may be a danger of "teaching to the test." An in-kind incentive program in Kenya (22) raised test scores after the second year of the program, but the improvement seems to have been driven only by an increase in test preparation. One year after the program ended, any impact had disappeared. However, another program in India (29) that linked teachers' pay to their students' test score performance led to test score gains that seem to represent an actual, more durable increase in learning. Teachers' pay was linked either to school-wide performance or to the performance of the individual teacher's own students. In the first year, the two types of incentives were equally effective. However, the individual bonuses were much more effective in the second year, and were more cost-effective than the group bonuses in both years.

Adding an extra teacher on a short-term contract can produce significant learning gains at a relatively low cost.

Many teachers in developing countries face weak incentives, and absenteeism rates are high. Contract teachers, who are hired and held accountable by the local community and whose contracts can be terminated if they perform poorly, are often more likely to attend school and to exert more effort in the classroom than their civil-service counterparts. Contract teachers are often paid only a fraction of the salary of civil-service teachers, which makes such programs extremely cost-effective. If we assume the contract teacher replaces a civil-service teacher, this intervention, in principle, saves money and may therefore be considered infinitely cost-effective (21).

Grants provided to communities as part of empowerment programs can lead to better learning.

Providing schools or communities with grants to purchase classroom materials in Indonesia and The Gambia had no lasting impact on student learning (11-12). However, when the funds are combined with programs to empower the community, students see large improvements in test scores at a low cost. Programs in Indonesia that aimed to increase the legitimacy and authority of the local school committee (which was already receiving additional funds to spend on educational materials) led to significant learning gains and were highly cost-effective (26-27).

Improving Student Learning: Cost-Effectiveness of Education Programs

A cost-effectiveness analysis (CEA) calculates the ratio of the effect a program achieves to the cost it incurs. Here, our numbers represent the total number of standard deviations gained, summed across all students affected, per US$100 spent. The cost-effectiveness of each program is measured as the ratio of the aggregate impact of the program (the average test score improvement per student multiplied by the number of students impacted) to the aggregate cost of implementing the program. CEAs are one of many tools that can serve as a starting point when making policy decisions by highlighting the types of programs that tend to be the most cost-effective.
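As a rough illustration of this ratio, consider the following Python sketch. The function name and all figures in it are hypothetical placeholders for exposition, not values from the analysis below.

```python
# A minimal sketch of the cost-effectiveness ratio described above.
# All numbers here are hypothetical placeholders, not J-PAL estimates.

def sd_gained_per_100_usd(avg_sd_gain_per_student, num_students, total_cost_usd):
    """Total standard deviations gained per US$100 spent."""
    aggregate_impact = avg_sd_gain_per_student * num_students  # total SD gained
    return aggregate_impact / (total_cost_usd / 100.0)

# Example: a program that raises scores by 0.15 SD per student,
# reaches 2,000 students, and costs US$30,000 in total.
print(sd_gained_per_100_usd(0.15, 2000, 30_000))  # -> 1.0 SD per US$100
```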

Information About Underlying Calculations

The CEA presented above compares the impact of different programs on student test scores measured against the costs of running those programs. All of the impact estimates are drawn from randomized evaluations, and the costs of running the programs have been gathered from a variety of sources, including academic papers, conversations with field staff, and program budgets.

J-PAL has adopted a standard methodology for conducting CEA from the perspective of a policymaker who has a particular policy objective in mind and is trying to decide between a range of different options for achieving that policy objective. In order to provide comparable cost-effectiveness estimates across these options, a standard methodology must be applied to calculate both impacts and costs. Basic details of our methodology are included below.
Impacts
Impacts are measured in terms of standard deviation (SD) changes in student test scores: a program's effect is the change in average test scores expressed relative to the spread of scores in the comparison group. For example, a 0.2 SD change would move a child from the 50th to the 58th percentile. In the education literature, an increase of less than 0.1 SD is typically considered a small effect, an increase of more than 0.3 SD a large effect, and an increase of more than 0.5 SD a very large effect. (A forthcoming J-PAL publication will provide further discussion of test scores as a measure of student learning.)
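To make the percentile conversion concrete, the 0.2 SD example above follows from assuming approximately normally distributed test scores, as this short check shows:

```python
# Converting an SD effect size into a percentile shift, assuming test
# scores in the comparison group are approximately normally distributed.
from scipy.stats import norm

effect_sd = 0.2  # the 0.2 SD example from the text
new_percentile = norm.cdf(effect_sd) * 100  # starting from the 50th percentile
print(round(new_percentile))  # -> 58
```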
A ten percent discount rate is applied to impacts realized over multiple years (after year 1) to account for how an end user of the program would trade off the value of the program's services this year versus next year.
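Concretely, discounting multi-year impacts at ten percent (with no discount on the first year) might look like the following sketch; the per-year SD gains are hypothetical illustrations:

```python
# Present value of impacts accruing over multiple years, discounted at 10%
# after year 1. The per-year SD gains below are hypothetical illustrations.

def discounted_impact(yearly_sd_gains, rate=0.10):
    # year 0 (the first year) is not discounted
    return sum(gain / (1 + rate) ** year
               for year, gain in enumerate(yearly_sd_gains))

print(discounted_impact([0.15, 0.10]))  # 0.15 + 0.10/1.1, approx. 0.241
```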
Costs
The calculations include only the incremental cost of adding a new education program, under the assumption that many of the fixed costs of running a school system will be incurred even in the absence of the program in question.
The analysis assumes that policymakers care not just about the costs incurred by their organization or government ministry, but also about costs imposed on beneficiaries and society as a whole. We therefore include the following costs, when relevant:
-- Beneficiary time when a direct requirement of the program, e.g. time involved in traveling to and attending program meetings
-- Goods or services that were provided for free (in situations where free procurement would not necessarily be guaranteed if the program were replicated in a new context)
-- The monetary value of both cash and in-kind transfers
Inflation is calculated using GDP deflators. When calculating the average inflation from the base year to the year of analysis, we assume that program costs are incurred on the first day of each year.
Costs are expressed in terms of 2011 USD, with local currencies exchanged using standard (not PPP) exchange rates.
A ten percent discount rate is applied for costs incurred over multiple years in order to adjust for the choice a funder faces between incurring costs this year, or deferring expenditures to invest for a year and then incurring costs the next year.
When converting costs to dollars in the year of analysis (in this case, 2011 USD), J-PAL applies a standard order of operations to address inflation, exchange rates, and present value: (1) local cost data are converted into US dollars using the exchange rate from the year the costs were incurred; (2) these costs are deflated back to their real value in base-year prices using the average annual US inflation rate; (3) a ten percent discount rate is applied to take the present value of costs incurred after the first year; and (4) the average US inflation rate is then used to inflate costs forward to the year of analysis. This particular order of operations is not necessarily better than any other; the important thing is that one order be selected and applied consistently to all programs in an analysis.
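The four-step order of operations could be sketched as follows. The exchange rates, inflation rate, and cost figures below are made up for illustration; a real analysis would use the actual GDP deflator series and market exchange rates for each year.

```python
# A sketch of the four-step cost conversion described above, with made-up
# numbers. Steps: (1) exchange to USD in the year incurred, (2) deflate to
# base-year (real) USD, (3) discount at 10% for costs after year 1, and
# (4) inflate forward to the year of analysis (here, 2011 USD).

DISCOUNT_RATE = 0.10
US_INFLATION = 0.02                  # hypothetical average annual US inflation
EXCHANGE_RATES = {0: 45.0, 1: 47.0}  # hypothetical local currency per USD, by program year

def cost_in_analysis_year_usd(local_costs_by_year, years_to_analysis):
    """local_costs_by_year maps program year (0-indexed) to local-currency cost."""
    total = 0.0
    for year, local_cost in local_costs_by_year.items():
        usd = local_cost / EXCHANGE_RATES[year]           # step 1: exchange
        real_usd = usd / (1 + US_INFLATION) ** year       # step 2: deflate to base year
        present = real_usd / (1 + DISCOUNT_RATE) ** year  # step 3: discount after year 1
        total += present
    return total * (1 + US_INFLATION) ** years_to_analysis  # step 4: inflate forward

print(cost_in_analysis_year_usd({0: 90_000.0, 1: 94_000.0}, years_to_analysis=3))
```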
For more details on the assumptions made in this analysis, please see Dhaliwal et al. 2013.
Underlying Calculations & Sensitivity Analysis
CEA does not by itself provide enough information for a policymaker to make any decisions regarding resource allocation, but it can serve as a useful starting point for assessing the efficacy of different programs and their relevance to a particular situation. When the calculations are done at a highly disaggregated level, with assumptions about key factors such as program take-up or unit costs laid out explicitly, it is easier to gain insights into which programs are likely to provide the greatest value for money in a particular situation, and the key factors to which these outcomes are most sensitive.
The analysis includes 27 education programs whose impact on student learning was rigorously measured using a randomized evaluation and whose study authors have made detailed cost information available. We gratefully acknowledge the researchers whose work we draw on here, both for providing original cost data about the programs that were evaluated and for working with us to develop the cost-effectiveness models; their support and input have been essential.
The underlying calculations for the CEA of programs aimed at improving student test scores are available below. For any questions or comments, please contact Kyle Murphy.
1. Unconditional cash transfers in Malawi (Baird, McIntosh, and Özler 2011)
2. Minimum conditional cash transfers in Malawi (Baird, McIntosh, and Özler 2011)
3. Girls' merit scholarships in Kenya (Kremer, Miguel, and Thornton 2009)
4. Village-based schools in Afghanistan (Burde and Linden 2012)
5. Providing earnings information in Madagascar (Nguyen 2008)
6. Reducing class size in Kenya (Duflo, Dupas, and Kremer 2012)
7. Textbooks in Kenya (Glewwe, Kremer, and Moulin 2009)
8. Flipcharts in Kenya (Glewwe et al. 2004)
9. Reducing class size in India (Banerjee et al. 2007)
10. Building/improving libraries in India (Borkum, He, and Linden 2013)
11. School committee grants in Indonesia (Pradhan et al. 2012)
12. School committee grants in the Gambia (Blimpo and Evans 2011)
13. Adding computers to classrooms in Colombia (Barrera-Osorio and Linden 2009)
14. One Laptop per Child program in Peru (Cristia et al. 2012)
15. Diagnostic feedback for teachers in India (Muralidharan and Sundararaman 2012)
16. Read-a-Thon in the Philippines (Abeberese, Kumler, and Linden 2007)
17. Individually-paced computer-assisted learning in India (Banerjee et al. 2007)
18. Extra contract teacher + streaming by achievement in Kenya (Duflo, Dupas, and Kremer 2011; 2012)
19. Remedial education in India (Banerjee et al. 2007)
20. Streaming by achievement in Kenya (Duflo, Dupas, and Kremer 2011)
21. Contract teachers in Kenya (Duflo, Dupas, and Kremer 2012)
22. Teacher incentives in Kenya (Glewwe, Ilias, and Kremer 2010)
23. Camera monitoring and pay incentives in India (Duflo, Hanna, and Ryan 2012)
24. Training school committees in Indonesia (Pradhan et al. 2012)
25. Grants and training for school committees in the Gambia (Blimpo and Evans 2011)
26. School committee elections and linkage to local government in Indonesia (Pradhan et al. 2012)
27. Linking school committees to local government in Indonesia (Pradhan et al. 2012)
You can also download a full workbook of all calculations.
Please note, all calculations are a work in progress and are subject to change.