The Impact of Diagnostic Feedback for Teachers on Student Learning in India

Location:
Andhra Pradesh, India
Sample:
400 schools
Timeline:
2005–2006

Testing in a low-stakes environment can be used as a diagnostic tool to help focus teacher efforts on areas where students are weakest. Researchers used a randomized evaluation to assess whether providing low-stakes diagnostic tests and feedback to teachers led to improved student learning outcomes. The results suggest that teachers in intervention schools exerted more effort when observed in the classroom but students in these schools performed no better on independently-administered tests than students in schools that did not receive the program.

Policy issue

Over the past decade, many low- and middle-income countries have expanded primary school access, energized by initiatives such as the United Nations Millennium Development Goals, which call for achieving universal primary education by 2015. Improvements in school access, however, have not always translated into improved learning for students. Traditional approaches to improving education often focus on providing schools with more resources, such as textbooks or teacher training, but there has been growing interest in directly assessing and incentivizing schools and teachers based on students' test scores. Proponents of such high-stakes testing argue that it is a necessary (if imperfect) tool for measuring school and teacher effectiveness; opponents counter that high-stakes tests induce distortions in teacher activity, such as teaching to the test, that not only reduce the validity of test scores but may also harm learning outcomes.

A suggested alternative that may be less susceptible to these problems is using tests in a low-stakes environment to provide teachers and school administrators with detailed data on student performance, as a diagnostic tool for identifying areas of student weakness and better focusing teaching efforts. Low-stakes testing has the potential to increase teachers' intrinsic motivation by focusing their attention on student learning levels and improving their ability to set and work towards goals. While the idea of such low-stakes testing is promising, there is very little rigorous evidence on its effectiveness.

Context of the evaluation

Andhra Pradesh is the fifth largest state in India, with a population of over 80 million, about 70 percent of whom live in rural areas. The state has over 60,000 government-run primary schools, which serve around 80 percent of rural children. The average rural primary school is quite small, with a total enrollment of around 80 to 100 students and an average of three teachers across grades one to five. One teacher typically teaches all subjects for a given grade and often teaches more than one grade simultaneously. All regular teachers are employed by the state and are well qualified and well paid: the average salary of regular teachers is over four times the average per capita income in Andhra Pradesh, and 85 percent of teachers in the study sample held a college degree. However, incentives for teacher attendance and performance are weak, with teacher absence rates of over 25 percent across India.

A student in a classroom in India.
Photo: Robin Hayashi | J-PAL

Details of the intervention

The diagnostic feedback program was evaluated as part of a larger education research initiative across 500 schools, known as the Andhra Pradesh Randomized Evaluation Studies (APRESt), which tested several different education interventions, including teacher performance pay, contract teachers, and school grants. In the diagnostic feedback arm, researchers used a randomized evaluation to assess whether providing low-stakes feedback to teachers led to improved student learning outcomes.

Two hundred schools across five districts were randomly assigned either to the intervention group (100 schools) or to the comparison group (100 schools). Initial tests were conducted in all schools during June and July of 2005. At the start of the following school year, local NGO staff visited the 100 schools selected for the feedback program to deliver the test results, broken down by individual student, question, and skill level, along with performance benchmarks for the school, district, and state. During the visit, NGO staff emphasized that the first step to improving learning outcomes was a good understanding of current levels of learning, and that the feedback reports were meant to help teachers improve student learning. Intervention schools were also told that another external assessment of learning would be conducted at the end of the school year to monitor students' progress.

Over the next six months, local NGO staff also made six unannounced visits to each school, both intervention and comparison, to collect data on process variables including student attendance, teacher attendance and activity, and classroom organization.

Results and policy lessons

During classroom observations, teachers who received diagnostic feedback on student learning levels scored better on measures of teaching activity than teachers in comparison schools. Teachers in the feedback schools were significantly more likely to be actively teaching, reading from a textbook, having students read from their textbooks, addressing questions to students, and actively using the blackboard. They were also more likely to assign homework and to provide guidance on homework to students in the classroom.

However, at the end of one year of the program there was no difference between feedback and comparison schools in students' scores on independently administered tests. This suggests that teachers in the feedback schools may have worked harder only while being observed.

Muralidharan, Karthik, and Venkatesh Sundararaman. 2010. “The Impact of Diagnostic Feedback to Teachers on Student Learning: Experimental Evidence from India.” The Economic Journal 120, no. 546 (July): 187-203. doi:10.1111/j.1468-0297.2010.02373.x.