Strengthening Implementation of Computer-assisted Learning (CAL) to Improve Student Math Outcomes in India
Schools in low- and middle-income countries (LMICs) often lack the organizational structures needed to ensure students engage consistently with educational technology platforms. Researchers partnered with the Uttar Pradesh Department of Social Welfare and Khan Academy to conduct a randomized evaluation testing the impact of dedicated on-the-ground implementation personnel on middle school students’ use of the platform and their mathematics learning in India. Students in schools that received dedicated support staff used the Khan Academy platform more and scored higher on mathematics assessments than students in schools without such support.
Policy issue
Students in low- and middle-income countries (LMICs) often face gaps in foundational skills, and teachers in large classrooms struggle to provide individualized attention. Teaching at the Right Level (TaRL), which helps teachers target instruction to what students know rather than what their grade expects, has shown promise in closing these gaps. While tutoring can deliver large gains, schools often struggle to scale it because effective programs require frequent sessions that can cost thousands of dollars per student each year.
Computer-assisted learning (CAL) platforms have shown promise in addressing these gaps by allowing students to work at their own pace and receive immediate feedback, potentially replicating some benefits of tutoring at a fraction of the cost. However, CAL platforms frequently underperform at scale, not because the technology is ineffective, but because schools face substantial barriers to consistent implementation, including unreliable internet access, competing teacher priorities, and a lack of dedicated personnel to ensure students actually use the platforms. Can providing dedicated, on-the-ground implementation support for a computer-assisted learning platform increase student engagement and improve mathematics learning in under-resourced schools?
Context of the evaluation
In Uttar Pradesh, India's most populous state, only 27.9 percent of third-grade students in government schools can read at the second-grade level, and only 31.6 percent can perform basic subtraction. Diagnostic assessments suggest that eighth graders perform, on average, four grade levels below their enrolled level.
This study took place within the residential school system operated by the Department of Social Welfare in Uttar Pradesh, India. In November 2022, the Department entered into a partnership with Khan Academy India to launch a mathematics improvement program across 105 government boarding schools serving grades 6 through 12. Under this initial program, teachers were encouraged to dedicate one to two of their six weekly mathematics sessions to the Khan Academy platform, targeting 120 minutes of practice per student per month. Khan Academy provided teacher training, technical support through WhatsApp channels, and a recognition campaign for high-performing schools.
Despite these efforts, students’ engagement with the platform remained weak. Only 44 percent of the 27,309 registered students accessed the platform even once during the entire academic year. Several barriers contributed to this shortfall: intermittent internet connectivity and electricity disruptions, a time-intensive student rostering process that discouraged initial setup, and limited teacher buy-in. With competing demands from multiple programs and no dedicated personnel responsible for platform use, teachers routinely deprioritized Khan Academy sessions.
Details of the intervention
Researchers partnered with Khan Academy India and the Uttar Pradesh Department of Social Welfare to conduct a randomized evaluation testing the impact of dedicated implementation personnel on middle school students’ use of the Khan Academy platform and their mathematics achievement. The evaluation covered 83 residential government schools spanning more than 50 districts across Uttar Pradesh, covering students in grades 6 to 8. Researchers grouped schools geographically and, within each group, randomly assigned schools to one of two arms:
1. Lab-in-charge (28 schools; 1,983 students): These schools were offered dedicated on-the-ground personnel called lab-in-charges (LICs). LICs formally integrated two Khan Academy sessions per week into each grade-section's timetable, making platform use mandatory for students. During sessions, LICs trained students on basic digital literacy, monitored student behavior to prevent distractions, troubleshot connectivity and electricity issues, assigned mathematics content aligned with classroom instruction, and monitored student progress data. LICs also supported motivational campaigns, including a “streaks” initiative that rewarded consistent weekly practice with certificates, badges, medals, and earphones, and facilitated student webinars and at-home practice during school breaks.
2. Comparison (55 schools; 3,552 students): These schools did not receive LICs but retained full access to the Khan Academy platform and received the initial training and encouragement to target 120 minutes of practice per student per month, mirroring the program's conditions in the 2023–24 school year.
The intervention ran for 31 weeks, from August 2024 through February 2025. The platform automatically tracked all student activity throughout this period, capturing total practice time, the skills students worked on, and the areas in which they improved. Researchers measured learning outcomes through independently administered mathematics assessments at the start and end of the intervention.
Results and policy lessons
Students in schools that were offered dedicated implementation support used the Khan Academy platform more than students in comparison schools. This increased use translated into gains in mathematics achievement. The intervention benefited students across performance levels, grade levels, and school types.
Platform usage: Students in lab-in-charge schools averaged 47.4 minutes of Khan Academy practice per week over the 31-week intervention, compared to 7.2 minutes per week in comparison schools, a 6.6-fold difference. This increased practice occurred both during school hours, which rose by 22 minutes per week, and outside of school, including during holidays and after-school hours, which rose by 15.1 minutes per week. Students remained engaged throughout the 7-month intervention period despite holidays.
Skill building: Students in LIC schools worked on 101 more mathematics skills than students in comparison schools, who worked on only 20 skills over the same period. They also achieved proficiency in 63 additional skills, compared with 9 skills in the comparison group, indicating progression through increasingly difficult material. Moreover, LIC students mastered 0.96 more skills per hour of practice than comparison students, who mastered 2 skills per hour, suggesting that LICs helped students use their time on the platform productively rather than idly.
Mathematics achievement: At the end of the intervention, students in lab-in-charge schools scored 0.44 standard deviations higher on the mathematics assessment than students in comparison schools, equivalent to moving an average student from the 50th percentile to approximately the 67th percentile, or roughly two to three years of learning in typical LMIC schooling. Gains were broad-based rather than concentrated in particular groups: students at all baseline skill levels improved, and the program worked similarly well for boys and girls, across grade levels, and in districts with both higher and lower overall living conditions.
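The percentile interpretation of the effect size can be verified with a quick calculation. The sketch below assumes test scores are approximately normally distributed (an illustrative simplification, not a detail drawn from the study):

```python
# Convert a standardized effect size (in SD units) into a percentile shift,
# assuming normally distributed scores. Illustrative only.
from math import erf, sqrt

def normal_cdf(x: float) -> float:
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

effect_sd = 0.44  # reported effect size in standard deviations
percentile = normal_cdf(effect_sd) * 100
print(f"An average student moves from the 50th to about the {percentile:.0f}th percentile")
```

A student at the mean (50th percentile) shifted up by 0.44 standard deviations lands at the normal CDF of 0.44, which is about 0.67, matching the brief's "approximately the 67th percentile."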
Cost-effectiveness: The intervention cost approximately US$14 per student over seven months, or roughly US$24 annually, compared to high-dosage tutoring programs, which cost thousands of dollars per student each year. Students in the lab-in-charge intervention achieved a 0.44 standard deviation increase in math scores, exceeding the average 0.36 standard deviation gain associated with high-dosage tutoring. These results suggest that lab-in-charges deliver greater improvements in student learning at a substantially lower cost.
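The cost figures above can be reproduced with back-of-the-envelope arithmetic from the brief's reported numbers. The cost-per-SD figure below is a derived illustration, not an official cost-effectiveness estimate from the study:

```python
# Annualize the per-student cost and compute cost per SD of learning gained,
# using only the figures reported in the brief. Illustrative arithmetic.
cost_over_intervention = 14.0  # US$ per student over the 7-month intervention
months = 7
effect_sd = 0.44               # reported effect size in standard deviations

annual_cost = cost_over_intervention / months * 12
cost_per_sd = cost_over_intervention / effect_sd

print(f"Annualized cost: ~US${annual_cost:.0f} per student")      # ~US$24
print(f"Cost per SD gained: ~US${cost_per_sd:.0f} per student")
```

Scaling US$14 over seven months to a twelve-month year yields the US$24 annual figure cited above; dividing the seven-month cost by the 0.44 SD gain puts the cost per standard deviation of learning at roughly US$32 per student.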
These findings suggest that investing in dedicated on-the-ground implementation support for computer-assisted learning platforms may be a relatively low-cost approach to generating meaningful mathematics learning gains in under-resourced government schools in LMICs.
Oreopoulos, Philip, Oliver Keyes-Krysakowski, and Deepak Agarwal. (2026). “How In-School Supervised Ed-Tech Support Produces Massive Learning Gains: A Khan Academy Field Experiment in India.” NBER Working Paper No. 34683.