The Impacts of Automated Essay Scoring on Writing Skills and Access to College

Artificial intelligence has the potential to substitute, perfectly or imperfectly, for time-intensive teacher tasks. However, evidence on whether and how this affects learning is still scant. We propose a randomized evaluation of a program that uses automated essay scoring to provide students with personalized, instantaneous feedback on their writing. Teachers then receive a summary of the error patterns detected by the scoring technology, which may help them tailor class content to students' specific needs. The current structure of the program combines this automated scoring component with human graders, paid by the provider, who enhance feedback quality, particularly on semantic features that are harder for automated essay scoring algorithms to assess. Since human graders' time considerably increases the provider's implementation costs, the proposed evaluation design will test not only the impact of the essay scoring technology but also the marginal effect of these human graders, which matters for scalability and for understanding the current limitations of the technology. The results will shed light on whether and how this educational technology can be used to improve learning and increase access to college in a context of low-quality post-primary public education.
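
To make the workflow concrete, below is a minimal, hypothetical sketch in Python of the two-sided feedback loop the program describes: an essay is scored automatically and the student receives instantaneous feedback, while error patterns are aggregated into a class-level summary for the teacher. The checks, function names, and rubric here are illustrative assumptions only; the provider's actual scoring algorithm is not described in this proposal and would rely on trained models rather than hand-written rules.

```python
import re
from collections import Counter

# Hypothetical rubric: each check flags one surface-level error pattern.
# A real automated essay scoring system would use trained models over
# syntactic and semantic features; this toy version only illustrates the
# feedback-and-summary workflow, not the provider's method.
CHECKS = {
    "run_on_sentence": lambda s: len(s.split()) > 40,
    "missing_capitalization": lambda s: s[:1].islower(),
}

def score_essay(essay: str) -> dict:
    """Return instantaneous, personalized feedback for one essay."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", essay) if s.strip()]
    errors = Counter()
    for sentence in sentences:
        for name, check in CHECKS.items():
            if check(sentence):
                errors[name] += 1
    return {"n_sentences": len(sentences), "errors": dict(errors)}

def teacher_summary(feedback_by_student: dict) -> Counter:
    """Aggregate error patterns across the class, as in the teacher report."""
    totals = Counter()
    for feedback in feedback_by_student.values():
        totals.update(feedback["errors"])
    return totals

if __name__ == "__main__":
    essays = {
        "student_1": "the river flows past the school. It floods every year!",
        "student_2": "Education matters. it shapes who we become.",
    }
    feedback = {name: score_essay(text) for name, text in essays.items()}
    for name, fb in feedback.items():
        print(name, fb)          # individual feedback, returned instantly
    print("class summary:", teacher_summary(feedback))  # teacher's report
```

In the evaluation design, the human graders would sit between these two steps, revising the automated feedback on semantic dimensions before it reaches the student; the experiment's second arm estimates the marginal value of that revision.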

RFP Cycle:
Tenth Round (2018)
Location:
Brazil
Researchers:
Type:
  • Full project