
Investigating the validity of using automated writing evaluation in EFL writing assessment

EasyChair Preprint no. 318

10 pages. Date: July 3, 2018

Abstract

This study follows an argument-based approach to validating the use of an automated writing evaluation (AWE) system, taking Pigai, a Chinese AWE program, as an example, in English as a Foreign Language (EFL) writing assessment in China. First, an interpretive argument was developed for Pigai's use in the College English course. Second, three sub-studies were conducted to seek evidence for claims concerning score evaluation, score generalization, score explanation, score extrapolation, and feedback utilization. The major findings are: (1) Pigai yields scores that are accurate indicators of the quality of a test performance sample; (2) Pigai yields scores that are sufficiently consistent across tasks in the same form; (3) Pigai's scoring features represent the construct of interest to some extent, yet problems of construct under-representation and construct-irrelevant features remain; (4) Pigai yields scores that are consistent with teachers' judgments of students' writing ability; (5) Pigai generates feedback that has a positive, though limited, impact on students' development of writing ability. These results suggest that AWE can serve as a supplement to human evaluation in classroom settings but cannot replace it.
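Claims like findings (1) and (4) are typically supported by rater-agreement statistics between machine and human scores. The preprint does not report its exact procedure or data, so the sketch below is an illustration only: it shows how Pearson correlation and quadratic weighted kappa, two statistics commonly used in AWE validation, could be computed over hypothetical Pigai and teacher scores on an assumed 0-100 scale.

# Illustrative only: hypothetical data, not taken from the paper.
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Hypothetical scores for ten essays on an assumed 0-100 scale.
pigai_scores   = [78, 85, 62, 90, 71, 88, 67, 74, 81, 59]
teacher_scores = [75, 88, 60, 92, 70, 85, 70, 72, 80, 62]

# Pearson correlation: linear association between machine and human scores.
r, p = pearsonr(pigai_scores, teacher_scores)
print(f"Pearson r = {r:.3f} (p = {p:.3f})")

# Quadratic weighted kappa: chance-corrected agreement that penalizes
# large disagreements more heavily. Kappa expects categorical labels,
# so scores are first binned into 10-point bands.
pigai_bands   = [s // 10 for s in pigai_scores]
teacher_bands = [s // 10 for s in teacher_scores]
qwk = cohen_kappa_score(pigai_bands, teacher_bands, weights="quadratic")
print(f"Quadratic weighted kappa = {qwk:.3f}")

High values on both statistics would indicate that machine scores track human judgments closely; the binning granularity and the agreement thresholds used would depend on the study's design.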

Keyphrases: automated essay evaluation, interpretive argument, Pigai, writing assessment

BibTeX entry
BibTeX lacks a dedicated entry type for preprints; the following is a workaround that produces a correct reference:
@Booklet{EasyChair:318,
  author       = {Ying Xu},
  title        = {Investigating the validity of using automated writing evaluation in EFL writing assessment},
  howpublished = {EasyChair Preprint no. 318},
  doi          = {10.29007/lqw2},
  year         = {EasyChair, 2018}
}