
Assessing Readability Formulas: A Comparison of Readability Formula Performance on the Classification of Simplified Texts

EasyChair Preprint no. 3851

9 pages · Date: July 13, 2020

Abstract

This study compares the performance of five traditional and newer readability formulas on the task of classifying Simple Wikipedia and authentic Wikipedia articles (N = 4,000). Results indicated that a new formula, the Crowdsourced Algorithm of Reading Comprehension (CAREC), performed the best. The traditional readability formula Flesch-Kincaid Grade Level also showed reliable performance. The results suggest that the linguistic features used in newer readability formulas can reliably represent the difficulty of a text.
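
As an illustration of the traditional approach mentioned above, the Flesch-Kincaid Grade Level is a linear combination of average sentence length and average syllables per word (FKGL = 0.39 × words/sentence + 11.8 × syllables/word − 15.59). The following minimal Python sketch computes it; the syllable counter is a rough vowel-group heuristic and the tokenization is simplified, so this is only an approximation for illustration, not the implementation evaluated in the paper.

import re

def count_syllables(word):
    # Rough heuristic: one syllable per group of consecutive vowels, minimum one.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    # Standard FKGL formula: 0.39 * words/sentence + 11.8 * syllables/word - 15.59.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59

print(round(flesch_kincaid_grade("The cat sat on the mat. It was happy."), 2))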

Keyphrases: comprehension, readability, text difficulty

BibTeX entry
BibTeX does not have a suitable entry type for preprints; the following is a workaround that produces the correct reference:
@Booklet{EasyChair:3851,
  author = {Joon Suh Choi and Scott A. Crossley},
  title = {Assessing Readability Formulas: A Comparison of Readability Formula Performance on the Classification of Simplified Texts},
  howpublished = {EasyChair Preprint no. 3851},
  year = {EasyChair, 2020}}