Trends, Limitations and Open Challenges in Automatic Readability Assessment Research

Sowmya Vajjala
Abstract
Readability assessment is the task of evaluating the reading difficulty of a given piece of text. This article takes a closer look at contemporary NLP research on developing computational models for readability assessment, identifying the common approaches used for this task, their shortcomings, and some challenges for the future. Where possible, the survey also connects computational research with insights from related work in other disciplines such as education and psychology.
Anthology ID:
2022.lrec-1.574
Volume:
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Month:
June
Year:
2022
Address:
Marseille, France
Editors:
Nicoletta Calzolari, Frédéric Béchet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Hélène Mazo, Jan Odijk, Stelios Piperidis
Venue:
LREC
Publisher:
European Language Resources Association
Pages:
5366–5377
URL:
https://aclanthology.org/2022.lrec-1.574
Cite (ACL):
Sowmya Vajjala. 2022. Trends, Limitations and Open Challenges in Automatic Readability Assessment Research. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 5366–5377, Marseille, France. European Language Resources Association.
Cite (Informal):
Trends, Limitations and Open Challenges in Automatic Readability Assessment Research (Vajjala, LREC 2022)
PDF:
https://aclanthology.org/2022.lrec-1.574.pdf
Data:
Newsela