Automated Essay Scoring: A Reflection on the State of the Art

Shengjie Li, Vincent Ng


Abstract
While steady progress has been made on automated essay scoring (AES) over the past decade, much of the recent work in this area has focused on developing models that beat existing models on a standard evaluation dataset. Although improving performance numbers remains an important short-term goal, such a focus is not necessarily beneficial for the long-term development of the field. We reflect on the state of the art in AES research and discuss issues that we believe can encourage researchers to think beyond improving performance numbers, with the ultimate goal of triggering discussion among AES researchers on how the field should move forward.
Anthology ID: 2024.emnlp-main.991
Volume: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 17876–17888
URL: https://aclanthology.org/2024.emnlp-main.991
Cite (ACL): Shengjie Li and Vincent Ng. 2024. Automated Essay Scoring: A Reflection on the State of the Art. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 17876–17888, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): Automated Essay Scoring: A Reflection on the State of the Art (Li & Ng, EMNLP 2024)
PDF: https://aclanthology.org/2024.emnlp-main.991.pdf