Towards Extracting and Understanding the Implicit Rubrics of Transformer Based Automatic Essay Scoring Models

James Fiacco, David Adamson, Carolyn Rose


Abstract
By aligning functional components derived from the activations of transformer models trained for Automatic Essay Scoring (AES) with external knowledge such as human-understandable feature groups, the proposed method improves the interpretability of a Longformer-based AES system and provides tools for performing such analyses on other neural AES systems. The analysis focuses on models trained to score essays on organization, main idea, support, and language. The findings provide insights into the models’ decision-making processes, biases, and limitations, contributing to the development of more transparent and reliable AES systems.
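As a loose illustration only, not the paper's actual procedure, the sketch below shows one generic way such an alignment can be computed: correlating per-essay component activations from an AES model with human-understandable feature-group values. All array names, shapes, and the random data here are hypothetical placeholders.

```python
import numpy as np

# Hypothetical stand-ins, not the paper's data: `activations` holds one
# pooled activation value per functional component per essay; `features`
# holds human-understandable feature-group values for the same essays.
rng = np.random.default_rng(0)
n_essays, n_components, n_feature_groups = 200, 16, 4
activations = rng.normal(size=(n_essays, n_components))
features = rng.normal(size=(n_essays, n_feature_groups))

def alignment_matrix(acts: np.ndarray, feats: np.ndarray) -> np.ndarray:
    """Pearson correlation between each component and each feature group."""
    acts_z = (acts - acts.mean(0)) / acts.std(0)
    feats_z = (feats - feats.mean(0)) / feats.std(0)
    return acts_z.T @ feats_z / len(acts)  # shape: (components, groups)

r = alignment_matrix(activations, features)
# For each component, the feature group it most strongly tracks.
best_group = r.argmax(axis=1)
print(r.shape, best_group)
```

A high absolute correlation suggests a component tracks a given feature group; any real analysis would of course use the model's actual activations and validated feature sets rather than random data.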
Anthology ID:
2023.bea-1.20
Volume:
Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Ekaterina Kochmar, Jill Burstein, Andrea Horbach, Ronja Laarmann-Quante, Nitin Madnani, Anaïs Tack, Victoria Yaneva, Zheng Yuan, Torsten Zesch
Venue:
BEA
SIG:
SIGEDU
Publisher:
Association for Computational Linguistics
Pages:
232–241
URL:
https://aclanthology.org/2023.bea-1.20
DOI:
10.18653/v1/2023.bea-1.20
Cite (ACL):
James Fiacco, David Adamson, and Carolyn Rose. 2023. Towards Extracting and Understanding the Implicit Rubrics of Transformer Based Automatic Essay Scoring Models. In Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023), pages 232–241, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Towards Extracting and Understanding the Implicit Rubrics of Transformer Based Automatic Essay Scoring Models (Fiacco et al., BEA 2023)
PDF:
https://aclanthology.org/2023.bea-1.20.pdf