Scalable and Explainable Automated Scoring for Open-Ended Constructed Response Math Word Problems

Scott Hellman, Alejandro Andrade, Kyle Habermehl


Abstract
Open-ended constructed response math word problems (“math plus text”, or MPT) are a powerful tool in the assessment of students’ abilities to engage in mathematical reasoning and creative thinking. Such problems ask the student to compute a value or construct an expression and then explain, potentially in prose, what steps they took and why they took them. MPT items can be scored against highly structured rubrics, and we develop a novel technique for the automated scoring of MPT items that leverages these rubrics to provide explainable scoring. We show that our approach can be trained automatically and performs well on a large dataset of 34,417 responses across 14 MPT items.
Anthology ID:
2023.bea-1.12
Volume:
Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Ekaterina Kochmar, Jill Burstein, Andrea Horbach, Ronja Laarmann-Quante, Nitin Madnani, Anaïs Tack, Victoria Yaneva, Zheng Yuan, Torsten Zesch
Venue:
BEA
SIG:
SIGEDU
Publisher:
Association for Computational Linguistics
Pages:
137–147
URL:
https://aclanthology.org/2023.bea-1.12
DOI:
10.18653/v1/2023.bea-1.12
Cite (ACL):
Scott Hellman, Alejandro Andrade, and Kyle Habermehl. 2023. Scalable and Explainable Automated Scoring for Open-Ended Constructed Response Math Word Problems. In Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023), pages 137–147, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Scalable and Explainable Automated Scoring for Open-Ended Constructed Response Math Word Problems (Hellman et al., BEA 2023)
PDF:
https://aclanthology.org/2023.bea-1.12.pdf
Video:
https://aclanthology.org/2023.bea-1.12.mp4