Zhaohui Li


2023

Answer-state Recurrent Relational Network (AsRRN) for Constructed Response Assessment and Feedback Grouping
Zhaohui Li | Susan Lloyd | Matthew Beckman | Rebecca Passonneau
Findings of the Association for Computational Linguistics: EMNLP 2023

STEM educators must trade off the ease of assessing selected response (SR) questions, like multiple choice, with constructed response (CR) questions, where students articulate their own reasoning. Our work addresses a CR type new to NLP but common in college STEM, consisting of multiple questions per context. To relate the context, the questions, the reference responses, and students’ answers, we developed an Answer-state Recurrent Relational Network (AsRRN). In recurrent time steps, relation vectors are learned for specific dependencies in a computational graph, where the nodes encode the distinct types of text input. AsRRN incorporates a contrastive loss for better representation learning, which improves performance and supports student feedback. AsRRN was developed on a new dataset of 6,532 student responses to three two-part CR questions. AsRRN outperforms classifiers based on LLMs, a previous relational network for CR questions, and few-shot learning with GPT-3.5. Ablation studies show the distinct contributions of AsRRN’s dependency structure, the number of time steps in the recurrence, and the contrastive loss.
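To make the architecture described in the abstract concrete, below is a minimal sketch, assuming a PyTorch setting: recurrent message passing over a small graph whose nodes encode the context, question, reference response, and student answer, plus a supervised-contrastive term. The encoders, edge set, dimensions, and loss details are illustrative assumptions, not the authors' AsRRN implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentRelationalSketch(nn.Module):
    """Recurrent message passing over a graph whose nodes encode the context,
    question, reference response, and student answer (illustrative sketch)."""
    def __init__(self, dim=256, num_labels=3, steps=3):
        super().__init__()
        self.steps = steps
        # Relation function: maps a (sender, receiver) state pair to a message vector.
        self.relation = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        # GRU cell updates each node state from its aggregated incoming messages.
        self.update = nn.GRUCell(dim, dim)
        # Grade prediction reads the student-answer node state after the final step.
        self.classifier = nn.Linear(dim, num_labels)

    def forward(self, node_states, edges, answer_idx):
        # node_states: (num_nodes, dim) text encodings; edges: list of (sender, receiver) pairs.
        h = node_states
        for _ in range(self.steps):
            messages = torch.zeros_like(h)
            for s, r in edges:
                messages[r] = messages[r] + self.relation(torch.cat([h[s], h[r]], dim=-1))
            h = self.update(messages, h)
        return self.classifier(h[answer_idx]), h[answer_idx]

def contrastive_term(z, labels, temperature=0.1):
    # A supervised contrastive loss over a batch of answer-node embeddings z (batch, dim):
    # pull same-label embeddings together, push different-label ones apart.
    z = F.normalize(z, dim=-1)
    sim = z @ z.t() / temperature
    self_mask = torch.eye(len(z), dtype=torch.bool)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim.masked_fill(self_mask, float("-inf")), dim=1, keepdim=True)
    return -(log_prob * pos_mask).sum(1).div(pos_mask.sum(1).clamp(min=1)).mean()
```

In training, the grade logits would receive a standard cross-entropy loss, with the contrastive term added under a weighting coefficient; the exact form of the contrastive objective here is an assumption for illustration.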

2021

A Semantic Feature-Wise Transformation Relation Network for Automatic Short Answer Grading
Zhaohui Li | Yajur Tomar | Rebecca J. Passonneau
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Automatic short answer grading (ASAG) is the task of assessing students’ short natural language responses to objective questions. It is a crucial component of new education platforms and could support more widespread use of constructed response questions to replace cognitively less challenging multiple choice questions. We propose a Semantic Feature-wise Transformation Relation Network (SFRN) that exploits the multiple components of ASAG datasets more effectively. SFRN captures relational knowledge among the questions (Q), reference answers or rubrics (R), and labeled student answers (A). A relation network learns vector representations for the elements of QRA triples, then combines the learned representations using learned semantic feature-wise transformations. We apply translation-based data augmentation to address the two problems of limited training data and high data skew in multi-class ASAG tasks. Our model achieves up to 11% performance improvement over state-of-the-art results on the benchmark SemEval-2013 datasets, and surpasses custom approaches designed for a Kaggle challenge, demonstrating its generality.
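As a rough illustration of a relation network over QRA triples with feature-wise transformations, here is a minimal sketch, assuming a PyTorch setting and FiLM-style scale-and-shift modulation conditioned on the question-reference pair. The encoders, dimensions, and label set are assumptions, not the published SFRN implementation.

```python
import torch
import torch.nn as nn

class TripleRelationSketch(nn.Module):
    """Relation-network-style grader over (question, reference, answer) triples
    with a feature-wise transformation (illustrative sketch)."""
    def __init__(self, dim=256, num_labels=5):
        super().__init__()
        # Relation function g over each encoded (Q, R, A) triple.
        self.g = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        # Feature-wise transformation: predict a scale and shift from the QR pair.
        self.film = nn.Linear(2 * dim, 2 * dim)
        self.classifier = nn.Sequential(nn.ReLU(), nn.Linear(dim, num_labels))

    def forward(self, q, r, a):
        # q, r, a: (batch, dim) sentence encodings of question, reference, student answer.
        rel = self.g(torch.cat([q, r, a], dim=-1))           # relation vector per triple
        gamma, beta = self.film(torch.cat([q, r], dim=-1)).chunk(2, dim=-1)
        rel = gamma * rel + beta                             # feature-wise scale and shift
        return self.classifier(rel)                          # grade label logits
```

Conditioning the scale and shift on the question and reference lets the same relation function adapt its features to each prompt, which is one plausible reading of "semantic feature-wise transformation" in this setting.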