Predicting the Quality of Revisions in Argumentative Writing

Zhexiong Liu, Diane Litman, Elaine Wang, Lindsay Matsumura, Richard Correnti


Abstract
The ability to revise in response to feedback is critical to students’ writing success. In the case of argument writing in particular, identifying whether an argument revision (AR) is successful is a complex problem because AR quality depends on the overall content of the argument. For example, adding the same evidence sentence could strengthen or weaken existing claims in different argument contexts (ACs). To address this issue, we developed Chain-of-Thought prompts to facilitate ChatGPT-generated ACs for AR quality prediction. Experiments on two corpora, our annotated elementary school essays and an existing college essay benchmark, demonstrate the superiority of the proposed ACs over baselines.
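The Chain-of-Thought setup described above can be sketched as a prompt-construction step: the model is first asked to summarize the argument context around a revision, then to judge the revision in light of that context. The template, field names, and labels below are illustrative assumptions, not the authors' actual prompts.

```python
def build_cot_prompt(essay: str, original: str, revised: str) -> str:
    """Assemble a hypothetical Chain-of-Thought prompt that asks an LLM to
    first summarize the argument context (AC), then judge AR quality."""
    return (
        "You will assess an argument revision (AR) step by step.\n\n"
        f"Essay:\n{essay}\n\n"
        f"Original sentence: {original}\n"
        f"Revised sentence: {revised}\n\n"
        "Step 1: Summarize the claims and evidence surrounding the revised "
        "sentence (the argument context).\n"
        "Step 2: Given that context, decide whether the revision strengthens "
        "or weakens the argument.\n"
        "Answer 'successful' or 'unsuccessful' with a brief rationale."
    )


# Illustrative usage with made-up essay content.
prompt = build_cot_prompt(
    essay="School uniforms reduce distraction in class...",
    original="Uniforms are common.",
    revised="A district survey found fewer dress-code incidents after uniforms were adopted.",
)
```

The two-step structure mirrors the paper's motivation: the same evidence sentence can strengthen or weaken a claim depending on the surrounding context, so the context summary is elicited before the quality judgment.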
Anthology ID:
2023.bea-1.24
Volume:
Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Ekaterina Kochmar, Jill Burstein, Andrea Horbach, Ronja Laarmann-Quante, Nitin Madnani, Anaïs Tack, Victoria Yaneva, Zheng Yuan, Torsten Zesch
Venue:
BEA
SIG:
SIGEDU
Publisher:
Association for Computational Linguistics
Pages:
275–287
URL:
https://aclanthology.org/2023.bea-1.24
DOI:
10.18653/v1/2023.bea-1.24
Cite (ACL):
Zhexiong Liu, Diane Litman, Elaine Wang, Lindsay Matsumura, and Richard Correnti. 2023. Predicting the Quality of Revisions in Argumentative Writing. In Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023), pages 275–287, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Predicting the Quality of Revisions in Argumentative Writing (Liu et al., BEA 2023)
PDF:
https://aclanthology.org/2023.bea-1.24.pdf
Video:
https://aclanthology.org/2023.bea-1.24.mp4