Using Transfer Learning to Automatically Mark L2 Writing Texts

Tim Elks

Abstract
The use of transfer learning in Natural Language Processing (NLP) has grown over the last few years. Large, pre-trained neural networks based on the Transformer architecture are one example of this, achieving state-of-the-art performance on several commonly used benchmarks, often when fine-tuned on a downstream task. Another form of transfer learning, Multitask Learning, has also been shown to improve performance on NLP tasks and increase model robustness. This paper outlines preliminary findings from investigations into the impact of using pre-trained language models alongside multitask fine-tuning to create an automated marking system for second language learners' written English. Using multiple Transformer models and multiple datasets, this study compares different combinations of models and tasks and evaluates their impact on the performance of an automated marking system. This presentation is a snapshot of work being conducted as part of my dissertation for the University of Wolverhampton's Computational Linguistics Master's programme.
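The paper's exact architecture is not reproduced here, but the approach it describes, multitask fine-tuning of a pre-trained Transformer, typically takes the following shape: a shared pre-trained encoder feeds two task-specific heads, one regression head for the essay score and one auxiliary classification head, and both losses are optimised jointly. The sketch below is a minimal illustration under assumed choices; the encoder name (bert-base-uncased), the auxiliary task (a six-way proficiency level), and the loss weighting are all hypothetical, not taken from the paper.

```python
# Illustrative sketch only (not the paper's code): multitask fine-tuning of a
# shared pre-trained encoder with a score-regression head and an auxiliary
# classification head. Encoder name, head sizes, and loss weights are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MultitaskMarker(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased", num_levels=6):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.score_head = nn.Linear(hidden, 1)           # main task: essay score
        self.level_head = nn.Linear(hidden, num_levels)  # auxiliary task: level

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state[:, 0]             # [CLS] representation
        return self.score_head(pooled).squeeze(-1), self.level_head(pooled)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = MultitaskMarker()
batch = tokenizer(["An example learner essay ..."], return_tensors="pt",
                  truncation=True, padding=True)
score_pred, level_logits = model(batch["input_ids"], batch["attention_mask"])

# Joint loss: weighted sum of the two task losses; the 0.5 weight is a design
# choice that trades off the auxiliary signal against the main scoring task.
score_loss = nn.MSELoss()(score_pred, torch.tensor([3.5]))
level_loss = nn.CrossEntropyLoss()(level_logits, torch.tensor([2]))
loss = score_loss + 0.5 * level_loss
loss.backward()
```

The intuition behind the shared encoder is that gradients from the auxiliary task regularise the representation used for scoring, which is one way multitask learning can improve robustness on small L2 writing datasets.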
Anthology ID:
2021.ranlp-srw.8
Volume:
Proceedings of the Student Research Workshop Associated with RANLP 2021
Month:
September
Year:
2021
Address:
Online
Editors:
Souhila Djabri, Dinara Gimadi, Tsvetomila Mihaylova, Ivelina Nikolova-Koleva
Venue:
RANLP
Publisher:
INCOMA Ltd.
Pages:
51–57
URL:
https://aclanthology.org/2021.ranlp-srw.8
Cite (ACL):
Tim Elks. 2021. Using Transfer Learning to Automatically Mark L2 Writing Texts. In Proceedings of the Student Research Workshop Associated with RANLP 2021, pages 51–57, Online. INCOMA Ltd.
Cite (Informal):
Using Transfer Learning to Automatically Mark L2 Writing Texts (Elks, RANLP 2021)
PDF:
https://aclanthology.org/2021.ranlp-srw.8.pdf