Universität Regensburg MaxS at GermEval 2021 Task 1: Synthetic Data in Toxic Comment Classification

Maximilian Schmidhuber


Abstract
We report on our submission to Task 1 of the GermEval 2021 challenge – toxic comment classification. We investigate different ways of bolstering scarce training data to improve off-the-shelf model performance on a toxic comment classification task. To help address the limitations of a small dataset, we use data synthetically generated by a German GPT-2 model. The use of synthetic data has only recently been taking off as a possible solution to addressing training data sparseness in NLP, and initial results are promising. However, our model did not see measurable improvement through the use of synthetic data. We discuss possible reasons for this finding and outline future work in the field.
Anthology ID:
2021.germeval-1.9
Volume:
Proceedings of the GermEval 2021 Shared Task on the Identification of Toxic, Engaging, and Fact-Claiming Comments
Month:
September
Year:
2021
Address:
Duesseldorf, Germany
Editors:
Julian Risch, Anke Stoll, Lena Wilms, Michael Wiegand
Venue:
GermEval
Publisher:
Association for Computational Linguistics
Pages:
62–68
URL:
https://aclanthology.org/2021.germeval-1.9
Cite (ACL):
Maximilian Schmidhuber. 2021. Universität Regensburg MaxS at GermEval 2021 Task 1: Synthetic Data in Toxic Comment Classification. In Proceedings of the GermEval 2021 Shared Task on the Identification of Toxic, Engaging, and Fact-Claiming Comments, pages 62–68, Duesseldorf, Germany. Association for Computational Linguistics.
Cite (Informal):
Universität Regensburg MaxS at GermEval 2021 Task 1: Synthetic Data in Toxic Comment Classification (Schmidhuber, GermEval 2021)
PDF:
https://aclanthology.org/2021.germeval-1.9.pdf
Data
HateXplain