WLV-RIT at GermEval 2021: Multitask Learning with Transformers to Detect Toxic, Engaging, and Fact-Claiming Comments

Skye Morgan, Tharindu Ranasinghe, Marcos Zampieri


Abstract
This paper addresses the identification of toxic, engaging, and fact-claiming comments on social media. We used the dataset made available by the organizers of the GermEval 2021 shared task, containing over 3,000 manually annotated Facebook comments in German. Given the relatedness of the three tasks, we approached the problem using large pre-trained transformer models and multitask learning. Our results indicate that multitask learning outperforms the more common single-task learning approach on all three tasks. We submitted our best systems to GermEval 2021 under the team name WLV-RIT.
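The multitask setup the abstract describes (one shared transformer encoder with a separate classification head per subtask) can be sketched as below. This is a minimal illustration, not the authors' actual architecture or hyperparameters; all names and sizes are hypothetical, and a tiny randomly initialized encoder stands in for the large pre-trained models used in the paper.

```python
# Hypothetical sketch: a shared transformer encoder feeding three binary
# classification heads (toxic / engaging / fact-claiming). During training,
# the per-task losses would be summed so the encoder learns from all tasks.
import torch
import torch.nn as nn

class MultitaskCommentClassifier(nn.Module):
    def __init__(self, vocab_size=1000, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        # one binary head per subtask over the shared representation
        self.heads = nn.ModuleDict({
            task: nn.Linear(d_model, 1)
            for task in ("toxic", "engaging", "fact_claiming")
        })

    def forward(self, token_ids):
        hidden = self.encoder(self.embed(token_ids))
        pooled = hidden.mean(dim=1)  # mean-pool over token positions
        return {task: head(pooled) for task, head in self.heads.items()}

model = MultitaskCommentClassifier()
logits = model(torch.randint(0, 1000, (2, 16)))  # batch of 2 comments, 16 tokens
# summed multitask loss against (random) binary targets, for illustration
loss = sum(
    nn.functional.binary_cross_entropy_with_logits(logits[t], torch.rand(2, 1))
    for t in logits
)
```

The intuition, as in the paper, is that the three labels are correlated (e.g. toxic comments are rarely engaging), so sharing the encoder lets each task benefit from the others' supervision.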
Anthology ID: 2021.germeval-1.5
Volume: Proceedings of the GermEval 2021 Shared Task on the Identification of Toxic, Engaging, and Fact-Claiming Comments
Month: September
Year: 2021
Address: Duesseldorf, Germany
Editors: Julian Risch, Anke Stoll, Lena Wilms, Michael Wiegand
Venue: GermEval
Publisher: Association for Computational Linguistics
Pages: 32–38
URL: https://aclanthology.org/2021.germeval-1.5
Cite (ACL): Skye Morgan, Tharindu Ranasinghe, and Marcos Zampieri. 2021. WLV-RIT at GermEval 2021: Multitask Learning with Transformers to Detect Toxic, Engaging, and Fact-Claiming Comments. In Proceedings of the GermEval 2021 Shared Task on the Identification of Toxic, Engaging, and Fact-Claiming Comments, pages 32–38, Duesseldorf, Germany. Association for Computational Linguistics.
Cite (Informal): WLV-RIT at GermEval 2021: Multitask Learning with Transformers to Detect Toxic, Engaging, and Fact-Claiming Comments (Morgan et al., GermEval 2021)
PDF: https://aclanthology.org/2021.germeval-1.5.pdf
Data: Hate Speech