FBK-DH at SemEval-2020 Task 12: Using Multi-channel BERT for Multilingual Offensive Language Detection

Camilla Casula, Alessio Palmero Aprosio, Stefano Menini, Sara Tonelli


Abstract
In this paper we present our submission to sub-task A of SemEval 2020 Task 12: Multilingual Offensive Language Identification in Social Media (OffensEval2). For Danish, Turkish, Arabic, and Greek, we develop a transfer-learning architecture that relies on a two-channel BERT model, in which English BERT and multilingual BERT are combined after creating a machine-translated parallel corpus for each language in the task. For English, instead, we adopt a more standard single-channel approach. We find that, in a multilingual scenario in which some languages have little training data, using parallel BERT models with machine-translated data can make systems more stable, especially when dealing with noisy data. Although machine translation of social media text may be imperfect, it does not hurt overall classification performance.
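The two-channel design described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' code: `encode_english` and `encode_multilingual` are hypothetical stand-ins for pooled BERT sentence embeddings (in the actual system, these would come from English BERT run on the machine-translated text and from multilingual BERT run on the original text), and the classification head is reduced to a single softmax layer.

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN = 768  # hidden size of BERT-base


def encode_english(texts):
    # Hypothetical stand-in for pooled English-BERT embeddings of the
    # machine-translated version of each input text.
    return rng.standard_normal((len(texts), HIDDEN))


def encode_multilingual(texts):
    # Hypothetical stand-in for pooled multilingual-BERT embeddings of the
    # original-language version of each input text.
    return rng.standard_normal((len(texts), HIDDEN))


class TwoChannelClassifier:
    """Concatenate the two channel embeddings and classify with softmax."""

    def __init__(self, n_classes=2):
        # Untrained weights for illustration only.
        self.W = rng.standard_normal((2 * HIDDEN, n_classes)) * 0.01
        self.b = np.zeros(n_classes)

    def predict_proba(self, originals, translations):
        # Channel 1: multilingual BERT on the original text.
        # Channel 2: English BERT on the machine-translated text.
        features = np.concatenate(
            [encode_multilingual(originals), encode_english(translations)],
            axis=1,
        )
        logits = features @ self.W + self.b
        # Numerically stable softmax over the class dimension.
        exp = np.exp(logits - logits.max(axis=1, keepdims=True))
        return exp / exp.sum(axis=1, keepdims=True)


clf = TwoChannelClassifier()
probs = clf.predict_proba(["original tweet"], ["machine-translated tweet"])
```

The key design point is that both channels contribute a fixed-size embedding, so the concatenated feature vector has twice the BERT hidden size before the classification layer.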
Anthology ID:
2020.semeval-1.201
Volume:
Proceedings of the Fourteenth Workshop on Semantic Evaluation
Month:
December
Year:
2020
Address:
Barcelona (online)
Venue:
SemEval
SIGs:
SIGLEX | SIGSEM
Publisher:
International Committee for Computational Linguistics
Pages:
1539–1545
URL:
https://aclanthology.org/2020.semeval-1.201
DOI:
10.18653/v1/2020.semeval-1.201
Cite (ACL):
Camilla Casula, Alessio Palmero Aprosio, Stefano Menini, and Sara Tonelli. 2020. FBK-DH at SemEval-2020 Task 12: Using Multi-channel BERT for Multilingual Offensive Language Detection. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1539–1545, Barcelona (online). International Committee for Computational Linguistics.
Cite (Informal):
FBK-DH at SemEval-2020 Task 12: Using Multi-channel BERT for Multilingual Offensive Language Detection (Casula et al., SemEval 2020)
PDF:
https://aclanthology.org/2020.semeval-1.201.pdf
Code
ca-milla/multi-channel-bert
Data
OLID