BUT-FIT at SemEval-2020 Task 4: Multilingual Commonsense

Josef Jon, Martin Fajcik, Martin Docekal, Pavel Smrz


Abstract
We participated in all three subtasks. In subtasks A and B, our submissions are based on pretrained language representation models (namely ALBERT) and data augmentation. We experimented with solving the task for another language, Czech, by means of multilingual models and a machine-translated dataset, or by translating the model inputs. We show that with a strong machine translation system, our system can be used in another language with only a small loss in accuracy. In subtask C, our submission, which is based on a pretrained sequence-to-sequence model (BART), ranked 1st in the BLEU score ranking; however, we show that the correlation between BLEU and human evaluation, in which our submission ended up 4th, is low. We analyse the metrics used in the evaluation and propose an additional score based on the model from subtask B, which correlates well with our manual ranking, as well as a reranking method based on the same principle. We performed an error and dataset analysis for all subtasks and present our findings.
Anthology ID:
2020.semeval-1.46
Volume:
Proceedings of the Fourteenth Workshop on Semantic Evaluation
Month:
December
Year:
2020
Address:
Barcelona (online)
Venue:
SemEval
SIGs:
SIGLEX | SIGSEM
Publisher:
International Committee for Computational Linguistics
Pages:
374–390
URL:
https://aclanthology.org/2020.semeval-1.46
DOI:
10.18653/v1/2020.semeval-1.46
Cite (ACL):
Josef Jon, Martin Fajcik, Martin Docekal, and Pavel Smrz. 2020. BUT-FIT at SemEval-2020 Task 4: Multilingual Commonsense. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 374–390, Barcelona (online). International Committee for Computational Linguistics.
Cite (Informal):
BUT-FIT at SemEval-2020 Task 4: Multilingual Commonsense (Jon et al., SemEval 2020)
PDF:
https://aclanthology.org/2020.semeval-1.46.pdf
Code
cepin19/semeval2020_task4
Data
CommonsenseQA