JBNU at SemEval-2020 Task 4: BERT and UniLM for Commonsense Validation and Explanation

Seung-Hoon Na, Jong-Hyeon Lee


Abstract
This paper presents our contributions to SemEval-2020 Task 4, Commonsense Validation and Explanation (ComVE), and reports experimental results for Subtasks B and C. Our systems rely on pre-trained language models, namely BERT (including its variants) and UniLM, and rank 10th among 27 systems on Subtask B and 7th among 17 systems on Subtask C. We analyze the commonsense ability of existing pre-trained language models by testing them on the SemEval-2020 Task 4 ComVE dataset, specifically on Subtasks B and C, the explanation subtasks based on multiple choice and sentence generation, respectively.
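For readers unfamiliar with the Subtask B setup (selecting the reason why a given statement is against common sense), the sketch below shows one plausible way to frame it as multiple-choice classification with a BERT head. This is a minimal illustration assuming the HuggingFace transformers API and a generic bert-base-uncased checkpoint; it is not the authors' implementation, and the checkpoint and variable names are illustrative only. The elephant example is the canonical one from the ComVE task description.

```python
import torch
from transformers import BertTokenizer, BertForMultipleChoice

# Illustrative checkpoint; the paper uses BERT variants and UniLM,
# whose exact configurations are described in the paper itself.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMultipleChoice.from_pretrained("bert-base-uncased")

# Subtask B: pick the option that best explains why the statement
# is against common sense (canonical ComVE example).
statement = "He put an elephant into the fridge."
options = [
    "An elephant is much bigger than a fridge.",
    "Elephants are usually gray.",
    "An elephant cannot eat a fridge.",
]

# Pair the statement with each candidate explanation.
encoding = tokenizer(
    [statement] * len(options),
    options,
    return_tensors="pt",
    padding=True,
    truncation=True,
)
# BertForMultipleChoice expects tensors of shape (batch, num_choices, seq_len).
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)
predicted = logits.argmax(dim=-1).item()
print(options[predicted])
```

Note that the multiple-choice head is randomly initialized until fine-tuned, so meaningful predictions would require training on the ComVE Subtask B data first.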
Anthology ID:
2020.semeval-1.65
Volume:
Proceedings of the Fourteenth Workshop on Semantic Evaluation
Month:
December
Year:
2020
Address:
Barcelona (online)
Editors:
Aurelie Herbelot, Xiaodan Zhu, Alexis Palmer, Nathan Schneider, Jonathan May, Ekaterina Shutova
Venue:
SemEval
SIG:
SIGLEX
Publisher:
International Committee for Computational Linguistics
Pages:
527–534
URL:
https://aclanthology.org/2020.semeval-1.65
DOI:
10.18653/v1/2020.semeval-1.65
Cite (ACL):
Seung-Hoon Na and Jong-Hyeon Lee. 2020. JBNU at SemEval-2020 Task 4: BERT and UniLM for Commonsense Validation and Explanation. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 527–534, Barcelona (online). International Committee for Computational Linguistics.
Cite (Informal):
JBNU at SemEval-2020 Task 4: BERT and UniLM for Commonsense Validation and Explanation (Na & Lee, SemEval 2020)
PDF:
https://aclanthology.org/2020.semeval-1.65.pdf