NLP@JUST at SemEval-2020 Task 4: Ensemble Technique for BERT and Roberta to Evaluate Commonsense Validation

Emran Al-Bashabsheh, Ayah Abu Aqouleh, Mohammad AL-Smadi


Abstract
This paper presents the work of the NLP@JUST team at the SemEval-2020 Task 4 competition on commonsense validation and explanation (ComVE). The team participated in sub-task A (Validation), which checks whether a given statement is against common sense. Several models were trained (i.e., BERT, XLNet, and RoBERTa); the main models used were RoBERTa-large and BERT whole-word masking. The predictions of both models were combined using an average ensemble technique to improve overall performance. The evaluation shows that the implemented approach achieved an accuracy of 93.9%, as published in the post-evaluation results on the leaderboard.
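As a rough illustration of the average ensemble described in the abstract, the following is a minimal sketch (not the authors' code), assuming each fine-tuned model has already produced per-example class probabilities over the two candidate sentences; the file names and shapes are hypothetical.

```python
# Illustrative sketch of averaging two models' predicted probabilities.
# Assumes pre-computed probability arrays of shape (n_examples, 2),
# one column per candidate sentence; file names are hypothetical.
import numpy as np

roberta_probs = np.load("roberta_large_probs.npy")
bert_probs = np.load("bert_wwm_probs.npy")

# Average the two probability distributions example by example.
ensemble_probs = (roberta_probs + bert_probs) / 2.0

# Pick, for each example, the sentence judged to be against common sense.
predictions = ensemble_probs.argmax(axis=1)
```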
Anthology ID:
2020.semeval-1.72
Volume:
Proceedings of the Fourteenth Workshop on Semantic Evaluation
Month:
December
Year:
2020
Address:
Barcelona (online)
Editors:
Aurélie Herbelot, Xiaodan Zhu, Alexis Palmer, Nathan Schneider, Jonathan May, Ekaterina Shutova
Venue:
SemEval
SIG:
SIGLEX
Publisher:
International Committee for Computational Linguistics
Pages:
574–579
URL:
https://aclanthology.org/2020.semeval-1.72
DOI:
10.18653/v1/2020.semeval-1.72
Cite (ACL):
Emran Al-Bashabsheh, Ayah Abu Aqouleh, and Mohammad AL-Smadi. 2020. NLP@JUST at SemEval-2020 Task 4: Ensemble Technique for BERT and Roberta to Evaluate Commonsense Validation. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 574–579, Barcelona (online). International Committee for Computational Linguistics.
Cite (Informal):
NLP@JUST at SemEval-2020 Task 4: Ensemble Technique for BERT and Roberta to Evaluate Commonsense Validation (Al-Bashabsheh et al., SemEval 2020)
PDF:
https://aclanthology.org/2020.semeval-1.72.pdf