Lotus at SemEval-2021 Task 2: Combination of BERT and Paraphrasing for English Word Sense Disambiguation

Niloofar Ranjbar, Hossein Zeinali


Abstract
In this paper, we describe our proposed methods for the Multilingual Word-in-Context Disambiguation task at SemEval-2021. In this task, systems must determine whether a word that occurs in two different sentences is used with the same meaning or not. We proposed several methods based on a pre-trained BERT model. In two of them, we paraphrased the sentences and added the paraphrases as extra input to BERT, and in one of them, we used WordNet to add extra lexical information. We evaluated our proposed methods on the test data of SemEval-2021 Task 2.
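The paper's exact architecture (how the paraphrases and WordNet glosses are fed to the model, and the classification head used) is not given on this page. The following is only a minimal sketch of the core idea behind word-in-context disambiguation with a pre-trained BERT: encode each sentence, extract the contextual embedding of the target word, and compare the two embeddings. The model name, the mean-pooling over sub-tokens, and the similarity threshold are all illustrative assumptions, not the authors' method.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Illustrative choice of checkpoint; the paper's exact model is not specified here.
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")

def target_embedding(sentence: str, target: str) -> torch.Tensor:
    """Mean-pool the hidden states of the sub-tokens that make up `target`."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, hidden_size)
    target_ids = tokenizer(target, add_special_tokens=False)["input_ids"]
    tokens = enc["input_ids"][0].tolist()
    # Locate the first occurrence of the target word's sub-token span.
    for i in range(len(tokens) - len(target_ids) + 1):
        if tokens[i:i + len(target_ids)] == target_ids:
            return hidden[i:i + len(target_ids)].mean(dim=0)
    raise ValueError(f"target '{target}' not found in sentence")

s1 = "He sat on the bank of the river."
s2 = "She deposited the money in the bank."
emb1 = target_embedding(s1, "bank")
emb2 = target_embedding(s2, "bank")

# A thresholded cosine similarity stands in for a trained classification head.
similarity = torch.cosine_similarity(emb1, emb2, dim=0)
print("same sense" if similarity > 0.7 else "different sense")
```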
Anthology ID:
2021.semeval-1.95
Volume:
Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
Month:
August
Year:
2021
Address:
Online
Editors:
Alexis Palmer, Nathan Schneider, Natalie Schluter, Guy Emerson, Aurelie Herbelot, Xiaodan Zhu
Venue:
SemEval
SIG:
SIGLEX
Publisher:
Association for Computational Linguistics
Pages:
724–729
URL:
https://aclanthology.org/2021.semeval-1.95
DOI:
10.18653/v1/2021.semeval-1.95
Cite (ACL):
Niloofar Ranjbar and Hossein Zeinali. 2021. Lotus at SemEval-2021 Task 2: Combination of BERT and Paraphrasing for English Word Sense Disambiguation. In Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), pages 724–729, Online. Association for Computational Linguistics.
Cite (Informal):
Lotus at SemEval-2021 Task 2: Combination of BERT and Paraphrasing for English Word Sense Disambiguation (Ranjbar & Zeinali, SemEval 2021)
PDF:
https://aclanthology.org/2021.semeval-1.95.pdf