cs60075_team2 at SemEval-2021 Task 1 : Lexical Complexity Prediction using Transformer-based Language Models pre-trained on various text corpora

Abhilash Nandy, Sayantan Adak, Tanurima Halder, Sai Mahesh Pokala


Abstract
The main contribution of this paper is to fine-tune transformer-based language models pre-trained on several text corpora: some general (e.g., Wikipedia, BooksCorpus), some drawn from the corpora from which the CompLex dataset was extracted, and others from specific domains such as finance and law. We perform ablation studies on the choice of transformer models and on how their individual complexity scores are aggregated into the final complexity score. Our method achieves a best Pearson correlation of 0.784 on sub-task 1 (single words) and 0.836 on sub-task 2 (multi-word expressions).
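The abstract describes aggregating the complexity scores predicted by several fine-tuned models into one final score per target. The paper does not spell out the aggregation function here; the sketch below assumes a simple (optionally weighted) average over per-model predictions, which is one common ensembling choice. All names and the example scores are illustrative, not taken from the paper.

```python
def aggregate_scores(model_scores, weights=None):
    """Combine per-model complexity predictions into one score per target.

    model_scores: list of lists, one inner list per fine-tuned model,
        each holding predicted complexity scores (in [0, 1]) for the
        same sequence of target words/expressions.
    weights: optional per-model weights; defaults to a plain average.
    """
    if weights is None:
        weights = [1.0] * len(model_scores)
    total = sum(weights)
    aggregated = []
    # zip(*...) groups the scores that all models assigned to one target
    for per_target in zip(*model_scores):
        aggregated.append(
            sum(w * s for w, s in zip(weights, per_target)) / total
        )
    return aggregated

# Two hypothetical models (e.g., one general-domain, one domain-specific)
# scoring the same three targets:
general_model = [0.20, 0.55, 0.70]
domain_model = [0.30, 0.45, 0.90]
print(aggregate_scores([general_model, domain_model]))
```

With weights, the same function lets an ablation study up- or down-weight individual models rather than select them all-or-nothing.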
Anthology ID:
2021.semeval-1.87
Volume:
Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
Month:
August
Year:
2021
Address:
Online
Venue:
SemEval
SIGs:
SIGSEM | SIGLEX
Publisher:
Association for Computational Linguistics
Pages:
678–682
URL:
https://aclanthology.org/2021.semeval-1.87
DOI:
10.18653/v1/2021.semeval-1.87
Cite (ACL):
Abhilash Nandy, Sayantan Adak, Tanurima Halder, and Sai Mahesh Pokala. 2021. cs60075_team2 at SemEval-2021 Task 1 : Lexical Complexity Prediction using Transformer-based Language Models pre-trained on various text corpora. In Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), pages 678–682, Online. Association for Computational Linguistics.
Cite (Informal):
cs60075_team2 at SemEval-2021 Task 1 : Lexical Complexity Prediction using Transformer-based Language Models pre-trained on various text corpora (Nandy et al., SemEval 2021)
PDF:
https://aclanthology.org/2021.semeval-1.87.pdf
Code:
abhi1nandy2/CS60075-Team-2-Task-1