IIE-NLP-Eyas at SemEval-2021 Task 4: Enhancing PLM for ReCAM with Special Tokens, Re-Ranking, Siamese Encoders and Back Translation

Yuqiang Xie, Luxi Xing, Wei Peng, Yue Hu


Abstract
This paper introduces our systems for all three subtasks of SemEval-2021 Task 4: Reading Comprehension of Abstract Meaning. To help our model better represent and understand abstract concepts in natural language, we design several simple and effective approaches on top of the backbone model (RoBERTa). Specifically, we formalize the subtasks as multiple-choice question answering and mark abstract concepts with special tokens; the final QA prediction is then taken as the subtask result. Additionally, we employ several fine-tuning tricks to further improve performance. Experimental results show that our approach yields significant gains over the baseline systems. Our system achieves eighth place (87.51%) and tenth place (89.64%) on the official blind test sets of Subtask 1 and Subtask 2, respectively.
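The abstract's core idea, casting each subtask as multiple-choice QA and wrapping the abstract concept in special tokens, can be sketched as below. This is a minimal illustration, not the authors' code: the token names `<concept>`/`</concept>`, the `@placeholder` slot convention, and the `build_mc_inputs` helper are all assumptions for demonstration.

```python
# Hypothetical special tokens marking the abstract concept; the paper does
# not specify the exact token strings, so these names are assumptions.
CONCEPT_START, CONCEPT_END = "<concept>", "</concept>"

def build_mc_inputs(passage, question, options):
    """Build one (passage, filled-question) pair per candidate answer.

    The question contains a "@placeholder" slot; each option fills it,
    wrapped in special tokens so the encoder can attend to the abstract
    concept explicitly. A multiple-choice head (e.g. over RoBERTa) would
    then score each pair and pick the highest-scoring option.
    """
    inputs = []
    for opt in options:
        filled = question.replace(
            "@placeholder", f"{CONCEPT_START} {opt} {CONCEPT_END}"
        )
        inputs.append((passage, filled))
    return inputs

# Example usage with an invented ReCAM-style instance
pairs = build_mc_inputs(
    passage="The committee reached a unanimous decision after long debate.",
    question="They finally achieved @placeholder .",
    options=["agreement", "confusion", "departure"],
)
for _, q in pairs:
    print(q)
```

Each pair would be tokenized jointly and fed to a multiple-choice classification head; the option whose pair receives the highest score becomes the prediction.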
Anthology ID:
2021.semeval-1.22
Volume:
Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
Month:
August
Year:
2021
Address:
Online
Venue:
SemEval
SIGs:
SIGSEM | SIGLEX
Publisher:
Association for Computational Linguistics
Pages:
199–204
URL:
https://aclanthology.org/2021.semeval-1.22
DOI:
10.18653/v1/2021.semeval-1.22
Cite (ACL):
Yuqiang Xie, Luxi Xing, Wei Peng, and Yue Hu. 2021. IIE-NLP-Eyas at SemEval-2021 Task 4: Enhancing PLM for ReCAM with Special Tokens, Re-Ranking, Siamese Encoders and Back Translation. In Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), pages 199–204, Online. Association for Computational Linguistics.
Cite (Informal):
IIE-NLP-Eyas at SemEval-2021 Task 4: Enhancing PLM for ReCAM with Special Tokens, Re-Ranking, Siamese Encoders and Back Translation (Xie et al., SemEval 2021)
PDF:
https://aclanthology.org/2021.semeval-1.22.pdf
Data
ReCAM