ITNLP at SemEval-2021 Task 11: Boosting BERT with Sampling and Adversarial Training for Knowledge Extraction

Genyu Zhang, Yu Su, Changhong He, Lei Lin, Chengjie Sun, Lili Shan


Abstract
This paper describes the winning system in the End-to-end Pipeline phase of the NLPContributionGraph task. The system is composed of three BERT-based models, which extract sentences, entities, and triples respectively. Experiments show that sampling and adversarial training can greatly boost the system. In the End-to-end Pipeline phase, our system achieved an average F1 of 0.4703, significantly higher than the second-placed system's average F1 of 0.3828.
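The abstract credits sampling and adversarial training for the system's gains but does not say which adversarial method was used. Below is a minimal sketch of one common way to adversarially train BERT: FGM-style perturbation of the word-embedding weights (Miyato et al., 2017). The class name FGM, the emb_name parameter, and the training-loop comments are illustrative assumptions, not the authors' implementation.

import torch

class FGM:
    """FGM-style adversarial training on BERT's embedding weights (illustrative sketch)."""
    def __init__(self, model, emb_name="word_embeddings", epsilon=1.0):
        self.model = model
        self.emb_name = emb_name  # substring matching the embedding parameter's name (assumption)
        self.epsilon = epsilon    # perturbation radius
        self.backup = {}

    def attack(self):
        # Perturb the embedding weights along the normalized gradient direction.
        for name, param in self.model.named_parameters():
            if param.requires_grad and self.emb_name in name and param.grad is not None:
                self.backup[name] = param.data.clone()
                norm = torch.norm(param.grad)
                if norm != 0 and not torch.isnan(norm):
                    param.data.add_(self.epsilon * param.grad / norm)

    def restore(self):
        # Undo the perturbation before the optimizer step.
        for name, param in self.model.named_parameters():
            if name in self.backup:
                param.data = self.backup[name]
        self.backup = {}

# Typical training step (illustrative):
#   loss = model(**batch).loss; loss.backward()   # gradients on clean input
#   fgm = FGM(model); fgm.attack()                # perturb embeddings
#   model(**batch).loss.backward()                # accumulate adversarial gradients
#   fgm.restore()                                 # remove perturbation
#   optimizer.step(); optimizer.zero_grad()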
Anthology ID: 2021.semeval-1.59
Volume: Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
Month: August
Year: 2021
Address: Online
Editors: Alexis Palmer, Nathan Schneider, Natalie Schluter, Guy Emerson, Aurelie Herbelot, Xiaodan Zhu
Venue: SemEval
SIG: SIGLEX
Publisher: Association for Computational Linguistics
Pages: 485–489
URL: https://aclanthology.org/2021.semeval-1.59
DOI: 10.18653/v1/2021.semeval-1.59
Cite (ACL): Genyu Zhang, Yu Su, Changhong He, Lei Lin, Chengjie Sun, and Lili Shan. 2021. ITNLP at SemEval-2021 Task 11: Boosting BERT with Sampling and Adversarial Training for Knowledge Extraction. In Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), pages 485–489, Online. Association for Computational Linguistics.
Cite (Informal): ITNLP at SemEval-2021 Task 11: Boosting BERT with Sampling and Adversarial Training for Knowledge Extraction (Zhang et al., SemEval 2021)
PDF: https://aclanthology.org/2021.semeval-1.59.pdf