Janko at SemEval-2023 Task 2: Bidirectional LSTM Model Based on Pre-training for Chinese Named Entity Recognition

Jiankuo Li, Zhengyi Guan, Haiyan Ding


Abstract
This paper describes the system we submitted as the Janko team to SemEval-2023 Task 2, Multilingual Complex Named Entity Recognition (MultiCoNER 2). We participated only in the Chinese track. We implement a BERT-BiLSTM-RDrop model: we fine-tune BERT, feed its output into a BiLSTM network, and apply R-Drop regularization to the loss function. Our submission achieved a macro-averaged F1 score of 0.579 on the test set.
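To make the architecture concrete, here is a minimal PyTorch sketch of a BERT-BiLSTM token tagger trained with an R-Drop loss (two stochastic forward passes, cross-entropy on each, plus a symmetric KL penalty between them). The checkpoint name, hidden sizes, and R-Drop weight alpha are illustrative assumptions, not the authors' reported settings.

```python
# Sketch of BERT -> BiLSTM -> linear tagger with R-Drop training.
# Hyperparameters below are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import BertModel

class BertBiLstmTagger(nn.Module):
    def __init__(self, num_labels, bert_name="bert-base-chinese", lstm_hidden=256):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        self.lstm = nn.LSTM(self.bert.config.hidden_size, lstm_hidden,
                            batch_first=True, bidirectional=True)
        self.dropout = nn.Dropout(0.1)  # dropout makes the two R-Drop passes differ
        self.classifier = nn.Linear(2 * lstm_hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        # BERT contextual embeddings feed the BiLSTM; a linear layer tags tokens.
        hidden = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        hidden, _ = self.lstm(hidden)
        return self.classifier(self.dropout(hidden))  # (batch, seq_len, num_labels)

def rdrop_loss(model, input_ids, attention_mask, labels, alpha=4.0):
    """Cross-entropy on two dropout passes plus a symmetric KL term."""
    logits1 = model(input_ids, attention_mask)
    logits2 = model(input_ids, attention_mask)
    ce = (F.cross_entropy(logits1.flatten(0, 1), labels.flatten(), ignore_index=-100)
          + F.cross_entropy(logits2.flatten(0, 1), labels.flatten(), ignore_index=-100))
    logp, logq = F.log_softmax(logits1, dim=-1), F.log_softmax(logits2, dim=-1)
    kl = (F.kl_div(logp, logq, log_target=True, reduction="batchmean")
          + F.kl_div(logq, logp, log_target=True, reduction="batchmean"))
    return ce + alpha * kl / 2
```

The KL term penalizes disagreement between the two dropout-perturbed predictions, which regularizes the model beyond what standard dropout alone provides.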
Anthology ID:
2023.semeval-1.132
Volume:
Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Atul Kr. Ojha, A. Seza Doğruöz, Giovanni Da San Martino, Harish Tayyar Madabushi, Ritesh Kumar, Elisa Sartori
Venue:
SemEval
SIG:
SIGLEX
Publisher:
Association for Computational Linguistics
Pages:
958–962
URL:
https://aclanthology.org/2023.semeval-1.132
DOI:
10.18653/v1/2023.semeval-1.132
Cite (ACL):
Jiankuo Li, Zhengyi Guan, and Haiyan Ding. 2023. Janko at SemEval-2023 Task 2: Bidirectional LSTM Model Based on Pre-training for Chinese Named Entity Recognition. In Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023), pages 958–962, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Janko at SemEval-2023 Task 2: Bidirectional LSTM Model Based on Pre-training for Chinese Named Entity Recognition (Li et al., SemEval 2023)
PDF:
https://aclanthology.org/2023.semeval-1.132.pdf