azaad@BND at SemEval-2023 Task 2: How to Go from a Simple Transformer Model to a Better Model to Get Better Results in Natural Language Processing

Reza Ahmadi, Shiva Arefi, Mohammad Jafarabad


Abstract
In this article, prepared for the SemEval-2023 competition (Task 2), we describe implementation techniques for a transformer model and the use of a pre-trained BERT model for named entity recognition (NER) in English, and we explain the implementation method. The approach achieved an F1 score of about 57% for fine-grained and 72% for coarse-grained classes on the development data; on the final test data, the F1 score reached 50%.
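As a rough illustration of the approach the abstract describes (applying a pre-trained BERT model to token-level NER), here is a minimal sketch using the Hugging Face transformers library. The checkpoint name, label aggregation, and example sentence are illustrative assumptions, not the authors' exact configuration or training setup.

```python
# Minimal sketch of BERT-based NER inference with Hugging Face transformers.
# Hypothetical setup: "dslim/bert-base-NER" is a public BERT checkpoint used
# here only for illustration; the paper fine-tunes its own pre-trained BERT
# model on the SemEval-2023 Task 2 (MultiCoNER II) English data.
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_name = "dslim/bert-base-NER"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

# aggregation_strategy="simple" merges sub-word pieces back into entity spans.
ner = pipeline("ner", model=model, tokenizer=tokenizer,
               aggregation_strategy="simple")

print(ner("Barack Obama visited Toronto in July 2023."))
```

Fine-tuning for the task would replace the off-the-shelf checkpoint with a base BERT model plus a token-classification head trained on the shared-task label set.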
Anthology ID:
2023.semeval-1.303
Volume:
Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Atul Kr. Ojha, A. Seza Doğruöz, Giovanni Da San Martino, Harish Tayyar Madabushi, Ritesh Kumar, Elisa Sartori
Venue:
SemEval
SIG:
SIGLEX
Publisher:
Association for Computational Linguistics
Pages:
2184–2187
URL:
https://aclanthology.org/2023.semeval-1.303
DOI:
10.18653/v1/2023.semeval-1.303
Cite (ACL):
Reza Ahmadi, Shiva Arefi, and Mohammad Jafarabad. 2023. azaad@BND at SemEval-2023 Task 2: How to Go from a Simple Transformer Model to a Better Model to Get Better Results in Natural Language Processing. In Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023), pages 2184–2187, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
azaad@BND at SemEval-2023 Task 2: How to Go from a Simple Transformer Model to a Better Model to Get Better Results in Natural Language Processing (Ahmadi et al., SemEval 2023)
PDF:
https://aclanthology.org/2023.semeval-1.303.pdf