Tuning Multilingual Transformers for Language-Specific Named Entity Recognition

Mikhail Arkhipov, Maria Trofimova, Yuri Kuratov, Alexey Sorokin


Abstract
Our paper addresses multilingual named entity recognition for four languages: Russian, Bulgarian, Czech, and Polish. We solve this task using the BERT model, taking a multilingual model covering one hundred languages as the base for transfer to these Slavic languages. Unsupervised pre-training of the BERT model on the four languages allows us to significantly outperform baseline neural approaches and multilingual BERT. A further improvement is achieved by extending BERT with a word-level CRF layer. Our system was submitted to the BSNLP 2019 Shared Task on Multilingual Named Entity Recognition and demonstrated top performance in the multilingual setting for two competition metrics. We have open-sourced the NER models and the BERT model pre-trained on the four Slavic languages.
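To illustrate the word-level CRF layer mentioned in the abstract: at inference time a linear-chain CRF selects the globally best tag sequence via Viterbi decoding over per-word emission scores (here, scores that would come from pooled BERT word representations) plus learned tag-transition scores. The sketch below is a minimal, dependency-free illustration of that decoding step; the tag names, score values, and function signature are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of Viterbi decoding for a word-level linear-chain CRF,
# as might sit on top of pooled BERT word representations.
# Tags, scores, and the API shape are illustrative, not from the paper.

def viterbi_decode(emissions, transitions):
    """Return the highest-scoring tag sequence.

    emissions:   list of dicts, one per word, mapping tag -> emission score
    transitions: dict mapping (prev_tag, tag) -> transition score
    """
    tags = list(emissions[0].keys())
    # best[t] = (score of best path ending in tag t, that path)
    best = {t: (emissions[0][t], [t]) for t in tags}
    for emit in emissions[1:]:
        new_best = {}
        for t in tags:
            # Pick the predecessor tag that maximizes path + transition score.
            prev, (score, path) = max(
                ((p, best[p]) for p in tags),
                key=lambda item: item[1][0] + transitions[(item[0], t)],
            )
            new_best[t] = (score + transitions[(prev, t)] + emit[t], path + [t])
        best = new_best
    return max(best.values(), key=lambda sp: sp[0])[1]
```

For example, with emissions favoring `B-PER` on the first word and `O` on the second, and a transition score discouraging `O -> B-PER`, the decoder returns `["B-PER", "O"]`; unlike independent per-word argmax, the transition scores let the CRF rule out invalid tag sequences such as `I-PER` without a preceding `B-PER`.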
Anthology ID:
W19-3712
Volume:
Proceedings of the 7th Workshop on Balto-Slavic Natural Language Processing
Month:
August
Year:
2019
Address:
Florence, Italy
Venues:
ACL | BSNLP | WS
SIG:
SIGSLAV
Publisher:
Association for Computational Linguistics
Pages:
89–93
URL:
https://aclanthology.org/W19-3712
DOI:
10.18653/v1/W19-3712
Cite (ACL):
Mikhail Arkhipov, Maria Trofimova, Yuri Kuratov, and Alexey Sorokin. 2019. Tuning Multilingual Transformers for Language-Specific Named Entity Recognition. In Proceedings of the 7th Workshop on Balto-Slavic Natural Language Processing, pages 89–93, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Tuning Multilingual Transformers for Language-Specific Named Entity Recognition (Arkhipov et al., 2019)
PDF:
https://aclanthology.org/W19-3712.pdf
Code:
 deepmipt/Slavic-BERT-NER