RuSentEval: Linguistic Source, Encoder Force!

Vladislav Mikhailov, Ekaterina Taktasheva, Elina Sigdel, Ekaterina Artemova


Abstract
The success of pre-trained transformer language models has sparked a great deal of interest in how these models work and what they learn about language. However, prior research in the field is mainly devoted to English, and little is known about other languages. To address this gap, we introduce RuSentEval, an enhanced set of 14 probing tasks for Russian, including ones that have not been explored yet. We apply a combination of complementary probing methods to explore the distribution of various linguistic properties in five multilingual transformers for two typologically contrasting languages: Russian and English. Our results provide intriguing findings that contradict the common understanding of how linguistic knowledge is represented, and demonstrate that some properties are learned in a similar manner despite the language differences.
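A probing task, as described in the abstract, trains a lightweight classifier on frozen encoder representations to test whether a linguistic property is recoverable from them. The sketch below is purely illustrative and is not the paper's code: the embeddings, labels, and probe setup are synthetic stand-ins, assuming the common "linear probe over fixed sentence embeddings" formulation.

```python
import numpy as np

# Hypothetical sketch of a probing classifier (illustrative, not the
# paper's code). A linear "probe" is trained on frozen sentence
# embeddings to predict a linguistic label; the embeddings here are
# synthetic stand-ins for encoder layer outputs.

rng = np.random.default_rng(0)
n, dim, n_classes = 300, 16, 3

# Synthetic labels and embeddings; feature 0 weakly encodes the label,
# mimicking a linguistic property stored in the representation.
labels = rng.integers(0, n_classes, size=n)
emb = rng.normal(size=(n, dim))
emb[:, 0] += labels

# Train a softmax linear probe with plain gradient descent on
# cross-entropy loss (gradient: X^T (p - y) / n).
W = np.zeros((dim, n_classes))
onehot = np.eye(n_classes)[labels]
for _ in range(500):
    logits = emb @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.1 * (emb.T @ (p - onehot)) / n

# Accuracy above chance (1/3) suggests the property is linearly
# decodable from the embeddings.
acc = ((emb @ W).argmax(axis=1) == labels).mean()
```

Probe accuracy well above the chance rate is then read as evidence that the encoder represents the property, which is the general logic behind the comparisons across layers, models, and languages in the paper.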
Anthology ID:
2021.bsnlp-1.6
Volume:
Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing
Month:
April
Year:
2021
Address:
Kyiv, Ukraine
Venues:
BSNLP | EACL
SIG:
SIGSLAV
Publisher:
Association for Computational Linguistics
Pages:
43–65
URL:
https://aclanthology.org/2021.bsnlp-1.6
Cite (ACL):
Vladislav Mikhailov, Ekaterina Taktasheva, Elina Sigdel, and Ekaterina Artemova. 2021. RuSentEval: Linguistic Source, Encoder Force!. In Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing, pages 43–65, Kyiv, Ukraine. Association for Computational Linguistics.
Cite (Informal):
RuSentEval: Linguistic Source, Encoder Force! (Mikhailov et al., BSNLP 2021)
PDF:
https://aclanthology.org/2021.bsnlp-1.6.pdf
Code
RussianNLP/rusenteval + additional community code
Data
SentEval | Universal Dependencies