2024
Proceedings of the 3rd Workshop on NLP Applications to Field Linguistics (Field Matters 2024)
Oleg Serikov | Ekaterina Voloshina | Anna Postnikova | Saliha Muradoglu | Eric Le Ferrand | Elena Klyachko | Ekaterina Vylomova | Tatiana Shavrina | Francis Tyers
Of Models and Men: Probing Neural Networks for Agreement Attraction with Psycholinguistic Data
Maxim Bazhukov | Ekaterina Voloshina | Sergey Pletenev | Arseny Anisimov | Oleg Serikov | Svetlana Toldova
Proceedings of the 28th Conference on Computational Natural Language Learning
Interpretability studies have played an important role in NLP. They address questions such as how models encode information and whether models’ linguistic capabilities allow them to prefer grammatical sentences over ungrammatical ones. Recently, several studies have examined whether models demonstrate patterns similar to humans’ and whether they are sensitive to interference phenomena in grammaticality judgements, including agreement attraction. In this paper, we probe BERT and GPT models on the syntactic phenomenon of agreement attraction in Russian, using psycholinguistic data with syncretism. Working on a language with syncretism between some plural and singular forms allows us to differentiate between the effects of the surface form and of the underlying grammatical feature. We can thus further investigate the models’ sensitivity to this phenomenon and examine whether their behaviour patterns resemble human ones. Moreover, we suggest a new way of comparing models’ and humans’ responses via statistical testing. We show that there are some similarities between models’ and humans’ results, with GPT somewhat more aligned with human responses than BERT. Finally, preliminary results suggest that surface-form syncretism influences attraction, perhaps more so than grammatical-form syncretism.
Critical Size Hypothesis: How Model Hyperparameters Correlate with Its Linguistic Abilities
Ekaterina Voloshina | Oleg Serikov
Proceedings of the 2024 CLASP Conference on Multimodality and Interaction in Language Learning
In recent years, models have been tested on various probing tasks to examine their linguistic knowledge. However, few researchers have explored the process of models’ language acquisition itself. Yet analysing language acquisition during training could shed light on which model parameters help to acquire language faster. In this work, we experiment with model hyperparameters and reveal that hidden size is the most essential factor for model language acquisition.
Super donors and super recipients: Studying cross-lingual transfer between high-resource and low-resource languages
Vitaly Protasov | Elisei Stakovskii | Ekaterina Voloshina | Tatiana Shavrina | Alexander Panchenko
Proceedings of the Seventh Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2024)
Despite the increasing popularity of multilingualism within the NLP community, numerous languages continue to be underrepresented due to the lack of available resources. Our work addresses this gap by introducing experiments on cross-lingual transfer between 158 high-resource (HR) and 31 low-resource (LR) languages. We mainly focus on extremely LR languages, some of which appear in research works for the first time. Across 158×31 HR–LR language pairs, we investigate how continued pretraining on different HR languages affects the mT5 model’s performance in representing LR languages in the LM setup. Our findings surprisingly reveal that the optimal language pairs with improved performance do not necessarily align with direct linguistic motivations, with subtoken overlap playing a more crucial role. Our investigation indicates that specific languages tend to be almost universally beneficial for pretraining (super donors), while others benefit from pretraining with almost any language (super recipients). This pattern recurs in various setups and is unrelated to the linguistic similarity of HR–LR pairs. Furthermore, we perform evaluation on two downstream tasks, part-of-speech (POS) tagging and machine translation (MT), showing how HR pretraining affects LR language performance.
Probing of pretrained multilingual models on the knowledge of discourse
Mary Godunova | Ekaterina Voloshina
Proceedings of the 5th Workshop on Computational Approaches to Discourse (CODI 2024)
With the rise of large language models (LLMs), different evaluation methods, including probing, are gaining more attention. Probing methods are meant to evaluate LLMs on their linguistic abilities. However, most studies focus on morphology and syntax, leaving discourse research out of scope. At the same time, understanding discourse and pragmatics is crucial to building up the conversational abilities of models. In this paper, we address the problem of probing several models for discourse knowledge in 10 languages. We present an algorithm to automatically adapt existing discourse tasks to other languages based on the Universal Dependencies (UD) annotation. We find that models perform similarly on high- and low-resourced languages. However, the models’ overall low performance shows that they do not acquire discourse well enough.
2023
Are Language-and-Vision Transformers Sensitive to Discourse? A Case Study of ViLBERT
Ekaterina Voloshina | Nikolai Ilinykh | Simon Dobnik
Proceedings of the Workshop on Multimodal, Multilingual Natural Language Generation and Multilingual WebNLG Challenge (MM-NLG 2023)
Language-and-vision models have shown good performance in tasks such as image–caption matching and caption generation. However, it is challenging for such models to generate pragmatically correct captions that adequately reflect what is happening in one image or across several images. It is crucial to evaluate this behaviour in order to understand the underlying reasons behind it. Here we explore to what extent contextual language-and-vision models are sensitive to different discourse, both textual and visual. In particular, we employ one of the multi-modal transformers (ViLBERT) and test whether it can match descriptions and images, differentiating them from distractors of different degrees of similarity sampled from different visual and textual contexts. We place our evaluation in a multi-sentence and multi-image setup, where images and sentences are expected to form a single narrative structure. We show that the model can distinguish different situations but is not sensitive to differences within one narrative structure. We also show that performance depends on the task itself, for example, which modality remains unchanged in non-matching pairs or how similar non-matching pairs are to the original pairs.
Proceedings of the Second Workshop on NLP Applications to Field Linguistics
Oleg Serikov | Ekaterina Voloshina | Anna Postnikova | Elena Klyachko | Ekaterina Vylomova | Tatiana Shavrina | Eric Le Ferrand | Valentin Malykh | Francis Tyers | Timofey Arkhangelskiy | Vladislav Mikhailov
2022
Universal and Independent: Multilingual Probing Framework for Exhaustive Model Interpretation and Evaluation
Oleg Serikov | Vitaly Protasov | Ekaterina Voloshina | Viktoria Knyazkova | Tatiana Shavrina
Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
Linguistic analysis of language models is one way to explain and describe their reasoning, weaknesses, and limitations. In the probing strand of model-interpretability research, studies concern individual languages as well as individual linguistic structures. The question arises: are the detected regularities linguistically coherent, or, on the contrary, do they clash at the typological scale? Moreover, the majority of studies address a fixed set of languages and linguistic structures, leaving actual typological diversity out of scope. In this paper, we present and apply a GUI-assisted framework that allows us to easily probe a massive number of languages for all the morphosyntactic features present in the Universal Dependencies data. We show that, reflecting the Anglo-centric trend in NLP over the past years, most of the regularities revealed in the mBERT model are typical of western European languages. Our framework can be integrated with existing probing toolboxes, model cards, and leaderboards, allowing practitioners to use and share their familiar probing methods to interpret multilingual models. We thus propose a toolkit to systematize the multilingual flaws in multilingual models, providing a reproducible experimental setup for 104 languages and 80 morphosyntactic features.
Proceedings of the first workshop on NLP applications to field linguistics
Oleg Serikov | Ekaterina Voloshina | Anna Postnikova | Elena Klyachko | Ekaterina Neminova | Ekaterina Vylomova | Tatiana Shavrina | Eric Le Ferrand | Valentin Malykh | Francis Tyers | Timofey Arkhangelskiy | Vladislav Mikhailov | Alena Fenogenova
Razmecheno: Named Entity Recognition from Digital Archive of Diaries “Prozhito”
Timofey Atnashev | Veronika Ganeeva | Roman Kazakov | Daria Matyash | Michael Sonkin | Ekaterina Voloshina | Oleg Serikov | Ekaterina Artemova
Proceedings of the Fifth International Conference on Computational Linguistics in Bulgaria (CLIB 2022)
The vast majority of existing datasets for Named Entity Recognition (NER) are built primarily on news, research papers, and Wikipedia, with a few exceptions created from historical and literary texts. What is more, English is the main source of data for further labelling. This paper aims to fill multiple gaps by creating a novel dataset, “Razmecheno”, gathered from the diary texts of the project “Prozhito” in Russian. Our dataset is of interest for multiple research lines: literary studies of diary texts, transfer learning from other domains, and low-resource or cross-lingual named entity recognition. Razmecheno comprises 1331 sentences and 14119 tokens, sampled from diaries written during the Perestroika. The annotation schema consists of five commonly used entity tags: person, characteristics, location, organisation, and facility. The labelling is carried out on the crowdsourcing platform Yandex.Toloka in two stages. First, workers selected sentences that contain an entity of a particular type. Second, they marked up entity spans. As a result, 1113 entities were obtained. Empirical evaluation of Razmecheno is carried out with off-the-shelf NER tools and by fine-tuning pre-trained contextualized encoders. We release the annotated dataset for open access.