Irina Proskurina


2023

Can BERT eat RuCoLA? Topological Data Analysis to Explain
Irina Proskurina | Ekaterina Artemova | Irina Piontkovskaya
Proceedings of the 9th Workshop on Slavic Natural Language Processing 2023 (SlavicNLP 2023)

This paper investigates how Transformer language models (LMs) fine-tuned for acceptability classification capture linguistic features. Our approach is based on best practices of topological data analysis (TDA) in NLP: we construct directed attention graphs from attention matrices, derive topological features from these graphs, and feed the features to linear classifiers. We introduce two novel features, chordality and the matching number, and show that TDA-based classifiers outperform fine-tuning baselines. We experiment with two datasets, CoLA and RuCoLA, in English and Russian, which are typologically different languages. On top of that, we propose several black-box introspection techniques aimed at detecting changes in the LMs' attention mode during fine-tuning, defining the LMs' prediction confidences, and associating individual heads with fine-grained grammar phenomena. Our results contribute to understanding the behaviour of monolingual LMs in the acceptability classification task, provide insights into the functional roles of attention heads, and highlight the advantages of TDA-based approaches for analyzing LMs. We release the code and the experimental results for further uptake.
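The pipeline described in the abstract can be sketched in a few lines: threshold an attention matrix into a directed graph, then read off graph-level features such as chordality and the matching number. The sketch below is not the authors' released code; the threshold value, function names, and the use of networkx are illustrative assumptions.

```python
# Minimal sketch (assumed implementation, not the paper's released code):
# turn one attention head's matrix into a directed graph and compute the
# two novel features named in the abstract (chordality, matching number).
import numpy as np
import networkx as nx

def attention_graph(attn: np.ndarray, threshold: float = 0.1) -> nx.DiGraph:
    """Directed graph with an edge i -> j whenever attention weight >= threshold."""
    g = nx.DiGraph()
    g.add_nodes_from(range(attn.shape[0]))
    rows, cols = np.nonzero(attn >= threshold)
    g.add_edges_from(zip(rows.tolist(), cols.tolist()))
    return g

def topological_features(attn: np.ndarray, threshold: float = 0.1) -> dict:
    """Graph-level features of the kind fed to a linear classifier."""
    g = attention_graph(attn, threshold)
    u = g.to_undirected()
    u.remove_edges_from(nx.selfloop_edges(u))  # chordality is undefined with self-loops
    matching = nx.max_weight_matching(u, maxcardinality=True)
    return {
        "chordal": int(nx.is_chordal(u)),   # feature 1: chordality
        "matching_number": len(matching),   # feature 2: matching number
        "n_edges": g.number_of_edges(),
    }

# Example with a random row-stochastic matrix standing in for a 10-token attention map.
attn = np.random.dirichlet(np.ones(10), size=10)
print(topological_features(attn))
```

In the paper's setup such features are computed per head and per layer and concatenated before being passed to a linear classifier; the threshold here is a single assumed value for brevity.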

Mini Minds: Exploring Bebeshka and Zlata Baby Models
Irina Proskurina | Guillaume Metzler | Julien Velcin
Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning

2022

Acceptability Judgements via Examining the Topology of Attention Maps
Daniil Cherniavskii | Eduard Tulchinskii | Vladislav Mikhailov | Irina Proskurina | Laida Kushnareva | Ekaterina Artemova | Serguei Barannikov | Irina Piontkovskaya | Dmitri Piontkovski | Evgeny Burnaev
Findings of the Association for Computational Linguistics: EMNLP 2022

The role of the attention mechanism in encoding linguistic knowledge has received special interest in NLP. However, the ability of the attention heads to judge the grammatical acceptability of a sentence has been underexplored. This paper approaches the paradigm of acceptability judgments with topological data analysis (TDA), showing that the geometric properties of the attention graph can be efficiently exploited for two standard practices in linguistics: binary judgments and linguistic minimal pairs. Topological features enhance the BERT-based acceptability classifier scores by 8%-24% on CoLA in three languages (English, Italian, and Swedish). By revealing the topological discrepancy between attention maps of minimal pairs, we achieve human-level performance on the BLiMP benchmark, outperforming nine statistical and Transformer LM baselines. At the same time, TDA provides the foundation for analyzing the linguistic functions of attention heads and interpreting the correspondence between the graph features and grammatical phenomena. We publicly release the code and other materials used in the experiments.
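For the minimal-pair comparison described above, the starting point is simply the per-head attention maps for both members of a pair. A hedged illustration (not the paper's released code) of extracting them with the transformers library is given below; the model checkpoint and sentence pair are assumptions chosen for the example.

```python
# Assumed sketch: obtain per-layer, per-head attention maps from BERT for a
# minimal pair, so that graph features (e.g. the ones sketched earlier) can be
# compared between the acceptable and unacceptable sentence.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased", output_attentions=True)

def head_attentions(sentence: str):
    """Return one tensor per layer, each of shape (heads, seq_len, seq_len)."""
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)
    return [a.squeeze(0) for a in out.attentions]

good = head_attentions("The cats sleep on the sofa.")
bad = head_attentions("The cats sleeps on the sofa.")
# Per-head graph features computed from `good` and `bad` can then be contrasted
# to score the minimal pair.
```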