Eduard Tulchinskii
2024
Robust AI-Generated Text Detection by Restricted Embeddings
Kristian Kuznetsov, Eduard Tulchinskii, Laida Kushnareva, German Magai, Serguei Barannikov, Sergey Nikolenko, Irina Piontkovskaya
Findings of the Association for Computational Linguistics: EMNLP 2024
The growing amount and quality of AI-generated texts make detecting such content increasingly difficult. In most real-world scenarios, the domain (style and topic) of the generated data and the generator model are not known in advance. In this work, we focus on the robustness of classifier-based detectors of AI-generated text, namely their ability to transfer to unseen generators or semantic domains. We investigate the geometry of the embedding space of Transformer-based text encoders and show that clearing out harmful linear subspaces helps to train a robust classifier that ignores domain-specific spurious features. We investigate several subspace decomposition and feature selection strategies and achieve significant improvements over state-of-the-art methods in cross-domain and cross-generator transfer. Our best approaches for head-wise and coordinate-based subspace removal increase the mean out-of-distribution (OOD) classification score by up to 9% and 14% in particular setups for RoBERTa and BERT embeddings, respectively. We release our code and data: https://github.com/SilverSolver/RobustATD
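The core operation behind "clearing out harmful linear subspaces" can be sketched as an orthogonal projection: given a basis for the spurious directions, each embedding is replaced by its component orthogonal to that subspace. This is a minimal illustration of the general idea, not the authors' released implementation; the function name and the assumption that a spurious basis is already known are hypothetical.

```python
import numpy as np

def remove_subspace(embeddings, basis):
    """Project embeddings onto the orthogonal complement of span(basis).

    embeddings: (n, d) array of text-encoder embeddings.
    basis: (k, d) array whose rows span the subspace to remove
           (e.g., directions correlated with domain, not authorship).
    """
    # Orthonormalize the subspace basis via QR decomposition.
    q, _ = np.linalg.qr(basis.T)  # q: (d, k) with orthonormal columns
    # Subtract the component of each embedding lying in the subspace.
    return embeddings - embeddings @ q @ q.T

rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 8))       # 5 embeddings of dimension 8
spurious = rng.normal(size=(2, 8))  # 2 spurious directions
cleaned = remove_subspace(emb, spurious)
# Cleaned embeddings are orthogonal to every removed direction.
print(np.allclose(cleaned @ spurious.T, 0))
```

A detector trained on such projected embeddings cannot rely on the removed directions, which is the mechanism by which domain-specific shortcuts are suppressed.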
2022
Acceptability Judgements via Examining the Topology of Attention Maps
Daniil Cherniavskii, Eduard Tulchinskii, Vladislav Mikhailov, Irina Proskurina, Laida Kushnareva, Ekaterina Artemova, Serguei Barannikov, Irina Piontkovskaya, Dmitri Piontkovski, Evgeny Burnaev
Findings of the Association for Computational Linguistics: EMNLP 2022
The role of the attention mechanism in encoding linguistic knowledge has received special interest in NLP. However, the ability of attention heads to judge the grammatical acceptability of a sentence has been underexplored. This paper approaches the paradigm of acceptability judgments with topological data analysis (TDA), showing that the geometric properties of the attention graph can be efficiently exploited for two standard practices in linguistics: binary judgments and linguistic minimal pairs. Topological features enhance BERT-based acceptability classifier scores by 8%-24% on CoLA in three languages (English, Italian, and Swedish). By revealing the topological discrepancy between attention maps of minimal pairs, we achieve human-level performance on the BLiMP benchmark, outperforming nine statistical and Transformer LM baselines. At the same time, TDA provides a foundation for analyzing the linguistic functions of attention heads and interpreting the correspondence between graph features and grammatical phenomena. We publicly release the code and other materials used in the experiments.
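One of the simplest topological features of an attention graph is the number of connected components after thresholding the attention weights. The sketch below illustrates that idea only; the full TDA pipeline in the paper is richer (e.g., persistent features across filtrations), and the function name and threshold values here are illustrative assumptions.

```python
import numpy as np

def attention_graph_features(attn, thresholds=(0.01, 0.05, 0.1)):
    """Count connected components of thresholded attention graphs.

    attn: (n, n) attention matrix for one head over n tokens.
    For each threshold, tokens i and j are connected when their
    symmetrized attention weight exceeds it; the per-threshold
    component counts form a small topological feature vector.
    """
    n = attn.shape[0]
    sym = np.maximum(attn, attn.T)  # make the graph undirected
    feats = []
    for t in thresholds:
        adj = sym > t
        seen, comps = np.zeros(n, dtype=bool), 0
        for s in range(n):          # depth-first search per component
            if seen[s]:
                continue
            comps += 1
            stack, seen[s] = [s], True
            while stack:
                u = stack.pop()
                for v in np.flatnonzero(adj[u]):
                    if not seen[v]:
                        seen[v] = True
                        stack.append(v)
        feats.append(comps)
    return feats

# Toy 3-token attention map: tokens 0 and 1 attend to each other weakly.
attn = np.array([[0.9, 0.1, 0.0],
                 [0.1, 0.9, 0.0],
                 [0.0, 0.0, 1.0]])
print(attention_graph_features(attn, thresholds=(0.05, 0.5)))
```

As the threshold rises, weak edges drop out and the graph fragments; how quickly this happens for a given sentence is the kind of signal such features capture.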