Artem Vazhentsev


2023

Uncertainty Estimation for Debiased Models: Does Fairness Hurt Reliability?
Gleb Kuzmin | Artem Vazhentsev | Artem Shelmanov | Xudong Han | Simon Suster | Maxim Panov | Alexander Panchenko | Timothy Baldwin
Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)

Efficient Out-of-Domain Detection for Sequence to Sequence Models
Artem Vazhentsev | Akim Tsvigun | Roman Vashurin | Sergey Petrakov | Daniil Vasilev | Maxim Panov | Alexander Panchenko | Artem Shelmanov
Findings of the Association for Computational Linguistics: ACL 2023

Sequence-to-sequence (seq2seq) models based on the Transformer architecture have become a ubiquitous tool applicable not only to classical text generation tasks such as machine translation and summarization but also to any other task where an answer can be represented in the form of a finite text fragment (e.g., question answering). However, when deploying a model in practice, we need not only high performance but also the ability to determine cases where the model is not applicable. Uncertainty estimation (UE) techniques provide a tool for identifying out-of-domain (OOD) inputs where the model is susceptible to errors. State-of-the-art UE methods for seq2seq models rely on computationally heavyweight and impractical deep ensembles. In this work, we perform an empirical investigation of various novel UE methods for the large pre-trained seq2seq models T5 and BART on three tasks: machine translation, text summarization, and question answering. We apply computationally lightweight density-based UE methods to seq2seq models and show that they often outperform heavyweight deep ensembles on the task of OOD detection.
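
To make the density-based approach concrete, here is a minimal sketch of one such method (a Mahalanobis-distance score over pooled encoder embeddings), assuming a single Gaussian fit to in-domain data; the function names and synthetic data below are illustrative, not the paper's exact implementation.

```python
# Sketch: density-based OOD scoring via Mahalanobis distance (illustrative).
import numpy as np

def fit_gaussian(train_embeddings: np.ndarray):
    """Estimate mean and (regularized) inverse covariance of in-domain embeddings."""
    mu = train_embeddings.mean(axis=0)
    cov = np.cov(train_embeddings, rowvar=False)
    cov += 1e-6 * np.eye(cov.shape[0])        # small ridge for numerical stability
    return mu, np.linalg.inv(cov)

def mahalanobis_score(embedding: np.ndarray, mu, cov_inv) -> float:
    """Higher score = farther from the training density = more likely OOD."""
    diff = embedding - mu
    return float(diff @ cov_inv @ diff)

# Embeddings could come from, e.g., mean-pooled T5/BART encoder states;
# here random vectors stand in for them.
rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 16))
mu, cov_inv = fit_gaussian(train)
in_domain  = mahalanobis_score(rng.normal(size=16), mu, cov_inv)
out_domain = mahalanobis_score(rng.normal(loc=5.0, size=16), mu, cov_inv)
assert out_domain > in_domain
```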

LM-Polygraph: Uncertainty Estimation for Language Models
Ekaterina Fadeeva | Roman Vashurin | Akim Tsvigun | Artem Vazhentsev | Sergey Petrakov | Kirill Fedyanin | Daniil Vasilev | Elizaveta Goncharova | Alexander Panchenko | Maxim Panov | Timothy Baldwin | Artem Shelmanov
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations

Recent advancements in the capabilities of large language models (LLMs) have paved the way for a myriad of groundbreaking applications in various fields. However, a significant challenge arises as these models often “hallucinate”, i.e., fabricate facts without providing users an apparent means to discern the veracity of their statements. Uncertainty estimation (UE) methods are one path to safer, more responsible, and more effective use of LLMs. However, to date, research on UE methods for LLMs has focused primarily on theoretical rather than engineering contributions. In this work, we tackle this issue by introducing LM-Polygraph, a framework implementing a battery of state-of-the-art UE methods for LLMs in text generation tasks, with unified program interfaces in Python. Additionally, it introduces an extendable benchmark for consistent evaluation of UE techniques by researchers, and a demo web application that enriches the standard chat dialog with confidence scores, empowering end-users to discern unreliable responses. LM-Polygraph is compatible with the most recent LLMs, including BLOOMz, LLaMA-2, ChatGPT, and GPT-4, and is designed to support future releases of similarly styled LMs.
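
As a point of reference for the kind of white-box scores such a framework can expose, here is a tiny, framework-agnostic sketch (not LM-Polygraph's actual API) of length-normalized sequence probability as a confidence score; the function name is hypothetical.

```python
# Sketch: a simple white-box confidence baseline for generated text.
import math

def sequence_confidence(token_logprobs: list[float]) -> float:
    """Length-normalized sequence probability: exp of the mean token log-prob.
    Low values suggest the response may be unreliable."""
    if not token_logprobs:
        return 0.0
    return math.exp(sum(token_logprobs) / len(token_logprobs))

# Token log-probs would come from the LLM's output distribution.
print(sequence_confidence([-0.1, -0.2, -0.05]))   # ~0.89: confident
print(sequence_confidence([-2.3, -1.9, -2.8]))    # ~0.10: likely unreliable
```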

Hybrid Uncertainty Quantification for Selective Text Classification in Ambiguous Tasks
Artem Vazhentsev | Gleb Kuzmin | Akim Tsvigun | Alexander Panchenko | Maxim Panov | Mikhail Burtsev | Artem Shelmanov
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Many text classification tasks are inherently ambiguous, which results in automatic systems having a high risk of making mistakes, in spite of using advanced machine learning models. For example, toxicity detection in user-generated content is a subjective task, and notions of toxicity can be annotated according to a variety of definitions that may conflict with one another. Instead of relying solely on automatic solutions, moderation of the most difficult and ambiguous cases can be delegated to human workers. Potential mistakes in automated classification can be identified with uncertainty estimation (UE) techniques. Although UE is a rapidly growing field within natural language processing, we find that state-of-the-art UE methods estimate only epistemic uncertainty and show poor performance, or even under-perform trivial methods, on ambiguous tasks such as toxicity detection. We argue that in order to create robust uncertainty estimation methods for ambiguous tasks, it is necessary to also account for aleatoric uncertainty. In this paper, we propose a new uncertainty estimation method that combines epistemic and aleatoric UE methods. We show that by using our hybrid method, we can outperform state-of-the-art UE methods for toxicity detection and other ambiguous text classification tasks.
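
A schematic illustration of combining the two uncertainty types, assuming predictive entropy as the aleatoric signal and a density score (e.g., a Mahalanobis distance to the training data) as the epistemic one; the normalization and weighting below are illustrative choices, not the paper's exact formulation.

```python
# Sketch: hybrid uncertainty = aleatoric (entropy) + epistemic (density) signals.
import numpy as np

def predictive_entropy(probs: np.ndarray) -> float:
    """Aleatoric signal: entropy of the softmax distribution (data ambiguity)."""
    p = np.clip(probs, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def hybrid_uncertainty(probs: np.ndarray, density_score: float,
                       alpha: float = 0.5) -> float:
    """Convex combination of normalized entropy and a squashed density score."""
    aleatoric = predictive_entropy(probs) / np.log(len(probs))  # scale to [0, 1]
    epistemic = density_score / (1.0 + density_score)           # scale to [0, 1)
    return alpha * aleatoric + (1 - alpha) * epistemic

# An ambiguous toxic/non-toxic prediction with near-uniform class probabilities
# scores high even when the input is close to the training distribution.
print(hybrid_uncertainty(np.array([0.55, 0.45]), density_score=0.2))  # ~0.58
```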

2022

Uncertainty Estimation of Transformer Predictions for Misclassification Detection
Artem Vazhentsev | Gleb Kuzmin | Artem Shelmanov | Akim Tsvigun | Evgenii Tsymbalov | Kirill Fedyanin | Maxim Panov | Alexander Panchenko | Gleb Gusev | Mikhail Burtsev | Manvel Avetisian | Leonid Zhukov
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Uncertainty estimation (UE) of model predictions is a crucial step for a variety of tasks such as active learning, misclassification detection, adversarial attack detection, and out-of-distribution detection. Most work on modeling the uncertainty of deep neural networks evaluates these methods on image classification tasks; little attention has been paid to UE in natural language processing. To fill this gap, we perform an extensive empirical investigation of state-of-the-art UE methods for Transformer models on misclassification detection in named entity recognition and text classification tasks, and propose two computationally efficient modifications, one of which approaches or even outperforms computationally intensive methods.
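
For intuition, here is a minimal sketch of misclassification detection via selective prediction, using the maximum softmax probability baseline (a standard trivial reference such methods are compared against); names and data are illustrative.

```python
# Sketch: flag the least confident predictions for human review.
import numpy as np

def msp_uncertainty(probs: np.ndarray) -> np.ndarray:
    """Maximum softmax probability baseline: 1 - max class probability."""
    return 1.0 - probs.max(axis=1)

def reject_most_uncertain(probs: np.ndarray, reject_frac: float = 0.1):
    """Return indices of the most uncertain fraction of predictions; a good
    UE method concentrates misclassifications in this rejected set."""
    scores = msp_uncertainty(probs)
    k = int(len(scores) * reject_frac)
    return np.argsort(scores)[-k:]

probs = np.array([[0.9, 0.1], [0.55, 0.45], [0.99, 0.01], [0.6, 0.4]])
print(reject_most_uncertain(probs, reject_frac=0.5))  # [3 1]: the two least confident
```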

ALToolbox: A Set of Tools for Active Learning Annotation of Natural Language Texts
Akim Tsvigun | Leonid Sanochkin | Daniil Larionov | Gleb Kuzmin | Artem Vazhentsev | Ivan Lazichny | Nikita Khromov | Danil Kireev | Aleksandr Rubashevskii | Olga Shahmatova | Dmitry V. Dylov | Igor Galitskiy | Artem Shelmanov
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations

We present ALToolbox – an open-source framework for active learning (AL) annotation in natural language processing. Currently, the framework supports text classification, sequence tagging, and seq2seq tasks. Besides state-of-the-art query strategies, ALToolbox provides a set of tools that help reduce the computational overhead and duration of AL iterations and increase the reusability of annotated data. The framework aims to support data scientists and researchers by providing an easy-to-deploy GUI annotation tool directly in the Jupyter IDE and an extensible benchmark for novel AL methods. A small demonstration of ALToolbox's capabilities is available online. The code of the framework is published under the MIT license.
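
For readers unfamiliar with the AL setting, here is a generic pool-based AL iteration with a least-confidence query strategy; this is a schematic sketch (a scikit-learn classifier stands in for a Transformer model), not ALToolbox's actual interface.

```python
# Sketch: one pool-based active learning iteration (illustrative, generic).
import numpy as np
from sklearn.linear_model import LogisticRegression

def least_confidence(probs: np.ndarray) -> np.ndarray:
    """Query strategy: uncertainty = 1 - max predicted class probability."""
    return 1.0 - probs.max(axis=1)

def al_iteration(model, X_labeled, y_labeled, X_pool, batch_size=16):
    """Retrain on labeled data, score the unlabeled pool, and pick the
    most uncertain batch to send to annotators."""
    model.fit(X_labeled, y_labeled)
    probs = model.predict_proba(X_pool)
    return np.argsort(least_confidence(probs))[-batch_size:]

rng = np.random.default_rng(0)
X_lab, y_lab = rng.normal(size=(20, 5)), rng.integers(0, 2, size=20)
X_pool = rng.normal(size=(100, 5))
query = al_iteration(LogisticRegression(), X_lab, y_lab, X_pool, batch_size=8)
print(query)   # pool indices with the least confident predictions
```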