2024
LLMs to Replace Crowdsourcing For Parallel Data Creation? The Case of Text Detoxification
Daniil Moskovskiy | Sergey Pletenev | Alexander Panchenko
Findings of the Association for Computational Linguistics: EMNLP 2024
The lack of high-quality training data remains a significant challenge in NLP. Manual annotation methods, such as crowdsourcing, are costly, require intricate task design skills, and, if used incorrectly, may result in poor data quality. On the other hand, LLMs have demonstrated proficiency in many NLP tasks, including zero-shot and few-shot data annotation. However, they often struggle with text detoxification due to alignment constraints and fail to generate the required detoxified text. This work explores the potential of modern open-source LLMs to annotate parallel data for text detoxification. Using the recent technique of activation patching, we generate a pseudo-parallel detoxification dataset based on ParaDetox. The detoxification model trained on our generated data shows performance comparable to a model trained on the original dataset in automatic detoxification evaluation metrics, and superior quality in manual evaluation and side-by-side comparisons.
Of Models and Men: Probing Neural Networks for Agreement Attraction with Psycholinguistic Data
Maxim Bazhukov | Ekaterina Voloshina | Sergey Pletenev | Arseny Anisimov | Oleg Serikov | Svetlana Toldova
Proceedings of the 28th Conference on Computational Natural Language Learning
Interpretability studies have played an important role in the field of NLP. They focus on how models encode information or, for instance, whether their linguistic capabilities allow them to prefer grammatical sentences to ungrammatical ones. Recently, several studies have examined whether models demonstrate patterns similar to humans and whether they are sensitive to interference phenomena, such as agreement attraction, in the way human grammaticality judgements are. In this paper, we probe BERT and GPT models on the syntactic phenomenon of agreement attraction in Russian using psycholinguistic data with syncretism. Working on a language with syncretism between some plural and singular forms allows us to differentiate between the effects of the surface form and of the underlying grammatical feature. Thus we can further investigate the models’ sensitivity to this phenomenon and examine whether the patterns of their behaviour are similar to human patterns. Moreover, we suggest a new way of comparing models’ and humans’ responses via statistical testing. We show that there are some similarities between models’ and humans’ results, with GPT somewhat more aligned with human responses than BERT. Finally, preliminary results suggest that surface-form syncretism influences attraction, perhaps more so than grammatical-form syncretism.
2023
A Computational Study of Matrix Decomposition Methods for Compression of Pre-trained Transformers
Sergey Pletenev | Viktoriia Chekalina | Daniil Moskovskiy | Mikhail Seleznev | Sergey Zagoruyko | Alexander Panchenko
Proceedings of the 37th Pacific Asia Conference on Language, Information and Computation
2021
LIORI at the FinCausal 2021 Shared task: Transformer ensembles are not enough to win
Adis Davletov | Sergey Pletenev | Denis Gordeev
Proceedings of the 3rd Financial Narrative Processing Workshop
2020
Language Models for Cloze Task Answer Generation in Russian
Anastasia Nikiforova | Sergey Pletenev | Daria Sinitsyna | Semen Sorokin | Anastasia Lopukhina | Nick Howell
Proceedings of the Second Workshop on Linguistic and Neurocognitive Resources
Linguistic predictability is the degree of confidence with which a language unit (word, part of speech, etc.) can be expected to be the next in the sequence. Experiments have shown that a correct prediction simplifies the perception of a language unit and its integration into the context. As a result of an incorrect prediction, language processing slows down. Currently, to obtain a measure of a language unit’s predictability, a neurolinguistic experiment known as a cloze task has to be conducted on a large number of participants. Cloze tasks are resource-consuming and are criticized by some researchers as an insufficiently valid measure of predictability. In this paper, we compare different language models that attempt to simulate human respondents’ performance on the cloze task. Using a language model to create cloze task simulations would require significantly less time and would allow researchers to conduct studies related to linguistic predictability.