Sergey Pletenev


2023

A Computational Study of Matrix Decomposition Methods for Compression of Pre-trained Transformers
Sergey Pletenev | Viktoriia Chekalina | Daniil Moskovskiy | Mikhail Seleznev | Sergey Zagoruyko | Alexander Panchenko
Proceedings of the 37th Pacific Asia Conference on Language, Information and Computation

2021

LIORI at the FinCausal 2021 Shared task: Transformer ensembles are not enough to win
Adis Davletov | Sergey Pletenev | Denis Gordeev
Proceedings of the 3rd Financial Narrative Processing Workshop

2020

Language Models for Cloze Task Answer Generation in Russian
Anastasia Nikiforova | Sergey Pletenev | Daria Sinitsyna | Semen Sorokin | Anastasia Lopukhina | Nick Howell
Proceedings of the Second Workshop on Linguistic and Neurocognitive Resources

Linguistic predictability is the degree of confidence about which language unit (word, part of speech, etc.) will come next in a sequence. Experiments have shown that a correct prediction simplifies the perception of a language unit and its integration into the context, while an incorrect prediction slows language processing down. Currently, obtaining a measure of a language unit's predictability requires conducting a neurolinguistic experiment known as a cloze task with a large number of participants. Cloze tasks are resource-consuming and are criticized by some researchers as an insufficiently valid measure of predictability. In this paper, we compare different language models that attempt to simulate human respondents' performance on the cloze task. Using a language model to create cloze task simulations would make it significantly less time-consuming to conduct studies related to linguistic predictability.
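
As a rough illustration of the approach the abstract describes, the sketch below uses a masked language model to produce cloze-style completions together with their probabilities, which could then be compared against human respondents' answers. The model choice (multilingual BERT via the Hugging Face transformers library) and the example sentence are assumptions for illustration, not the models or stimuli evaluated in the paper.

```python
# Minimal sketch: simulating a cloze task with a masked language model.
# NOTE: the model and sentence below are illustrative assumptions, not the
# paper's actual experimental setup.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-multilingual-cased")

# A cloze item: the model "answers" by filling in the masked position.
sentence = "The cat sat on the [MASK]."

for candidate in fill_mask(sentence, top_k=5):
    # token_str is the predicted word; score is the model's probability for
    # it, playing the role of a cloze probability estimate.
    print(f"{candidate['token_str']}: {candidate['score']:.3f}")
```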