Przemyslaw Kazienko
2026
Breaking the Illusion of Reasoning in Polish LLMs: Quality over Quantity of Thought
Dzmitry Pihulski | Mikołaj Langner | Jan Eliasz | Przemyslaw Kazienko | Jan Kocon | Teddy Ferdinan
Findings of the Association for Computational Linguistics: EACL 2026
Recent advances in large language models (LLMs) have introduced explicit reasoning capabilities, yet the factors that truly drive their improved performance remain unclear. In this work, we disentangle the effects of reasoning quality and sequence length by fine-tuning 8B models on several Polish variants of the Mixture-of-Thoughts (MoT-PL) dataset, each representing a distinct reasoning style: *Detailed*, *Summarized*, *BabyThink*, *Lengthy*. We found that the model trained on high-quality reasoning traces achieved better average performance than all other models; neither *longer reasoning with similar quality* nor *low-quality reasoning with similar length* achieved similar gains. Qualitative and quantitative analyses further reveal that reasoning clarity, rather than verbosity, is the dominant factor driving model performance. These findings underscore the importance of reasoning content quality in LLM training and provide new insights into designing more effective reasoning-oriented datasets and models.
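The abstract does not detail how the MoT-PL variants were built; below is a minimal sketch, assuming an LLM-assisted rewriting step, of how style-controlled variants of a reasoning dataset could be constructed. The style prompts and the `generate` helper are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch of constructing style-controlled reasoning variants.
# The four variant names come from the abstract; the prompts and the
# `generate` callable are illustrative assumptions, not the authors' code.

STYLE_PROMPTS = {
    "Detailed":   "Rewrite the reasoning trace, keeping every step explicit.",
    "Summarized": "Compress the reasoning trace to its essential steps.",
    "BabyThink":  "Rewrite the reasoning trace in short, simple sentences.",
    "Lengthy":    "Expand the reasoning trace with redundant restatements.",
}

def build_variant(examples, style, generate):
    """Produce one dataset variant by rewriting each reasoning trace.

    `generate(prompt: str) -> str` is any text-generation callable
    (e.g., a wrapper around a hosted LLM); it is assumed here.
    Each example is a dict with at least a "reasoning" field.
    """
    instruction = STYLE_PROMPTS[style]
    return [{**ex, "reasoning": generate(f"{instruction}\n\n{ex['reasoning']}")}
            for ex in examples]
```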
From Detection to Explanation: Modeling Fine-Grained Emotional Social Influence Techniques with LLMs and Human Preferences
Maciej Markiewicz | Wiktoria Mieleszczenko-Kowszewicz | Beata Bajcar | Tomasz Adamczyk | Aleksander Szczęsny | Jolanta Babiak | Przemyslaw Kazienko
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 4: Student Research Workshop)
This paper investigates the capabilities of LLMs to detect and explain fine-grained emotional social influence techniques in textual dialogues, as well as human preferences for technique explanations. We present findings from two studies. In Study 1, a dataset of 238 Polish dialogues is introduced, each annotated with detailed span-level labels. On this data, we evaluate the performance of LLMs on two tasks: detecting 11 emotional social influence techniques and identifying text spans corresponding to specific techniques. The results indicate that current LLMs demonstrate limited effectiveness in accurately detecting fine-grained emotional social influence. In Study 2, we examine various LLM-generated explanations through human pairwise preferences and four criteria: comprehensibility, cognitive coherence, completeness, and soundness, with the latter two emerging as the most influential on overall human preference. All data, including human annotations, are publicly available as the EmoSocInflu dataset (https://github.com/social-influence/emo-soc-influ). Our findings highlight a critical need for further advancement in the field. As LLM-supported manipulation grows, it is essential to promote public understanding of social influence mechanisms, enabling individuals to critically recognize and interpret the subtle forms of manipulation that shape public opinion.
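The abstract describes ranking explanations via human pairwise preferences but does not specify the aggregation. One standard way to turn pairwise wins into per-explanation scores is a Bradley-Terry model; the sketch below, a plain MM-update fit, is an illustration rather than the paper's method.

```python
import numpy as np

def bradley_terry(n_items, pairs, iters=200):
    """Fit Bradley-Terry strengths from pairwise wins.

    `pairs` is a list of (winner, loser) index tuples from human
    comparisons of explanations. This is one standard aggregation of
    pairwise preferences; the paper's exact procedure is not given
    in the abstract.
    """
    wins = np.zeros((n_items, n_items))
    for w, l in pairs:
        wins[w, l] += 1.0
    p = np.ones(n_items)
    for _ in range(iters):  # classic MM update (Hunter, 2004)
        for i in range(n_items):
            num = wins[i].sum()  # total wins of item i
            den = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
                      for j in range(n_items) if j != i)
            if den > 0:
                p[i] = num / den
        p /= p.sum()  # fix the scale; only ratios are identifiable
    return p
```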
2024
Self-training Large Language Models through Knowledge Detection
Yeo Wei Jie | Teddy Ferdinan | Przemyslaw Kazienko | Ranjan Satapathy | Erik Cambria
Findings of the Association for Computational Linguistics: EMNLP 2024
Large language models (LLMs) often necessitate extensive labeled datasets and training compute to achieve impressive performance across downstream tasks. This paper explores a self-training paradigm, where the LLM autonomously curates its own labels and selectively trains on unknown data samples identified through a reference-free consistency method. Empirical evaluations demonstrate significant improvements in reducing hallucination in generation across multiple subjects. Furthermore, the selective training framework mitigates catastrophic forgetting in out-of-distribution benchmarks, addressing a critical limitation in training LLMs. Our findings suggest that such an approach can substantially reduce the dependency on large labeled datasets, paving the way for more scalable and cost-effective language model training.
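The abstract names a reference-free consistency method for identifying unknown samples without specifying it; a minimal sketch, assuming majority-vote agreement across stochastic samples as the consistency proxy, might look like this.

```python
from collections import Counter

def consistency(answers):
    """Fraction of sampled answers agreeing with the modal answer.

    A simple reference-free proxy: if the model's k samples mostly
    agree, the question is treated as 'known'. The paper's exact
    consistency measure may differ; this is an illustrative stand-in.
    """
    counts = Counter(a.strip().lower() for a in answers)
    return counts.most_common(1)[0][1] / len(answers)

def select_unknown(prompts, sample, k=8, threshold=0.5):
    """Keep prompts the model answers inconsistently (the 'unknown' set).

    `sample(prompt) -> str` is any stochastic generation callable
    (temperature > 0); it is assumed here, not a specific API. The
    selected prompts would then be labeled and used for self-training.
    """
    return [p for p in prompts
            if consistency([sample(p) for _ in range(k)]) < threshold]
```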
2023
RWKV: Reinventing RNNs for the Transformer Era
Bo Peng | Eric Alcaide | Quentin Anthony | Alon Albalak | Samuel Arcadinho | Stella Biderman | Huanqi Cao | Xin Cheng | Michael Chung | Leon Derczynski | Xingjian Du | Matteo Grella | Kranthi Gv | Xuzheng He | Haowen Hou | Przemyslaw Kazienko | Jan Kocon | Jiaming Kong | Bartłomiej Koptyra | Hayden Lau | Jiaju Lin | Krishna Sri Ipsit Mantri | Ferdinand Mom | Atsushi Saito | Guangyu Song | Xiangru Tang | Johan Wind | Stanisław Woźniak | Zhenyuan Zhang | Qinghua Zhou | Jian Zhu | Rui-Jie Zhu
Findings of the Association for Computational Linguistics: EMNLP 2023
Transformers have revolutionized almost all natural language processing (NLP) tasks but suffer from memory and computational complexity that scales quadratically with sequence length. In contrast, recurrent neural networks (RNNs) exhibit linear scaling in memory and computational requirements but struggle to match the performance of Transformers due to limitations in parallelization and scalability. We propose a novel model architecture, Receptance Weighted Key Value (RWKV), that combines the efficient parallelizable training of Transformers with the efficient inference of RNNs. Our approach leverages a linear attention mechanism and allows us to formulate the model as either a Transformer or an RNN, thus parallelizing computations during training and maintaining constant computational and memory complexity during inference. We scale our models up to 14 billion parameters, by far the largest dense RNN ever trained, and find that RWKV performs on par with similarly sized Transformers, suggesting that future work can leverage this architecture to create more efficient models. This work presents a significant step towards reconciling the trade-offs between computational efficiency and model performance in sequence processing tasks.
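The WKV operator at the core of RWKV can be computed recurrently with constant per-step state, which is what enables RNN-style inference. Below is a minimal NumPy sketch of that recurrent form; it follows the paper's formulation but omits the log-space numerical stabilization used in practice.

```python
import numpy as np

def wkv_recurrent(k, v, w, u):
    """Recurrent form of the RWKV WKV operator, channel-wise.

    k, v: (T, C) key/value sequences; w: (C,) decay (>= 0);
    u: (C,) bonus applied to the current token. The state is just
    two (C,) vectors, so inference cost per step is constant in T.
    Numerical stabilization (working in log space) is omitted here.
    """
    T, C = k.shape
    a = np.zeros(C)          # running exp-weighted sum of values
    b = np.zeros(C)          # running sum of exp weights (normalizer)
    out = np.empty((T, C))
    for t in range(T):
        cur = np.exp(u + k[t])                 # current token gets the u bonus
        out[t] = (a + cur * v[t]) / (b + cur)  # weighted average of values
        ek = np.exp(k[t])
        a = np.exp(-w) * a + ek * v[t]         # decay past, add current
        b = np.exp(-w) * b + ek
    return out
```

At t = 0 the state is empty and the output reduces to v[0], as the formula requires; each later step blends the decayed history with the bonus-weighted current value.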
PALS: Personalized Active Learning for Subjective Tasks in NLP
Kamil Kanclerz | Konrad Karanowski | Julita Bielaniewicz | Marcin Gruza | Piotr Miłkowski | Jan Kocon | Przemyslaw Kazienko
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
For subjective NLP problems, such as classification of hate speech, aggression, or emotions, personalized solutions can be exploited. The learned models then infer the perception of the content independently for each reader. To acquire training data, texts are commonly randomly assigned to users for annotation, which is expensive and highly inefficient. Therefore, for the first time, we suggest applying an active learning paradigm in a personalized context to better learn individual preferences. It aims to alleviate the labeling effort by selecting more relevant training samples. In this paper, we present novel Personalized Active Learning techniques for Subjective NLP tasks (PALS) to either reduce the cost of the annotation process or boost the learning effect. Our five new measures allow us to determine the relevance of a text in the context of learning users' personal preferences. We validated them on three datasets: Wiki discussion texts individually labeled with aggression and with toxicity, and the Unhealthy Conversations dataset. Our PALS techniques outperform random selection by more than 30%. They can also be used to reduce the number of necessary annotations while maintaining a given quality level. Personalized annotation assignments based on our controversy measure decrease the amount of data needed to just 25%-40% of the initial size.
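The five PALS measures are not defined in the abstract; as one plausible instantiation of the controversy measure, the sketch below scores a text by the entropy of its existing annotations and ranks texts for further labeling. The exact formula is an assumption.

```python
import math
from collections import Counter

def controversy(labels):
    """Shannon entropy of the label distribution for one text.

    One plausible controversy measure: texts whose existing annotations
    disagree most are assumed most informative for learning individual
    preferences. The paper defines five measures; this exact formula
    is an illustrative assumption.
    """
    counts = Counter(labels)
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def rank_for_annotation(texts_to_labels, budget):
    """Select the `budget` most controversial texts for new annotations."""
    ranked = sorted(texts_to_labels,
                    key=lambda t: controversy(texts_to_labels[t]),
                    reverse=True)
    return ranked[:budget]
```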
2022
What If Ground Truth Is Subjective? Personalized Deep Neural Hate Speech Detection
Kamil Kanclerz | Marcin Gruza | Konrad Karanowski | Julita Bielaniewicz | Piotr Milkowski | Jan Kocon | Przemyslaw Kazienko
Proceedings of the 1st Workshop on Perspectivist Approaches to NLP @LREC2022
A unified gold standard commonly exploited in natural language processing (NLP) tasks requires high inter-annotator agreement. However, there are many subjective problems that should respect users' individual points of view. Therefore, in this paper, we evaluate three different personalized methods on the task of hate speech detection. The user-centered techniques are compared to the generalizing baseline approach. We conduct our experiments on three datasets covering single-task and multi-task hate speech detection. For validation purposes, we introduce a new data-split strategy that prevents data leakage between training and testing. To better understand model behavior for individual users, we carried out personalized ablation studies. Our experiments revealed that all models leveraging user preferences consistently provide significantly better results than the most frequently used generalized approaches. This supports our overall observation that personalized models should always be considered for subjective NLP tasks, including hate speech detection.
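The abstract mentions a data-split strategy preventing leakage without detailing it. A natural candidate, sketched below as an assumption, is splitting by text rather than by annotation, so that a text labeled by many users never appears in both folds.

```python
import random

def split_by_text(annotations, test_frac=0.2, seed=0):
    """Split (user, text, label) records so no text crosses folds.

    Splitting by annotation rows would leak: the same text, labeled by
    different users, could land in both train and test. Splitting by
    text id prevents that. Whether this matches the paper's exact
    strategy is an assumption based on the abstract.
    """
    texts = sorted({a["text_id"] for a in annotations})
    random.Random(seed).shuffle(texts)
    test_ids = set(texts[: int(len(texts) * test_frac)])
    train = [a for a in annotations if a["text_id"] not in test_ids]
    test = [a for a in annotations if a["text_id"] in test_ids]
    return train, test
```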
2021
Personal Bias in Prediction of Emotions Elicited by Textual Opinions
Piotr Milkowski | Marcin Gruza | Kamil Kanclerz | Przemyslaw Kazienko | Damian Grimling | Jan Kocon
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Student Research Workshop
Analysis of emotions elicited by opinions, comments, or articles commonly exploits annotated corpora, in which the labels assigned to documents average the views of all annotators or represent a majority decision. The models trained on such data are effective at identifying the general views of the population. However, their usefulness for predicting the emotions evoked by textual content in a particular individual is limited. In this paper, we present a study performed on a dataset containing 7,000 opinions, each annotated by about 50 people along two dimensions, valence and arousal, and with the intensity of eight emotions from Plutchik's model. Our study showed that individual responses often differed significantly from the mean. Therefore, we proposed a novel measure to estimate this effect: Personal Emotional Bias (PEB). We also developed a new BERT-based transformer architecture to predict emotions from an individual human perspective. We found PEB to be a major factor in improving the quality of personalized reasoning. Both the method and the measure may boost the quality of content recommendation systems and personalized solutions that protect users from hate speech or unwanted content, which are highly subjective in nature.
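The PEB formula is not given in the abstract; one plausible proxy, sketched below as an assumption, measures how far a user's ratings deviate, on average, from the per-opinion crowd mean.

```python
import numpy as np

def personal_emotional_bias(user_ratings, crowd_means):
    """Plausible PEB proxy: RMS deviation of one user's ratings from
    the per-opinion crowd mean, computed per dimension (e.g., valence).

    `user_ratings` and `crowd_means` map opinion_id -> rating. The
    published PEB definition may differ; this is an assumption based
    on the abstract's description of "individual responses differing
    from the mean".
    """
    diffs = [user_ratings[i] - crowd_means[i] for i in user_ratings]
    return float(np.sqrt(np.mean(np.square(diffs))))
```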
Controversy and Conformity: from Generalized to Personalized Aggressiveness Detection
Kamil Kanclerz | Alicja Figas | Marcin Gruza | Tomasz Kajdanowicz | Jan Kocon | Daria Puchalska | Przemyslaw Kazienko
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
Content such as hate speech and offensive, toxic, or aggressive documents is perceived differently by its consumers. It is commonly identified using classifiers based solely on textual content, which generalize pre-agreed interpretations of difficult problems. Such models provide the same results for each user, which leads to a high misclassification rate, observable especially for contentious, aggressive documents. Both document controversy and user nonconformity require new solutions. Therefore, we propose novel personalized approaches that respect individual beliefs, expressed either by user conformity-based measures or by various embeddings of users' previous text annotations. We found that only a few annotations of the most controversial documents are enough for all our personalization methods to significantly outperform classic, generalized solutions. The more controversial the content, the greater the gain. The personalized solutions can be used to efficiently filter unwanted aggressive content in a way adjusted to a given person.
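The conformity-based measures are likewise unspecified in the abstract; a simple assumed stand-in is the fraction of a user's past labels that agree with the per-document majority, which a personalized model could consume alongside text features.

```python
from collections import Counter, defaultdict

def conformity_scores(annotations):
    """Fraction of each user's past labels matching the majority label.

    One simple conformity-based user measure, illustrating the kind of
    signal a personalized classifier could take as an extra feature;
    the paper's exact measures are not given in the abstract.
    `annotations` is a list of dicts with user_id, text_id, label.
    """
    by_text = defaultdict(list)
    for a in annotations:
        by_text[a["text_id"]].append(a["label"])
    majority = {t: Counter(ls).most_common(1)[0][0]
                for t, ls in by_text.items()}

    agree, total = defaultdict(int), defaultdict(int)
    for a in annotations:
        total[a["user_id"]] += 1
        agree[a["user_id"]] += int(a["label"] == majority[a["text_id"]])
    return {u: agree[u] / total[u] for u in total}
```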
Co-authors
- Jan Kocon 6
- Marcin Gruza 4
- Kamil Kanclerz 4
- Piotr Miłkowski 3
- Julita Bielaniewicz 2
- Teddy Ferdinan 2
- Konrad Karanowski 2
- Tomasz Adamczyk 1
- Alon Albalak 1
- Eric Alcaide 1
- Quentin Anthony 1
- Samuel Arcadinho 1
- Jolanta Babiak 1
- Beata Bajcar 1
- Stella Biderman 1
- Erik Cambria 1
- Huanqi Cao 1
- Xin Cheng 1
- Michael Chung 1
- Leon Derczynski 1
- Xingjian Du 1
- Jan Eliasz 1
- Alicja Figas 1
- Matteo Grella 1
- Damian Grimling 1
- Kranthi Gv 1
- Xuzheng He 1
- Haowen Hou 1
- Tomasz Kajdanowicz 1
- Jiaming Kong 1
- Bartłomiej Koptyra 1
- Mikołaj Langner 1
- Hayden Lau 1
- Jiaju Lin 1
- Krishna Sri Ipsit Mantri 1
- Maciej Markiewicz 1
- Wiktoria Mieleszczenko-Kowszewicz 1
- Ferdinand Mom 1
- Bo Peng 1
- Dzmitry Pihulski 1
- Daria Puchalska 1
- Atsushi Saito 1
- Ranjan Satapathy 1
- Guangyu Song 1
- Aleksander Szczęsny 1
- Xiangru Tang 1
- Yeo Wei Jie 1
- Johan Wind 1
- Stanisław Woźniak 1
- Zhenyuan Zhang 1
- Qinghua Zhou 1
- Jian Zhu 1
- Rui-Jie Zhu 1