Arkadiusz Modzelewski
2026
MALicious INTent Dataset and Inoculating LLMs for Enhanced Disinformation Detection
Arkadiusz Modzelewski | Witold Sosnowski | Eleni Papadopulos | Elisa Sartori | Tiziano Labruna | Giovanni Da San Martino | Adam Wierzbicki
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
The intentional creation and spread of disinformation poses a significant threat to public discourse. However, existing English datasets and research rarely address the intentionality behind disinformation. This work presents MALINT, the first human-annotated English corpus developed in collaboration with expert fact-checkers to capture disinformation and its malicious intent. We utilize our novel corpus to benchmark 12 language models, including small language models (SLMs) such as BERT and large language models (LLMs) like Llama 3.3, on binary and multilabel intent classification tasks. Moreover, inspired by inoculation theory from psychology and communication studies, we investigate whether incorporating knowledge of malicious intent can improve disinformation detection. To this end, we propose intent-based inoculation, an intent-augmented reasoning approach for LLMs that integrates intent analysis to mitigate the persuasive impact of disinformation. Analysis on six disinformation datasets, five LLMs, and seven languages shows that intent-augmented reasoning improves zero-shot disinformation detection. To support research in intent-aware disinformation detection, we release the MALINT dataset with annotations from each annotation step.
Detecting Winning Arguments with Large Language Models and Persuasion Strategies
Tiziano Labruna | Arkadiusz Modzelewski | Giorgio Satta | Giovanni Da San Martino
Findings of the Association for Computational Linguistics: EACL 2026
Detecting persuasion in argumentative text is a challenging task with important implications for understanding human communication. This work investigates the role of persuasion strategies - such as Attack on reputation, Distraction, and Manipulative wording - in determining the persuasiveness of a text. We conduct experiments on three annotated argument datasets: Winning Arguments (built from the Change My View subreddit), Anthropic/Persuasion, and Persuasion for Good. Our approach leverages large language models (LLMs) with a chain-of-thought framework that guides reasoning over six persuasion strategies. Results show that strategy-guided reasoning improves the prediction of persuasiveness. To better understand the influence of content, we organize the Winning Arguments dataset into broad discussion topics and analyze performance across them. We publicly release this topic-annotated version of the dataset to facilitate future research. Overall, our methodology demonstrates the value of structured, strategy-aware prompting for enhancing interpretability and robustness in argument quality assessment.
2025
PCoT: Persuasion-Augmented Chain of Thought for Detecting Fake News and Social Media Disinformation
Arkadiusz Modzelewski | Witold Sosnowski | Tiziano Labruna | Adam Wierzbicki | Giovanni Da San Martino
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Disinformation detection is a key aspect of media literacy. Psychological studies have shown that knowledge of persuasive fallacies helps individuals detect disinformation. Inspired by these findings, we experimented with large language models (LLMs) to test whether infusing persuasion knowledge enhances disinformation detection. As a result, we introduce the Persuasion-Augmented Chain of Thought (PCoT), a novel approach that leverages persuasion to improve disinformation detection in zero-shot classification. We extensively evaluate PCoT on online news and social media posts. Moreover, we publish two novel, up-to-date disinformation datasets: EUDisinfo and MultiDis. These datasets enable the evaluation of PCoT on content entirely unseen by the LLMs used in our experiments, as the content was published after the models’ knowledge cutoffs. We show that, on average, PCoT outperforms competitive methods by 15% across five LLMs and five datasets. These findings highlight the value of persuasion in strengthening zero-shot disinformation detection.
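To make the idea concrete, the snippet below sketches a persuasion-augmented zero-shot prompt in the spirit of PCoT: the model is first asked to reason over a list of known persuasion techniques before issuing a disinformation verdict. The technique list, wording, and function name are illustrative assumptions, not the paper's actual prompt.

```python
# Hypothetical sketch of a PCoT-style prompt builder. The technique
# inventory and instructions below are illustrative, not the exact
# prompt used in the paper.
PERSUASION_TECHNIQUES = [
    "Appeal to fear",
    "Loaded language",
    "Appeal to authority",
    "Whataboutism",
]

def build_pcot_prompt(text: str) -> str:
    """Compose a zero-shot prompt that chains persuasion analysis
    before the final disinformation verdict."""
    techniques = "\n".join(f"- {t}" for t in PERSUASION_TECHNIQUES)
    return (
        "You are a fact-checking assistant.\n"
        f"Known persuasion techniques:\n{techniques}\n\n"
        "Step 1: Identify which of the techniques above, if any, "
        "appear in the text.\n"
        "Step 2: Based on that analysis, state whether the text is "
        "disinformation (YES/NO) and explain briefly.\n\n"
        f"Text: {text}"
    )
```

The key design point is ordering: the persuasion analysis step precedes the verdict, so the model's final answer is conditioned on its own persuasion reasoning rather than produced directly.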
SlavicNLP 2025 Shared Task: Detection and Classification of Persuasion Techniques in Parliamentary Debates and Social Media
Jakub Piskorski | Dimitar Dimitrov | Filip Dobranić | Marina Ernst | Jacek Haneczok | Ivan Koychev | Nikola Ljubešić | Michal Marcinczuk | Arkadiusz Modzelewski | Ivo Moravski | Roman Yangarber
Proceedings of the 10th Workshop on Slavic Natural Language Processing (Slavic NLP 2025)
We present the SlavicNLP 2025 Shared Task on Detection and Classification of Persuasion Techniques in Parliamentary Debates and Social Media. The task is structured into two subtasks: (1) Detection, to determine whether a given text fragment contains persuasion techniques, and (2) Classification, to determine which persuasion techniques are present in a given text fragment, using a taxonomy of 25 persuasion techniques. The task focuses on two text genres, namely, parliamentary debates revolving around widely discussed topics, and social media, in five languages: Bulgarian, Croatian, Polish, Russian and Slovene. This task contributes to the broader effort of detecting and understanding manipulative attempts in various contexts. Fifteen teams registered to participate in the task, of which 9 submitted a total of circa 220 system responses and described their approaches in 9 system description papers.
DiNaM: Disinformation Narrative Mining with Large Language Models
Witold Sosnowski | Arkadiusz Modzelewski | Kinga Skorupska | Adam Wierzbicki
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Disinformation poses a significant threat to democratic societies, public health, and national security. To address this challenge, fact-checking experts analyze and track disinformation narratives. However, the process of manually identifying these narratives is highly time-consuming and resource-intensive. In this article, we introduce DiNaM, the first algorithm and structured framework specifically designed for mining disinformation narratives. DiNaM uses a multi-step approach to uncover disinformation narratives. It first leverages Large Language Models (LLMs) to detect false information, then applies clustering techniques to identify underlying disinformation narratives. We evaluated DiNaM’s performance using ground-truth disinformation narratives from the EUDisinfoTest dataset. The evaluation employed the Weighted Chamfer Distance (WCD), which measures the similarity between two sets of embeddings: the ground truth and the predicted disinformation narratives. DiNaM achieved a state-of-the-art WCD score of 0.73, outperforming general-purpose narrative mining methods by a notable margin of 16.4–24.7%. We are releasing DiNaM’s codebase and the dataset to the public.
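A Chamfer-style distance between two sets of embeddings can be sketched as below; this illustrates the kind of set-to-set metric DiNaM's evaluation relies on. The paper's Weighted Chamfer Distance presumably adds a weighting scheme whose exact form is not given here, so this unweighted, cosine-based version is an assumption for illustration only.

```python
# Minimal sketch of a symmetric Chamfer distance over embedding sets,
# using cosine distance. The weighting in the paper's WCD is not
# reproduced here; this is an illustrative baseline, not the paper's
# exact metric.
import numpy as np

def chamfer_distance(pred: np.ndarray, truth: np.ndarray) -> float:
    """Average nearest-neighbour cosine distance, symmetrized
    over both directions (predicted -> truth and truth -> predicted)."""
    # Normalise rows so the dot product equals cosine similarity.
    p = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    t = truth / np.linalg.norm(truth, axis=1, keepdims=True)
    dist = 1.0 - p @ t.T  # pairwise cosine distances
    # Each predicted narrative is matched to its closest ground-truth
    # narrative and vice versa; the two directions are averaged.
    return 0.5 * (dist.min(axis=1).mean() + dist.min(axis=0).mean())
```

With this construction the distance is 0 when the two sets of narrative embeddings coincide, and grows as predicted narratives drift away from their nearest ground-truth counterparts.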
2024
MIPD: Exploring Manipulation and Intention In a Novel Corpus of Polish Disinformation
Arkadiusz Modzelewski | Giovanni Da San Martino | Pavel Savov | Magdalena Anna Wilczyńska | Adam Wierzbicki
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
This study presents a novel corpus of 15,356 Polish web articles, including articles identified as containing disinformation. Our dataset enables a multifaceted understanding of disinformation. We present a distinctive multilayered methodology for annotating disinformation in texts. What sets our corpus apart is its focus on uncovering hidden intent and manipulation in disinformative content. A team of experts annotated each article with multiple labels indicating both disinformation creators’ intents and the manipulation techniques employed. Additionally, we set new baselines for binary disinformation detection and two multiclass multilabel classification tasks: manipulation techniques and intention types classification.
EU DisinfoTest: a Benchmark for Evaluating Language Models’ Ability to Detect Disinformation Narratives
Witold Sosnowski | Arkadiusz Modzelewski | Kinga Skorupska | Jahna Otterbacher | Adam Wierzbicki
Findings of the Association for Computational Linguistics: EMNLP 2024
As narratives shape public opinion and influence societal actions, distinguishing between truthful and misleading narratives has become a significant challenge. To address this, we introduce the EU DisinfoTest, a novel benchmark designed to evaluate the efficacy of Language Models in identifying disinformation narratives. Developed through a Human-in-the-Loop methodology and grounded in research from EU DisinfoLab, the EU DisinfoTest comprises more than 1,300 narratives. Our benchmark includes persuasive elements under Logos, Pathos, and Ethos rhetorical dimensions. We assessed state-of-the-art LLMs, including the newly released GPT-4o, on their capability to perform zero-shot classification of disinformation narratives versus credible narratives. Our findings reveal that LLMs tend to regard narratives with authoritative appeals as trustworthy, while those with emotional appeals are frequently incorrectly classified as disinformative. These findings highlight the challenges LLMs face in nuanced content interpretation and suggest the need for tailored adjustments in LLM training to better handle diverse narrative structures.
2023
DSHacker at SemEval-2023 Task 3: Genres and Persuasion Techniques Detection with Multilingual Data Augmentation through Machine Translation and Text Generation
Arkadiusz Modzelewski | Witold Sosnowski | Magdalena Wilczynska | Adam Wierzbicki
Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)
In our article, we present the systems developed for SemEval-2023 Task 3, which aimed to evaluate the ability of Natural Language Processing (NLP) systems to detect genres and persuasion techniques in multiple languages. We experimented with several data augmentation techniques, including machine translation (MT) and text generation. For genre detection, synthetic texts for each class were created using the OpenAI GPT-3 Davinci language model. In contrast, to detect persuasion techniques, we relied on augmenting the dataset through text translation using the DeepL translator. Fine-tuning the models using augmented data resulted in a top-ten ranking across all languages, indicating the effectiveness of the approach. The models for genre detection demonstrated excellent performance, securing the first, second, and third positions in Spanish, German, and Italian, respectively. Moreover, one of the models for persuasion technique detection secured the third position in Polish. Our contribution is a system architecture that uses DeepL and GPT-3 for data augmentation in detecting both genres and persuasion techniques.