Make Satire Boring Again: Reducing Stylistic Bias of Satirical Corpus by Utilizing Generative LLMs

Asli Umay Ozturk, Recep Firat Cekinel, Pinar Karagoz


Abstract
Satire detection is essential for accurately extracting opinions from textual data and combating misinformation online. However, the lack of diverse satirical corpora leads to stylistic bias, which impairs models’ detection performance. This study proposes a debiasing approach for satire detection that reduces biases in the training data by utilizing generative large language models. The approach is evaluated in both cross-domain (irony detection) and cross-lingual (English) settings. Results show that the debiasing method enhances the robustness and generalizability of models for satire and irony detection in Turkish and English, although its impact on causal language models, such as Llama-3.1, is limited. Additionally, this work curates and presents the Turkish Satirical News Dataset with detailed human annotations, along with case studies on classification, debiasing, and explainability.
Anthology ID: 2025.bucc-1.4
Volume: Proceedings of the 18th Workshop on Building and Using Comparable Corpora (BUCC)
Month: January
Year: 2025
Address: Abu Dhabi, UAE
Editors: Serge Sharoff, Ayla Rigouts Terryn, Pierre Zweigenbaum, Reinhard Rapp
Venues: BUCC | WS
Publisher: Association for Computational Linguistics
Pages: 19–35
URL: https://aclanthology.org/2025.bucc-1.4/
Cite (ACL): Asli Umay Ozturk, Recep Firat Cekinel, and Pinar Karagoz. 2025. Make Satire Boring Again: Reducing Stylistic Bias of Satirical Corpus by Utilizing Generative LLMs. In Proceedings of the 18th Workshop on Building and Using Comparable Corpora (BUCC), pages 19–35, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal): Make Satire Boring Again: Reducing Stylistic Bias of Satirical Corpus by Utilizing Generative LLMs (Ozturk et al., BUCC 2025)
PDF: https://aclanthology.org/2025.bucc-1.4.pdf