Detecting Fake News in the Era of Language Models

Muhammad Irfan Fikri Sabri, Hansi Hettiarachchi, Tharindu Ranasinghe


Abstract
The proliferation of fake news has been amplified by the advent of large language models (LLMs), which can generate highly realistic misinformation at scale. While prior studies have focused primarily on detecting human-generated fake news, the efficacy of current models against LLM-generated content remains underexplored. We address this gap by compiling a novel dataset combining public and LLM-generated fake news, redefining detection as a ternary classification task (real, human-generated fake, LLM-generated fake), and evaluating eight diverse classification models, including traditional machine learning, fine-tuned transformers, and few-shot prompted LLMs. Our findings highlight the strengths and limitations of these models in detecting evolving LLM-generated fake news, offering insights for future detection strategies.
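The abstract frames detection as a three-way classification problem and lists traditional machine learning among the evaluated model families. The sketch below is not taken from the paper; it only illustrates, under stated assumptions, what such a baseline could look like: a TF-IDF plus logistic-regression pipeline over the three labels. The label names, the toy articles, and the load_articles() helper are invented for illustration.

# Minimal sketch (not the paper's pipeline): a traditional-ML baseline for the
# ternary task from the abstract (real vs. human-generated fake vs. LLM-generated fake).
# The label names, toy articles, and load_articles() are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

LABELS = ["real", "human_fake", "llm_fake"]  # ternary scheme described in the abstract

def load_articles():
    """Placeholder loader; swap in the combined public + LLM-generated corpus."""
    texts = [
        "City council approves new budget after public hearing.",
        "Scientists confirm moon is made of cheese, sources say.",
        "Breaking: miracle supplement cures all known diseases overnight.",
        "Local school wins regional robotics competition.",
        "Government secretly replaces tap water with mind-control serum.",
        "Anonymous insider reveals aliens drafted the new tax code.",
    ]
    labels = ["real", "human_fake", "llm_fake",
              "real", "human_fake", "llm_fake"]
    return texts, labels

texts, labels = load_articles()

# Word n-gram TF-IDF features feeding a multinomial logistic regression.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)  # in practice, train and evaluate on a held-out split
print(clf.predict(["Leaked memo shows robots now write all news articles."]))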
Anthology ID:
2025.ranlp-1.119
Volume:
Proceedings of the 15th International Conference on Recent Advances in Natural Language Processing - Natural Language Processing in the Generative AI Era
Month:
September
Year:
2025
Address:
Varna, Bulgaria
Editors:
Galia Angelova, Maria Kunilovskaya, Marie Escribe, Ruslan Mitkov
Venue:
RANLP
Publisher:
INCOMA Ltd., Shoumen, Bulgaria
Pages:
1036–1043
URL:
https://aclanthology.org/2025.ranlp-1.119/
Cite (ACL):
Muhammad Irfan Fikri Sabri, Hansi Hettiarachchi, and Tharindu Ranasinghe. 2025. Detecting Fake News in the Era of Language Models. In Proceedings of the 15th International Conference on Recent Advances in Natural Language Processing - Natural Language Processing in the Generative AI Era, pages 1036–1043, Varna, Bulgaria. INCOMA Ltd., Shoumen, Bulgaria.
Cite (Informal):
Detecting Fake News in the Era of Language Models (Sabri et al., RANLP 2025)
PDF:
https://aclanthology.org/2025.ranlp-1.119.pdf