Unveiling Fake News with Adversarial Arguments Generated by Multimodal Large Language Models

Xiaofan Zheng, Minnan Luo, Xinghao Wang


Abstract
In the era of social media, the proliferation of fake news has created an urgent need for more effective detection methods, particularly for multimodal content. Identifying fake news is highly challenging, as it requires broad background knowledge and understanding across many domains. Existing detection methods rely primarily on neural networks that learn latent feature representations, yielding black-box classifications with limited real-world understanding. To address these limitations, we propose a novel approach that leverages Multimodal Large Language Models (MLLMs) for fake news detection. Our method introduces adversarial reasoning through debates from opposing perspectives. Harnessing the strong text generation and cross-modal reasoning capabilities of MLLMs, we guide these models to engage in multimodal debates, generating adversarial arguments grounded in contradictory evidence from both sides of the issue. We then use these arguments to learn sound reasoning patterns, enabling better multimodal fusion and fine-tuning. This process effectively positions our model as a debate referee performing adversarial inference. Extensive experiments on four fake news detection datasets demonstrate that our method significantly outperforms state-of-the-art approaches.
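
To make the debate protocol described above concrete, the following is a minimal sketch of how adversarial arguments might be elicited from an MLLM and weighed by a referee. It assumes an OpenAI-style multimodal chat API; the model choice, prompts, and the query_mllm / adversarial_arguments helpers are illustrative assumptions, not the authors' implementation or released code.

```python
# Illustrative sketch only: prompts, model choice, and helper names are
# assumptions, not the authors' implementation.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SUPPORT_PROMPT = ("You are a debater arguing that the following news post is REAL. "
                  "Using the image and text, give your strongest evidence-based arguments.")
OPPOSE_PROMPT = ("You are a debater arguing that the following news post is FAKE. "
                 "Using the image and text, give your strongest evidence-based arguments.")
REFEREE_PROMPT = ("You are a referee. Given the news post and the two opposing arguments, "
                  "decide whether the post is real or fake and explain your reasoning.")


def query_mllm(prompt: str, text: str, image_path: str) -> str:
    """Send one image+text query to a multimodal chat model (model name is an assumption)."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": f"{prompt}\n\n{text}"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content


def adversarial_arguments(text: str, image_path: str) -> dict:
    """Generate pro-real and pro-fake arguments, then an adversarial verdict."""
    support = query_mllm(SUPPORT_PROMPT, text, image_path)
    oppose = query_mllm(OPPOSE_PROMPT, text, image_path)
    verdict = query_mllm(
        REFEREE_PROMPT,
        f"Post: {text}\nPro-real argument: {support}\nPro-fake argument: {oppose}",
        image_path,
    )
    return {"support": support, "oppose": oppose, "verdict": verdict}
```

Per the abstract, the generated arguments are not used for direct zero-shot prediction as in this sketch; they serve as material for learning reasoning patterns, improving multimodal fusion, and fine-tuning the referee model.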
Anthology ID: 2025.coling-main.526
Volume: Proceedings of the 31st International Conference on Computational Linguistics
Month: January
Year: 2025
Address: Abu Dhabi, UAE
Editors: Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue: COLING
Publisher: Association for Computational Linguistics
Pages: 7862–7869
URL: https://aclanthology.org/2025.coling-main.526/
Cite (ACL): Xiaofan Zheng, Minnan Luo, and Xinghao Wang. 2025. Unveiling Fake News with Adversarial Arguments Generated by Multimodal Large Language Models. In Proceedings of the 31st International Conference on Computational Linguistics, pages 7862–7869, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal): Unveiling Fake News with Adversarial Arguments Generated by Multimodal Large Language Models (Zheng et al., COLING 2025)
PDF: https://aclanthology.org/2025.coling-main.526.pdf