Deciphering Rumors: A Multi-Task Learning Approach with Intent-aware Hierarchical Contrastive Learning

Chang Yang, Peng Zhang, Hui Gao, Jing Zhang


Abstract
Social networks are rife with noise and misleading information, posing multifaceted challenges for rumor detection. In this paper, from the perspective of human cognitive subjectivity, we introduce the mining of individuals' latent intentions and propose a novel multi-task learning framework, the Intent-aware Rumor Detection Network (IRDNet). IRDNet is designed to discern multi-level rumor semantic features and latent user intentions, addressing the robustness and key-feature mining and alignment challenges that plague existing models. In IRDNet, a multi-level semantic extraction module captures sequential and hierarchical features to generate robust semantic representations. A hierarchical contrastive learning module incorporates two complementary strategies, event-level and intent-level, to establish cognitive anchors that uncover the latent intentions of information disseminators. Event-level contrastive learning employs high-quality data augmentation and adversarial perturbations to enhance model robustness. Intent-level contrastive learning leverages an intent encoder to capture latent intent features, optimizing consistency within the same intent while ensuring heterogeneity across different intents so that key features are clearly distinguished from irrelevant elements. Experimental results demonstrate that IRDNet significantly improves rumor detection effectiveness and addresses the key challenges in the field.
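The abstract does not include an implementation, but the intent-level objective it describes — consistency within the same intent, heterogeneity across different intents — matches the shape of a standard supervised contrastive (InfoNCE-style) loss. The sketch below is a hypothetical illustration, not the authors' code: the function name, the temperature value, and the use of cosine similarity are all assumptions.

```python
import numpy as np

def supervised_contrastive_loss(embeddings, intent_labels, temperature=0.5):
    """Sketch of an intent-level contrastive objective (assumed form):
    embeddings sharing an intent label are pulled together, all others
    pushed apart, via an InfoNCE-style log-ratio."""
    # Cosine similarity: L2-normalize, then take scaled dot products.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature

    n = len(intent_labels)
    loss, n_pairs = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n)
                     if j != i and intent_labels[j] == intent_labels[i]]
        if not positives:
            continue  # anchors without a same-intent partner contribute nothing
        # Denominator sums over every other sample (positives and negatives).
        denom = sum(np.exp(sim[i, k]) for k in range(n) if k != i)
        for j in positives:
            loss += -np.log(np.exp(sim[i, j]) / denom)
            n_pairs += 1
    return loss / n_pairs
```

With this form, a batch whose embeddings cluster by intent label yields a lower loss than one where same-intent embeddings are far apart, which is the consistency/heterogeneity trade-off the abstract describes.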
Anthology ID:
2024.emnlp-main.256
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
4471–4483
URL:
https://aclanthology.org/2024.emnlp-main.256
Cite (ACL):
Chang Yang, Peng Zhang, Hui Gao, and Jing Zhang. 2024. Deciphering Rumors: A Multi-Task Learning Approach with Intent-aware Hierarchical Contrastive Learning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 4471–4483, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Deciphering Rumors: A Multi-Task Learning Approach with Intent-aware Hierarchical Contrastive Learning (Yang et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.256.pdf