Honglei Lyu
2025
Semantic Reshuffling with LLM and Heterogeneous Graph Auto-Encoder for Enhanced Rumor Detection
Guoyi Li | Die Hu | Zongzhen Liu | Xiaodan Zhang | Honglei Lyu
Proceedings of the 31st International Conference on Computational Linguistics
Social media is crucial for information spread, necessitating effective rumor detection to curb the societal effects of misinformation. Current methods struggle with complex propagation influenced by bots, coordinated accounts, and echo chambers, which fragments information and increases the risk of misjudgments and model vulnerability. To counteract these issues, we introduce a new rumor detection framework, the Narrative-Integrated Metapath Graph Auto-Encoder (NIMGA), which consists of two core components: (1) Metapath-based Heterogeneous Graph Reconstruction and (2) Narrative Reordering and Perspective Fusion. The first component dynamically reconstructs propagation structures to capture complex interactions and hidden pathways within social networks, improving accuracy and robustness. The second implements a dual-agent mechanism for viewpoint distillation and comment narrative reordering, using LLMs to refine diverse perspectives and trace semantic evolution, thereby revealing patterns of information propagation and latent semantic correlations among comments. Extensive experiments confirm that our model outperforms existing methods, demonstrating its effectiveness and robustness in enhancing rumor representation through graph reconstruction and narrative reordering.
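The abstract only names the graph-reconstruction component; the sketch below illustrates the general idea of a metapath-based graph auto-encoder (encode nodes along a metapath adjacency, reconstruct that adjacency with an inner-product decoder). Class and variable names are hypothetical and the dimensions are toy values; this is not the paper's NIMGA implementation.

```python
# Minimal sketch of a metapath-based graph auto-encoder, assuming a single
# precomputed metapath adjacency (e.g. user-post-user). Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MetapathGraphAutoEncoder(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.enc1 = nn.Linear(in_dim, hid_dim)
        self.enc2 = nn.Linear(hid_dim, hid_dim)

    def encode(self, x, adj):
        # Two propagation steps over the row-normalized metapath adjacency.
        h = F.relu(adj @ self.enc1(x))
        return adj @ self.enc2(h)

    def decode(self, z):
        # Inner-product decoder: predicted probability of an edge i-j.
        return torch.sigmoid(z @ z.t())

    def forward(self, x, adj):
        z = self.encode(x, adj)
        return self.decode(z), z

# Toy usage: 6 nodes with random features and a symmetric metapath adjacency.
n, d = 6, 16
x = torch.randn(n, d)
adj = torch.eye(n) + torch.rand(n, n).round()
adj = ((adj + adj.t()) > 0).float()
adj_norm = adj / adj.sum(dim=1, keepdim=True)

model = MetapathGraphAutoEncoder(d, 8)
recon, z = model(x, adj_norm)
loss = F.binary_cross_entropy(recon, adj)   # reconstruction loss on edges
loss.backward()
print(loss.item())
```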
2023
Adversarial Text Generation by Search and Learning
Guoyi Li | Bingkang Shi | Zongzhen Liu | Dehan Kong | Yulei Wu | Xiaodan Zhang | Longtao Huang | Honglei Lyu
Findings of the Association for Computational Linguistics: EMNLP 2023
Recent research has shown that evaluating the robustness of natural language processing models with textual attack methods is important. However, most existing text attack methods rely only on heuristic replacement strategies or language models to generate replacement words at the word level, and the blind pursuit of high attack success rates makes it difficult to ensure the quality of the generated adversarial text; as a result, adversarial text is often hard for humans to understand, and methods that attack well frequently produce text of poor quality. To address this gap, we treat black-box text attack as an unsupervised text generation problem and propose Adversarial Text Generation by Search and Learning (ATGSL), a search and learning framework, together with three black-box attack methods (ATGSL-SA, ATGSL-BM, ATGSL-FUSION). We first apply a heuristic search attack algorithm (ATGSL-SA) with a linguistic thesaurus to generate adversarial samples with high semantic similarity. We then train a conditional generative model to learn from the search results while smoothing out search noise, and design an efficient attack algorithm (ATGSL-BM) based on this text generator. Finally, we propose a hybrid method (ATGSL-FUSION) that combines the strengths of ATGSL-SA and ATGSL-BM to further improve attack effectiveness. Our attack algorithms significantly outperform state-of-the-art methods in both attack efficiency and adversarial text quality.
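To make the search step concrete, the sketch below shows a generic simulated-annealing word-substitution attack in the spirit of the abstract's heuristic search (ATGSL-SA). The synonym table, victim scorer, and all names here are placeholders, not the paper's actual components or thesaurus.

```python
# Hedged sketch: simulated-annealing word substitution guided by a black-box
# victim scorer. `SYNONYMS` and `victim_score` are stand-ins for a linguistic
# thesaurus and the target model; they are not from the paper.
import math
import random

SYNONYMS = {
    "good": ["great", "fine", "decent"],
    "movie": ["film", "picture"],
    "boring": ["dull", "tedious"],
}

def victim_score(tokens):
    # Placeholder for the target model's confidence in the original label;
    # the attack tries to drive this score down.
    return 1.0 - 0.3 * sum(t in ("dull", "film") for t in tokens)

def sa_attack(tokens, steps=200, t0=1.0, cooling=0.98):
    current = list(tokens)
    cur_score = victim_score(current)
    best, best_score = list(current), cur_score
    temp = t0
    for _ in range(steps):
        idx = random.randrange(len(current))
        options = SYNONYMS.get(current[idx])
        if not options:
            temp *= cooling
            continue
        candidate = list(current)
        candidate[idx] = random.choice(options)
        cand_score = victim_score(candidate)
        # Always accept improvements; accept worse moves with a probability
        # that decays as the temperature cools (standard simulated annealing).
        accept_worse = random.random() < math.exp((cur_score - cand_score) / max(temp, 1e-6))
        if cand_score < cur_score or accept_worse:
            current, cur_score = candidate, cand_score
        if cur_score < best_score:
            best, best_score = list(current), cur_score
        temp *= cooling
    return best, best_score

adv, score = sa_attack("a good but boring movie".split())
print(" ".join(adv), score)
```

In the framework described by the abstract, samples found by such a search would then serve as training data for a conditional generative model, which replaces the slow search at attack time.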