Adversarial Attack against Cross-lingual Knowledge Graph Alignment

Zeru Zhang, Zijie Zhang, Yang Zhou, Lingfei Wu, Sixing Wu, Xiaoying Han, Dejing Dou, Tianshi Che, Da Yan


Abstract
Recent literature has shown that knowledge graph (KG) learning models are highly vulnerable to adversarial attacks. However, vulnerability analyses of cross-lingual entity alignment under adversarial attacks remain scarce. This paper proposes an adversarial attack model with two novel attack techniques that perturb the KG structure and degrade the quality of deep cross-lingual entity alignment. First, an entity density maximization method hides the attacked entities in dense regions of the two KGs, so that the derived perturbations are unnoticeable. Second, an attack signal amplification method mitigates gradient vanishing during the attack process, further improving attack effectiveness.
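The abstract does not spell out how entity density is measured; as a minimal illustrative sketch (not the authors' actual algorithm), one could score each entity by the number of neighbors reachable within a few hops and prefer to place perturbations around the densest entities, where extra edges are least conspicuous. The function names `local_density` and `densest_entities` below are hypothetical:

```python
from collections import defaultdict

def local_density(edges, entity, hops=2):
    """Count entities reachable within `hops` hops -- a simple density proxy."""
    adj = defaultdict(set)
    for h, t in edges:
        adj[h].add(t)
        adj[t].add(h)
    frontier, seen = {entity}, {entity}
    for _ in range(hops):
        frontier = {n for e in frontier for n in adj[e]} - seen
        seen |= frontier
    return len(seen) - 1  # exclude the entity itself

def densest_entities(edges, k=2):
    """Rank entities by local density; dense regions best hide perturbations."""
    entities = {e for pair in edges for e in pair}
    return sorted(entities, key=lambda e: local_density(edges, e), reverse=True)[:k]

# Toy KG: a dense cluster (a-b-c-d-e) and an isolated pair (x-y).
edges = [("a", "b"), ("b", "c"), ("b", "d"), ("d", "e"), ("x", "y")]
print(densest_entities(edges, k=2))
```

In the paper's setting the density objective is maximized jointly with the attack loss over both KGs; this sketch only shows the intuition of steering perturbations toward dense neighborhoods.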
Anthology ID:
2021.emnlp-main.432
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
5320–5337
URL:
https://aclanthology.org/2021.emnlp-main.432
DOI:
10.18653/v1/2021.emnlp-main.432
Bibkey:
Cite (ACL):
Zeru Zhang, Zijie Zhang, Yang Zhou, Lingfei Wu, Sixing Wu, Xiaoying Han, Dejing Dou, Tianshi Che, and Da Yan. 2021. Adversarial Attack against Cross-lingual Knowledge Graph Alignment. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5320–5337, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Adversarial Attack against Cross-lingual Knowledge Graph Alignment (Zhang et al., EMNLP 2021)
PDF:
https://aclanthology.org/2021.emnlp-main.432.pdf
Video:
https://aclanthology.org/2021.emnlp-main.432.mp4