Haiwei Zhang
2023
From Alignment to Entailment: A Unified Textual Entailment Framework for Entity Alignment
Yu Zhao | Yike Wu | Xiangrui Cai | Ying Zhang | Haiwei Zhang | Xiaojie Yuan
Findings of the Association for Computational Linguistics: ACL 2023
Entity Alignment (EA) aims to find the equivalent entities between two Knowledge Graphs (KGs). Existing methods usually encode the triples of entities as embeddings and learn to align the embeddings, which prevents the direct interaction between the original information of the cross-KG entities. Moreover, they encode the relational triples and attribute triples of an entity in heterogeneous embedding spaces, which prevents them from helping each other. In this paper, we transform both triples into unified textual sequences, and model the EA task as a bi-directional textual entailment task between the sequences of cross-KG entities. Specifically, we feed the sequences of two entities simultaneously into a pre-trained language model (PLM) and propose two kinds of PLM-based entity aligners that model the entailment probability between sequences as the similarity between entities. Our approach captures the unified correlation pattern of two kinds of information between entities, and explicitly models the fine-grained interaction between original entity information. The experiments on five cross-lingual EA datasets show that our approach outperforms the state-of-the-art EA methods and enables the mutual enhancement of the heterogeneous information. Codes are available at https://github.com/OreOZhao/TEA.
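The bi-directional entailment idea above can be illustrated with a minimal sketch. Note the assumptions: TEA uses a pre-trained language model to score entailment between entity sequences, while here a toy token-overlap scorer stands in for the PLM, and all function names (`entailment_prob`, `entity_similarity`, `align`) are illustrative, not from the paper's code.

```python
def entailment_prob(premise: str, hypothesis: str) -> float:
    """Stand-in for a PLM entailment score: the fraction of hypothesis
    tokens that also appear in the premise (illustrative only)."""
    p, h = set(premise.lower().split()), set(hypothesis.lower().split())
    return len(p & h) / len(h) if h else 0.0

def entity_similarity(seq_a: str, seq_b: str) -> float:
    """Bi-directional entailment: average the probability that entity A's
    textual sequence entails entity B's and vice versa."""
    return 0.5 * (entailment_prob(seq_a, seq_b) + entailment_prob(seq_b, seq_a))

def align(entities_kg1: dict, entities_kg2: dict) -> dict:
    """Greedy alignment: map each KG1 entity to its most similar KG2 entity,
    where similarity is the bi-directional entailment score."""
    return {
        e1: max(entities_kg2, key=lambda e2: entity_similarity(s1, entities_kg2[e2]))
        for e1, s1 in entities_kg1.items()
    }
```

Because both relational and attribute triples are verbalized into the same kind of textual sequence, a single scorer like this handles both uniformly, which is the unified correlation pattern the abstract refers to.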
2022
MoSE: Modality Split and Ensemble for Multimodal Knowledge Graph Completion
Yu Zhao | Xiangrui Cai | Yike Wu | Haiwei Zhang | Ying Zhang | Guoqing Zhao | Ning Jiang
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Multimodal knowledge graph completion (MKGC) aims to predict missing entities in MKGs. Previous works usually share relation representation across modalities. This results in mutual interference between modalities during training, since for a pair of entities, the relation from one modality probably contradicts that from another modality. Furthermore, making a unified prediction based on the shared relation representation treats the input in different modalities equally, while their importance to the MKGC task should be different. In this paper, we propose MoSE, a Modality Split representation learning and Ensemble inference framework for MKGC. Specifically, in the training phase, we learn modality-split relation embeddings for each modality instead of a single modality-shared one, which alleviates the modality interference. Based on these embeddings, in the inference phase, we first make modality-split predictions and then exploit various ensemble methods to combine the predictions with different weights, which models the modality importance dynamically. Experimental results on three KG datasets show that MoSE outperforms state-of-the-art MKGC methods. Codes are available at https://github.com/OreOZhao/MoSE4MKGC.
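The ensemble inference step described above can be sketched as follows. This is a toy illustration under stated assumptions: MoSE learns modality-split relation embeddings and then combines modality-split predictions with different weights; here the per-modality scores are given directly rather than computed from learned embeddings, and all names are hypothetical.

```python
def ensemble_predict(modality_scores: dict, weights: dict) -> str:
    """Combine modality-split candidate scores with per-modality weights
    and return the highest-scoring candidate entity.

    modality_scores: {modality: {candidate_entity: score}}
    weights:         {modality: weight}
    """
    combined = {}
    for modality, scores in modality_scores.items():
        w = weights.get(modality, 0.0)
        for entity, score in scores.items():
            combined[entity] = combined.get(entity, 0.0) + w * score
    return max(combined, key=combined.get)
```

Keeping the predictions split until this final weighted combination is what lets the framework treat modalities unequally: changing the weights (e.g., down-weighting a noisy image modality) changes which candidate wins without retraining anything.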
Co-authors
- Yu Zhao 2
- Yike Wu 2
- Xiangrui Cai 2
- Ying Zhang 2
- Xiaojie Yuan 1