Deep Reinforcement Learning for Entity Alignment

Lingbing Guo, Yuqiang Han, Qiang Zhang, Huajun Chen


Abstract
Embedding-based methods have attracted increasing attention in recent entity alignment (EA) studies. Although they offer great promise, there are still several limitations. The most notable is that they identify the aligned entities based on cosine similarity, ignoring the semantics underlying the embeddings themselves. Furthermore, these methods are shortsighted, heuristically selecting the closest entity as the target and allowing multiple entities to match the same candidate. To address these limitations, we model entity alignment as a sequential decision-making task, in which an agent sequentially decides whether two entities are matched or mismatched based on their representation vectors. The proposed reinforcement learning (RL)-based entity alignment framework can be flexibly adapted to most embedding-based EA methods. The experimental results demonstrate that it consistently advances the performance of several state-of-the-art methods, with a maximum improvement of 31.1% on Hits@1.
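To illustrate the idea described in the abstract, the following is a minimal sketch (not the authors' implementation; the policy, feature construction, and threshold are all hypothetical) of treating entity alignment as sequential decision-making: a policy scores each candidate pair from the two entities' embedding vectors rather than raw cosine similarity, and once a candidate is accepted it is removed from the pool so that no two source entities can claim the same target.

```python
# Hedged sketch of sequential decision-making for entity alignment.
# All names (policy_match_prob, align, w, threshold) are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def policy_match_prob(src_vec, cand_vec, w):
    """Hypothetical policy: probability that the pair is a true match,
    computed from a pair representation rather than plain cosine similarity."""
    pair = np.concatenate([src_vec, cand_vec, src_vec * cand_vec])
    return 1.0 / (1.0 + np.exp(-pair @ w))  # sigmoid over a learned score

def align(src_embs, tgt_embs, w, threshold=0.5):
    """Sequentially decide matches; each target entity is used at most once."""
    available = set(range(len(tgt_embs)))
    matches = {}
    for i, src_vec in enumerate(src_embs):
        if not available:
            break
        # Rank the remaining candidates by the policy's match probability.
        scored = [(policy_match_prob(src_vec, tgt_embs[j], w), j)
                  for j in available]
        prob, best = max(scored)
        if prob >= threshold:        # "matched" action
            matches[i] = best
            available.remove(best)   # later entities cannot reuse this target
        # otherwise: "mismatched" action, move on to the next source entity
    return matches

# Toy usage with random embeddings and an untrained (random) policy.
dim = 8
src = rng.normal(size=(5, dim))
tgt = rng.normal(size=(6, dim))
w = rng.normal(size=3 * dim)
print(align(src, tgt, w))
```

In an RL setting, the policy parameters (here the vector w) would be trained from rewards on known alignment seeds; this toy version only demonstrates the sequential, one-to-one matching behavior the abstract contrasts with greedy nearest-neighbor selection.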
Anthology ID:
2022.findings-acl.217
Volume:
Findings of the Association for Computational Linguistics: ACL 2022
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2754–2765
URL:
https://aclanthology.org/2022.findings-acl.217
DOI:
10.18653/v1/2022.findings-acl.217
Cite (ACL):
Lingbing Guo, Yuqiang Han, Qiang Zhang, and Huajun Chen. 2022. Deep Reinforcement Learning for Entity Alignment. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2754–2765, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Deep Reinforcement Learning for Entity Alignment (Guo et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-acl.217.pdf
Software:
 2022.findings-acl.217.software.zip
Code
 guolingbing/rlea