How Can Cross-lingual Knowledge Contribute Better to Fine-Grained Entity Typing?

Hailong Jin, Tiansi Dong, Lei Hou, Juanzi Li, Hui Chen, Zelin Dai, Qu Yincen


Abstract
Cross-lingual Entity Typing (CLET) aims to improve the quality of entity type prediction by transferring semantic knowledge learned from rich-resourced languages to low-resourced languages. In this paper, by utilizing multilingual transfer learning via a mixture-of-experts approach, our model dynamically captures the relationship between the target language and each source language, and effectively generalizes to predict types of unseen entities in new languages. Extensive experiments on multilingual datasets show that our method significantly outperforms multiple baselines and robustly handles negative transfer. We question the relationship between language similarity and CLET performance. A series of experiments refutes the common belief that more source languages are always better, and suggests the Similarity Hypothesis for CLET.
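To illustrate the general idea, below is a minimal PyTorch sketch (not the authors' implementation) of mixture-of-experts gating over source languages: one expert per source language, plus a gate that weights the experts per target-language mention. All names and dimensions (hidden_dim, num_types, num_sources, SourceLanguageMoE) are illustrative assumptions.

```python
# Minimal sketch of per-source-language mixture-of-experts for entity typing.
# Not the paper's model; an assumed architecture for illustration only.
import torch
import torch.nn as nn


class SourceLanguageMoE(nn.Module):
    """One expert per source language; a gate mixes their type predictions."""

    def __init__(self, hidden_dim: int, num_types: int, num_sources: int):
        super().__init__()
        # Each expert maps a multilingual mention embedding to type logits.
        self.experts = nn.ModuleList(
            [nn.Linear(hidden_dim, num_types) for _ in range(num_sources)]
        )
        # The gate scores how relevant each source language is to this mention.
        self.gate = nn.Linear(hidden_dim, num_sources)

    def forward(self, mention_emb: torch.Tensor) -> torch.Tensor:
        # mention_emb: (batch, hidden_dim) multilingual mention representation
        weights = torch.softmax(self.gate(mention_emb), dim=-1)  # (batch, S)
        logits = torch.stack(
            [expert(mention_emb) for expert in self.experts], dim=1
        )  # (batch, S, num_types)
        # Convex combination of expert predictions, weighted per mention.
        return (weights.unsqueeze(-1) * logits).sum(dim=1)  # (batch, num_types)


# Usage with illustrative sizes: 768-dim embeddings, 113 types (FIGER's type
# inventory), 3 source languages.
model = SourceLanguageMoE(hidden_dim=768, num_types=113, num_sources=3)
x = torch.randn(4, 768)
type_logits = model(x)  # (4, 113); apply sigmoid for multi-label typing
```

Because the gate is computed per mention, the mixing weights can down-weight a dissimilar source language, which is one plausible way such a model could mitigate negative transfer.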
Anthology ID:
2022.findings-acl.243
Volume:
Findings of the Association for Computational Linguistics: ACL 2022
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3071–3081
URL:
https://aclanthology.org/2022.findings-acl.243
DOI:
10.18653/v1/2022.findings-acl.243
Cite (ACL):
Hailong Jin, Tiansi Dong, Lei Hou, Juanzi Li, Hui Chen, Zelin Dai, and Qu Yincen. 2022. How Can Cross-lingual Knowledge Contribute Better to Fine-Grained Entity Typing?. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3071–3081, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
How Can Cross-lingual Knowledge Contribute Better to Fine-Grained Entity Typing? (Jin et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-acl.243.pdf
Data
FIGER