Lexicon-Enhanced Self-Supervised Training for Multilingual Dense Retrieval

Houxing Ren, Linjun Shou, Jian Pei, Ning Wu, Ming Gong, Daxin Jiang


Abstract
Recent multilingual pre-trained models have shown strong performance on various multilingual tasks. However, these models perform poorly on multilingual retrieval tasks because they lack multilingual training data. In this paper, we propose to mine and generate self-supervised training data from a large-scale unlabeled corpus. We carefully design a mining method that combines sparse and dense models to mine the relevance between unlabeled queries and passages, and we introduce a query generator to generate additional queries in target languages for unlabeled passages. Through extensive experiments on the Mr. TyDi dataset and an industrial dataset from a commercial search engine, we demonstrate that our method outperforms baselines based on various pre-trained multilingual models, and even achieves performance on par with the supervised method on the latter dataset.
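The mining step described in the abstract fuses sparse (lexical) and dense retrieval signals over unlabeled data. Below is a minimal, hypothetical sketch of such hybrid mining using the rank_bm25 package and precomputed dense embeddings; the min-max score normalization, the interpolation weight alpha, and the top-1 agreement filter are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch: mine pseudo-relevant (query, passage) pairs from an unlabeled
# corpus by mixing sparse (BM25) and dense (embedding cosine) scores.
# The weighting and agreement criterion below are illustrative choices only.
import numpy as np
from rank_bm25 import BM25Okapi  # sparse lexical scorer


def mine_pseudo_pairs(queries, passages, dense_q, dense_p, alpha=0.5):
    """Return a list of (query_idx, passage_idx) pseudo-positive pairs.

    dense_q, dense_p: L2-normalized query/passage embeddings from any
    multilingual encoder (e.g. mBERT or XLM-R), computed beforehand.
    """
    bm25 = BM25Okapi([p.split() for p in passages])
    pairs = []
    for qi, query in enumerate(queries):
        sparse = np.array(bm25.get_scores(query.split()))
        dense = dense_p @ dense_q[qi]  # cosine similarity (embeddings normalized)

        # Min-max normalize each score list to [0, 1] before mixing (illustrative).
        sparse = (sparse - sparse.min()) / (sparse.max() - sparse.min() + 1e-9)
        dense = (dense - dense.min()) / (dense.max() - dense.min() + 1e-9)
        fused = alpha * sparse + (1 - alpha) * dense

        best = int(fused.argmax())
        # Keep the pair only when sparse and dense models agree on the top passage,
        # a simple proxy for a lexicon-enhanced consistency filter.
        if best == int(sparse.argmax()) == int(dense.argmax()):
            pairs.append((qi, best))
    return pairs
```

The mined pairs would then serve as self-supervised positives for training a multilingual dense retriever; the query-generation component described in the abstract would add further synthetic queries in target languages for passages that no unlabeled query covers.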
Anthology ID:
2022.findings-emnlp.31
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
444–459
URL:
https://aclanthology.org/2022.findings-emnlp.31
DOI:
10.18653/v1/2022.findings-emnlp.31
Cite (ACL):
Houxing Ren, Linjun Shou, Jian Pei, Ning Wu, Ming Gong, and Daxin Jiang. 2022. Lexicon-Enhanced Self-Supervised Training for Multilingual Dense Retrieval. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 444–459, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Lexicon-Enhanced Self-Supervised Training for Multilingual Dense Retrieval (Ren et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-emnlp.31.pdf