Aligning Cross-lingual Sentence Representations with Dual Momentum Contrast

Liang Wang, Wei Zhao, Jingming Liu


Abstract
In this paper, we propose to align sentence representations from different languages into a unified embedding space, where semantic similarities (both cross-lingual and monolingual) can be computed with a simple dot product. Pre-trained language models are fine-tuned with the translation ranking task. Existing work (Feng et al., 2020) uses sentences within the same batch as negatives, which can suffer from the issue of easy negatives. We adapt MoCo (He et al., 2020) to further improve the quality of alignment. Experimental results show that the sentence representations produced by our model achieve a new state-of-the-art on several tasks, including Tatoeba en-zh similarity search (Artetxe and Schwenk, 2019b), BUCC en-zh bitext mining, and semantic textual similarity on 7 datasets.
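The core idea described above (a translation ranking objective where negatives come from a queue of keys produced by a momentum encoder, rather than from the current batch) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the numpy formulation, and the scalar-parameter momentum update are all simplifications assumed for illustration; embeddings are assumed L2-normalized so dot products are cosine similarities.

```python
import numpy as np

def info_nce_loss(q, k_pos, queue, temperature=0.05):
    """MoCo-style InfoNCE loss for translation ranking.

    q:      (B, d) L2-normalized query-side sentence embeddings
    k_pos:  (B, d) L2-normalized embeddings of their translations,
            produced by the momentum (key) encoder
    queue:  (K, d) L2-normalized keys from previous batches, used as
            negatives instead of the other sentences in the batch
    """
    pos = np.sum(q * k_pos, axis=1, keepdims=True)   # (B, 1) positive logits
    neg = q @ queue.T                                # (B, K) negative logits
    logits = np.concatenate([pos, neg], axis=1) / temperature
    # Numerically stable log-softmax; the positive is always at index 0,
    # so the loss is the negative log-probability of column 0.
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[:, 0].mean()

def momentum_update(theta_k, theta_q, m=0.999):
    """Exponential moving-average update of the key encoder's parameters
    (shown here on flat parameter vectors for simplicity)."""
    return m * theta_k + (1.0 - m) * theta_q
```

After each step, the current batch of keys would be enqueued and the oldest keys dequeued, so the pool of negatives is much larger than a single batch and harder on average than in-batch negatives.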
Anthology ID:
2021.emnlp-main.309
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
3807–3815
URL:
https://aclanthology.org/2021.emnlp-main.309
DOI:
10.18653/v1/2021.emnlp-main.309
Cite (ACL):
Liang Wang, Wei Zhao, and Jingming Liu. 2021. Aligning Cross-lingual Sentence Representations with Dual Momentum Contrast. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3807–3815, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Aligning Cross-lingual Sentence Representations with Dual Momentum Contrast (Wang et al., EMNLP 2021)
PDF:
https://aclanthology.org/2021.emnlp-main.309.pdf
Video:
https://aclanthology.org/2021.emnlp-main.309.mp4
Data
SentEval