Utilizing Language-Image Pretraining for Efficient and Robust Bilingual Word Alignment

Tuan Dinh, Jy-yong Sohn, Shashank Rajput, Timothy Ossowski, Yifei Ming, Junjie Hu, Dimitris Papailiopoulos, Kangwook Lee


Abstract
Word translation without parallel corpora has become feasible, rivaling the performance of supervised methods. Recent findings have shown the improvement in accuracy and robustness of unsupervised word translation (UWT) by utilizing visual observations, which are universal representations across languages. Our work investigates the potential of using not only visual observations but also pretrained language-image models for enabling a more efficient and robust UWT. We develop a novel UWT method dubbed Word Alignment using Language-Image Pretraining (WALIP), leveraging visual observations via the shared image-text embedding space of CLIPs (Radford et al., 2021). WALIP has a two-step procedure. First, we retrieve word pairs with high confidences of similarity, computed using our proposed image-based fingerprints, which define the initial pivot for the alignment. Second, we apply our robust Procrustes algorithm to estimate the linear mapping between two embedding spaces, which iteratively corrects and refines the estimated alignment. Our extensive experiments show that WALIP improves upon the state-of-the-art performance of bilingual word alignment for a few language pairs across different word embeddings and displays great robustness to the dissimilarity of language pairs or training corpora for two word embeddings.
Anthology ID:
2022.findings-emnlp.12
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
154–168
URL:
https://aclanthology.org/2022.findings-emnlp.12
DOI:
10.18653/v1/2022.findings-emnlp.12
Cite (ACL):
Tuan Dinh, Jy-yong Sohn, Shashank Rajput, Timothy Ossowski, Yifei Ming, Junjie Hu, Dimitris Papailiopoulos, and Kangwook Lee. 2022. Utilizing Language-Image Pretraining for Efficient and Robust Bilingual Word Alignment. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 154–168, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Utilizing Language-Image Pretraining for Efficient and Robust Bilingual Word Alignment (Dinh et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-emnlp.12.pdf
Video:
https://aclanthology.org/2022.findings-emnlp.12.mp4