Ayyoob ImaniGooghari
2025
How Transliterations Improve Crosslingual Alignment
Yihong Liu | Mingyang Wang | Amir Hossein Kargaran | Ayyoob ImaniGooghari | Orgest Xhelili | Haotian Ye | Chunlan Ma | François Yvon | Hinrich Schütze
Proceedings of the 31st International Conference on Computational Linguistics
Recent studies have shown that post-aligning multilingual pretrained language models (mPLMs) using alignment objectives on both original and transliterated data can improve crosslingual alignment. This improvement further leads to better crosslingual transfer performance. However, it remains unclear how and why better crosslingual alignment is achieved, since this technique only involves transliterations and does not use any parallel data. This paper attempts to explicitly evaluate crosslingual alignment and to identify the key elements in transliteration-based approaches that contribute to better performance. To this end, we train multiple models under varying setups for two pairs of related languages: (1) Polish and Ukrainian and (2) Hindi and Urdu. To assess alignment, we define four types of similarities based on sentence representations. Our experimental results show that adding transliterations alone improves the overall similarities, even for random sentence pairs. With the help of auxiliary transliteration-based alignment objectives, especially the contrastive objective, the model learns to distinguish matched pairs from random pairs, leading to better crosslingual alignment. However, we also show that better alignment does not always yield better downstream performance, suggesting that further research is needed to clarify the connection between alignment and performance. The implementation is based on https://github.com/cisnlp/Transliteration-PPA.
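To make the two ingredients mentioned in the abstract concrete, the sketch below illustrates (1) an InfoNCE-style contrastive objective that pulls a sentence's representation toward the representation of its transliteration and away from other sentences in the batch, and (2) a matched-vs-random cosine-similarity check in the spirit of the alignment evaluation. This is a minimal illustration under assumed function names (`contrastive_alignment_loss`, `matched_vs_random_similarity`) and random stand-in embeddings, not the authors' Transliteration-PPA implementation.

```python
# Minimal sketch (not the Transliteration-PPA code): contrastive alignment on
# original vs. transliterated sentence representations, plus a simple
# matched-vs-random pair similarity comparison.
import torch
import torch.nn.functional as F


def contrastive_alignment_loss(orig_emb: torch.Tensor,
                               translit_emb: torch.Tensor,
                               temperature: float = 0.05) -> torch.Tensor:
    """orig_emb, translit_emb: (batch, dim) sentence representations of
    original sentences and their transliterations (same row = same sentence)."""
    orig = F.normalize(orig_emb, dim=-1)
    translit = F.normalize(translit_emb, dim=-1)
    logits = orig @ translit.T / temperature      # pairwise cosine similarities
    targets = torch.arange(orig.size(0))          # matched pairs lie on the diagonal
    return F.cross_entropy(logits, targets)


def matched_vs_random_similarity(orig_emb: torch.Tensor,
                                 translit_emb: torch.Tensor):
    """Average cosine similarity of matched pairs vs. random (mismatched) pairs."""
    orig = F.normalize(orig_emb, dim=-1)
    translit = F.normalize(translit_emb, dim=-1)
    sims = orig @ translit.T
    matched = sims.diagonal().mean()
    random_pairs = (sims.sum() - sims.diagonal().sum()) / (sims.numel() - sims.size(0))
    return matched.item(), random_pairs.item()


if __name__ == "__main__":
    # Stand-in random embeddings; in practice these would be pooled hidden
    # states from an mPLM for e.g. Polish/Ukrainian or Hindi/Urdu sentences.
    orig = torch.randn(8, 768)
    translit = orig + 0.1 * torch.randn(8, 768)   # transliterations ≈ originals
    print("loss:", contrastive_alignment_loss(orig, translit).item())
    print("matched vs. random:", matched_vs_random_similarity(orig, translit))
```

Under this kind of objective, a model that has learned good crosslingual alignment should show a clear gap between the matched-pair and random-pair similarities, which is the distinction the paper's evaluation is designed to measure.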