Hyperpolyglot LLMs: Cross-Lingual Interpretability in Token Embeddings

Andrea W Wen-Yi, David Mimno


Abstract
Cross-lingual transfer learning is an important property of multilingual large language models (LLMs). But how do LLMs represent relationships between languages? Every language model has an input layer that maps tokens to vectors. This ubiquitous layer of language models is often overlooked. We find that similarities between these input embeddings are highly interpretable and that the geometry of these embeddings differs between model families. In one case (XLM-RoBERTa), embeddings encode language: tokens in different writing systems can be linearly separated with an average of 99.2% accuracy. Another family (mT5) represents cross-lingual semantic similarity: the 50 nearest neighbors for any token represent an average of 7.61 writing systems, and are frequently translations. This result is surprising given that there are no explicit parallel cross-lingual training corpora and no explicit incentive for translations in the pre-training objectives. Our research opens the door for investigations into 1) the effect of pre-training and model architectures on representations of languages and 2) applications of the cross-lingual representations embedded in language models.
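The core probe behind these findings, comparing rows of a model's input (token) embedding matrix directly, is straightforward to reproduce. Below is a minimal sketch, not the authors' released code, assuming the public Hugging Face checkpoints for XLM-RoBERTa or mT5; the example token "▁water" is an illustrative SentencePiece piece and may need to be swapped for a token present in the chosen vocabulary.

```python
# Sketch: inspect nearest neighbors in a multilingual model's input embedding matrix.
# Assumes Hugging Face `transformers` and `torch` are installed.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "xlm-roberta-base"  # or, e.g., "google/mt5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Input embedding matrix: vocab_size x hidden_dim, shared by all model families.
emb = model.get_input_embeddings().weight.detach()
emb = torch.nn.functional.normalize(emb, dim=-1)  # cosine similarity via dot product


def nearest_neighbors(token: str, k: int = 10):
    """Return the k vocabulary tokens most similar to `token` by cosine similarity."""
    token_id = tokenizer.convert_tokens_to_ids(token)
    sims = emb @ emb[token_id]
    top = torch.topk(sims, k + 1).indices.tolist()  # +1 so we can drop the query token
    return [tokenizer.convert_ids_to_tokens(i) for i in top if i != token_id][:k]


# Hypothetical query: for mT5-style geometry, neighbors often span many scripts.
print(nearest_neighbors("▁water"))
```

Whether the neighbors cluster by writing system (as reported for XLM-RoBERTa) or mix scripts and include translations (as reported for mT5) can then be checked by inspecting the Unicode script of each returned token.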
Anthology ID:
2023.emnlp-main.71
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
1124–1131
URL:
https://aclanthology.org/2023.emnlp-main.71
DOI:
10.18653/v1/2023.emnlp-main.71
Cite (ACL):
Andrea W Wen-Yi and David Mimno. 2023. Hyperpolyglot LLMs: Cross-Lingual Interpretability in Token Embeddings. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 1124–1131, Singapore. Association for Computational Linguistics.
Cite (Informal):
Hyperpolyglot LLMs: Cross-Lingual Interpretability in Token Embeddings (Wen-Yi & Mimno, EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.71.pdf