Getting More from Less: Large Language Models are Good Spontaneous Multilingual Learners
Shimao Zhang | Changjiang Gao | Wenhao Zhu | Jiajun Chen | Xin Huang | Xue Han | Junlan Feng | Chao Deng | Shujian Huang
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Recently, Large Language Models (LLMs) have shown impressive language capabilities, yet most exhibit highly unbalanced performance across languages. Multilingual alignment based on parallel translation data is an effective way to enhance LLMs’ multilingual capabilities. In this work, we discover and comprehensively investigate the spontaneous multilingual alignment of LLMs. First, we find that LLMs instruction-tuned on question translation data (i.e., without annotated answers) can encourage alignment between English and a wide range of languages, including languages unseen during instruction tuning. Additionally, we use different settings and mechanistic interpretability methods to comprehensively analyze LLM performance in multilingual scenarios. Our work suggests that LLMs have enormous potential for improving multilingual alignment efficiently, with strong generalization across languages and tasks.