Chengzhi Zhong
2026
Language Lives in Sparse Dimensions: Toward Interpretable and Efficient Multilingual Control for Large Language Models
Chengzhi Zhong | Fei Cheng | Qianying Liu | Yugo Murawaki | Chenhui Chu | Sadao Kurohashi
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Large language models exhibit strong multilingual capabilities despite limited exposure to non-English data. Prior studies show that English-centric large language models map multilingual content into English-aligned representations at intermediate layers and then project them back into target-language token spaces in the final layer. From this observation, we hypothesize that this cross-lingual transition is governed by a small, sparse set of dimensions that occupy consistent indices from the intermediate layers through the final layer. Building on this insight, we introduce a simple, training-free method to identify and manipulate these dimensions, requiring as few as 50 sentences of either parallel or monolingual data. Experiments on a multilingual generation control task reveal the interpretability of these dimensions, demonstrating that intervening on them can switch the output language while preserving semantic content, and that our method surpasses prior neuron-based approaches at a substantially lower cost.
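The identify-and-intervene idea in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's exact procedure: the mean-activation-difference heuristic, the function names, and the toy data are all assumptions introduced here.

```python
import numpy as np

def find_language_dims(acts_src, acts_tgt, k=8):
    """Rank hidden dimensions by the mean activation difference between
    source- and target-language sentence representations, keeping the
    top-k as candidate 'language dimensions' (illustrative heuristic)."""
    diff = acts_tgt.mean(axis=0) - acts_src.mean(axis=0)  # shape (d,)
    dims = np.argsort(-np.abs(diff))[:k]
    return dims, diff[dims]

def intervene(hidden, dims, values):
    """Overwrite the selected dimensions of a hidden state with the
    target-language mean values to steer the output language."""
    steered = hidden.copy()
    steered[dims] = values
    return steered

# Toy demo: 50 sentences per language, 64-dim states; dimensions 3 and
# 17 carry a planted synthetic 'language signal'.
rng = np.random.default_rng(0)
src = rng.normal(0.0, 0.1, (50, 64))
tgt = rng.normal(0.0, 0.1, (50, 64))
tgt[:, 3] += 2.0
tgt[:, 17] -= 2.0
dims, vals = find_language_dims(src, tgt, k=2)
print(sorted(dims.tolist()))  # recovers the two planted dimensions
```

The sketch only shows why a 50-sentence sample can suffice: with a sparse, consistent signal, a simple per-dimension mean difference already separates the relevant indices from noise.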
2025
What Language Do Non-English-Centric Large Language Models Think in?
Chengzhi Zhong | Qianying Liu | Fei Cheng | Junfeng Jiang | Zhen Wan | Chenhui Chu | Yugo Murawaki | Sadao Kurohashi
Findings of the Association for Computational Linguistics: ACL 2025
In this study, we investigate whether non-English-centric large language models ‘think’ in their specialized language. Specifically, we analyze how intermediate layer representations, when projected into the vocabulary space, favor certain languages during generation—termed latent languages. We categorize non-English-centric models into two groups: CPMs, which are English-centric models with continued pre-training on their specialized language, and BLMs, which are pre-trained on a balanced mix of multiple languages from scratch. Our findings reveal that while English-centric models rely exclusively on English as their latent language, non-English-centric models activate multiple latent languages, dynamically selecting the most similar one based on both the source and target languages. This also influences responses to questions about cultural differences, reducing English-centric biases in non-English models. This study deepens our understanding of language representation in non-English-centric LLMs, shedding light on the intricate dynamics of multilingual processing at the representational level.
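The projection into vocabulary space described above can be sketched with a minimal logit-lens-style computation. The one-hot unembedding matrix, the language-tagged toy vocabulary, and the function name are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def latent_language(hidden, W_unembed, token_langs):
    """Project a hidden state through the unembedding matrix, take a
    softmax over the vocabulary, and sum probability mass per language
    tag; return the dominant language and the full distribution."""
    logits = W_unembed @ hidden
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    mass = {}
    for p, lang in zip(probs, token_langs):
        mass[lang] = mass.get(lang, 0.0) + p
    return max(mass, key=mass.get), mass

# Toy setup: 8-token vocabulary (4 English, 4 Japanese), 16-dim states,
# one-hot unembedding rows so the result is fully deterministic.
W = np.eye(8, 16)
langs = ["en"] * 4 + ["ja"] * 4
h = np.zeros(16)
h[5] = 4.0  # hidden state aligned with token 5, a Japanese token
top, mass = latent_language(h, W, langs)
print(top)  # → ja
```

In a real analysis the hidden state would come from an intermediate transformer layer and `W_unembed` from the model's output embedding, so the per-language mass traces which latent language the model favors at each layer.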