Jinyang Gao
2025
Language Adaptation of Large Language Models: An Empirical Study on LLaMA2
Shumin Wang | Yuexiang Xie | Bolin Ding | Jinyang Gao | Yanyong Zhang
Proceedings of the 31st International Conference on Computational Linguistics
There has been a surge of interest in the language adaptation of Large Language Models (LLMs) to enhance the processing of texts in low-resource languages. While traditional language models have seen extensive research on language transfer, modern LLMs still require further exploration in language adaptation. In this paper, we present a systematic review of the language adaptation process for LLMs, covering vocabulary expansion, continued pre-training, and instruction fine-tuning, with a focus on empirical studies conducted on LLaMA2 and discussions of the various settings that affect the model's capabilities. This study provides useful insights into the entire language adaptation process and highlights the compatibility and interactions between its steps, offering researchers a practical guidebook for effectively adapting LLMs to different languages.
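As a minimal sketch of the vocabulary-expansion step the abstract mentions (not the paper's own code), the snippet below uses the Hugging Face Transformers API to add target-language tokens to the LLaMA2 tokenizer and resize the embedding matrix so continued pre-training can proceed. The NEW_TOKENS list is a hypothetical placeholder for tokens mined from a target-language corpus; "meta-llama/Llama-2-7b-hf" is the public checkpoint name and requires access approval.

```python
# Sketch: vocabulary expansion for LLaMA2 before continued pre-training.
# Assumes the `transformers` library; NEW_TOKENS is a hypothetical placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Placeholder tokens mined from a target-language corpus.
NEW_TOKENS = ["▁ExampleToken1", "▁ExampleToken2"]
num_added = tokenizer.add_tokens(NEW_TOKENS)

# Newly added embedding rows are randomly initialized; they are learned
# during continued pre-training on target-language text.
model.resize_token_embeddings(len(tokenizer))
print(f"Added {num_added} tokens; new vocab size: {len(tokenizer)}")
```

In practice, how the new embedding rows are initialized and how much target-language data is used in continued pre-training are among the settings the paper's empirical study examines.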
2024
When to Trust LLMs: Aligning Confidence with Response Quality
Shuchang Tao | Liuyi Yao | Hanxing Ding | Yuexiang Xie | Qi Cao | Fei Sun | Jinyang Gao | Huawei Shen | Bolin Ding
Findings of the Association for Computational Linguistics: ACL 2024
Despite the success of large language models (LLMs) in natural language generation, much evidence shows that LLMs may produce incorrect or nonsensical text. This limitation highlights the importance of discerning when to trust LLMs, especially in safety-critical domains. Existing methods often express reliability as a confidence level; however, their effectiveness is limited by the lack of objective guidance. To address this, we propose CONQORD, a CONfidence-Quality-ORDer-preserving alignment approach that leverages reinforcement learning guided by a tailored dual-component reward function integrating a quality reward and an order-preserving alignment reward. Specifically, the order-preserving reward incentivizes the model to verbalize greater confidence for higher-quality responses, aligning the order of confidence with the order of quality. Experiments demonstrate that CONQORD significantly improves the alignment between confidence and response accuracy without making the model over-cautious. Furthermore, the aligned confidence provided by CONQORD indicates when to trust LLMs and acts as a determinant for initiating the retrieval of external knowledge. Aligning confidence with response quality ensures more transparent and reliable responses, improving trustworthiness.
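To make the order-preserving idea concrete, here is a hypothetical sketch of a dual-component reward in the spirit the abstract describes: a pairwise term that rewards verbalized confidences ordered the same way as externally judged response qualities, combined with a quality term. The function names (order_preserving_reward, total_reward) and the weighting parameter alpha are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of a dual-component reward: quality term plus an order-preserving
# alignment term over pairs of responses. Illustrative only; CONQORD's
# actual reward function may differ in form.
from itertools import combinations

def order_preserving_reward(confidences, qualities):
    """Fraction of response pairs whose confidence order matches their quality order."""
    pairs = list(combinations(range(len(confidences)), 2))
    if not pairs:
        return 0.0
    agree = sum(
        1 for i, j in pairs
        if (confidences[i] - confidences[j]) * (qualities[i] - qualities[j]) > 0
    )
    return agree / len(pairs)

def total_reward(confidences, qualities, alpha=1.0):
    # Dual-component reward: mean quality plus the alignment bonus,
    # weighted by a hypothetical trade-off parameter alpha.
    quality_term = sum(qualities) / len(qualities)
    return quality_term + alpha * order_preserving_reward(confidences, qualities)

# Example: confidences ranked the same way as qualities earn the full bonus.
print(total_reward(confidences=[0.9, 0.6, 0.2], qualities=[1.0, 0.7, 0.1]))
```

Under such a reward, an RL-tuned policy is pushed to verbalize confidence that is monotone in response quality, which is the alignment property the paper then uses to decide when to trigger external-knowledge retrieval.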