Xiaoxin Chen
2024
A Learning Rate Path Switching Training Paradigm for Version Updates of Large Language Models
Zhihao Wang | Shiyu Liu | Jianheng Huang | Wang Zheng | YiXuan Liao | Xiaoxin Chen | Junfeng Yao | Jinsong Su
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Due to the continuous emergence of new data, version updates have become an indispensable requirement for Large Language Models (LLMs). The training paradigms for version updates of LLMs include pre-training from scratch (PTFS) and continual pre-training (CPT). Preliminary experiments demonstrate that PTFS achieves better pre-training performance, while CPT has lower training cost. Moreover, their performance and training cost gaps widen progressively with version updates. To investigate the underlying reasons for this phenomenon, we analyze the effect of learning rate adjustments during the two stages of CPT: preparing an initialization checkpoint and continually pre-training based on this checkpoint. We find that a large learning rate in the first stage and a complete learning rate decay process in the second stage are crucial for version updates of LLMs. Hence, we propose a learning rate path switching training paradigm. Our paradigm comprises one main path, where we pre-train an LLM with the maximal learning rate, and multiple branching paths, each of which corresponds to an update of the LLM with newly added training data. Extensive experiments demonstrate the effectiveness and generalization of our paradigm. In particular, when training four versions of LLMs, our paradigm reduces the total training cost to 58% of that of PTFS, while maintaining comparable pre-training performance.
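The paradigm in the abstract can be sketched as two learning-rate schedules: a main path held at the maximal learning rate, and branching paths that each run a complete decay. The sketch below is an illustrative assumption, not the paper's implementation; the function names, the cosine decay shape, and all hyperparameter values (`max_lr`, `min_lr`, `branch_steps`) are hypothetical.

```python
# Hypothetical sketch of the learning rate path switching paradigm
# described above: the main path keeps the maximal learning rate,
# while each branching path (one per version update) performs a
# complete decay before releasing that version of the LLM.
# All names and values here are illustrative assumptions.
import math


def main_path_lr(step: int, max_lr: float = 3e-4) -> float:
    """Main path: hold the maximal learning rate with no decay."""
    return max_lr


def branch_path_lr(step: int, branch_steps: int,
                   max_lr: float = 3e-4, min_lr: float = 3e-5) -> float:
    """Branching path: complete cosine decay from max_lr down to min_lr."""
    progress = min(step / branch_steps, 1.0)
    return min_lr + 0.5 * (max_lr - min_lr) * (1.0 + math.cos(math.pi * progress))


# At each version update, a branch would fork from the latest
# main-path checkpoint (trained at max_lr) and decay the learning
# rate over the newly added data to produce the released model.
for step in (0, 500, 1000):
    print(f"branch step {step}: lr = {branch_path_lr(step, 1000):.2e}")
```

One design point this sketch illustrates: because the main path never decays, each branch can start its decay from a high-learning-rate checkpoint, matching the abstract's finding that a large first-stage learning rate plus a complete second-stage decay is what makes the update cheap yet strong.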
2013
Using the Web to Train a Mobile Device Oriented Japanese Input Method Editor
Xianchao Wu | Rixin Xiao | Xiaoxin Chen
Proceedings of the Sixth International Joint Conference on Natural Language Processing