Improving Stability of Fine-Tuning Pretrained Language Models via Component-Wise Gradient Norm Clipping

Chenghao Yang, Xuezhe Ma


Abstract
Fine-tuning large pretrained language models (PLMs) has established many state-of-the-art results. Despite its superior performance, such fine-tuning can be unstable, resulting in significant variance in performance and potential risks for practical applications. Previous works have attributed such instability to the catastrophic forgetting problem in the top layers of PLMs, which suggests that iteratively fine-tuning layers in a top-down manner is a promising solution. In this paper, we first point out that this method does not always work, because different layers/modules converge at different speeds. Inspired by this observation, we propose a simple component-wise gradient norm clipping method to adjust the convergence speed for different components. Experimental results demonstrate that our method achieves consistent improvements in terms of generalization performance, convergence speed, and training stability. The codebase can be found at https://github.com/yangalan123/FineTuningStability.
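As a rough illustration of the idea described in the abstract, the sketch below applies gradient norm clipping separately to each component of a PLM (e.g., per encoder layer) rather than once over all parameters. This is not the authors' released implementation; the grouping granularity, the model checkpoint, and the max_norm value are assumptions made only for illustration.

```python
# Minimal sketch of component-wise gradient norm clipping (illustrative only;
# grouping scheme and max_norm are assumptions, not taken from the paper's code).
from torch.nn.utils import clip_grad_norm_
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

# Group trainable parameters into components: one group per encoder layer, and
# the rest grouped by their top-level module name
# (e.g. "bert" for embeddings/pooler, "classifier" for the task head).
components = {}
for name, param in model.named_parameters():
    if not param.requires_grad:
        continue
    parts = name.split(".")
    if "layer" in parts:
        key = ".".join(parts[: parts.index("layer") + 2])  # e.g. "bert.encoder.layer.3"
    else:
        key = parts[0]                                      # e.g. "bert", "classifier"
    components.setdefault(key, []).append(param)

def clip_componentwise(max_norm: float = 1.0) -> None:
    """Clip the gradient norm of each component independently,
    instead of applying one global clip over all parameters."""
    for params in components.values():
        clip_grad_norm_(params, max_norm)

# In the training loop, call clip_componentwise() after loss.backward()
# and before optimizer.step(), in place of a single global clip_grad_norm_.
```

The per-component clip bounds the update magnitude of each layer/module separately, which is the mechanism the abstract points to for equalizing convergence speed across components; for the exact procedure and hyperparameters, see the paper and the linked codebase.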
Anthology ID:
2022.emnlp-main.322
Volume:
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
4854–4859
URL:
https://aclanthology.org/2022.emnlp-main.322
DOI:
10.18653/v1/2022.emnlp-main.322
Cite (ACL):
Chenghao Yang and Xuezhe Ma. 2022. Improving Stability of Fine-Tuning Pretrained Language Models via Component-Wise Gradient Norm Clipping. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 4854–4859, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Improving Stability of Fine-Tuning Pretrained Language Models via Component-Wise Gradient Norm Clipping (Yang & Ma, EMNLP 2022)
PDF:
https://aclanthology.org/2022.emnlp-main.322.pdf