LM-Cocktail: Resilient Tuning of Language Models via Model Merging

Shitao Xiao, Zheng Liu, Peitian Zhang, Xingrun Xing


Abstract
Pre-trained language models are continually fine-tuned to better support downstream applications. However, this operation may result in significant performance degradation on general tasks beyond the targeted domain. To overcome this problem, we propose LM-Cocktail, which enables the fine-tuned model to remain resilient on general tasks. Our method is conducted in the form of model merging, where the fine-tuned language model is merged with the pre-trained base model or peer models from other domains through a weighted average of their parameters. Despite its simplicity, LM-Cocktail is surprisingly effective: the resulting model achieves strong empirical performance across the whole scope of general tasks while preserving superior capacity in its targeted domain.
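The merging step described in the abstract is a plain weighted average over model parameters. Below is a minimal sketch of that operation; the function name merge_weighted_average, the file paths, and the equal 0.5/0.5 weights are illustrative assumptions for this page, not the authors' released API.

```python
import torch

def merge_weighted_average(state_dicts, weights):
    """Merge several models by a weighted average of their parameters.

    state_dicts: list of state dicts (e.g., fine-tuned model, base model,
                 and/or peer models from other domains), all with identical keys.
    weights: one float per model; assumed here to sum to 1.
    """
    assert len(state_dicts) == len(weights), "one weight per model"
    merged = {}
    for name in state_dicts[0]:
        # Parameter-wise weighted sum across all models.
        merged[name] = sum(w * sd[name] for w, sd in zip(weights, state_dicts))
    return merged

# Hypothetical usage: merge a fine-tuned model with its pre-trained base.
# finetuned = torch.load("finetuned.pt")
# base = torch.load("base.pt")
# merged = merge_weighted_average([finetuned, base], [0.5, 0.5])
```

In this sketch the merging weights are fixed by hand; choosing them (e.g., uniformly or from held-out examples) is where the method's behavior is actually tuned.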
Anthology ID:
2024.findings-acl.145
Volume:
Findings of the Association for Computational Linguistics ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2474–2488
URL:
https://aclanthology.org/2024.findings-acl.145
Cite (ACL):
Shitao Xiao, Zheng Liu, Peitian Zhang, and Xingrun Xing. 2024. LM-Cocktail: Resilient Tuning of Language Models via Model Merging. In Findings of the Association for Computational Linguistics ACL 2024, pages 2474–2488, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
LM-Cocktail: Resilient Tuning of Language Models via Model Merging (Xiao et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.145.pdf