Finding the Dominant Winning Ticket in Pre-Trained Language Models

Zhuocheng Gong, Di He, Yelong Shen, Tie-Yan Liu, Weizhu Chen, Dongyan Zhao, Ji-Rong Wen, Rui Yan


Abstract
The Lottery Ticket Hypothesis suggests that for any over-parameterized model, there exists a small subnetwork that achieves performance competitive with the full backbone architecture. In this paper, we study whether there is a winning lottery ticket for pre-trained language models, which would allow practitioners to fine-tune only the parameters in the ticket while still achieving good downstream performance. To achieve this, we regularize the fine-tuning process with L1 distance and explore the resulting subnetwork structure (what we refer to as the “dominant winning ticket”). Empirically, we show that (a) the dominant winning ticket can achieve performance comparable to that of the full-parameter model, (b) the dominant winning ticket is transferable across different tasks, and (c) the dominant winning ticket has a natural structure within each parameter matrix. Strikingly, we find that a dominant winning ticket that takes up only 0.05% of the parameters can already achieve satisfactory performance, indicating that the PLM is significantly reducible during fine-tuning.
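A minimal sketch of the L1-regularized fine-tuning idea described in the abstract, assuming a standard PyTorch setup; the function name, the coefficient `lambda_l1`, and the commented usage are illustrative assumptions, not the authors' released code:

```python
import torch

def l1_regularized_loss(model, pretrained_state, task_loss, lambda_l1=1e-3):
    # Add an L1 penalty on the deviation of the fine-tuned weights from the
    # pre-trained weights. This encourages most deviations to stay (near) zero,
    # so the few parameters that do move form a sparse "ticket".
    # (Illustrative sketch under stated assumptions, not the paper's code.)
    l1_penalty = torch.zeros((), device=task_loss.device)
    for name, param in model.named_parameters():
        l1_penalty = l1_penalty + (param - pretrained_state[name]).abs().sum()
    return task_loss + lambda_l1 * l1_penalty

# Usage sketch (names are assumptions): snapshot the pre-trained weights
# before fine-tuning, then combine the task loss with the L1 deviation term.
# pretrained_state = {n: p.detach().clone() for n, p in model.named_parameters()}
# loss = l1_regularized_loss(model, pretrained_state, cross_entropy_loss)
# loss.backward()
```

Parameters whose deviation stays near zero can afterwards be reset to their pre-trained values, leaving only the sparse subnetwork of parameters that actually changed during fine-tuning.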
Anthology ID:
2022.findings-acl.115
Volume:
Findings of the Association for Computational Linguistics: ACL 2022
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1459–1472
URL:
https://aclanthology.org/2022.findings-acl.115
DOI:
10.18653/v1/2022.findings-acl.115
Cite (ACL):
Zhuocheng Gong, Di He, Yelong Shen, Tie-Yan Liu, Weizhu Chen, Dongyan Zhao, Ji-Rong Wen, and Rui Yan. 2022. Finding the Dominant Winning Ticket in Pre-Trained Language Models. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1459–1472, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Finding the Dominant Winning Ticket in Pre-Trained Language Models (Gong et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-acl.115.pdf
Data
GLUE, QNLI