Sparse Progressive Distillation: Resolving Overfitting under Pretrain-and-Finetune Paradigm
Shaoyi Huang | Dongkuan Xu | Ian Yen | Yijue Wang | Sung-En Chang | Bingbing Li | Shiyang Chen | Mimi Xie | Sanguthevar Rajasekaran | Hang Liu | Caiwen Ding
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022
Conventional wisdom in pruning Transformer-based language models holds that pruning reduces model expressiveness and is thus more likely to cause underfitting than overfitting. However, under the trending pretrain-and-finetune paradigm, we postulate a counter-traditional hypothesis: pruning increases the risk of overfitting when performed at the fine-tuning phase. In this paper, we aim to address the overfitting problem and improve pruning performance via progressive knowledge distillation with error-bound properties. We show for the first time that reducing the risk of overfitting can improve the effectiveness of pruning under the pretrain-and-finetune paradigm. Ablation studies and experiments on the GLUE benchmark show that our method outperforms the leading competitors across different tasks.
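As a rough, hypothetical illustration of the general recipe the abstract refers to (sparsifying a model during fine-tuning while distilling from its dense counterpart), the sketch below combines a standard knowledge-distillation loss with gradual magnitude pruning in PyTorch. It is not the paper's algorithm: the toy models, the pruning schedule, the temperature `T`, the weighting `alpha`, and the 80% sparsity target are all illustrative assumptions.

```python
# Hypothetical sketch: fine-tune a pruned "student" copy of a model with a
# knowledge-distillation loss from the dense "teacher", progressively raising
# the sparsity target as training proceeds.
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend hard-label cross-entropy with soft-label KL to the teacher."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

def magnitude_prune_(model, sparsity):
    """Zero out the smallest-magnitude weights of each Linear layer in place."""
    for module in model.modules():
        if isinstance(module, nn.Linear):
            w = module.weight.data
            k = int(sparsity * w.numel())
            if k > 0:
                threshold = w.abs().flatten().kthvalue(k).values
                w.mul_((w.abs() > threshold).float())

# Toy usage on random data; a real setup would use a pretrained Transformer
# and would also persist the pruning mask between optimizer steps.
teacher = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
student = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(100):
    x = torch.randn(8, 16)
    labels = torch.randint(0, 4, (8,))
    with torch.no_grad():
        teacher_logits = teacher(x)
    loss = distillation_loss(student(x), teacher_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Ramp the sparsity target from 0 toward 80% over training, pruning every 10 steps.
    if step % 10 == 0:
        target_sparsity = 0.8 * (step + 1) / 100
        magnitude_prune_(student, target_sparsity)
```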