BERT-of-Theseus: Compressing BERT by Progressive Module Replacing

Canwen Xu, Wangchunshu Zhou, Tao Ge, Furu Wei, Ming Zhou


Abstract
In this paper, we propose a novel model compression approach to effectively compress BERT by progressive module replacing. Our approach first divides the original BERT into several modules and builds their compact substitutes. Then, we randomly replace the original modules with their substitutes to train the compact modules to mimic the behavior of the original modules. We progressively increase the probability of replacement throughout training. In this way, our approach brings a deeper level of interaction between the original and compact models. Compared to previous knowledge distillation approaches for BERT compression, our approach does not introduce any additional loss function. Our approach outperforms existing knowledge distillation approaches on the GLUE benchmark, offering a new perspective on model compression.
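
The abstract describes the core mechanism: compact successor modules stochastically stand in for groups of frozen original modules, with the replacement probability rising as training proceeds. Below is a minimal PyTorch sketch of that idea. Class and function names, the grouping of predecessor layers, and the linear schedule constants are illustrative assumptions, not the authors' released implementation (see the linked JetRunner/BERT-of-Theseus repository for that).

```python
import random
import torch.nn as nn


class TheseusEncoder(nn.Module):
    """Pairs groups of original ("predecessor") layers with compact
    "successor" layers and stochastically swaps them during training."""

    def __init__(self, predecessor_layers, successor_layers):
        super().__init__()
        assert len(predecessor_layers) % len(successor_layers) == 0
        self.group_size = len(predecessor_layers) // len(successor_layers)
        self.predecessors = nn.ModuleList(predecessor_layers)
        self.successors = nn.ModuleList(successor_layers)
        # The original modules stay frozen; only the successors are trained.
        for param in self.predecessors.parameters():
            param.requires_grad = False
        self.replace_prob = 0.0  # raised by a scheduler as training proceeds

    def forward(self, hidden_states):
        for i, successor in enumerate(self.successors):
            # After compression (eval mode), only the successors are used;
            # during training, each module is replaced with probability p.
            if (not self.training) or random.random() < self.replace_prob:
                hidden_states = successor(hidden_states)
            else:
                start = i * self.group_size
                for layer in self.predecessors[start:start + self.group_size]:
                    hidden_states = layer(hidden_states)
        return hidden_states


def replacement_schedule(step, base_prob=0.3, rate=1e-4):
    """Illustrative linearly increasing replacement probability, capped at 1."""
    return min(1.0, base_prob + rate * step)
```

In this sketch the only training signal is the downstream task loss on the mixed model, which mirrors the abstract's point that no additional loss function is introduced; the probability schedule is what moves the model from mostly-original to fully-compact.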
Anthology ID:
2020.emnlp-main.633
Volume:
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Month:
November
Year:
2020
Address:
Online
Editors:
Bonnie Webber, Trevor Cohn, Yulan He, Yang Liu
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
7859–7869
URL:
https://aclanthology.org/2020.emnlp-main.633
DOI:
10.18653/v1/2020.emnlp-main.633
Cite (ACL):
Canwen Xu, Wangchunshu Zhou, Tao Ge, Furu Wei, and Ming Zhou. 2020. BERT-of-Theseus: Compressing BERT by Progressive Module Replacing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7859–7869, Online. Association for Computational Linguistics.
Cite (Informal):
BERT-of-Theseus: Compressing BERT by Progressive Module Replacing (Xu et al., EMNLP 2020)
PDF:
https://aclanthology.org/2020.emnlp-main.633.pdf
Video:
https://slideslive.com/38938938
Code:
JetRunner/BERT-of-Theseus
Data:
CoLA, GLUE, MRPC, MultiNLI, QNLI, SST, SST-2