Analyzing Redundancy in Pretrained Transformer Models

Fahim Dalvi, Hassan Sajjad, Nadir Durrani, Yonatan Belinkov


Abstract
Transformer-based deep NLP models are trained using hundreds of millions of parameters, limiting their applicability in computationally constrained environments. In this paper, we study the cause of these limitations by defining a notion of Redundancy, which we categorize into two classes: General Redundancy and Task-specific Redundancy. We dissect two popular pretrained models, BERT and XLNet, studying how much redundancy they exhibit at the representation level and at the more fine-grained neuron level. Our analysis reveals interesting insights, such as i) 85% of the neurons across the network are redundant and ii) at least 92% of them can be removed when optimizing towards a downstream task. Based on our analysis, we present an efficient feature-based transfer learning procedure, which maintains 97% performance while using at most 10% of the original neurons.
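The feature-based transfer learning procedure mentioned above can be illustrated with a minimal sketch: rank neurons by some saliency measure, keep roughly 10% of them, and train a lightweight classifier on that subset. The sketch below uses synthetic activations in place of real BERT/XLNet hidden states and a simple correlation-based ranking; both are illustrative assumptions, not the paper's exact method.

```python
# Minimal sketch of feature-based transfer on a neuron subset.
# Synthetic activations stand in for real transformer hidden states (assumption).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_samples, n_neurons = 1000, 768              # one layer's worth of neurons
X = rng.normal(size=(n_samples, n_neurons))   # placeholder "activations"

# Make a handful of neurons actually predictive of the task label.
w = np.zeros(n_neurons)
w[:20] = 1.0
y = (X @ w + 0.1 * rng.normal(size=n_samples) > 0).astype(int)

# Rank neurons by a simple saliency proxy: |correlation| with the label.
corr = np.abs(np.corrcoef(X.T, y)[-1, :-1])
top = np.argsort(corr)[::-1][: n_neurons // 10]   # keep ~10% of neurons

# Train a lightweight classifier on the reduced feature set only.
clf = LogisticRegression(max_iter=1000).fit(X[:, top], y)
acc = clf.score(X[:, top], y)
print(f"accuracy using {len(top)} of {n_neurons} neurons: {acc:.2f}")
```

In practice the activations would come from a frozen pretrained model, and the neuron ranking would use a task-aware selection procedure; the point of the sketch is that a small, well-chosen subset of features can support a near-full-accuracy downstream classifier.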
Anthology ID: 2020.emnlp-main.398
Volume: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Month: November
Year: 2020
Address: Online
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 4908–4926
URL: https://aclanthology.org/2020.emnlp-main.398
DOI: 10.18653/v1/2020.emnlp-main.398
PDF: https://aclanthology.org/2020.emnlp-main.398.pdf
Video: https://slideslive.com/38939360
Code: fdalvi/analyzing-redundancy-in-pretrained-transformer-models
Data: GLUE, MRPC, Penn Treebank, QNLI, SST