Transkimmer: Transformer Learns to Layer-wise Skim

Yue Guan, Zhengyi Li, Jingwen Leng, Zhouhan Lin, Minyi Guo


Abstract
The Transformer architecture has become the de facto model for many machine learning tasks, ranging from natural language processing to computer vision. As such, improving its computational efficiency is paramount. One of the major computational inefficiencies of Transformer-based models is that they spend an identical amount of computation throughout all layers. Prior works have proposed to augment the Transformer model with the capability of skimming tokens to improve its computational efficiency. However, they lack effective, end-to-end optimization of the discrete skimming predictor. To address these limitations, we propose the Transkimmer architecture, which learns to identify hidden-state tokens that are not required by each layer. The skimmed tokens are then forwarded directly to the final output, thus reducing the computation of the subsequent layers. The key idea in Transkimmer is to add a parameterized predictor before each layer that learns to make the skimming decision. We also propose to adopt the reparameterization trick and add a skim loss for the end-to-end training of Transkimmer. Transkimmer achieves a 10.97x average speedup on the GLUE benchmark compared with the vanilla BERT-base baseline, with less than 1% accuracy degradation.
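The mechanism described in the abstract can be sketched as follows: a small predictor placed before each Transformer layer emits a discrete keep/skim decision per token, made trainable end-to-end via a reparameterization trick (a straight-through Gumbel-Softmax in this sketch), and a skim loss encourages tokens to be dropped. The module and function names below are illustrative assumptions, not the authors' released implementation (see the code link at the bottom of this page).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SkimPredictor(nn.Module):
    """Illustrative per-layer skim predictor: a small MLP that produces a
    binary keep/skim mask per token, using a straight-through Gumbel-Softmax
    so the discrete decision remains differentiable during training."""

    def __init__(self, hidden_size: int, tau: float = 1.0):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(hidden_size, hidden_size // 2),
            nn.GELU(),
            nn.Linear(hidden_size // 2, 2),  # logits for [skim, keep]
        )
        self.tau = tau

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        logits = self.mlp(hidden_states)  # (batch, seq_len, 2)
        # Discrete one-hot decision on the forward pass, soft gradients on the
        # backward pass (hard=True gives the straight-through estimator).
        mask = F.gumbel_softmax(logits, tau=self.tau, hard=True)[..., 1]
        return mask  # (batch, seq_len); 1 = keep token, 0 = skim token


def skim_loss(masks: list[torch.Tensor]) -> torch.Tensor:
    """One plausible form of the skim loss: the mean keep ratio across all
    layers, so minimizing it (jointly with the task loss) rewards skimming."""
    return torch.stack([m.mean() for m in masks]).mean()
```

In such a setup, tokens whose mask is 0 would be excluded from the remaining layers' computation and carried directly to the final output, and the training objective would combine the downstream task loss with a weighted skim loss.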
Anthology ID:
2022.acl-long.502
Volume:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
7275–7286
URL:
https://aclanthology.org/2022.acl-long.502
DOI:
10.18653/v1/2022.acl-long.502
Cite (ACL):
Yue Guan, Zhengyi Li, Jingwen Leng, Zhouhan Lin, and Minyi Guo. 2022. Transkimmer: Transformer Learns to Layer-wise Skim. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7275–7286, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Transkimmer: Transformer Learns to Layer-wise Skim (Guan et al., ACL 2022)
PDF:
https://aclanthology.org/2022.acl-long.502.pdf
Code:
chandlerguan/transkimmer
Data:
GLUE, IMDb Movie Reviews, QNLI